Fusion 18 Manual
Reference Manual
Fusion 18.6
Welcome
Welcome to Fusion for Mac, Linux and Windows!
Fusion is the world’s most advanced compositing software for visual effects artists,
broadcast and motion graphic designers and 3D animators. With over 30 years of
development, Fusion has been used on over 1000 major Hollywood blockbuster
feature films! Fusion features an easy and powerful node based interface so you
can construct complex effects simply by connecting various types of processing
together. That’s super easy and extremely fast! You get a massive range of features
and effects included, so you can create exciting broadcast graphics, television
commercials, dramatic title sequences and even major feature film visual effects!
Fusion Studio customers can also use DaVinci Resolve Studio to get a complete set of
editing, advanced color correction and professional Fairlight audio post production
tools. Clips in the DaVinci Resolve timeline can be shared with Fusion so you can
collaborate on your complex compositions within your VFX team and then render
the result back directly into the DaVinci Resolve timeline. We hope you enjoy reading
this manual and we can’t wait to see the work you produce with Fusion.
Grant Petty
CEO Blackmagic Design
Contents
1 Fusion Fundamentals .................................................................. 5
2 2D Compositing .................................................................... 409
3 3D Compositing .................................................................... 592
Warranty ........................................................................... 1712
Navigation Guide
To make this manual easy to navigate, every table of contents (TOC) in it is hyperlinked: clicking a
title or page number takes you to that part of the manual. The right-hand side of each page also
includes a hyperlink tab; hover the pointer over the tab and click it to jump to one of the TOC pages.
[Sample page reproduced from Chapter 26, "3D Camera Tracking," with callouts indicating the
Hyperlink Tab, the Chapter Number, and the Page Number. The sample shows the chapter
introduction and its contents list alongside the PART 3 contents.]
Menu Descriptions
Every menu item is listed here; clicking the name of a menu function takes you to the part of the
manual that describes that function.
Fusion
Show Toolbar – Page 33
Toggles the Fusion toolbar on or off.
Reset Composition
Resets a Fusion composition to its initial state.
Chapter 1
Introduction to Compositing in Fusion
This introduction is designed explicitly to help users who are new to
Fusion get started learning this exceptionally powerful environment
for creating and editing visual effects and motion graphics
right from within DaVinci Resolve or using the stand-alone
Fusion Studio application.
This documentation covers both the Fusion Page inside DaVinci Resolve and the
stand-alone Fusion Studio application.
Contents
What Is Fusion? ........................................................................ 7

What Is Fusion?
In its purest form, Fusion is a collection of image-processing engines called nodes. These nodes
represent effects like blurs and color correctors, as well as images, 3D models, and spline masks.
Similar to effects you may be familiar with, each node includes a set of parameters that can be
adjusted and animated over time. Stringing different nodes together in a graphical user interface
called a node tree allows you to create sophisticated visual effects. The nodes, node trees, and all
settings you create are saved in a document called a Composition, or “comp” for short.
The Fusion page in DaVinci Resolve, showing viewers, the Node Editor, and the Inspector
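The node-tree idea can be sketched in a few lines of Python. This is a toy model, not Fusion's actual scripting API: each "node" holds parameters and pulls its input image from the node upstream, so a chain of nodes becomes a chain of processing steps.

```python
class Node:
    """One image-processing step with adjustable parameters (toy model)."""
    def __init__(self, func, **params):
        self.func = func
        self.params = params
        self.input = None          # upstream node, if any

    def render(self, image=None):
        # Pull the image from upstream first, then apply this node's effect.
        if self.input is not None:
            image = self.input.render(image)
        return self.func(image, **self.params)

# Two toy "effects" standing in for real nodes such as Background or Brightness.
def source(_, value):
    return [value] * 4             # a 4-pixel "image"

def gain(image, amount):
    return [p * amount for p in image]

# Stringing nodes together forms a node tree (here, a simple chain).
src = Node(source, value=0.5)
bright = Node(gain, amount=2.0)
bright.input = src

print(bright.render())             # [1.0, 1.0, 1.0, 1.0]
```

Rendering the last node in the chain evaluates everything upstream, which is exactly why viewing any node in Fusion shows the cumulative result of the nodes feeding it.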
If you use the Fusion page to create any kind of effect or composite, a badge appears on that clip in
the Timeline to show that clip has a composition applied to it.
To create an effect in the Fusion page of DaVinci Resolve, you need only park the playhead over a clip
in the Edit or Cut page and then click the Fusion page button. Your clip is immediately available as a
MediaIn node in the Fusion page, ready for you to add a variety of stylistic effects. You can paint out
an unwanted blemish or feature, build a quick composite to add graphics or text, or accomplish any
other visual effect you can imagine, built from the Fusion page’s toolkit of effects.
Alternatively, in DaVinci Resolve, you have the option of editing together all the clips you want to use,
superimposing and lining up every piece of media you’ll need with the correct timing, before selecting
them and creating a Fusion clip. A Fusion clip functions as a single item in the Edit or Cut page
timeline, but once in the Fusion page, each piece of media you’ve assembled is revealed in a fully built
Fusion composition, ready for you to start adding nodes to customize for whatever effect you need
to create.
Whichever way you want to work, all this happens on the very same timeline as editing, grading, and
audio post, for a seamless back and forth as you edit, refine, and finish your projects.
3D Compositing
Fusion has powerful 3D nodes that include extruded 3D text, simple geometry, and the ability to
import 3D models. Once you’ve assembled a 3D scene, you can add cameras, lighting, and material
shaders, and then render the result with depth-of-field effects and auxiliary channels to integrate with
more conventional layers of 2D compositing, for a sophisticated blending of 3D and 2D operations in
the very same node tree.
Particles
Fusion also has an extensive set of nodes for creating particle systems that have been used in
major motion pictures, with particle generators capable of spawning other generators, 3D particle
generation, complex simulation behaviors that interact with 3D objects, and endless options for
experimentation and customization. You can create particle system simulations for VFX or more
abstract particle effects for motion graphics.
Text
The Text tools in Fusion are exceptional, giving you layout and animation options in both 2D and 3D.
Furthermore, within DaVinci Resolve, these Text tools have been incorporated into the Edit and Cut
pages as Fusion Titles. These title templates are compositions saved from Fusion as macros with
published controls that are visible in the Edit or Cut page Inspector for easy customization, even if
you’re working with people who don’t know Fusion.
Fusion is a deep, production-driven product that’s had decades of development, so its feature set is
deep and comprehensive. You won’t learn it in an hour, but much of what you’ll find won’t be so very
different from other compositing applications you may have used. And if you’ve familiarized yourself
with the node-based grading workflow of the DaVinci Resolve Color page, you’ve already got a leg up
on understanding the central operational concept of compositing in Fusion.
Exploring the Fusion Interface
This chapter provides an orientation on the Fusion user interface,
providing a quick tour of what tools are available, where to find
things, and how the different panels fit together to help you build
and refine compositions in this powerful node‑based environment.
Contents
The Fusion User Interface ............................................................. 13
Time Ruler Controls in the Fusion Page ................................................ 21
The Fusion RAM Cache for Playback ..................................................... 31
Vertical Node Editor Layouts .......................................................... 39
However, Fusion doesn’t have to be that complicated, and in truth, you can work very nicely with only
the viewer, Node Editor, and Inspector open for a simplified experience.
The work area showing the Node Editor, the Spline Editor, and the Keyframes Editor
Interface Toolbar
At the very top of Fusion is a toolbar with buttons that let you show and hide different parts of
the user interface (UI). Buttons with labels identify which parts of the UI can be shown or hidden.
In DaVinci Resolve’s Fusion page, if you right-click anywhere within this toolbar, you have the option of
displaying this bar with or without text labels.
— Media Pool/Effects Library Full Height: Lets you set the area used by the Media Pool
(DaVinci Resolve only) and/or Effects Library to take up the full height of your display, giving you
more area for browsing at the expense of a narrower Node Editor and viewer area. At half-height,
the Media Pool/Templates/Effects Library are restricted to the top half of the UI along with the
viewers (you can only show one at a time), and the Node Editor takes up the full width of your display.
— Media Pool (DaVinci Resolve only): Shows and hides the Media Pool, from which you can drag
additional clips into the Node Editor to use them in your Fusion page composition.
— Effects Library: Opens or hides the repository of all node tools available to use in Fusion.
From here, you can click nodes to add them after the currently selected node in the Node Editor,
or you can drag and drop nodes to any part of the node tree you like.
— Clips (DaVinci Resolve only): Opens and closes the Thumbnail timeline, which lets you
navigate your program, create and manage multiple versions of compositions, and reset the
current composition.
— Nodes: Opens and closes the Node Editor, where you build and edit your compositions.
— Console (Fusion Studio only): The Console is a window in which you can see the error, log, script,
and input messages that may explain something Fusion is trying to do in greater detail. The
Console is also where you can read FusionScript outputs, or input FusionScripts directly.
— Spline: Opens and closes the Spline Editor, where you can edit the curves that interpolate
keyframe animations to customize and perfect their timing. Each keyframed parameter is listed
hierarchically under the effect it belongs to, in a list to the left.
— Keyframes: Opens and closes the Keyframes Editor, which shows each clip and effects node in
your Fusion composition as a layer. You can use the Keyframes Editor to edit and adjust the timing
of keyframes that have been added to various effects in your composition. You can also use the
Keyframes Editor to slide the relative timing of clips that have been added to Fusion, as well as
to trim their In and Out points. A spreadsheet can be shown and hidden within which you can
numerically edit keyframe values for selected effects.
— Metadata (DaVinci Resolve only): Hides or shows the Metadata Editor, which lets you read and
edit the available clip and project metadata associated with any piece of media within a composite.
— Inspector: Shows or hides the Inspector, which shows you all the editable parameters and
controls that correspond to selected nodes in the Node Editor. You can show the parameters for
multiple nodes at once, and even pin the parameters of nodes you need to continue editing so
that they’re displayed even if those nodes aren’t selected.
— Inspector Height: Lets you open the Inspector to be half height (the height of the viewer area)
or full height (the height of your entire display). Half height allows more room for the Node Editor,
Spline Editor, and/or Keyframes Editor, but full height lets you simultaneously edit more node
parameters or have enough room to display the parameters of multiple nodes at once.
To make it easier to keep track of which panel has focus, a highlight appears at the top edge of
whichever panel has focus. In DaVinci Resolve, you must turn on “Show focus indicators in the
User Interface” in the UI Settings panel of the User Preferences to see the highlight.
Viewers
The viewer area displays either one or two viewers at the top of the Fusion page, and this is
determined via the Viewer button at the far right of the Viewer title bar. Each viewer can show a single
node’s output from anywhere in the node tree. You assign which node is displayed in which viewer.
This makes it easy to load separate nodes into each viewer for comparison. For example, you can
load a Keyer node into the left viewer and the final composite into the right viewer, so you can see the
image you’re adjusting and the final result at the same time.
Dual viewers let you edit an upstream node in one while seeing
its effect on the overall composition in the other.
Ordinarily, each viewer shows 2D nodes from your composition as a single image. However,
when you’re viewing a 3D node, you have the option to set that viewer to one of several 3D views.
A perspective view gives you a repositionable stage on which to arrange the elements of the
world you’re creating. Alternatively, a quad view lets you see your composition from four angles,
making it easier to arrange and edit objects and layers within the XYZ axes of the 3D space in which
you’re working.
TIP: In Perspective view, you can hold down the middle and right mouse buttons, then
drag in the viewer to pivot the view around the center of the world. All other methods of
navigating viewers work the same.
The viewers have a variety of capabilities you can use to compare and evaluate images. This section
provides a short overview of viewer capabilities to get you started.
When using Fusion Studio, nothing is loaded into either of the viewers until you assign a node to
one of them.
When a node is being viewed, a View Indicator button appears at the bottom left. This is the same
control that appears when you hover the pointer over a node. Not only does this control let you know
which node is loaded into which viewer, but it also exposes little round buttons for switching
between viewers.
You can also select the node that is currently showing in the viewer, and press the viewer number
again (1 or 2 respectively) to clear the viewer.
Viewer Controls
A series of buttons and pop-up menus in the viewer’s title bar provides several quick ways of
customizing the viewer display.
— Zoom menu: Lets you zoom in on the image in the viewer to get a closer look, or zoom out to get
more room around the edges of the frame for rotoscoping or positioning different layers. Choose
Fit to automatically fit the overall image to the available dimensions of the viewer.
— Split Wipe button and A/B Buffer menu: You can actually load two nodes into a single viewer
using that viewer’s A/B buffers by choosing a buffer from the menu and loading a node into the
viewer. Turning on the Split Wipe button (press Forward Slash) shows a split wipe between the
two buffers, which can be dragged left or right via the handle of the onscreen control, or rotated
by dragging anywhere on the dividing line on the onscreen control. Alternatively, you can switch
between each full-screen buffer to compare them (or to dismiss a split-screen) by pressing Comma
(A buffer) and Period (B buffer).
— SubView type: (These aren’t available in 3D viewers.) Clicking the icon itself enables or disables
the current “SubView” option you’ve selected, while using the menu lets you choose which
SubView is enabled. This menu serves one of two purposes. When displaying ordinary 2D nodes,
it lets you open up SubViews, which are viewer “accessories” within a little pane that can be used
to evaluate images in different ways. These include an Image Navigator (for navigating when
zoomed far into an image), Magnifier, 2D viewer (a mini-view of the image), 3D Histogram scope,
Color Inspector, Histogram scope, Image Info tooltip, Metadata tooltip, Vectorscope, or Waveform
scope. The Swap option (Shift-V) lets you switch what’s displayed in the viewer with what’s being
displayed in the Accessory pane. When displaying 3D nodes, this button lets you have access to an
additional mini 3D viewer.
— Node name: The name of the currently viewed node is displayed at the center of the
viewer’s title bar.
— ROI controls: Clicking the icon itself enables or disables RoI (Region of Interest) limiting in the
viewer, while using the menu lets you choose the region of the RoI. RoI lets you define the region
of the viewer in which pixels actually need to be updated. When a node renders, it intersects
the current RoI with the current Domain of Definition (DoD) to determine what pixels should be
affected. When enabled, you can position a rectangle to restrict rendering to a small region of the
image, which can significantly speed up performance when you’re working on very high
resolution images.
— Option menu: This menu contains various settings that pertain to the viewers in Fusion.
— Snap to Pixel: When drawing or adjusting a polyline mask or spline, the control points will
snap to pixel locations.
— Show Controls: Toggles whatever onscreen controls are visible for the currently selected node.
— Region: Provides all the settings for the Region of Interest in the viewer.
— Smooth Resize: This option uses a smoother bilinear interpolated resizing method when
zooming into an image in the viewer; otherwise, scaling uses the nearest neighbor method
and shows noticeable aliasing artifacts. Disabling Smooth Resize can be preferable when you
zoom in to the pixel level, since the uninterpolated pixels are easier to inspect.
— Show Square Pixels: Overrides the auto aspect correction when using formats with
non-square pixels.
— Checker Underlay: Toggles a checkerboard underlay that makes it easy to see
areas of transparency.
— Normalized Color Range: Allows for the visualization of brightness values outside of the normal
viewing range, particularly when working with floating-point images or auxiliary channels.
— Gain/Gamma: Exposes a simple pair of Gain and Gamma sliders that let you adjust the
viewer’s brightness.
— 360 View: Used to properly display spherical imagery in a variety of formats, selectable
from this submenu.
— Stereo: Used to properly display stereoscopic imagery in a variety of formats, selectable
from this submenu.
— For DaVinci Resolve users, the duration displayed in the Time Ruler range depends on what’s
currently selected in the Edit or Cut page timeline.
— In Fusion Studio, the Time Ruler depends on the Global Start and End values set in the Fusion
Studio Preferences > Defaults.
The transport controls under the Time Ruler include playback controls, audio monitoring, as well as
number fields for the composition duration and playback range. Additional controls enable motion
blur and proxy settings.
The Time Ruler displaying ranges for a clip in the Timeline via yellow marks (the playhead is red)
If you’ve created a Fusion clip or a compound clip, then the “working range” reflects the entire
duration of that clip.
The Time Ruler displaying ranges for a Fusion clip in the Timeline
Render Range
The render range determines the range of frames that are visible in the Fusion page and that are used
for interactive playback, disk caches, and previews. Frames outside the default render range are not
visible in the Fusion page and are not rendered or played.
You can modify the duration of the render range for preview and playback only. Making the range
shorter or longer does not trim the clip in the Edit or Cut page Timelines.
You can change the render range in the Time Ruler by doing one of the following:
— Hold down the Command key and drag a new range within the Time Ruler.
— Drag either the start or end yellow line to modify the start or end of the range.
— Right-click within the Time Ruler and choose Set Render Range from the contextual menu.
You can return the render range to the In and Out points of the timeline clip by doing one of
the following:
— Right-click within the Time Ruler and choose Auto Render Range.
— Click back in the Edit or Cut page, and then return to the Fusion page.
You can change the global range by doing one of the following:
— To change the global range for all new compositions, choose Fusion Studio > Preferences on
macOS or File > Preferences on Windows or Linux. In the Global and Default Settings panel, enter
a new range in the Global range fields.
— To change the Global range for the current composition, enter a new range in the Global Start and
End fields to the left of the transport controls.
— Dragging a node from the Node Editor to the Time Ruler automatically sets the Global and Render
Range to the extent of the node.
Render Range
The render range determines the range of frames used for interactive playback, disk caches, and
previews. Frames outside the render range are not rendered or played, although you can still drag the
playhead to these frames to see the unused frames.
To preview or render a specific range of a composition, you can modify the render range in a
variety of ways.
You can set the render range in the Time Ruler by doing one of the following:
— Hold down the Command key and drag a new range within the Time Ruler.
— Right-click within the Time Ruler and choose Set Render Range from the contextual menu to set
the Render Range based on the selected Node’s duration.
— Enter new ranges in the Range In and Out fields to the left of the transport controls.
— Drag a node from the Node Editor to the Time Ruler to set the range to the duration of that node.
TIP: Holding the middle mouse button and dragging in the Time Ruler lets you scroll the
visible range.
Controlling Playback
There are six transport controls underneath the Time Ruler in the Fusion page. These buttons include
Composition First Frame, Play Reverse, Stop, Play Forward, Composition Last Frame, and Loop.
Navigation Shortcuts
Many standard transport control keyboard shortcuts you may be familiar with work in Fusion, but
some are specific to Fusion’s particular needs.
To move the playhead in the Time Ruler using the keyboard, do one of the following:
— Spacebar: Toggles forward playback on and off.
— JKL: Basic JKL playback is supported, including J to play backward, K to stop, and L to play forward.
— Back Arrow: Moves 1 frame backward.
— Forward Arrow: Moves 1 frame forward.
— Shift-Back Arrow: Moves to the clip’s Global Start frame.
— Shift-Forward Arrow: Moves to the clip’s Global End frame.
— Command-Back Arrow: Jumps to the Render Range In point.
— Command-Forward Arrow: Jumps to the Render Range Out point.
Moving the playhead in multi-frame increments can be useful when rotoscoping. Moving the playhead
in sub-frame increments can be useful when rotoscoping or inspecting interlaced frames one field at
a time (0.5 of a frame).
Looping Options
The Loop button can be toggled to enable or disable looping during playback. You can right-click this
button to choose the looping method that’s used:
— Playback Loop: The playhead plays to the end of the Time Ruler and starts from the
beginning again.
— Ping-pong Loop: When the playhead reaches the end of the Time Ruler, playback reverses
until the playhead reaches the beginning of the Time Ruler, and then continues to ping-pong
back and forth.
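The two looping methods amount to simple frame-index arithmetic. The sketch below is illustrative Python, not Fusion code: given a playback tick `t` and a range of `n` frames, it maps the tick to the frame that would be shown.

```python
def loop_frame(t, n):
    """Playback Loop: wrap back to the start after the last frame."""
    return t % n

def pingpong_frame(t, n):
    """Ping-pong Loop: bounce between the ends of an n-frame range."""
    period = 2 * (n - 1)           # forward pass plus reverse pass
    t %= period
    return t if t < n else period - t

# Over eight ticks of a 4-frame range, ping-pong plays 0 1 2 3 2 1 0 1...
print([pingpong_frame(t, 4) for t in range(8)])   # [0, 1, 2, 3, 2, 1, 0, 1]
print([loop_frame(t, 4) for t in range(8)])       # [0, 1, 2, 3, 0, 1, 2, 3]
```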
Audio Monitoring
Playing a composition in DaVinci Resolve’s Fusion page will play the audio from the Edit or Cut page
Timeline. You can choose to hear the audio or mute it using the Audio toolbar button to the left of the
transport controls. The audio waveforms are displayed in the Keyframes Editor to assist in the timing
of your animations.
TIP: If the Mute button is enabled on any Timeline tracks, audio from those tracks will not
be heard in Fusion.
For Fusion Studio, audio can be loaded using the Loader node’s Audio tab. The audio functionality is
included in Fusion Studio for scratch track (aligning effects to audio and clip timing) purposes. Final
renders should almost always be performed without audio. Audio can be heard if it is brought in
through a Loader node.
When setting ranges and entering frame numbers to move to a specific frame, numbers can be
entered in sub-frame increments. You can set a range to be –145.6 to 451.75 or set the playhead to
115.22. This can be very helpful when animating parameters because you can set keyframes where
they actually need to occur, rather than on a frame boundary, so you get more natural animation.
Having sub-frame time lets you use time remapping nodes or just scale keyframes in the Spline view
and maintain precision.
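Sub-frame keyframes matter because interpolation samples the animation curve at the exact times you set. This minimal linear-interpolation sketch in Python is illustrative only (Fusion's splines offer richer interpolation): a keyframe placed at 10.5 peaks exactly there, instead of being rounded to frame 10 or 11.

```python
def sample(keyframes, t):
    """Linearly interpolate a parameter between (time, value) keyframes.
    Times may be fractional, i.e. sub-frame."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# The motion actually peaks at the sub-frame time 10.5.
keys = [(0.0, 0.0), (10.5, 1.0), (21.0, 0.0)]
print(sample(keys, 10.5))   # 1.0
print(sample(keys, 5.25))   # 0.5
```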
Rendering for final output is always done at the highest quality, regardless of these settings.
High Quality
As you build a composition, often the quality of the displayed image is less important than the
speed at which you can work. The High Quality setting gives you the option to either display
images with faster interactivity or at final render quality. When you turn off High Quality, complex
and time‑consuming operations such as area sampling, anti-aliasing, and interpolation are
skipped to render the image to the viewer more quickly. Enabling High Quality forces a full-quality
render to the viewer that’s identical to what is output during final delivery.
Motion Blur
The Motion Blur button is a global setting. Turning off Motion Blur temporarily disables motion
blur throughout the composition, regardless of any individual nodes for which it’s enabled. This
can significantly speed up renders to the viewer. Individual nodes must first have motion blur
enabled before this button has any effect.
Proxy
The Proxy setting is a draft mode used to speed processing while you’re building your composite.
Turning on Proxy reduces the resolution of the images that are rendered to the viewer, speeding
render times by causing only one out of every x pixels to be processed, rather than processing
every pixel. The value of x is decided by adjusting a slider in the Proxy section in the Fusion >
Fusion Settings > General panel.
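The "one out of every x pixels" idea amounts to nearest-pixel subsampling. This rough Python sketch illustrates the concept only; Fusion's actual proxy implementation may differ.

```python
def proxy(image, x):
    """Keep one out of every x pixels in each dimension, the idea
    behind a proxy draft mode (nearest-pixel subsampling)."""
    return [row[::x] for row in image[::x]]

# A 4x4 "image" of pixel values 0..15; proxy ratio 2 keeps a quarter of them.
full = [[r * 4 + c for c in range(4)] for r in range(4)]
print(proxy(full, 2))   # [[0, 2], [8, 10]]
```

At a ratio of x, only 1/x² of the pixels are processed, which is why even modest proxy settings speed up viewer renders substantially.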
Auto Proxy
The Auto Proxy setting is a draft mode used to speed processing while you’re building your
composite. Turning on Auto Proxy reduces the resolution of the image while you click and drag to
adjust a parameter. Once you release that control, the image snaps back to its original resolution.
This lets you adjust processor-intensive operations more smoothly, without the wait for every
frame to render at full quality causing jerkiness. You can set the auto proxy ratio by adjusting a
slider in the Proxy section of the Fusion > Fusion Settings > General panel.
Selective Updates
When working in Fusion, only the tools needed to display the images in the viewer are updated.
The Selective Update options select the mode used during previews and final renders.
The options are available in the Proxy section of the Fusion > Fusion Settings > General panel. The
three options are:
— Update All (All): Forces all the nodes in the current node tree to render. This is primarily used
when you want to update all the thumbnails displayed in the Node Editor.
— Selective (Some): Causes only nodes that directly contribute to the current image to be
rendered. So named because only selective nodes are rendered. This is the default setting.
— No Update (None): Prevents rendering altogether, which can be handy for making many
changes to a slow-to-render composition.
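The Selective mode comes down to walking the node graph upstream from the viewed node and rendering only what it finds. A hedged Python sketch (the comp structure and node names here are hypothetical):

```python
def contributing(node, graph):
    """Collect the nodes that feed the viewed node (the 'Selective' idea):
    graph maps each node name to the list of its input nodes."""
    seen = set()
    stack = [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

# Hypothetical comp: Merge combines MediaIn and Text; Blur hangs off to the side.
graph = {"Merge": ["MediaIn", "Text"], "Blur": ["MediaIn"]}
print(sorted(contributing("Merge", graph)))   # ['MediaIn', 'Merge', 'Text']
```

Viewing "Merge" renders MediaIn, Text, and Merge, while the disconnected Blur is skipped, which is why Selective is a sensible default.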
Controlling Playback
There are eight transport controls underneath the Time Ruler in Fusion Studio. These buttons include
Composition First Frame, Step Backward, Play Reverse, Stop, Play Forward, Step Forward, Composition
Last Frame, and Loop.
Navigation Shortcuts
Many standard transport control keyboard shortcuts you may be familiar with work in Fusion, but
there are some keyboard shortcuts specific to Fusion’s particular needs.
To move the playhead in the Time Ruler using the keyboard, do one of the following:
— Spacebar: Toggles forward playback on and off.
— JKL: Basic JKL playback is supported, including J to play backward, K to stop, and L to play forward.
— Back Arrow: Moves 1 frame backward.
— Forward Arrow: Moves 1 frame forward.
— Shift-Back Arrow: Moves to the clip’s Global Start frame.
— Shift-Forward Arrow: Moves to the clip’s Global End frame.
— Command-Back Arrow: Jumps to the Render Range In point.
— Command-Forward Arrow: Jumps to the Render Range Out point.
Looping Options
The Loop button can be toggled to enable or disable looping during playback. You can right-click this
button to choose the looping method that’s used:
— Playback Loop: The playhead plays to the end of the Time Ruler and starts from the
beginning again.
— Ping-pong Loop: When the playhead reaches the end of the Time Ruler, playback
reverses until the playhead reaches the beginning of the Time Ruler, and then continues to
ping-pong back and forth.
Range Fields
The four time fields on the left side of the transport controls are used to quickly modify the global
range and render range in Fusion Studio.
Audio
The Audio button is a toggle that mutes or enables any audio associated with the clip. Additionally,
right-clicking on this button displays a drop-down menu that can be used to select a WAV file, which
can be played along with the composition, and to assign an offset to the audio playback.
When setting ranges and entering frame numbers to move to a specific frame, numbers can be
entered in sub-frame increments. You can set a range to be –145.6 to 451.75 or set the playhead
to 115.22. This can be very helpful when animating parameters because you can set keyframes where
they actually need to occur, rather than on a frame boundary, so you get more natural animation.
Having sub-frame time lets you use time remapping nodes or just scale keyframes in the Spline view
and maintain precision.
NOTE: Many fields in Fusion can evaluate mathematical expressions that you type
into them. For example, typing 2 + 4 into most fields results in the value 6.0 being
entered. Because Feet + Frames uses the + symbol as a separator symbol rather than a
mathematical symbol, the Current Time field will not correctly evaluate mathematical
expressions that use the + symbol, even when the display format is set to Frames mode.
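To illustrate why the two interpretations of + conflict, here is a hypothetical sketch of such a field. The parsing logic and the 16-frames-per-foot figure (the 35 mm convention) are assumptions for illustration, not Fusion’s actual code:

```python
def evaluate_field(text, feet_plus_frames=False):
    """Sketch of a numeric entry field that evaluates simple math.

    In Feet + Frames mode, '+' is a separator between feet and frames
    (16 frames per foot on 35 mm film), not a mathematical operator.
    """
    if feet_plus_frames and "+" in text:
        feet, frames = (int(part) for part in text.split("+"))
        return float(feet * 16 + frames)
    # restrict evaluation to digits and basic operators for safety
    allowed = set("0123456789.+-*/() ")
    if not set(text) <= allowed:
        raise ValueError("not a numeric expression")
    return float(eval(text))
```

Typing 2 + 4 into an ordinary field evaluates to 6.0, while the same keystrokes in a Feet + Frames field mean 2 feet 4 frames, i.e., frame 36.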
Rendering for final output is always done at the highest quality, regardless of these settings.
HiQ
As you build a composition, often the quality of the displayed image is less important than the
speed at which you can work. The High Quality setting gives you the option to either display
images with faster interactivity or at final render quality. When you turn off High Quality, complex
and time-consuming operations such as area sampling, anti-aliasing, and interpolation are
skipped to render the image to the viewer more quickly. Enabling High Quality forces a full-quality
render to the viewer that’s identical to what will be output during final delivery.
MB
The Motion Blur button is a global setting. Turning off Motion Blur temporarily disables motion
blur throughout the composition, regardless of any individual nodes for which it’s enabled.
Prx
A draft mode to speed processing while you’re building your composite. Turning on Proxy reduces
the resolution of the images that are rendered to the viewer, speeding render times by causing
only one out of every x pixels to be processed, rather than processing every pixel. The value of x
is decided by adjusting a slider in the General panel of the Fusion Preferences, found under the
Fusion menu on macOS or the File menu on Windows and Linux.
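The effect of the proxy ratio can be pictured as simple pixel skipping. This is an illustration of the “one out of every x pixels” idea only; Fusion’s actual resampling may differ:

```python
def apply_proxy(image, x):
    """Return a reduced image keeping only every x-th pixel in each axis."""
    return [row[::x] for row in image[::x]]
```

At a proxy ratio of 2, a 4 x 4 image becomes 2 x 2, so only a quarter of the pixels need processing.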
Aprx
A draft mode to speed processing while you’re building your composite. Turning on Auto Proxy
reduces the resolution of the image while you click and drag to adjust a parameter. Once you release
that control, the image snaps back to its original resolution. This lets you adjust processor-intensive
operations more smoothly, without the wait for every frame to render at full quality causing
jerkiness. You can set the auto proxy ratio by adjusting a slider in the General panel of the Fusion
Preferences, found under the Fusion menu on macOS or the File menu on Windows and Linux.
Selective Updates
The last of the five buttons on the right of the transport controls is a three-way toggle that
determines when nodes update images in the viewer. By default, when working in Fusion, any
node needed to display the image in the viewer is updated. The Selective Update button can
change this behavior during previews and final renders.
The options are also available in the Fusion Preferences General panel.
Priority is given to caching nodes that are currently being displayed, based on which nodes are loaded
to which viewers. However, other nodes may also be cached, depending on available memory and on
how processor-intensive those nodes happen to be, among other factors.
— Limit Fusion Memory Cache To: This slider sets the maximum amount of RAM that Fusion
can access for caching. It is a subset of the RAM allocated to DaVinci Resolve. You can assign
a maximum of 75% to Fusion from DaVinci Resolve’s total RAM allocation. When not using the
Fusion page, the RAM is released for other pages in DaVinci Resolve.
There are two settings in Fusion Studio for limiting the RAM used for caching. These settings are
located in the Preferences Memory panel.
— Limit Caching To: This slider sets the maximum amount of RAM used for caching. The 60%
default setting on a 32-GB system limits the cache to 19.2 GB. The maximum amount you can
assign to Fusion Studio is limited to 80% of the total system memory. This leaves a minimum
amount of memory for other applications and the operating system.
— Leave at least # MBytes: This number field further limits caching in cases where the system’s
available free RAM drops below the entered value. For instance, setting this to 200 MB attempts
to keep 200 MB of RAM free for the OS or other applications. Setting the number field to 0 allows
Fusion Studio to use the full amount of RAM specified by the Limit Caching To setting, ignoring
other apps.
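The interaction of the two settings can be modeled as taking the lower of the two ceilings (units here are arbitrary; this is a simplified sketch, not Fusion Studio’s actual memory accounting):

```python
def cache_limit(total_ram, limit_fraction, leave_free, free_now):
    """Effective cache ceiling given both Memory panel settings."""
    ceiling = total_ram * limit_fraction      # e.g., 60% of 32 GB = 19.2 GB
    if leave_free:
        # never dip into the headroom reserved for the OS / other apps
        ceiling = min(ceiling, max(0, free_now - leave_free))
    return ceiling
```

With Leave at least set to 0, the 60% slider alone governs the cache (19.2 GB on a 32 GB system); with a nonzero reserve, the cache shrinks further whenever free RAM runs low.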
The green lines indicate frames that have been cached for playback.
There’s one exception to this, however. When you cache frames at the High Quality setting, and you
then turn off High Quality, the green frames won’t turn red. Instead, the High Quality cached frames
are used even though the HiQ setting has been disabled.
The toolbar has buttons for adding commonly used nodes to the Node Editor.
The default toolbar is divided into sections that group commonly used nodes together. As you hover
the pointer over any button, a tooltip shows you that node’s name.
— Loader/Saver nodes (Fusion Studio Only): The Loader node is the primary node used to
select and load clips from the hard drive. The Saver node is used to write or render your
composition to disk.
— Generator/Title/Paint nodes: The Background and FastNoise generators are commonly used to
create all kinds of effects, and the Title generator is obviously a ubiquitous tool, as is Paint.
— Color/Blur nodes: ColorCorrector, ColorCurves, HueCurves, and BrightnessContrast are the four
most commonly used color adjustment nodes, while the Blur node is ubiquitous.
— Compositing/Transform nodes: The Merge node is the primary node used to composite one
image against another. ChannelBooleans and MatteControl are both essential for reassigning
channels from one node to another. Resize alters the resolution of the image, permanently altering
the available resolution, while Transform applies pan/tilt/rotate/zoom effects in a resolution-
independent fashion that traces back to the original resolution available to the source image.
— Mask nodes: Rectangle, Ellipse, Polygon, and BSpline mask nodes let you create shapes to use for
rotoscoping, creating garbage masks, or other uses.
— Particle system nodes: Three particle nodes let you create complete particle systems when you
click them from left to right. pEmitter emits particles in 3D space, while pMerge lets you merge
multiple emitters and particle effects to create more complex systems. pRender renders a 2D
result that can be composited against other 2D images.
— 3D nodes: Seven 3D nodes let you build sophisticated 3D scenes. These nodes automatically attach to
one another to create a quick 3D template when you click from left to right. ImagePlane3D lets you
connect 2D stills and movies for compositing into 3D scenes. Shape3D lets you create geometric
primitives of different kinds. Text3D lets you build 3D text objects. Merge3D lets you composite
multiple 3D image planes, primitive shapes, and 3D text together to create complex scenes, while
Camera3D lets you frame the scene in whatever ways you like. SpotLight lets you light the scenes
in different ways and Renderer3D renders the final scene and outputs 2D images and auxiliary
channels that can be used to composite 3D output against other 2D layers.
When you’re first learning to use Fusion, these nodes are really all you need to build most common
composites. Once you’ve become a more advanced user, you’ll still find that these are truly the most
common operations you’ll use.
TIP: Adding and deleting tools from a custom toolbar is not undoable. If you are creating
a complex toolset, make a new custom toolbar based on your current toolbar in between
major changes and work off that. That way if you make an error, you can revert back to the
last known good toolbar. Once you have the final toolbar the way you want it, you can go
back and remove all the interim custom toolbars you made.
Node Editor
The Node Editor is the heart of Fusion because it’s where you build the tree of nodes that makes up
each composition. Each node you add to the node tree adds a specific operation that creates one
effect, whether it’s blurring the image, adjusting color, painting strokes, drawing and adding a mask,
extracting a key, creating text, or compositing two images into one.
You can think of each node as a layer in an effects stack, except that you have the freedom to route
image data in any direction to branch and merge different segments of your composite in completely
nonlinear ways. This makes it easy to build complex effects, but it also makes it easy to see what’s happening.
You can connect a single node’s output to the inputs of multiple nodes (called “branching”).
You can then composite images together by connecting the output from multiple nodes to certain
nodes such as the Merge node that combines multiple inputs into a single output.
Nodes can be oriented in any direction; the input arrows let you follow the flow of image data.
There are other standard methods of panning and zooming around the Node Editor.
When in the Fusion page, you can choose the layouts from Workspace > Layout Presets. Choosing a
vertical layout allows the node tree to flow from top to bottom, leaving much more room along the
lower half of the screen for the Spline Editor or Keyframes Editor.
The Mid Flow Vertical layout preset used with the Vertical Flow direction setting.
When using the vertical layouts, enabling the Flow > Build Direction > Vertical option in the Fusion
settings will cause all new Node trees to build vertically, leaving maximum room for Fusion’s
animation tools.
You can then save alternative layouts based on these two vertical presets using the Workspace >
Layout Presets submenu.
These Layout options are not available in Fusion Studio; however, you can use the Floating Frame to
position the Node Editor wherever you like.
Keeping Organized
As you work, it’s important to keep the node trees that you create tidy to facilitate a clear
understanding of what’s happening. Fortunately, the Fusion Node Editor provides a variety of
methods and options to help you with this, found within the Options and Arrange Tools submenus of
the Node Editor contextual menu.
Status Bar
The Status bar in the lower-left corner of the Fusion window shows you a variety of up-to-date
information about things you’re selecting and what’s happening in Fusion. For example, hovering the
pointer over a node displays information about that node in the Status bar. Additionally, the currently
achieved frame rate appears whenever you initiate playback, and the percentage of the RAM cache
that’s used appears at all times. Other information, updates, and warnings appear in this area
as you work.
Occasionally the Status bar will display a badge to let you know there’s a message in the console you
might be interested in. The message could be a log, script message, or error.
The Effects Library with Tools open.
The Templates section of the Effects Library.
The hierarchical category browser of the Effects Library is divided into several sections depending
on whether you are using Fusion Studio or the Fusion page within DaVinci Resolve. The Tools section
is the most often used since it contains every node that represents an elemental image-processing
operation in Fusion. Hovering the pointer over a specific tool will reveal a tool-tip explaining its
functionality at the bottom right of the DaVinci Resolve interface. The OpenFX section contains third-
party plugins, and if you are using the Fusion page, it also contains ResolveFX, which are included
with DaVinci Resolve. A third section, only visible when using the Fusion page in DaVinci Resolve, is the
Templates section, which contains a variety of additional content, including templates
for Lens Flares, Backgrounds, Generators, Particle Systems, Shaders (for texturing 3D objects), and
other resources for use in your composites.
The Effects Library’s list can be made full height or half height using a button at the far left
of the UI toolbar.
The Inspector shows parameters from one or more selected nodes.
The Modifier panel showing a Perturb modifier.
Other nodes display node-specific items here. For example, Paint nodes show each brush
stroke as an individual set of controls in the Modifiers panel, available for further editing
or animating.
Versions: Clicking Versions reveals another toolbar with six buttons. Each button can
hold an individual set of adjustments for that node that you can use to store multiple
versions of an effect.
Pin: The Inspector is also capable of simultaneously displaying all parameters for
multiple nodes you’ve selected in the Node Editor. Furthermore, a Pin button in the
title bar of each node’s parameters lets you “pin” that node’s parameters into the
Inspector so that they remain there even when that node is deselected, which is
valuable for key nodes that you need to adjust even while inspecting other nodes of
your composition.
Parameter Tabs
Many nodes expose multiple tabs’ worth of controls in the Inspector, seen as icons at the top of the
parameter section for each node. Click any tab to expose that set of controls.
Keyframes Editor
The Keyframes Editor displays each node in the current composition as a stack of layers within a
miniature timeline. The order of the layers is largely irrelevant as the order and flow of connections
in the node tree dictate the order of image-processing operations. You use the Keyframes Editor to
trim, extend, or slide Loader, MediaIn, and effects nodes, or to adjust the timing of keyframes, which
appear superimposed over each effect node unless you open them up into their editable track.
The Keyframes Editor is used to adjust the timing of clips, effects, and keyframes.
— A horizontal zoom control lets you scale the size of the editor.
— A Zoom to Fit button fits the width of all layers to the current width of the Keyframes Editor.
— A Zoom to Rect tool lets you draw a rectangle to define an area of the Keyframes
Editor to zoom into.
— A Sort pop-up menu lets you sort or filter the tracks in various ways.
— An Option menu provides access to many other ways of filtering tracks and
controlling visible options.
A timeline ruler provides a time reference, as well as a place in which you can scrub the playhead.
At the left, a track header contains the name of each layer, as well as controls governing that layer.
— A lock button lets you prevent a particular layer from being changed.
— Nodes that have been keyframed have a disclosure control, which when opened displays a
keyframe track for each animated parameter.
In the middle, the actual editing area displays all layers and keyframe tracks available in the
current composition.
At the bottom-left, Time Stretch and Spreadsheet mode controls provide additional ways
to manipulate keyframes.
At the bottom right, the Time/TOffset/TScale drop-down menu and value fields let you numerically
alter the position of selected keyframes either absolutely, relatively, or based on their distance from
the playhead.
The Keyframes Editor also lets you adjust the timing of elements that you’ve added from directly
within Fusion.
To edit keyframes, you can click the disclosure control to the left of any animated layer’s name in the
track header, which opens up keyframe tracks for every keyframed parameter within that layer.
To change the position of a keyframe using the toolbar, do one of the following:
— Select a keyframe, and then enter a new frame number in the Time Edit box.
— Choose T Offset from the Time Editor pop-up, select one or more keyframes, and enter a
frame offset.
— Choose T Scale from the Time Editor pop-up, select one or more keyframes, and enter a multiplier
that scales each keyframe’s distance from the current playhead position. For instance, if the playhead is
on frame 10 and the keyframe is on frame 30, entering a TScale value of 2 will position the keyframe on
frame 50. The distance between the playhead and the original keyframe is 20, so (20 x 2) = 40, which is
then added to the playhead position.
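The T Scale arithmetic can be written out directly (a sketch of the math described above, not Fusion’s code):

```python
def t_scale(keyframe_positions, playhead, scale):
    """Scale each keyframe's distance from the playhead by `scale`."""
    return [playhead + (k - playhead) * scale for k in keyframe_positions]
```

Using the manual’s example, a keyframe on frame 30 with the playhead on frame 10 and a TScale of 2 moves to frame 50.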
For example, if you’re animating a motion path, then the “Key Frame” row shows the frame each
keyframe is positioned at, and the “Path1Displacement” row shows the position along the path at
each keyframe. If you change the Key Frame value of any keyframe, you’ll move that keyframe to a
new frame of the Timeline.
— Vertical and horizontal zoom controls let you scale the size of the editor.
— A Zoom to Fit button fits the width of all curves to the current width of the Spline Editor.
— A Zoom to Rect tool lets you draw a rectangle to define an area of the Spline Editor to zoom into.
A Timeline ruler provides a time reference, as well as a place in which you can scrub the playhead.
The Parameter list at the left is where you decide which splines are visible in the Graph view. By
default, the Parameter list shows every parameter of every node in a hierarchical list. Checkboxes
beside each name are used to show or hide the curves for different keyframed parameters. Color
controls let you customize each spline’s tint to make splines easier to see in a crowded situation.
The Graph view that takes up most of this panel shows the animation spline along two axes. The
horizontal axis represents time, and the vertical axis represents the spline’s value. Selected control
points show their values in the edit fields at the bottom of the graph.
Lastly, the toolbar at the bottom of the Spline Editor provides controls to set control point
interpolation, spline looping, or choose Spline editing tools for different purposes.
Invert: Inverts the vertical position of non-animated LUT splines. This does not operate
on animation splines.
Step In: For each keyframe, creates sudden changes in value at the next keyframe
to the right. Similar to a hold keyframe in After Effects® or a static keyframe in the
DaVinci Resolve Color page.
Step Out: Creates sudden changes in value at every keyframe for which there’s a change
in value at the next keyframe to the right. Similar to a hold keyframe in After Effects or a
static keyframe in the DaVinci Resolve Color page.
Reverse: Reverses the horizontal position of selected keyframes in time, so the keyframes
are backward.
Set Loop: Repeats the same pattern of keyframes over and over.
Set Ping Pong: Repeats a reversed set of the selected keyframes and then a duplicate set
of the selected keyframes to create a more seamless pattern of animation.
Set Relative: Repeats the same pattern of selected keyframes but with the values of
each repeated pattern of keyframes being incremented or decremented by the trend of
all keyframes in the selection. This results in a loop of keyframes where the value either
steadily increases or decreases with each subsequent loop.
Click Append: Click once to select this tool and click again to de-select it. This tool
lets you add or adjust keyframes and spline segments (sections of splines between
two keyframes), depending on the keyframe mode you’re in. With Smooth or Linear
keyframes, clicking anywhere above or below a spline segment adds a new keyframe
to the segment at the location where you clicked. With Step In or Step Out keyframes,
clicking anywhere above or below a line segment moves that segment to where
you’ve clicked.
Time Stretch: If you select a range of keyframes, you can turn on the Time Stretch tool
to show a box you can use to squeeze and stretch the entire range of keyframes relative
to one another, to change the overall timing of a sequence of keyframes without losing
the relative timing from one keyframe to the next. Alternatively, you can turn on Time
Stretch and draw a bounding box around the keyframes you want to adjust to create a
time‑stretching boundary that way. Click Time Stretch a second time to turn it off.
Shape Box: Turn on the Shape Box to draw a bounding box around a group of control
points you want to adjust in order to horizontally squish and stretch (using the top/
bottom/left/right handles), corner pin (using the corner handles), move (dragging on the
box boundary), or corner stretch (Command-drag the corner handles).
Show Key Markers: Turning on this control shows keyframes in the top ruler that
correspond to the frame at which each visible control point appears. The colors of these
keyframes correspond to the color of the control points they’re indicating.
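Of the looping tools above, Set Relative is the least obvious; its behavior can be modeled as repeating the keyframe pattern while offsetting each pass by the selection’s overall trend. This is an illustrative sketch under that interpretation, not Fusion’s implementation:

```python
def set_relative(keys, values, repeats):
    """Repeat a keyframe pattern, offsetting each pass by the overall trend.

    `keys` are frame numbers, `values` their values; the trend is the value
    change across the selection, the period its duration in frames.
    """
    period = keys[-1] - keys[0]
    trend = values[-1] - values[0]
    out_k, out_v = list(keys), list(values)
    for r in range(1, repeats + 1):
        # skip the first point of each pass: it coincides with the
        # last point of the previous pass
        out_k += [k + period * r for k in keys[1:]]
        out_v += [v + trend * r for v in values[1:]]
    return out_k, out_v
```

A pattern rising from 0 to 5 over 10 frames repeats as 5 to 10, then 10 to 15, producing the steadily increasing loop the manual describes.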
Thumbnail Timeline in
the Fusion Page
In the Fusion page of DaVinci Resolve, the Thumbnail timeline (hidden by default) can be opened by
clicking the Clips button in the UI toolbar and appears underneath the Node Editor when it’s open.
The Thumbnail timeline shows every clip in the current Timeline, giving you a way to navigate from
one clip to another. Each thumbnail has a pop-up menu for creating and switching among multiple
versions of compositions, and resetting the current composition, when necessary.
The Thumbnail timeline lets you navigate the Timeline and manage versions of compositions.
TIP: If you drag one or more clips from the Media Pool onto a connection line between
two nodes in the Node Editor, the clips are automatically connected to that line via enough
Merge nodes to connect them all.
For more information on using the myriad features of the Media Pool, see Chapter 18, “Adding and
Organizing Media with the Media Pool” in DaVinci Resolve Reference Manual.
To use the Import Media command in the Fusion page Media Pool:
1 With the Fusion page open, right-click anywhere in the Media Pool, and choose Import Media.
2 Use the Import dialog to select one or more clips to import, and click Open. Those clips are added
to the Media Pool of your project.
For more information on importing media using the myriad features of the Media page, see
Chapter 18, “Adding and Organizing Media with the Media Pool” in DaVinci Resolve Reference Manual.
Similar to the Media Pool in DaVinci Resolve, when adding an item to the Fusion bins, a link is created
between the item on disk and the bins. Fusion does not copy the file into its own cache or hard drive
space. The file remains in its original format and in its original location.
Bins Interface
The Bins window is actually a separate application used to save content you may want to reuse at a
later time. The Bins window is separated into two panels. The sidebar on the left is a bin list where
items are placed into categories, while the panel on the right displays the selected bin’s content.
When you select a bin from the bin list, the contents of the folder are displayed in the Contents panel
as thumbnail icons.
A toolbar along the bottom of the bin provides access to organization, playback, and editing controls.
For more information on Bins and the Studio Player, see Chapter 13 in the Fusion Reference Manual.
The Console
The Console is a window in which you can see the error, log, script, and input messages that may
explain something Fusion is trying to do in greater detail. The Console is also where you can read
FusionScript outputs, or input FusionScripts directly. In DaVinci Resolve, the Console is available by
choosing Workspace > Console or choosing View > Console in Fusion Studio. There is also a Console
button in the Fusion Studio User Interface toolbar.
Occasionally the Status bar displays a badge to let you know there’s a message in the Console you
might be interested in.
A toolbar at the top of the Console contains controls governing what the Console shows. At the
top left, the Clear Screen button clears the contents of the Console. The next four buttons toggle
the visibility of error messages, log messages, script messages, and input echoing. Showing only a
particular kind of message can help you find what you’re looking for when you’re under the gun at
3:00 in the morning. The next three buttons let you choose the input script language. Lua 5.1 is the
default and is installed with Fusion. Python 2.x and Python 3.x require that you install the appropriate
Python environment on your computer. Because scripts in the Console are executed immediately, you
can switch between input languages at any time.
At the bottom of the Console is an Entry field. You can type scripting commands here for execution in
the current comp context. Scripts are entered one line at a time, and are executed immediately. For
more information on scripting, see the Fusion Scripting Manual.
Customizing Fusion
This section explains how you can customize Fusion to accommodate whatever workflow
you’re pursuing.
In DaVinci Resolve, configure and resize the panels you want displayed and then:
— Choose Workspace > Layout Presets > Save Layout Presets.
In Fusion Studio, configure and resize the panels you want displayed and then:
— Click the Grab Document Layout button in the Preferences > Layout panel to save the layout for all
new Compositions.
— Click the Grab Program Layout button to remember the size and position of any floating views,
and enable the Create Floating Views checkbox to automatically create the floating windows when
Fusion restarts.
When using multiple monitors, you can choose to have floating panels spread across your displays for
greater flexibility.
The Fusion Hotkey Manager dialog is divided into two sections. The left is where you select the
functional area where you want to assign a keyboard shortcut. The right side displays the keyboard
shortcut if one exists. You can use the New button at the bottom of the dialog to add a new
keyboard shortcut.
There is no practical limit to the number of steps that are undoable (although there may be a limit to
what you can remember).
Getting Clips
into Fusion
This chapter details the various ways you can move clips
into Fusion as you build your compositions.
Contents
Preparing Compositions in the Fusion Page .......... 61
Working on Single Clips in the Fusion Page .......... 61
Turning One or More Clips into Fusion Clips .......... 62
Adding Fusion Composition Generators .......... 64
Creating a Fusion Composition Clip in a Bin .......... 64
Resetting a Fusion Clip .......... 65
Adding Clips from the Media Pool .......... 65
Importing Universal Scene Descriptor (USD) Files .......... 72
The USD Loader .......... 72
Replacing Materials with Imported MaterialX Files .......... 73
Scene Tree Dialog for Object Selection in USD Files .......... 73
Preparing Compositions in Fusion Studio .......... 74
Reading Clips into Fusion Studio .......... 78
The MediaIn node represents the image that’s fed to the Fusion page for further work, and the
MediaOut node represents the final output that’s fed onward to the Color page for grading.
The default node tree that appears when you first open
the Fusion page while the playhead is parked on a clip.
This initial node structure makes it easy to quickly use the Fusion page to create relatively simple
effects using the procedural flexibility of node-based compositing.
For example, if you have a clip that’s an establishing shot, with no camera motion, that needs some
fast paint to cover up a bit of garbage in the background, you can open the Fusion page, add a Paint
node, and use the Clone mode of the Stroke tool to paint it out quickly.
TIP: The resolution of a single clip brought into Fusion via the Edit or Cut page Timeline is
the resolution of the source clip, not the Timeline resolution.
TIP: While you’ll likely want to do all the compositing for a greenscreen style effect in
the Fusion page, it’s also possible to add a keyer, such as the excellent DeltaKeyer node,
between the MediaIn and MediaOut nodes, all by itself. When you pull a key this way,
the alpha channel is added to the MediaOut node, so your clip on the Edit page has
transparency, letting you add a background clip on a lower track of your Edit page Timeline.
The nice thing about creating a Fusion clip is that every superimposed clip in a stack is automatically
connected into a cascading series of Merge nodes that create the desired arrangement of clips. Note
that whatever clips were in the bottom of the stack in the Edit page appear at the top of the Node
Editor in the Fusion page, but the arrangement of background and foreground input connections is
appropriate to recreate the same compositional order.
The initial node tree of the three clips we turned into a Fusion clip.
3 A new clip named “Fusion Composition” appears in the Timeline. It initially displays only black in
the Timeline viewer, since it’s a blank composition with no contents.
4 With the playhead parked over that clip, open the Fusion page. Since this composition is blank,
there’s only a single MediaOut node in the Node Editor. At this point, you can add whatever media,
generators, and other effects you require.
1 Select the bin in the Media Pool where you want to save the Fusion Composition.
2 Right click in an empty area of the bin and choose New Fusion Composition.
3 In the New Fusion Composition clip dialog, enter a Name for the clip, a duration, and a frame rate,
and then click Create.
4 The clip will appear in the bin. To open it in Fusion, do one of the following:
— Double-click the Fusion Composition
— Right-click over the Fusion Composition clip and choose Open in Fusion Page
To learn about creating custom Fusion Transitions that appear in the Effects Library, go to
Chapter 66, “Node Groups, Macros, and Fusion Templates” in the DaVinci Resolve Reference
Manual or Chapter 6 in the Fusion Reference Manual.
When you add a clip by dragging it into an empty area of the Node Editor, it becomes another MediaIn
node, disconnected, that’s ready for you to merge into your current composite in any one of a
variety of ways.
TIP: Dragging a clip from the Media Pool on top of a connection line between two other
nodes in the Node Editor adds that clip as the foreground clip to a Merge node.
When you add additional clips from the Media Pool, those clips become part of the composition,
similar to how Ext Matte nodes you add to the Color page Node Editor become part of that
clip’s grade.
To hear audio from a clip brought in through the Media Pool, do the following:
1 Select the clip in the Node Editor.
2 In the Inspector, click the Audio tab and select the clip name from the Audio Track
drop-down menu.
3 Right-click the speaker icon in the toolbar, then choose the MediaIn for the Media Pool
clip to solo its audio.
You can now use the speaker icon contextual menu to switch back and forth between all the
MediaIn nodes.
TIP: If you connect a mask node without any shapes drawn, that mask outputs full
transparency, with the result that the image output by the MediaIn node is uselessly blank.
If you want to rotoscope over a MediaIn node, first create a disconnected mask node, and
with the mask node selected (exposing its controls in the Inspector) and the MediaIn node
loaded into the viewer, draw your mask. Once the shape you’re drawing has been closed,
you can connect the mask node to the MediaIn node’s input, and you’re good to go.
Image Tab
— Clip Name: Displays the name of that clip.
— Process Mode: Lets you choose whether the clip represented by that node will be processed as
Full Frames, or via one of the specified interlaced methods.
— Auto: Passes along any metadata that might be in the incoming image.
— Space: Allows you to set the color space from a variety of options.
— Auto: Passes along any metadata that might be in the incoming image.
— Space: Allows you to choose a specific setting from a Gamma Space drop-down menu, while a
visual graph lets you see a representation of the gamma setting you’ve selected.
— Log: Similar to the Log-Lin node, this option reveals specific log-encoded gamma profiles so that
you can select the one that matches your content. A visual graph shows a representation of the
log setting you’ve selected. When Cineon is selected from the Log Type menu, additional Lock
RGB, Level, Soft Clip, Film Stock Gamma, Conversion Gamma, and Conversion table options are
presented to finesse the gamma output.
— Remove Curve: Depending on the selected gamma space or on the gamma space found in Auto
mode, the associated gamma curve is removed from the material, effectively converting it to
output in a linear color space.
— Pre-Divide/Post-Multiply: Lets you convert “straight” alpha channels into pre-multiplied alpha
channels, when necessary.
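The conversion these controls perform is standard alpha math: Post-Multiply scales each color channel by alpha, and Pre-Divide reverses it. A minimal sketch of the idea (an illustration of the math, not Fusion's internal code):

```python
def post_multiply(rgb, a):
    """Straight -> premultiplied: scale each color channel by alpha."""
    return tuple(c * a for c in rgb)

def pre_divide(rgb, a):
    """Premultiplied -> straight: undo the scaling (alpha of 0 is left as-is)."""
    return tuple(c / a for c in rgb) if a > 0 else rgb

straight = (1.0, 0.5, 0.25)
premult = post_multiply(straight, 0.5)   # (0.5, 0.25, 0.125)
```

Compositing operations such as Merge generally expect premultiplied input, which is why this conversion matters when a clip carries a straight alpha channel.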
TIP: All content in the DaVinci Resolve Fusion page is processed using 32-bit floating-point
bit depth, regardless of the content’s actual bit depth.
Audio Tab
The Inspector for the MediaIn node contains an Audio tab, where you can choose to solo the audio
from the clip or hear all the audio tracks in the Timeline.
The Audio tab in the MediaIn node is used to select the track for playback, slip the audio timing,
and reset the audio cache.
If the audio is out of sync when playing back in Fusion, the Audio tab’s Sound Offset wheel allows
you to slip the audio in subframe increments. The slipped audio is only modified in the
Fusion page. All other pages retain the original audio placement.
To purge the audio cache after any change to the audio playback:
— Click the Purge Audio Cache button in the Inspector.
The audio will be updated when you next play back the composition.
— Global In/Out: Use this control to specify the position of this node within the composition. For
instance, when the clip is added to the comp from the Media Pool, it is added at frame 0. However,
the MediaIn node from the Edit page Timeline may not start until a much later frame, based on
where it is edited into the Timeline. Use Global In to specify the frame on which the clip starts
so that it aligns with media from the Edit page Timeline. It is easiest to view and change the
alignment of different clips in the comp while viewing the Keyframes Editor.
To slide the clip in time or align it to other clips without changing its length, place the mouse
pointer in the middle of the range control and drag it to the new location, or enter the value
manually in the Global In value control.
If the Global In and Out values are decreased to the point where the range between the In and
Out values is smaller than the number of available frames in the clip, Fusion automatically trims
the clip by adjusting the Clip Time range control. If the Global In/Out values are increased to
the point where the range between the In and Out values is larger than the number of available
frames in the clip, Fusion automatically lengthens the clip by adjusting the Hold First/Last Frame
controls. Extended frames are represented in the range control by coloring the held
frames purple.
— Trim: The Trim range control is used to trim frames from the start or end of a clip. Adjust the Trim
In to remove frames from the start and set Trim Out to specify the last frame of the clip. The values
used here are offsets. A value of 5 in Trim In would use the 5th frame in the sequence as the start,
ignoring the first four frames. A Trim Out value of 95 would stop loading frames after the 95th.
— Hold First Frame/Hold Last Frame: The Hold First Frame and Hold Last Frame controls will hold
the first or last frame of the clip for the specified amount of frames. Held frames are included in a
loop if the footage is looped.
— Reverse: Select this checkbox to reverse the footage so that the last frame is played first and the
first frame is played last.
— Loop: Select this checkbox to loop the footage until the end of the project. Any lengthening of the
clip using Hold First/Last Frame or shortening using Trim In/Out is included in the looped clip.
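Taken together, Global In, Trim, Hold, and Reverse determine which source frame appears at each composition frame. A rough sketch of that lookup, with clamping standing in for the Hold First/Last Frame behavior (an illustration, not Fusion's actual implementation):

```python
def source_frame(comp_frame, global_in, trim_in, trim_out, reverse=False):
    """Map a composition frame to a source-clip frame (conceptual sketch)."""
    t = comp_frame - global_in              # position within the clip's placed range
    t = max(0, min(t, trim_out - trim_in))  # clamping models held first/last frames
    return trim_out - t if reverse else trim_in + t

# Trim In 5 / Trim Out 95, clip placed at comp frame 0:
first = source_frame(0, 0, 5, 95)    # 5: the trimmed-in frame
held = source_frame(200, 0, 5, 95)   # 95: the last frame, held past the clip's end
```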
Importing Universal
Scene Description (USD) Files
The Universal Scene Description (USD) framework is a set of open standards for describing, composing,
and simulating a 3D environment. USD is more than just a file format: it’s an open source
3D scene description used for creating and sharing 3D content across various applications.
DaVinci Resolve and Fusion can import USD (.usdc, .usdz, and .usda) 3D information including
geometry, lighting, cameras, materials, and animation. A new collection of USD tools has been added
to Fusion, allowing users to manipulate, re-light, and render these USD files.
Users can select, filter, and isolate elements in a USD asset to focus on logical
components and make targeted adjustments.
Preparing Compositions
in Fusion Studio
The next few sections in this chapter cover preparing a project and adding clips into a composition
when using Fusion Studio. The term composition, or comp, refers to the Fusion project
file. Opening the Fusion Studio application creates a new, empty composition. A
composition can contain single frames, image sequences, or movie files at various
resolutions and bit depths. Knowing which files you can load in, how to set up a composition to handle
them, and finally, reading those files in are the first steps in beginning to composite.
If the composition has unsaved changes, a dialog box appears allowing you to save before closing.
TIP: Compositions that have unsaved changes will display an asterisk (*) next to the
composition’s name in the Fusion Studio title bar and in the composition’s tab.
Auto Save
Auto save automatically saves the composition to a temporary file at preset intervals. Auto saves help
to protect you from loss of work due to power loss, software issues, or accidental closure.
To enable auto save for new compositions, choose Fusion Studio > Preferences, and then locate Global
> General > Auto Save in the Preferences dialog.
An auto-save file does not overwrite the current composition in the file system. A file with the same
name is created in the same folder as the composition but with the extension .autosave instead of
.comp. Unsaved compositions will place the autosave file in the default folder specified by the Comp:
path in the Paths panel of the Global Preferences.
If an auto-save file is present when Fusion Studio loads a composition, a dialog will appear asking to
load the auto-saved or original version of the composition.
A .comp extension is added to the end of the filename. Only the node tree created in the Fusion page
is exported. Clips not added to the Node Editor will not appear in the Fusion Studio bins. ResolveFX
added to the comp will also not translate from the Fusion page to Fusion Studio.
MediaIn nodes from DaVinci Resolve are automatically converted to Loader nodes, and if the file path
remains identical, the media is automatically relinked.
The return trip can also be performed, saving a composition file from Fusion Studio and importing it
into the Fusion page within DaVinci Resolve.
To import a composition from Fusion Studio into the Fusion page within DaVinci Resolve:
1 From within Fusion Studio, open the composition you want to move into the Fusion page.
2 From within DaVinci Resolve, switch to the Fusion page with an empty composition.
The composition you import will completely replace the existing composition in the Fusion page
Node Editor.
3 Choose File > Import Fusion Composition.
4 In the Open dialog, navigate to the Fusion comp and click Open.
5 The new comp is loaded into the Node Editor, replacing the previously existing composition.
TIP: To keep an existing comp in the Fusion page and merge a new comp from Fusion
Studio, open Fusion Studio, select all the nodes in the Node Editor, and press Command-C
to copy the selected nodes. Then, open DaVinci Resolve and switch to the Fusion page with the
composition you want, click in an empty location in the Node Editor, and press Command-V
to paste the Fusion Studio nodes. Proceed to connect the pasted node tree into the existing
one using a Merge or Merge 3D node.
When you open Fusion Studio, an empty composition is created. The first thing you do when
starting on a new composition is to set the preferences to match the intended final output format.
The preferences are organized into separate groups: one for global preferences, and one for the
preferences of the currently opened composition.
Although the final output resolution is determined in the Node Editor, the Frame Format preferences
determine the default resolution and aspect ratio used for new Creator tools (e.g., text,
backgrounds, fractals), as well as the frame rate used for playback.
If the same frame format is used day after day, the global Frame Format preferences should match the
most commonly used footage. For example, on a project where the majority of the source content will
be 1080p high definition, it makes sense to set up the global preferences to match the frame format of
the HD source content you typically use.
To set up the default Frame Format for new compositions, do the following:
1 Choose Fusion Studio > Preferences.
2 Click the Global and Default Settings disclosure triangle in the sidebar to open the Globals group.
3 Select the Frame Format category to display its options.
When you set options in the Global Frame Format category, they determine the default frame format
for any new composition you create. They do not affect existing compositions or the composition
currently open. If you want to make changes to existing compositions, you must open the comp. You
can then select the Frame Format controls listed under the comp’s name in the sidebar.
Source media is read into a comp using a Loader tool. Although there are other tools within Fusion
Studio that you can use to generate images like gradients, fractals, or text, each still image, image
sequence, or movie file must be added to your comp using a Loader tool.
If multiple files are dragged into the Node Editor, a separate Loader is added for each file. However,
if you drag a single frame from an image sequence, the entire series of the image sequence is read
into the comp using one Loader, as long as the numbers are sequential.
A Loader represents any clip, image file, or graphic that you bring into Fusion. However, other types of
media can also be brought into Fusion Studio. Photoshop PSD files, SVG splines, and 3D models in the
Alembic, FBX, and OBJ format can be imported using the File > Import menu.
TIP: Using File > Import > Footage creates a new composition along with a Loader node for
the footage. The selected media is automatically used for the name of the composition.
For more information about the Loader node, see Chapter 44, “I/O Nodes,” in the Fusion
Reference Manual.
At the top of the Inspector are the Global In and Global Out settings. This range slider determines
when in your composition the clip begins and ends. It is the equivalent of sliding a clip along a track in
a Timeline. The Hold First Frame and Hold Last Frame dials at the bottom of the Inspector allow you to
freeze frames in case the clip is shorter than the composition’s global time.
Below the filename in the Inspector is a Trim In and Out range slider. This range slider determines
the start frame and end frame of the clip. Dragging the Trim In will remove frames from the start
of the clip, and dragging the Trim Out will remove frames from the end of the clip.
Although you may remove frames from the start of a clip, the Global In always determines
where in time the clip begins in your comp. For instance, if the Loader has a Global In starting on
frame 0, and you trim the clip to start on frame 10, then frame 10 of the source clip will appear at
the comp’s starting point on frame 0.
Instead of using the Inspector to adjust timing, it is visually more obvious if you use the Keyframes
Editor. For more information on the Keyframes Editor and adjusting a clip’s time, see Chapter 9,
“Animating in Fusion’s Keyframes Editor,” in the Fusion Reference Manual.
TIP: If you connect a Mask node without any shapes drawn, that mask outputs full
transparency, so the result is that the image output by the MediaIn node is blank. If you
want to rotoscope over a MediaIn node, first create a disconnected Mask node, and with
the Mask node selected and the MediaIn node loaded into the viewer, draw your mask.
Once the shape you’re drawing has been closed, connect the Mask node to the MediaIn
node’s input, and you’re good to go.
— Generate smaller media files and write them to disk using Optimized Media in DaVinci Resolve
— Render out proxy files using Saver nodes in Fusion Studio
Both applications also allow you to generate proxies on-the-fly without rendering new files to
disk using the Proxy and Auto Proxy options in the transport controls area.
To enable the Proxy and Auto Proxy options, you can do one of two things, depending on the version
of Fusion you are using:
— In the Fusion page, right-click the empty area behind the transport controls to enable the
Proxy option.
— In Fusion Studio, click the Proxy (Prx) button in the transport area to enable the
usage of proxies.
The Proxy option reduces the resolution of the images as you view and work with them. Instead of
displaying every pixel, the Proxy option processes one out of every x pixels interactively. In Fusion
Studio, the value of x is determined by right-clicking the Prx button and selecting a proxy ratio from
the drop-down menu. For instance, choosing 5 from the menu sets the ratio at 5:1. In the Fusion
page, the proxy ratio is set by choosing Fusion > Fusion Settings and setting the Proxy slider in the
General panel.
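Processing "one out of every x pixels" is plain stride sampling. Conceptually (a sketch of the idea, not how Fusion is implemented):

```python
def proxy_sample(image, ratio):
    """Keep one pixel out of every `ratio` in each dimension."""
    return [row[::ratio] for row in image[::ratio]]

# A toy 10x10 "image" whose pixel values encode their position:
full = [[x + 10 * y for x in range(10)] for y in range(10)]
small = proxy_sample(full, 5)   # a 5:1 proxy ratio turns 10x10 into 2x2
```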
The Auto Proxy button enables Fusion to interactively degrade the image only while adjustments are
made. The image returns to normal resolution when the control is released. Similar to the Prx button
in Fusion Studio, you can set the Auto Proxy ratio by right-clicking the APrx button and choosing a
ratio from the menu.
When a Loader node is selected, the Inspector includes a Proxy Filename field where you can specify
a clip that will be loaded when the Proxy mode is enabled. This allows smaller versions of the image
to be loaded to speed up file I/O from disk and processing. This is particularly useful when working
with high resolution files like EXR that might be stored on a remote server. Lower resolution versions
of the elements can be stored locally, reducing network bandwidth, interactive render times, and
memory usage.
The proxy clip that you create must have the same number of frames as the original clip, and if using
image sequences, the sequence numbers for the clip must start and end on the same frame numbers.
If the proxies are the same format as the original files, the proxies will use the same format options in
the Inspector as the originals.
TIP: Even though the proxies are processed at a smaller size than the originals, the
viewers scale the images so that they are displayed at the original resolution.
DPX
The Format tab in Fusion Studio’s Loader node for DPX files is used to convert image data from
logarithmic to linear. These settings are often left in bypass mode, and the Log to Linear conversion is
handled using a Cineon Log node.
OpenEXR
The OpenEXR format provides a compact and flexible high dynamic range (float) format. The format
supports a variety of extra non-RGBA channels and metadata. These channels can be viewed and
enabled in the Format tab of the Inspector.
— Using the Media Pool in DaVinci Resolve’s Fusion page: Any PSD file added to the Media Pool
in DaVinci Resolve can be accessed from the Fusion page. After dragging the PSD file from the
Media Pool into the Node Editor, the image appears as a MediaIn node. From there, you can select
which layer to use from the PSD file from the Layer drop-down menu in the Inspector.
— Using a Loader node in Fusion Studio: This lets you read in Photoshop PSD files with the
ability to select the layer in the PSD file that is used in the comp. Fusion can load any one of the
individual layers stored in the PSD file, or the completed image with all layers. Transformation and
adjustment layers are not supported.
To load all layers individually from a PSD file, with appropriate blend modes, do one of the following:
— In DaVinci Resolve, switch to the Fusion page and choose Fusion > Import > PSD.
— In Fusion Studio, choose File > Import > PSD.
Using either of the methods above creates a node tree where each PSD layer is represented by a node
and one or more Merge nodes are used to combine the layers. The Merge nodes are set to the Apply
mode used in the PSD file and automatically named based on the Apply mode setting.
You can either load the audio file independent of any nodes, or load an audio file into the Saver node.
The benefit of using a Saver node to load the audio is that you can view the audio waveforms in the
Keyframes Editor.
When you want to find the precise location of an audio beat, transient, or cue, you can slowly drag
over the audio waveform to hear the audio.
Rendering Using
Saver Nodes
This chapter covers how to render compositions using Saver nodes in Fusion Studio and the
Fusion page in DaVinci Resolve. It also covers how to render using multiple computers over a network
when using Fusion Studio.
Contents
Rendering Overview 86
Rendering in the Fusion Page 86
Using Mapped Drives 104
Installing All Fonts on Render Nodes 104
A single Saver node is added to the end of a node tree to render the final composite.
You can also use multiple Saver nodes stemming from the same node to create several
output formats. The example below uses three Saver nodes to export different formats of the
same shot.
Multiple Saver nodes can be added to create different formats for output.
Adding a Saver node to a node tree automatically opens a Save dialog where you name the file
and navigate to where the exported file is saved. You can then use the Inspector to configure the
output format.
For more information on the Saver node, see Chapter 44, “I/O Nodes,” in the Fusion Reference Manual.
If you decide to output an image sequence, a four-digit frame number is automatically added before
the filename extension. For example, naming your file image_name.exr results in files named
image_name0000.exr, image_name0001.exr, and so on. You can specify the frame padding by adding
several zeroes to indicate the number of digits. For example, entering the filename
image_name_000.exr results in a sequence of images named image_name_000.exr,
image_name_001.exr, image_name_002.exr, and so on.
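The padding rule can be mimicked with simple zero-padding. The helper below is hypothetical, but the names mirror the examples above:

```python
def sequence_filename(base, frame, ext="exr", digits=4):
    """Insert a zero-padded frame number before the extension."""
    return f"{base}{frame:0{digits}d}.{ext}"

# Default four-digit padding:
first_two = [sequence_filename("image_name", f) for f in (0, 1)]
# Entering image_name_000.exr requests three-digit padding:
short_pad = sequence_filename("image_name_", 2, digits=3)   # image_name_002.exr
```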
NOTE: The starting frame number always uses the Time Ruler start frame number.
The Render Settings dialog opens providing options for the rendered output.
Ensure that the frame range and other parameters are correct and click Start Render.
Using the Saver node is useful for optimizing extremely complex and processor-intensive
compositions. For example, you can render out a branch of the node tree that no
longer requires frequent adjustment to OpenEXR via a Saver node, and then reimport the
result to take the place of the original branch of nodes in order to improve the performance
of your composition.
Alternatively, you can render out multi-channel mattes or EXR images containing Arbitrary
Output Variables (AOVs) to bring into other applications.
— HiQ: When enabled, this setting renders in full image quality. If you need to see what the final
output of a node would look like, then you would enable the HiQ setting. If you are producing a
rough preview to test animation, you can save yourself time by disabling this setting.
— MB: The MB in this setting stands for Motion Blur. When enabled, this setting renders with motion
blur applied if any node is set to produce motion blur. If you are generating a rough preview and
you aren’t concerned with the motion blur for animated elements, then you can save yourself time
by disabling this setting.
— Some: When Some is enabled, only the nodes specifically needed to produce the image of the
node you’re previewing are rendered.
Size
When the Configurations section is set to Preview, you can use the Size options to render out frame
sizes lower than full resolution. This is helpful when using the Render dialog to create proxies or just
creating a smaller file size.
Network
The Network setting controls the distribution of rendering to multiple computers. For more
information, see the network rendering section in this chapter.
Shoot On
Again, this option is only available when Configurations is set to Preview. The Shoot On setting allows
you to skip frames when rendering. You can choose to render every second, third, or fourth frame
to save render time and get faster feedback. You can use the Step parameter to determine the interval at
which frames are rendered.
Frame Range
Regardless of whether Configurations is set to Final or Preview, this option defaults to the current
Render In/Out Range set in the Time Ruler to determine the start and end frames for rendering. You
can modify the range to render more or fewer frames.
Configurations
When set to Final, the Render Settings are set to deliver the highest quality results, and you cannot
modify most of the options in this dialog. When set to Preview, you can set the options to gain faster
rendering performance. Once you’ve created a useful preview configuration, you can save it for later
use by clicking the Add button, giving it a name, and clicking OK.
For more information on rendering RAM previews, see Chapter 7, “Using Viewers,” in the Fusion
Reference Manual.
TIP: Option-Shift-dragging a node into a viewer skips the Render dialog and uses the
previously used settings.
Setting Up Network
Rendering in Fusion Studio
Fusion Studio is capable of distributing a variety of rendering tasks to an unlimited number of
computers on a network, allowing multiple computers to assist with creating network-rendered
previews, disk caches, and final renders.
Using the Render Settings dialog or the built-in Render Manager, you can submit compositions to be
rendered by other copies of Fusion Studio, as well as to one or more Fusion Render nodes. Rendering
can also be controlled through the command line for integration with third-party render managers
like Deadline, Rush, and Smedge.
Render nodes are computers that do not have the full Fusion application installed but do have Fusion
Render node software installed. The Render node software is not installed by default when you install
Fusion Studio, but it can be installed at any time using the Fusion Render node Installer. The installer is
located in the Blackmagic Fusion Studio installer.dmg on macOS and the Blackmagic Fusion Studio.zip
on Linux and Windows. Fusion Studio is licensed for an unlimited number of Render nodes, so you can
install the Render node software on as many macOS, Windows, and Linux computers as you
want involved in network rendering.
By default, the Render node application will be added to the Start Menu on Windows under
Blackmagic Design. On macOS, it is added to the menu bar, and on Linux it appears in the app
launcher. Each time you log in to the computer, the Render node application will run automatically.
Multi-License Dongles
Using a multi-license dongle, you can license 10 copies of Fusion Studio by connecting the dongle to
any computer on the same subnet. Since these licenses “float” over a network, Fusion Studio does not
have to be running on the same computer where the dongle is connected. As long as Fusion Studio is
on the same subnet, it can automatically find the license server and check out an available license.
Multi-seat dongles can be combined to tailor the number of Fusion seats in a larger facility.
For example, three dongles each licensed for 10 Fusion Studios would serve up 30 licenses. This
also allows for redundancy. For instance, in the example above, three computers can act as license
servers. If the first server fails for some reason, Fusion Studio will automatically try the next server.
Alternatively, multiple dongles can also be plugged into a single computer.
You will need your network administrator to set firewall rules allowing the Fusion Server, FusionScript,
and the Fusion Render node applications to communicate and confirm licensing with the computer
that has the Fusion Studio dongle.
If for some reason you remove a dongle or the network drops out, the licenses of any connected
Fusion Studio application will also drop. Upon losing its license, Fusion Studio will start searching for
another license, locally or on a different machine. If no license is found, Fusion pauses rendering and
displays a dialog with options to retry the search or autosave the comp and quit. Render nodes only
check for a license on the network once during startup, so they are not affected by removing the
dongle or network issues.
Instead of looking in a single location for the Fusion Server, you can set up multiple license servers
separated by semicolons. For instance:
fu:SetPrefs("Global.EnvironmentVars.FUSION_LICENSE_SERVER", "192.168.1.12;192.168.10.55;*")
You can also use the environment variable to scan for license servers within a subnet, for example,
"bobs-mac.local;10.0.0.23;*;license.mystudio.com". Including an asterisk (*) indicates a broadcast
search of the local subnet.
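The FUSION_LICENSE_SERVER value is simply an ordered, semicolon-separated list. A tiny sketch of how such a value breaks down (illustrative only; Fusion's own parsing may differ):

```python
def parse_license_servers(value):
    """Split a FUSION_LICENSE_SERVER string into an ordered list of servers.
    A "*" entry means: broadcast-search the local subnet."""
    return [entry.strip() for entry in value.split(";") if entry.strip()]

servers = parse_license_servers("bobs-mac.local;10.0.0.23;*;license.mystudio.com")
# Servers are tried in order; "*" falls back to a subnet broadcast search.
```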
Like most environment variables, you can set the license server in the Global Preferences via the
Prefs text file:
fu:SetPrefs("Global.EnvironmentVars.FUSION_LICENSE_SERVER", "10.0.0.23;*")
fu:SavePrefs()
For more information on using environment variables, see Chapter 15, “Preferences,” in the Fusion
Reference Manual.
NOTE: The use of straight quotes (" ") in the environment variables above is intentional;
they should not be replaced with typographer’s, or curly, quotes (“ ”).
— The Render Master manages the list of compositions to be rendered (the queue) and allocates
frames to Render nodes for rendering. Metaphorically speaking, the Render Master is the traffic
cop of this process.
— The Render nodes are the main computers used for the rendering process. All computers involved
in network rendering must be on the same network subnet, and they all must have access to the
various files (including Fonts and third-party plugins) used to create the composite. The path to
the files must be the same for each computer involved in rendering.
Any copy of Fusion can act as a Render Master by setting up the Fusion Network Preferences.
Acting as a Render Master has no significant impact on render performance, as the system
resources it consumes are insignificant. However, there are specific steps you must take to
make one of your computers a Render Master.
1 Install a copy of Fusion Studio on the computer you want to be the Render Master.
2 In Fusion Studio, choose Fusion Studio > Preferences on macOS or File > Preferences on
Windows and Linux.
3 In the Preferences dialog, select the Global > Network Preferences panel.
4 Enter the name of the Render Master in the Name field and enter the IP address.
5 Enable the Make This Machine a Render Master checkbox.
6 If you want to use this computer as part of the render farm, enable the Allow This Machine to Be
Used as a Render Node checkbox as well.
Once a computer is enabled to act as the master, use the Render Manager to add the Render nodes it
will manage. The Render Manager dialog is described in detail later in this chapter.
The Render Manager is used to reorder, add, and remove compositions from a render queue.
Right-clicking in the Render node list allows you to add Render nodes by entering the Render node’s
name or IP address. You can also choose Scan to have the Render Manager look for Render nodes on
the local network.
Scanning looks through all IP addresses on the network subnet to determine whether any other
computers in the local network are actively responding on the port Fusion uses for network
rendering. A copy of the Fusion Render node must be running on the remote computer in order for it
to be detected by the scan.
In the Add Render Node dialog that opens, enter the name or the IP address of the remote Render
node. The Render Manager will attempt to resolve names into IP addresses and IP addresses into
names automatically. You can use this method to add Render nodes to the list when they are not
currently available on the network.
Submitting Comps to
Network Render
To submit a comp to render on the network, you can use the Render Manager, the Render Settings
dialog, or a third-party render farm application. The Render Settings dialog is quicker, while the
Render Manager and third-party render farm applications can provide more feedback and control
over the process.
NOTE: Distributed network rendering works for image sequences like EXR, TIFF, and DPX.
You cannot use network rendering for QuickTime, H.264, ProRes, or MXF files.
New Render nodes are automatically added to All, but you can assign them to other groups as well.
When a render is submitted to the network, it is automatically sent to the All group. However, you can
choose to submit it to other groups in the list.
Continuing with the group example above, five Render nodes are contained in the All group, and
two of those Render nodes are also in the Hi-Performance group. If you submit a render to the
Hi-Performance group, only two of the computers on the network are used for rendering. If a
composition is then submitted to the All group, the remaining three machines will start rendering the
new composition. Once the two Render nodes in the Hi-Performance group complete the first job,
they join the render in progress on the All group.
Groups are optional and do not have to be used. However, groups can make managing large networks
of Render nodes easier and more efficient.
When a Render node is a member of multiple groups, the order of the groups is important because
the order defines the priority for that Render node.
For example, if groups are assigned to a Render node as All, Hi-Performance, then renders submitted
to the All group take priority. Any renders in progress that were submitted to the Hi-Performance
group will be overridden. If the order is changed to Hi-Performance, All, then the priority is reversed.
There are two modes for the Render Log: a Verbose mode and a Brief mode. Verbose mode logs
all events from the Render Manager, while Brief mode logs only which frames are assigned to each
Render node and when they are completed.
Keep in mind that using a third-party render manager will prevent the use of some of Fusion’s network
rendering features, such as the ability to create network rendered Flipbook Previews and disk caches.
This would start up, render frames from 101 to 110, and then quit.
Command Description
-quit Causes the Render Node to quit after the render is complete.
TIP: An X11 virtual frame buffer is required to make a headless Linux command line
interface work.
Preparing Compositions
for Network Rendering
The way you construct a composition in Fusion Studio can help or hinder network rendering.
The media you read in, where plugins are installed, and the mix of operating systems on your
networked computers all play a part in how smoothly your network rendering goes. Your setup must
include several essential parts before network rendering will work:
— License dongle, Render Master, and Render nodes must be on the same local network (subnet).
— Fusion Server must be running as a background service on the same
computer where the dongle is installed.
— All source media from the comp should be placed on a network volume.
— The network volume must be mounted on each Render node.
— Loaders must point to the media on the mounted volumes.
— Savers must write to a drive that is mounted on each Render node.
— The Fusion comp must be saved to a volume that is mounted on each Render node.
— All Render Nodes and Render Masters need read and write access to any volumes specified as a
source media location or render destination.
— Make sure all fonts used in the comp for Text+ and 3D text nodes are installed on all the Render nodes.
— Make sure all Render nodes have third-party OFX plugins installed if any are used in the comp.
For example, if you open a composition located at c:\compositions\test1.comp in Fusion Studio and
add the composition to the network rendering queue, the Render Manager sends a message to each
Render node to load the composition and render it. The problem is that each computer is likely to
have its own C:\ drive that does not contain the comp you created. In most cases, the Render nodes
will be unable to load the composition, causing the render to fail.
Path Maps located in Fusion Preferences are virtual paths used to replace segments of file paths. They
can change the absolute paths used by Loader and Saver nodes to relative paths. There are a number
of Path Maps already in Fusion, but you can also create your own. The most common path to use is the
Comp:\ path.
Comp:\ is a shortcut for the folder where the actual composition is saved. So, using Comp:\ in a Loader
makes the path to the media file relative, based on the saved location of the comp file. As long as
all your source media is stored in the same folder or subfolder as your comp file, Fusion locates the
media regardless of the actual hard drive name.
Here’s an example of a file structure that enables you to use relative file references.
The composition is stored in the following file path:
Volumes\Project\Shot0810\Fusion\Shot0810.comp
The source media is stored in a subfolder of the same location:
Volumes\Project\Shot0810\Fusion\Greenscreen\0810Green_0000.exr
File paths can use relative paths based on the location of the saved comp file.
In this situation, using the Comp:\ path means your media location starts from your comp file’s
location. The relative path set in the Loader node would then be:
Comp:\Greenscreen\0810Green_0000.exr
If the media is instead stored in a Footage folder alongside the Fusion folder:
Volumes\Project\Shot0810\Footage\Greenscreen\0810Green_0000.exr
The relative path set in the Loader node would then be:
Comp:\..\Footage\Greenscreen\0810Green_0000.exr
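The path-map substitution itself is just a prefix replacement. The following sketch illustrates the concept using the folder names from the example above; it is a conceptual illustration, not Fusion's implementation, and the dictionary contents are assumptions for this example.

```python
import os

# Illustrative path maps, modeled as a simple dictionary.
# "Comp:" stands in for the folder containing the saved .comp file.
path_maps = {
    "Comp:": "/Volumes/Project/Shot0810/Fusion",
    "Temp:": "/tmp",
}

def expand_path(path: str) -> str:
    """Replace a leading path-map prefix with its mapped folder."""
    for prefix, target in path_maps.items():
        if path.startswith(prefix):
            # Strip the prefix and any leading separators, normalize slashes.
            rest = path[len(prefix):].lstrip("\\/").replace("\\", "/")
            return os.path.normpath(os.path.join(target, rest))
    return path  # No map matched; treat the path as-is.

# Media stored next to the comp file:
print(expand_path(r"Comp:\Greenscreen\0810Green_0000.exr"))
# → /Volumes/Project/Shot0810/Fusion/Greenscreen/0810Green_0000.exr

# Media stored in a sibling Footage folder, using ..\ to step up one level:
print(expand_path(r"Comp:\..\Footage\Greenscreen\0810Green_0000.exr"))
# → /Volumes/Project/Shot0810/Footage/Greenscreen/0810Green_0000.exr
```

Because the expansion happens on each machine, every Render node resolves Comp:\ against wherever the comp file is mounted locally, which is what makes the comp portable across the render farm.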
TIP: Some Path Maps are not set up on a Fusion Render Node automatically. For instance,
you must manually add an entry for macros if you are using macros in your comp.
Mapped drives assign a letter of the alphabet to a shared network resource. Windows assigns the
new drive letter to the shared folder, which can then be accessed just like any other drive connected
to your computer. Your shared drives must use the same drive letters on all Render nodes. For
example, if your media is on drive Z, then the network drive must appear as the letter Z on each of
the Render nodes.
On macOS, you can map a network drive using Connect to Server from the Go menu. Entering the
smb:// path to the drive will mount it on the computer. Using Accounts > Login Items, you can have
the network drive auto-mount after a reboot as well.
Flipbook Previews
Fusion Studio is able to use Render nodes to accelerate the production of Flipbook Previews, allowing
for lightning-fast previews. Frames for previews that are not network rendered are rendered directly
into memory. Select the Use Network checkbox to have the Render nodes render the preview frames
to the folder set in Preferences Global > Path > Preview Renders. This folder must be accessible to all
Render nodes that will participate in the network render. The default value is Temp\, which is a virtual
path pointing to the system's default temp folder; this must be changed before network rendered
previews can function. Once the preview render is completed, the frames produced by each Render
node are spooled into memory on the local workstation. As each frame is copied into memory, it is
deleted from disk.
Disk Cache
Right-clicking a node in the Node Editor and choosing Cache to Disk opens a dialog used to create the
disk cache. If you enabled the Use Network checkbox and click the Pre-Render button to submit the
disk cache, the network Render nodes are used to accelerate the creation of the disk cache.
Fusion Studio includes a variety of measures to protect the queue and ensure that the render
continues even under some of the worst conditions.
When the Render node becomes available for rendering again, it signals the Render Master, and new
frames will be assigned to that Render node.
This is why it is important to set the Render Master in the network preferences of the Render nodes. If
the master is not set, the Render node will not know what master to contact when it becomes available.
In the Fusion Render Node Preferences, select the Tweaks panel. Using the Last Render Node Restart
Timeout field, you can enter the number of seconds Fusion waits after the last Render node goes
offline before aborting that queue and waiting for direct intervention.
Fusion Server monitors the Render node to ensure that the Render node is still running during
a render. It consumes almost no CPU cycles and very little RAM. If the monitored Render node
disappears from the system’s process list without issuing a proper shutdown signal, as can happen
after a crash, the Fusion Server relaunches the Render node, allowing it to rejoin the render.
Fusion Server will only detect situations where the Render node has exited abnormally. If the Render
node is still in the process list but has become unresponsive for some reason, the Fusion Server
cannot detect the problem. Hung processes like this are detected and handled by frame timeouts, as
described below.
Frame Timeouts
Frame timeouts are a fail-safe method of canceling a Render node’s render if a frame takes longer
than the specified time (the default is 60 minutes). The frame timeout ensures that an overnight
render will continue if a composition hangs or begins swapping excessively and fails to complete its
assigned frame.
The timeout is set per composition in the queue. To change the timeout value for a composition from
the default of 60 minutes, right-click on the composition in the Render Manager’s queue list and select
Set Frame Timeout from the contextual menu.
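The bookkeeping behind a frame timeout amounts to tracking when each frame was assigned and comparing against a per-composition limit. The sketch below illustrates the idea; it is a conceptual illustration, not Fusion's code, and the class and function names are invented for this example.

```python
import time

FRAME_TIMEOUT = 60 * 60  # Default timeout: 60 minutes, in seconds.

class FrameAssignment:
    """Tracks one frame assigned to a Render node."""
    def __init__(self, frame: int, node: str, started_at: float):
        self.frame = frame
        self.node = node
        self.started_at = started_at

def find_timed_out(assignments, now, timeout=FRAME_TIMEOUT):
    """Return assignments whose render has exceeded the timeout, so the
    manager can cancel them and reassign the frames to other nodes."""
    return [a for a in assignments if now - a.started_at > timeout]

# Example: frame 101 has been rendering for two hours, frame 102 for five minutes.
now = time.time()
jobs = [FrameAssignment(101, "RenderNodeA", now - 7200),
        FrameAssignment(102, "RenderNodeB", now - 300)]
stale = find_timed_out(jobs, now)
print([a.frame for a in stale])  # → [101]
```

Only the frame that exceeded the limit is canceled and reassigned; the node rendering frame 102 is left alone, which is why a single hung composition doesn't stall the rest of an overnight queue.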
Heartbeats
The Render Master regularly sends out heartbeat signals to each node, awaiting the node’s reply.
A heartbeat is basically a message from the manager to the Render node asking if the node is still
responsive and healthy. If the Render node fails to respond to several consecutive heartbeats, Fusion
will assume the Render node is no longer available. The frames assigned to that Render node will be
reassigned to other Render nodes in the list.
The number of heartbeats in a row that must be missed before a Render node is removed from the list
by the manager, as well as the interval of time between heartbeats, can be configured in the Network
Preferences panel of the master. The default settings for these options are fine for 90% of cases.
If the compositions that are rendered tend to use more memory than is physically installed, this will
cause swapping of memory to disk. It may be preferable to increase these two settings somewhat to
compensate for the sluggish response time until more RAM can be added to the Render node.
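The heartbeat bookkeeping described above amounts to counting consecutive missed replies per node and resetting the count on any successful reply. This minimal sketch illustrates the concept; the threshold value and names are illustrative assumptions, not Fusion's actual defaults or code.

```python
MAX_MISSED = 3  # Consecutive missed heartbeats before a node is dropped (illustrative value).

class NodeStatus:
    def __init__(self, name: str):
        self.name = name
        self.missed = 0  # Consecutive heartbeats without a reply.

def record_heartbeat(node: NodeStatus, replied: bool) -> bool:
    """Update the miss counter; return True if the node should be removed
    from the active list and its frames reassigned."""
    if replied:
        node.missed = 0  # Any reply resets the counter.
        return False
    node.missed += 1
    return node.missed >= MAX_MISSED

node = NodeStatus("RenderNode01")
replies = [True, False, False, False]  # One reply, then three misses in a row.
dropped = [record_heartbeat(node, r) for r in replies]
print(dropped)  # → [False, False, False, True]
```

Because a single reply resets the counter, a node on a sluggish or swapping machine that answers intermittently is never dropped; raising the threshold or the interval, as suggested above, simply gives such machines more chances to answer.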
To access preferences for a node, right-click on the icon in the Windows Notification area or from the
macOS menu bar and choose Preferences. In the Preferences dialog, select the Memory panel.
Normal values are 2 or 3, although machines with a lot of memory may benefit from higher values,
whereas machines with less memory may require the value to be 1.
Simultaneous Branching
Enable this option to render every layer in parallel. This can offer substantial gains in throughput
but may also use considerably more memory, especially if many layers are used in the composition.
Machines with limited memory may need to have Simultaneous Branching disabled when rendering
compositions with many layers.
Time Stretching
Compositions using the Time Stretcher and Time Speed tools may encounter difficulties when
rendered over the network. Speeding up or slowing down compositions and clips requires fetching
multiple frames before and after the current frame that is being rendered, resulting in increased I/O
to the file server. This may worsen bottlenecks over the network and lead to inefficient rendering. If
the composition uses the Time Stretcher or Time Speed tools, make certain that the network is up to
the load or pre-render that part of the composition before network rendering.
Linear Tools
Certain tools cannot be network rendered properly. Particle systems from third-party vendors, such
as GenArts’s Smoke and Rain, and the Fusion Trails node cannot render properly over the network.
These tools generally store the previously rendered result and use it as part of the next frame’s
render, so every frame is dependent on the one rendered before it. This data is local to the tool, so
these tools do not render correctly over a network.
NOTE: The above does not apply to network rendered previews, which are previews
created over the network that employ spooling to allow multi-frame formats to render
successfully. Only final renders are affected by this limitation.
Troubleshooting
There are some common pitfalls when rendering across a network. Virtually all problems with network
rendering have to do with path names or plugins. Return to the “Preparing Compositions for Network
Rendering” section in this chapter to review some of the essential setup requirements. Verify that all
Render nodes can load the compositions and the media, and that all Render nodes have installed the
plugins used in the composition.
If some difficulties persist, contact Blackmagic Design’s technical support using the support section
on the Blackmagic Design website. Save a copy of the render.log file to send to technical support.
— No Render Nodes Could Be Found: On the Preferences Network tab, make sure that there is at
least one Render node available, running and enabled. If all Render nodes are listed as Offline
when they are not, check the network.
— The Composition Could Not Be Loaded: Some Render nodes may not be able to load a
composition while others can. This could be because the Render node could not find the
composition (check that the path name of the composition is valid for that Render node) or
because the composition uses plugins that the Render node does not recognize.
— The Render Nodes Stop Responding: If a network link fails, or a Render node goes down
for some reason, the Render node will be removed from the active list and its frames will be
reassigned. If no more Render nodes are available, the composition will fail after a short delay
(configurable in network preferences). If this happens, check the render log for clues as to which
Render nodes failed and why.
— The Render Nodes Failed to Render a Frame: Sometimes a Render node simply cannot render
a particular frame. This could be because the Render node could not find all the source frames it
needed, because the disk it was saving to became full, or for any other reason that would normally
prevent Fusion from rendering a frame. In this case, the Render Manager will attempt to reassign
that failed frame to a different Render node. If no Render node can render the frame, the render will
fail. Try manually rendering that frame on a single machine and observe what happens.
— Check the Render Nodes: Fusion’s Render Manager incorporates a number of methods to
ensure the reliability of network renders. The Render Manager periodically sends out heartbeat
signals to detect network or machine failures. When a failure is detected, the failed Render node’s
outstanding frames are reassigned to other Render nodes where possible.
In rare cases, a Render node may fail in a way that the heartbeat continues even though the
Render node is no longer processing. If a Render node failed (although the Render Master may
not have detected it) and you do not want to wait for the Frame Timeout, simply restart the Fusion
workstation or Fusion Render Node that has hung. This triggers the heartbeat check, reassigns
the frames on which that Render node was working, and the render should continue. Heartbeats
may fail if the system that is performing the render is making extremely heavy use of the Swap
file or is spending an extraordinary amount of time waiting for images to become available over a
badly lagged network. The solution is to provide the Render node with more RAM, adjust memory
settings for that node, or upgrade the network bandwidth.
— Check the Network: At the Render Master, bring up the Network tab of the Preferences dialog
box and click Scan. If a Render node is not listed as running, the Render Master will not be able
to contact it for network rendering. Alternatively, bring up a command prompt and ping the
Render nodes manually. If the remote systems do not respond when they are up and running,
the network is not functioning and should be examined further.
Working in the Node Editor
This chapter discusses how to work in the Node Editor, including
multiple ways to add, connect, rearrange, and remove nodes to
create any effect you can think of.
Contents
Learning to Use the Node Editor 111
Navigating within the Node Editor 112
Drag and Drop Nodes into a Viewer 125
Using the Contextual Menu 125
Cut, Copy, and Paste in the Node Editor 141
Pasting Node Settings 142
Copying and Pasting Nodes to and from Any Text Editor 142
Node Thumbnails 149
Finding Nodes 151
Performing Simple Searches 152
Using Regular Expressions 152
Node Tooltips and the Status Bar 157
To pan the Node Editor using the Node Navigator, do the following:
— Drag within the Node Navigator to move around different parts of your node tree.
— Within the Navigator, drag with two fingers on a track pad to move around different
parts of your node tree.
The first nine saved bookmarks are given keyboard shortcuts and listed in the Options menu. They are
also listed in the Go To Bookmarks dialog along with any saved bookmarks beyond the initial nine.
TIP: You can return the Node Editor to the default scale by right-clicking in the Node Editor
and choosing Scale > Default Scale or pressing Cmd-1.
If your Node Tree changes and you want to update Bookmark names or delete bookmarks, those tasks
can be done in the Manage Bookmarks dialog.
Using Bookmarks
You can jump to a Bookmark view by selecting a bookmark listed in the Options menu or choosing
Go To Bookmarks to open the Go To Bookmarks dialog. The Go To Bookmarks dialog has all the
bookmarks listed in the order they were created in the current composition. Double-clicking on any
entry in the dialog will update the Node Editor to that view and close the Go To Bookmarks dialog.
The keyboard shortcuts will update to reflect the new order.
TIP: You can hold down the Shift key to select multiple bookmarks and move them
simultaneously up or down in the Manage Bookmark list.
TIP: If you don’t know which node a particular icon corresponds to, just hover your pointer
over any toolbar button and a tooltip will display the full name of that tool.
To replace a node in the Node Editor with a node from the toolbar:
1 Drag a button from the toolbar so that it’s directly over the node in the Node Editor that you want
replaced. When the node underneath is highlighted, drop the node.
TIP: When you replace one node with another, any settings that are identical between the
two nodes are copied into the new node. For example, replacing a Transform node with a
Merge will copy the existing center and angle values from the Transform to the Merge.
TIP: Whenever you use the Select Tool window, the text you entered is remembered the
next time you open it, so if you want to add another node of the same kind—for example, if
you want to add two Blur nodes in a row—you can just press Shift-Spacebar and then press
Return to add the second Blur node.
The Effects Library appears at the upper-left corner of the Fusion window, and consists of two panels.
A category list at the left shows all categories of nodes and presets that are available, and a list at the
right shows the full contents of each selected category.
By default, the category list shows the primary sets of effects: Tools, Open FX, Templates, and LUTs;
with disclosure controls to the left that hierarchically show all subcategories within each category. The
categories are:
— Tools: Tools consist of all the effects nodes that you use to build compositions, organized by
categories such as 3D, Blur, Filter, Mask, Particles, and so on.
— Open FX: All Resolve FX and any installed third party Open FX plugins will appear here.
To replace a node in the Node Editor with a node from the Effects Library:
1 Drag a node from the browser of the Effects Library so it’s directly over the node in the Node
Editor that you want replaced. When that node is highlighted, drop it.
2 Click OK in the dialog to confirm the replacement.
Other times, such as when adding an item from the “How to” category, dragging a single item from
the Effects Library results in a whole node tree being added to the Node Editor. Fortunately, all nodes
of the incoming node tree are automatically selected when you do this, so it’s easy to drag the entire
node tree to another location in the Node Editor where there’s more room. When this happens, the
nodes of the incoming effect are exposed so you can reconnect and reconfigure the effect as
necessary to integrate it with the rest of your composition.
Adding a LightWrap effect from the “How to” bin of the Templates category of the Effects Library.
Deleting Nodes
To delete one or more selected nodes, press Delete (macOS) or Backspace (Windows), or right-click
one or more selected nodes and choose Delete from the contextual menu. The node is removed
from the Node Editor, and whichever nodes are connected to its primary input and output are now
connected together. Nodes connected to other inputs (such as mask inputs) become disconnected.
Before deleting a node from a node tree (top), and after upstream and
downstream nodes have automatically reconnected (bottom).
Disconnected Nodes
It’s perfectly fine to have disconnected nodes, or even entire disconnected branches of a node tree,
in the Node Editor alongside the rest of a composition. Disconnected nodes are simply ignored, while
still being saved with the composition for possible future use. This can be useful for keeping nodes
that you’ve customized but later decided you don’t need. It’s also useful for saving branches of trees
that you’ve since exported as self-contained media that’s re-imported to take the place of the original
effect, in case you need to redo your work later.
Selecting Nodes
Selecting nodes is one of the most fundamental things you can do to move nodes or target them for
different operations. There are a variety of methods you can use.
While multiple nodes can be selected, only one node will be the active node. To indicate the difference,
the active node remains highlighted with orange, while all other selected nodes are highlighted with
white. Unselected nodes have simple black outlines.
The active node is highlighted orange, while other selected nodes are highlighted white.
To set the active node when there are multiple selected nodes:
— Option-click one of the selected nodes in the Node Editor to make that one the active node.
— Open the Inspector (if necessary), and click a node’s header bar to make it the active node.
In the following example, you’re set up to rotoscope an image using a Polygon node that’s attached to
the garbage mask input of a MatteControl node which is inserting the mask as an alpha channel.
When you first open Fusion Studio with an empty comp, both viewers remain empty even after
reading in media using a Loader node. The viewers only display content when you assign a node to
one of them.
There are several different ways to display a node in a viewer. Which ones you use depends on how
you like to work.
For complex compositions, you may need to open additional viewers. For example, one viewer may
be used to display the end result of the final comp, while another viewer displays the source, a third
viewer displays a mask, and a fourth viewer might be a broadcast monitor connected via a Blackmagic
DeckLink card or other display hardware. When you have more than two viewers, additional View
indicators are added and each one is assigned a consecutive number between 3 and 9.
Clearing Viewers
Whenever you load a node into a viewer, you prompt that node, all upstream nodes, and other related
nodes to be rendered. If you load nodes into both viewers, this is doubly true. If you want to prevent
your computer from processing views that aren’t currently necessary, you can clear each viewer.
Create/Play Preview
You can right-click a node, and choose an option from the Create/Play Preview On submenu of the
contextual menu to render and play a preview of any node’s output on one of the available viewers.
The Render Settings dialog is displayed, and after accepting the settings, the tool will be rendered and
the resulting frames stored in RAM for fast playback on that view.
TIP: Hold the Shift key when selecting the viewer from the menu to bypass the Render
dialog and to start creating the preview immediately using the default settings or the last
settings used to create a preview.
Node Basics
Each node displays small colored knots around the edges. One or more arrows represent inputs, and
the square represents the tool’s processed output, of which there is always only one. Outputs are white
if they’re connected properly, gray if they’re disconnected, or red to let you know that something’s
wrong and the node cannot process properly.
Each node takes as its input the output of the node before it. By connecting a MediaIn node’s
output to a Blur node, you move image data from the MediaIn node to the Blur node, which does
something to process the image before the Blur node’s output is in turn passed to the next node
in the tree.
If you drop a connection on top of a node that already has its background input connected,
then the second most important connection will be attached, which for multi-input nodes is the
foreground input, and for other single-input nodes may be the effect mask input.
Some multi-input nodes are capable of adding inputs to accommodate many connections, such
as the Merge3D node. These nodes simply add another input whenever you drop a connection
onto them.
However, there’s an alternate method of connecting nodes together in instances where there are
several inputs to choose from and you want to make sure you’re choosing the correct one. Hold down
the Option key while dragging a connection from one node’s output and dropping it onto the body of
another node. This opens a pop-up menu from which you can choose the specific input you want to
connect to, by name. Please note that this menu only appears after you’ve dropped the connection on
the node and released your pointing device’s button.
TIP: Rather than remembering the different knot types, hold Option and drag from the
output of a node to the center of another tool. When you release the mouse button, a menu
will appear allowing you to select the knot you want to connect to.
In the following example, a MediaIn node adds a clip to the composition, while a Defocus node
blurs the image, and then a TV node adds scanlines and vertical distortion. Those effect nodes are
then connected to the MediaOut node in the Fusion page in DaVinci Resolve or a Saver node in
Fusion Studio.
As you can see above, connecting the Defocus node first, followed by the TV node, means that while
the initial image is softened, the TV effect is sharp. However, if you reverse the order of these two
nodes, then the TV effect distorts the image, but the Defocus node now blurs the overall result, so
that the TV effect is just as soft as the image it’s applied to. The explicit order of operations you apply
makes a big difference.
As you can see, the node tree that comprises each composition is a schematic of operations with
tremendous flexibility. Additionally, the node tree structure facilitates compositing by giving you the
ability to direct each node’s output into separate branches, which can be independently processed
and later recombined in many different ways, to create increasingly complex composites while
eliminating the need to precompose, nest, or otherwise compound layers together, which would
impair the legibility of your composition.
In the following example, several graphics layers are individually transformed and combined with a
series of Merge nodes. The result of the last Merge node is then transformed, allowing you to move
the entire collection of previous layers around at once. Because each of these operations is clearly
represented via the node tree, it’s easy to see everything that’s happening, and why.
This is an important distinction to make because, unlike layer-based systems, the visual positioning of
nodes in your node tree has no bearing on the order of operations in that composition. The only thing
that matters is whether nodes are upstream or downstream of each other.
TIP: To help you stay organized, there are Select > Upstream/Downstream commands in
the Node Editor contextual menu for selecting all upstream or downstream nodes to move
them, group them, or perform other organizational tasks.
By clicking and/or dragging these two halves, it’s possible to quickly disconnect, reconnect, and
overwrite node connections, which is essential to rearranging your node tree quickly and efficiently.
Branching
A node’s input can only have one connection attached to it. However, a tool’s output can be connected
to inputs on as many nodes as you require. Splitting a node’s output to inputs on multiple nodes is
called branching. There are innumerable reasons why you might want to branch a node’s output.
A simple example is to process an image in several different ways before recombining these results
later on in the node tree.
Alternatively, it lets you use one image in several different ways—for example, feeding the RGB to
one branch for keying and compositing, while feeding the A channel to the Effects Mask input of
another node to limit its effect, or feeding RGB to a tracker to extract motion information.
Two MediaIn nodes and a DeltaKeyer node attached to a Merge node, creating a composite.
— Background (orange): The default input. Whichever image is connected to this input defines the
output resolution of the Merge node.
— Foreground (green): The secondary input, meant for whichever image you want to be “on top.”
— Effect Mask (blue): An optional input you can use to attach a mask or matte with which to limit
the effect of the Merge node.
It’s important to make sure you’re attaching the correct nodes to the correct inputs to ensure you’re
getting the result you want, and it’s important to keep these inputs in mind when you connect to
a Merge node. Of course, you can always drag a connection to a specific input to make sure you’re
connecting things the way you need. However, if you’re in a hurry and you simply drag connections
right on top of a Merge node:
TIP: When you add a Merge node after a selected node by clicking the Merge button on
the toolbar, by clicking on the Merge icon in the Effects Library, or by right-clicking a node
in the node tree and choosing Insert Tool > Composite > Merge from the contextual menu,
the new Merge node is always added with the background connected to the upstream node
coming before it.
When you drop the resulting node, this automatically creates a Merge node, the background input
of which is connected to the next node to the left of the connection you dropped the clip onto, and
the foreground input of which is connected to the new node that represents the clip or Generator
you’ve just added.
Additionally, if you drag two or more nodes from an OS window into the Node Editor at the same
time, Merge nodes will be automatically created to connect them all, making this a fast way to initially
build a composite.
If you like, you can change how connections are drawn by enabling orthogonal connections, which
automatically draws lines with right angles to avoid having connections overlap nodes.
Functionally, there’s no difference to your composition; this only affects how your node tree appears.
Routers are tiny nodes with a single input and an output, but with no parameters except for
a comments field (available in the Inspector), which you can use to add notes about what’s
happening in that part of the composition.
You can also branch a router’s output to multiple nodes, which makes routers especially useful for
keeping node trees tidy in situations where you want to branch the output of a node in one part of
your node tree to nodes that are all the way at the opposite end of that same node tree.
Before swapping node inputs (left), and after swapping node inputs
(right), the connections don’t move but the colors change.
Inputs can move freely around the node, so swapping two inputs doesn’t move the connection
lines; instead, the inputs change color to indicate you’ve reversed the background (orange) and
foreground (green) connections.
To extract one or more nodes from their position in the node tree:
— To extract a single node: Hold down the Shift key, drag a node from the node tree up or down to
disconnect it, and then drop the node before releasing the Shift key. That node is now detached,
and the output of the next upstream node is automatically connected to the input of the next
downstream node to fill the gap in the node tree.
— To extract multiple nodes: Select the nodes you want to extract, hold down the Shift key,
drag one of the selected nodes up or down to disconnect them, and then drop the node before
releasing the Shift key. Those nodes are now detached (although they remain connected to one
another), and the output of the next upstream node is automatically connected to the input of the
next downstream node to fill the gap in the node tree.
Before extracting a pair of nodes (left), and after extracting a pair of nodes (right).
After you’ve extracted a node, you can re-insert it into another connection somewhere else. You can
only insert one node at a time.
To insert a disconnected node in the Node Editor between two compatible nodes:
1 Hold down the Shift key and drag a disconnected node directly over a connection between two
other nodes.
2 Once the connection highlights, drop the node, and then release the Shift key. That node is now
attached to the nodes coming before and after it.
TIP: If you hold down the Shift key, you can extract a node and re-insert it somewhere else
with a single drag.
When you paste into the Node Editor, you create a copy of the last node or nodes you’ve cut or copied.
When pasting, there are a few different things you can do to control where pasted nodes appear.
TIP: When you paste a MediaIn, Loader, or Generator node so it will be inserted after a
selected node in the node tree, a Merge tool is automatically created and used to composite
the pasted node by connecting it to the foreground input. While this can save you a few
steps, some artists may prefer to perform these sorts of merges manually, so this can be
changed using the Defaults panel in the Auto tools section of the Fusion > Fusion Settings.
Note that you can paste settings between two nodes of the same type, or between two entirely
different kinds of nodes that happen to have one or more of the same parameters in the Inspector.
When copying settings from one type of node to another, only the settings that match between two
nodes will be copied. A common example is to copy an animated Center parameter from a Transform
node to the Center parameter of a Mask node.
One or more nodes can be copied from the Node Editor and pasted directly into a text editor or email.
This pastes the selection in text format, just as it's saved internally in Fusion. For example, you might
copy a set of three connected nodes and paste them into a text editor.
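For illustration, a small selection might paste as text along these lines (an abbreviated, hypothetical sketch of Fusion's Lua-based .setting format; real output includes additional attributes such as node positions, and varies by node type and Fusion version):

```lua
{
	Tools = ordered() {
		Background1 = Background {
			Inputs = { TopLeftRed = Input { Value = 1, }, },
		},
		Blur1 = Blur {
			Inputs = {
				XBlurSize = Input { Value = 2.5, },
				Input = Input { SourceOp = "Background1", Source = "Output", },
			},
		},
		Merge1 = Merge {
			Inputs = {
				Foreground = Input { SourceOp = "Blur1", Source = "Output", },
			},
		},
	},
	ActiveTool = "Merge1"
}
```

Because the text is plain Lua table syntax, you can edit a value (such as XBlurSize above) in the text editor before copying it and pasting it back into the Node Editor.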
At this point, you have the option of editing the text (if you know what you’re doing), emailing it to
colleagues, or storing it in a digital notepad of some sort for future use. To use this script in Fusion
again, you need only copy it and paste it back into the Node Editor.
TIP: This is a very easy way to pass specific node settings back and forth between artists
who may not be in the same room, city, or country.
Instancing Nodes
Normally, when you use copy and paste to create a duplicate of a node, the new node is completely
independent from the original node, so that changes made to one aren’t rippled to the other.
However, there are times when two nodes must have identical settings at all times. For example, when
you’re making identical color corrections to two or more images, you don’t want to constantly have to
adjust one color correction node and then manually adjust the other to match. It’s a hassle, and you
risk forgetting to keep them in sync if you’re working in a hurry.
While there are ways to publish controls in one node and connect them to matching controls in
another node, this becomes prohibitively complex and time consuming for nodes in which you’re
making adjustments to several controls. In these cases, creating “instanced” nodes is a real time-saver,
as well as an obvious visual cue in your node tree as to what’s going on.
However you paste an instance, the name of that instanced node takes the form
"Instance_NameOfNode." If you paste multiple instances, each instance is numbered
"Instance_NameOfNode_01."
A green link line shows an instanced Blur node’s relationship to the original Blur node it was copied from.
When a node tree contains instanced nodes, a green line shows the link between the original node
and its instances. You have the option to hide these green link lines to reduce visual clutter in the
Node Editor.
To toggle the visibility of green instance link lines in the Node Editor:
1 Right-click anywhere in the background of the Node Editor.
2 Choose Options > Show Instance Links from the contextual menu.
If you’ve been using an instance of a node and you later discover you need to use it to apply separate
adjustments, you can “de-instance” the node.
De-Instancing and Re-Instancing Specific Parameters
By default, every parameter in an instanced node is linked to the original node, so that any change
you make is rippled across. However, from time to time you’ll find the need to independently adjust
just one or two parameters while keeping the rest of that node’s parameters linked. For this reason,
instead of de-instancing the entire tool, you can de-instance individual parameters.
If you’ve only de-instanced individual parameters, you can re-instance those parameters later on if
you change your mind.
Moving Nodes
Selecting one or more nodes and dragging them moves them to a new location, which is one of the
simplest ways of organizing a node tree, by grouping nodes spatially according to the role they play in
the overall composition.
Keep in mind that the location of nodes in the Node Editor is purely aesthetic, and does nothing to
impact the output of a composition. Node tree organization is purely for your own peace of mind, as
well as that of your collaborators.
TIP: Once you’ve arranged the nodes in a composition in some rational way, you can
use the Sticky Note and Underlay tools to add information about what’s going on and to
visually associate collections of nodes more definitively. These tools are covered later in
this section.
— Right-click over an empty area of the Node Editor, and choose Arrange Tools > To Grid from the
contextual menu. All nodes you drag now snap to the nearest grid coordinate.
— Right-click over an empty area of the Node Editor, and choose Arrange Tools > To Connected from
the contextual menu. All nodes you drag now snap to the horizontal or vertical position of the
nodes they’re attached to.
TIP: You can set “Arrange to Grid” or “Arrange to Connected” as the default for new
compositions by choosing Fusion > Fusion Settings in DaVinci Resolve or File > Preferences
in Fusion Studio, and turning the Fusion > Node Editor > Arrange To Grid or Arrange to
Connected checkboxes on.
Renaming Nodes
Each node that’s created is automatically assigned a name (based on its function) and a number (based
on how many of that type of node have been created already). For example, the first Blur node added
to a composition will be called Blur1, the second will be Blur2, and so on. Although these automatic
names are initially helpful, larger compositions benefit from giving important nodes more descriptive
names, making it easier to identify what they're actually doing and to reference them in expressions.
To rename a node:
1 Do one of the following:
— Right-click a node and choose Rename from the contextual menu.
— Select a node and press F2.
2 When the Rename dialog appears, type a new name, and then click OK or press Return.
Since Fusion can be scripted and use expressions, the names of nodes must adhere to a scriptable
syntax. Only use alphanumeric characters (no special characters), and do not use any spaces.
Also, you cannot start a node name with a number. If you accidentally create a name that doesn’t
exactly follow the guidelines, spaces and invalid characters will be automatically deleted.
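Fusion performs this cleanup automatically, but as an illustration, the renaming rules behave roughly like the following sketch (hypothetical code, not Fusion's actual implementation):

```python
import re

def sanitize_node_name(name: str) -> str:
    """Approximate Fusion's node-naming rules: alphanumeric characters
    (and underscores) survive, spaces and special characters are deleted,
    and a name cannot begin with a number."""
    cleaned = re.sub(r"[^A-Za-z0-9_]", "", name)  # delete spaces/special chars
    cleaned = cleaned.lstrip("0123456789")        # no leading digits
    return cleaned

print(sanitize_node_name("My Blur #2"))  # → MyBlur2
```

A name like "My Blur #2" therefore becomes a scriptable identifier such as MyBlur2, which can safely be used in expressions.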
If you want to see the original node types instead of the node names, press and hold Command-Shift-E.
To return a node to its regular color, right-click it and choose Set Color > Clear Color from the
contextual menu, or open the Node Color pop-up for a node in the Inspector, and choose Clear Color.
Sticky Notes are yellow boxes in which you can type whatever text you want. They can be resized,
moved, and collapsed when they’re not being edited, but once created they remain attached
to the background of the Node Editor where you placed them until you either move them or
delete them.
Underlay Boxes can be named to identify the purpose of that collection of nodes, and they can
be colored to be distinct from other Underlay Boxes or to adhere to some sort of color code for
your compositions.
— To delete an Underlay Box and all nodes within: Select an Underlay Box and press the Delete
key to delete both the Underlay Box and all nodes found inside it. If you don’t also want to delete
the nodes, first drag the nodes out of the box.
— To delete an Underlay Box but keep all nodes within: Option-click the Underlay Box to select it
and not the nodes, and then press the Delete key. The nodes within remain where they were.
Node Thumbnails
Once a source or an effect has been added to the Node Editor, it's represented by a node. By default,
nodes are rectangular and thin, making it easier to fit a reasonably complicated node tree within a
relatively small area. However, if you like, you can also display node thumbnails.
Nodes can be displayed as a small rectangle or as a larger square. The rectangular form displays
the node’s name in the center, while the square form shows either the tool’s icon or a thumbnail of
the image it is outputting.
NOTE: If Show Thumbnails is enabled, nodes may not update until the playhead is moved
in the Time Ruler.
When you’ve manually enabled thumbnails for different nodes, they’ll remain visible whether or not
those nodes are selected.
Most nodes in the Particle and 3D categories fall into this group. The exceptions are the
pRender node and the Render 3D node. These two nodes can display a
rendered thumbnail if thumbnails are enabled in the menu options.
In other cases, whether nodes display images in their thumbnail is more situational. Some
Transform nodes are able to concatenate their results with one another, passing the actual
processing downstream to another node later in the node tree. In this case, upstream
Transform nodes don’t actually process the image, so they don’t produce a thumbnail.
In other situations, such as when a Loader is not reading in a clip, or when a clip is trimmed in the
Keyframes Editor to be out of range, the node does not process the image, so it will not produce a
rendered thumbnail. Likewise, nodes that have been set to Pass Through mode are disabled and do
not display a rendered thumbnail.
Finding Nodes
Modern visual effects require detailed work that often results in compositions with hundreds of
nodes. For such large node trees, finding things visually would have you panning around the Node
Editor for a long, long time. Happily, you can quickly locate nodes in the Node Editor using the
Find dialog.
The Find window closes. If either the Find Next, Find Previous, or Find All operations are successful,
the found node or nodes are selected. If not, a dialog appears letting you know that the string could
not be found.
TIP: Finding all the nodes of a particular type can be very useful if you want, for example,
to disable all Resize nodes. Find All will select all the nodes based on the search term, and
you can temporarily disable them by pressing the shortcut for Bypass, Command-P.
Character Sets
Any characters typed between square brackets [ ] are treated as a set, and a node name matches if it
contains any one of them (or any character within a range). Here are some examples of character set
searches that work in Fusion.
[a-z]
Finds: Every node whose name contains a lowercase letter
[a-d]
Finds: Every node containing a lowercase letter from a to d (a, b, c, or d)
[Tt]
Finds: Every node with an uppercase T or a lowercase t
[0-9]
Finds: Every node containing a numeral
[5-7]
Finds: Every node containing a numeral from five to seven (5, 6, or 7)
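These bracketed sets use standard regular-expression character-class syntax, so their behavior can be sketched in any regex engine. Here's an illustrative Python example (the node names are made up):

```python
import re

node_names = ["Blur1", "Transform2", "merge5", "Text7"]

# [Tt] finds names containing an uppercase T or a lowercase t.
with_t = [n for n in node_names if re.search(r"[Tt]", n)]
print(with_t)  # → ['Transform2', 'Text7']

# [5-7] finds names containing a numeral from five to seven.
with_567 = [n for n in node_names if re.search(r"[5-7]", n)]
print(with_567)  # → ['merge5', 'Text7']
```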
TIP: You can also save six different settings for a node in the Node Editor using the Version
buttons at the top of the Inspector. For more information, see Chapter 8, “Editing Parameters
in the Inspector,” in the Fusion Reference Manual.
Resetting Defaults
Even if you’ve created new default settings for new nodes, you can always reset individual parameters
to the original default setting. In addition, it’s easy to restore the original default settings for new
nodes you create.
To reset every parameter in a node to the original defaults, do one of the following:
— Right-click on the node and choose Settings > Reset Default.
— Right-click that node’s control header in the Inspector, and choose Settings > Reset Default.
— Delete the .setting file from the Defaults folder.
NOTE: When you use the Settings > Reset Default command, the default .setting file is
deleted. If you want to save a node’s settings as alternate settings, you should use the
Settings > Save As command.
TIP: If you drop a setting directly onto a connection line, the new node will be inserted onto
that connection.
— Show Controls: Sets whether that node reveals its parameters in the Inspector when it’s selected
and whether its onscreen controls appear in viewers. On by default.
— Pass Through: (Command-P) Identical to the toggle switch in the Inspector that turns nodes off
and on. Disabled nodes are ignored as image data is passed directly from the nearest upstream
node to the next downstream node. On by default.
— Locked: (Command-L) Identical to the lock button in the Inspector that prevents a node from
being edited in the Inspector. Off by default.
— Update: (Command-U) On by default. While this option is enabled, all changes to the node will
cause it to re-render. When Update is disabled, you can still change the node's parameters, but
those changes will not process or update the image until Update is re-enabled. While disabled,
the last processed image for that node is displayed as a freeze frame. This is useful, for example,
when you have a large or processor-intensive composition (such as a particularly intense particle
system): temporarily disabling this option lets you make several parameter adjustments to
different nodes without waiting for the node tree to re-render after every adjustment. It's also
useful when you want to quickly see the effect of animated downstream nodes while keeping
upstream nodes that are too processor-intensive to play in real time from rendering
additional frames.
— Force Cache: When enabled, this node’s output for the current frame has an extremely high
cache priority, essentially forcing it to stay cached in memory. Off by default.
Toggling any one of these node modes displays a badge within that node indicating its state.
— Pipes Always Visible: Enabling this option causes connections to be drawn over nodes instead of
beneath them, sometimes making it easier to follow a connection's path.
— Show Hidden Pipes: When enabled, the Inspector option to Hide Incoming Connections in every
node is overridden and all connections are displayed in the Node Editor.
— Aspect Correct Tile Pictures: Forces the display of thumbnails to be aspect corrected, which is
slower but visually more accurate. This option is enabled by default.
— Full Tile Render Indicators: Enabling this option causes the thumbnail to flash green
when rendering, which makes it easier to identify which node is processing in a large,
complex node tree.
— Show Grid: This option can be used to enable or disable the Node Editor’s background grid.
— Show Instance Links: When enabled, the Node Editor draws a green connection between an
instanced node and its parent.
— Auto Remove Routers: If routers are disconnected from a tool, they are automatically
deleted from the Node Editor. This option is enabled by default to eliminate the need to delete
orphaned routers.
— Show Navigator: Enabling this option displays a small overview window of the entire node tree in
the Node Editor’s top-right corner. For more information, see the Navigator section in this chapter.
— Auto Navigator: When enabled, the Navigator appears only when one or more nodes are outside
the visible area of the Node Editor. For more information, see the Navigator section in this chapter.
— Build Flow Vertically/Horizontally: Node trees can be built either horizontally from left to right
or vertically from top to bottom. These options determine whether new nodes are added beneath
the current node or to its right.
— Orthogonal/Direct Pipes: Use these two options to decide whether connections between nodes
are drawn as Direct (straight) lines or Orthogonal (bent) lines.
If you wait a few moments, a more elaborate presentation of the same information appears
within a floating tooltip in the Inspector. This tooltip gives you additional information about the
Domain (Image and DoD) and the data range used by that clip.
Node Groups, Macros, and Fusion Templates
This chapter reveals how to use groups, macros, and templates in
Fusion so working with complex effects becomes more organized,
more efficient, and easier.
Fusion Fundamentals | Chapter 6 Node Groups, Macros, and Fusion Templates 158
Groups
When you work on complex visual effects, node trees can become sprawling and unwieldy, so
grouping tools together can help you better organize all the nodes and connections. Groups are
containers in your node tree that can hold multiple nodes, similar to the way a folder on your Desktop
holds multiple files. There is no limit to the number of nodes that can be contained within a group, and
you can even create subgroups within a group.
Creating Groups
Creating a group is as simple as selecting the nodes you want to group together and using the
Group command.
To create a group:
1 Select the nodes you want grouped together.
2 Right-click one of the selected nodes and choose Group from the contextual menu (Command-G).
Several nodes selected in preparation for making a group (left), and the resulting group (right).
The selected nodes are collapsed into a group, which is displayed as a single node in the Node Editor.
The Group node can have inputs and outputs, depending on the connections of the nodes within
the group. The Group node only displays inputs for nodes that are already connected to nodes
outside the group. Unconnected inputs inside the group will not have an Input knot displayed on the
Group node.
Deleting Groups
Deleting a group is no different from deleting any other node in the Node Editor. Select a group and
press Delete, Backspace, or Forward-Delete, and the group along with all nodes contained within it are
removed from the node tree.
Expanding and Collapsing Groups
A collapsed group is represented by a single “stack” node in the node tree. If you want to modify any
of the nodes inside the group, you can open the group by double-clicking it or by selecting the group
node and pressing Command-E.
When you open a group, a floating window shows the nodes within that group. This floating window is
its own Node Editor that can be resized, zoomed, and panned independently of the main Node Editor.
Within the group window, you can select and adjust any node you want to, and even add, insert, and
delete nodes while it is open. When you're ready to collapse the group again, click the minimize button
at the top-left corner of the floating window, or press Command-E.
Ungrouping Nodes
If you decide you no longer need a particular group, or you simply find it easier to have constant
access to all the nodes in the group at once, you can decompose or "ungroup" the group, eliminating
the group itself while keeping its contents in the Node Editor.
A good example of when you might want to Save and Load a group is in a studio with two or more
compositing artists. A lead artist in your studio can set up the master comp and create a group
specifically for keying greenscreen. That key group can then be passed to another artist who refines
the key, builds the mattes, and cleans up the clips. The setting can then be saved out and loaded
back into the master comp. As versions are improved, these settings can be reloaded, updating the
master comp.
In Fusion Studio, you can also save and reuse groups from the Bins window:
— To save a group: Drag the group from the Node Editor into the open Bin window. A dialog
appears in which you can name the group's setting file and choose the location where it's saved
on disk. The .setting file is saved in the specified location and placed in the bins for easy access
in the future.
Macros
Some effects aren’t built with one tool, but with an entire series of operations, sometimes in complex
branches with interconnected parameter controls. Fusion provides many individual effects nodes
to work with, but it also lets you repackage them in different combinations as self-contained
“bundles” that are either macros or groups. These “bundles” have several advantages:
Macros and groups are functionally similar, but they differ slightly in how they’re created and
presented to the user. Groups can be thought of as a quick way of organizing a composition by
reducing the visual complexity of a node tree. Macros, on the other hand, take longer to create
because of how customizable they are, but they’re easier to reuse in other comps.
Creating Macros
While macros let you save complex functions for future use in very customized ways, they’re actually
pretty easy to create.
TIP: If you want to control the order in which each node’s controls will appear in the
macro you’re creating, Command-click each node in the order in which you want it
to appear.
2 Right-click one of the selected nodes and choose Macro > Create Macro from the
contextual menu.
A Macro Editor window appears, showing each node you selected as a list, in the order in which
each node was selected.
The macro editor with a Blur node and Color Corrector node.
3 First, enter a name for the macro in the field at the top of the Macro Editor. This name should
be short but descriptive of the macro’s purpose. No spaces are allowed, and you should
avoid special characters.
4 Next, open the disclosure control to the left of each node that has controls you want to expose to
the user and click the checkbox to the right of each node output, node input, and node control
that you want to expose.
The controls you check will be exposed to users in the order in which they appear in this list, so
you can see how controlling the order in which you select nodes in Step 1, before you start editing
your macro, is useful. Additionally, the inputs and outputs that were connected in your node tree
are already checked, so if you like these becoming the inputs and outputs of the macro you’re
creating, that part is done for you.
For each control’s checkbox that you turn on, a series of fields to the left of that control’s row lets
you edit the default value of that control as well as the minimum and maximum values that control
will initially allow.
7 A Save Macro As dialog appears in which you can re-edit the Macro Name (if necessary), and
choose a location for your macro.
To have a macro appear in the Fusion page Effects Library Tools > Macros category, save it in the
following locations:
— On macOS: Macintosh HD/Users/username/Library/Application Support/Blackmagic Design/
DaVinci Resolve/Fusion/Macros/
— On Windows: C:\Users\username\AppData\Roaming\Blackmagic Design\DaVinci Resolve\
Support\Fusion\Macros
— On Linux: /home/username/.local/share/DaVinciResolve/Fusion/Macros
To have a macro appear in the Fusion Studio Effects Library Tools > Macros category, save it in the
following locations:
— On macOS: Macintosh HD/Users/username/Library/Application Support/Blackmagic Design/
Fusion/Macros/
— On Windows: C:\Users\username\AppData\Roaming\Blackmagic Design\Fusion\Macros
— On Linux: /home/username/.fusion/BlackmagicDesign/Fusion/Macros
Using Macros
Macros can be added to a node tree using the Add Tool > Macros or Replace Tool > Macros submenus
of the Node Editor contextual menu.
Re-Editing Macros
To re-edit an existing macro, just right-click anywhere within the Node Editor and choose the macro
you want to edit from the Macro submenu of the same contextual menu. The Macro Editor appears,
and you can make your changes and save the result.
Creating Fusion Templates
The integration of Fusion into DaVinci Resolve makes it possible to create Fusion Title,
Transition, Effect, and Generator templates for use in the Edit page. You can create these templates
in the Fusion page or within Fusion Studio and then copy them into DaVinci Resolve. Fusion Titles,
Generators, and Transition templates are essentially comps created in Fusion but editable in the
Timeline of the Edit page with custom controls. This section shows you how it’s done.
Having built your composition, select every node you want to include in that template except for
the MediaIn and MediaOut nodes in DaVinci Resolve or Loader and Saver nodes in Fusion Studio.
Selecting the nodes you want to turn into a title template.
TIP: If you want to control the order in which node controls will be displayed later on,
you can Command-click each node you want to include in the macro, one by one, in the
order in which you want controls from those nodes to appear. This is an extra step, but
it keeps things better organized later on.
Having made this selection, right-click one of the selected nodes and choose Macro > Create
Macro from the contextual menu.
The Macro Editor window appears, filled to the brim with a hierarchical list of every parameter in
the composition you’ve just selected.
The Macro Editor populated with the parameters of all the nodes you selected.
This list may look intimidating, but closing the disclosure control of the top Text1 node shows us
what’s really going on.
Closing the top node’s parameters reveals a simple list of all the nodes we’ve selected. The Macro
Editor is designed to let you choose which parameters you want to expose as custom editable
controls for that macro. Whichever controls you choose will appear in the Inspector whenever you
select that macro, or the node or clip that macro will become.
So all we have to do now is to turn on the checkboxes of all the parameters we’d like to be able to
customize. For this example, we’ll check the Text3D node’s Styled Text checkbox; the Cloth node’s
Diffuse Color Red, Green, and Blue checkboxes; and the SpotLight node’s Z Rotation checkbox, so that
only the middle word of the template is editable, but we can also change its color and tilt its
lighting (making a “swing-on” effect possible).
Selecting the checkboxes of parameters we’d like to edit when using this as a template.
Once we’ve turned on all the parameters we’d like to use in the eventual template, we click the Close
button, and a Save Macro As dialog appears.
To have the Title template appear in the Effects Library > Titles category of DaVinci Resolve, save the
macro in the following locations:
Choosing where to save a title template for the Edit page in DaVinci Resolve.
Using Your New Title Template
After you’ve saved your macro, you’ll need to quit and reopen DaVinci Resolve. When you open the
Effects Library of the Edit page, you should see your new template inside the Titles category, ready to
go in the Fusion Titles list.
Custom titles appear in the Fusion Titles section of the Effects Library.
Editing this template into the Timeline and opening the Inspector, we can see the parameters we
enabled for editing, and we can use these to customize the template for our own purposes.
Getting Started with a Fusion Transition Template
When creating a Fusion transition template, it’s easiest to start with an existing transition
template and build off that. Three transitions are located in the Fusion Transitions category of the
DaVinci Resolve Effects Library. The simplest transition is the Cross Dissolve, while the most complex
example is the Slice Push.
Once you apply a Fusion Transition in the Edit page, you can right-click it and choose Open in Fusion Page.
The Fusion page opens, displaying the node tree used to create the Fusion transition.
The MediaIn1 node represents the outgoing clip in the Edit page Timeline. The MediaIn2 node
represents the incoming clip. You can modify or completely change the Cross Dissolve effect to create
your own custom transition using any of Fusion’s nodes.
The Fusion Cross Dissolve node tree replaced with Transforms and a Merge node.
TIP: To modify the duration of the Fusion transition from the Edit page Timeline, you must
apply the Resolve Parameter Modifier to any animated parameter. In place of keyframing
the transition, you create the transition using the Scale and Offset parameters of the
Resolve parameter modifier.
Start by selecting every node in the Node Editor that you want to include in the transition template,
including the two MediaIn nodes and the MediaOut node.
TIP: Since the transition template must include the MediaIn and MediaOut nodes, the final
steps for saving a transition template must be performed in DaVinci Resolve’s Fusion page
and cannot be performed in Fusion Studio.
Having made this selection, right-click one of the selected nodes and choose Macro >
Create Macro from the contextual menu.
The Macro Editor displaying the parameters of all the nodes you selected.
The Macro Editor window appears, displaying a hierarchical list of every parameter in the composition
you’ve just selected. The order of nodes is based on the order they were selected in the Node Editor
prior to creating the macro.
The Macro Editor is designed to let you choose which parameters you want to display as custom
controls in the Edit page Inspector when the transition is applied.
For transitions, you can choose not to display any controls in the Inspector, allowing only
duration adjustments in the Timeline. Alternatively, you can expose a simplified set of parameters for
customization by enabling the checkboxes next to any parameter name.
Once you enable all the parameters you want to use in the eventual template, click the Close button,
and a Save Macro As dialog appears. Here, you can enter the name of the transition, as it should
appear in the Edit page Effects Library.
To have the transition template appear in the Effects Library > Fusion Transitions category of
DaVinci Resolve, save the macro in the following locations:
Using Your New Transition Template
After you’ve saved your macro, you’ll need to quit and reopen DaVinci Resolve. When you open the
Effects Library on the Edit page, the new transition template is listed in the Video Transitions category,
in the Fusion Transitions list.
Applying this transition to a cut in the Timeline and opening the Inspector shows the parameters you
enabled for editing, if any.
To open the Fusion Noise Gradient Generator in the Fusion page, do the following:
1 On the Edit page, drag the Fusion Noise Gradient Generator from the Effects Library to the Timeline.
2 Right-click over the Noise Gradient Generator and choose Open in Fusion Page from the
pop-up menu.
The Fusion page opens, displaying the node tree that is used to create the Fusion Generator.
Creating a Fusion Generator Template
As easy as it is to begin with the Noise Gradient Generator template, you can just as easily start by
adding a Fusion Composition Effect to a Timeline in the Edit page.
1 On the Edit page, drag the Fusion Composition Effect from the Effects Library to the Timeline.
2 Right-click over the Composition Effect and choose Open in Fusion Page from the pop-up menu.
An empty Fusion page with a single MediaOut node opens, ready for you to create a Fusion Generator.
The Fusion Generator is a solid image generated from any number of tools combined to create a
static or animated background. You can choose to combine gradient colors, masks, paint strokes, or
particles in 2D or 3D to create the background generator you want.
Saving a New Fusion Generator
After creating the generator you want in Fusion, you need to save it to the Effects Library to reuse
it in other projects from the Edit page. To do this, you must create a Macro and save it to the
Generator folder.
Ordinarily, macros are used as building blocks inside of Fusion, letting you turn frequently used
compositing techniques into your own nodes. However, you can also use this macro functionality to
build Generator templates for the DaVinci Resolve Edit page.
Start by selecting every node in the Node Editor that you want to include in the Generator template,
including the MediaOut node.
Having made this selection, right-click one of the selected nodes and choose Macro > Create Macro
from the contextual menu.
The Macro Editor displaying the parameters of all the nodes you selected.
The Macro Editor window appears, displaying a hierarchical list of every parameter in the composition
you’ve just selected. The order of nodes is based on the order they were selected in the Node Editor
prior to creating the macro.
The Macro Editor is designed to let you choose which parameters you want to display as custom
controls in the Edit page Inspector when the Generator is applied. You can choose a simplified set of
parameters for customization by enabling the checkboxes next to any parameter name.
Once you enable all the parameters you want to use in the eventual template, click the Close button,
and a Save Macro As dialog appears. Here, you can enter the name of the Generator, as it should
appear in the Edit page Effects Library.
To have the Generator template appear in the Effects Library > Fusion Generators category of
DaVinci Resolve, save the macro in the following locations:
Applying this Generator to the Timeline and opening the Inspector shows the parameters you enabled
for editing, if any.
Once inside Fusion, you use Fusion’s nodes to create the effect you want. You can use a single node or
a hundred, depending on the effect you want to create. For instance, using Fusion’s Color Correction
nodes, you can create a simple color corrector you can use on the Edit page.
To create a simple Color Corrector effect, do the following:
1 Insert the Color Corrector node between the MediaIn and MediaOut nodes.
2 Select the Color Corrector node in the Node Editor, and then press Cmd-A to select the
remaining nodes.
3 Right-click over any of the selected nodes and choose Macro > Create Macro from the
contextual menu.
Enabling the checkboxes in this window determines the parameters that appear in the
Edit page Inspector.
4 The Macro Editor window opens. Here, you can enable the checkboxes for any parameters you
want to be shown in the Edit page Inspector.
5 Enter the name of your effect at the top of the Macro Editor window.
6 To save the Macro, click Close at the bottom of the window, then click Yes in the dialog that
appears asking you to save the changes.
Macros must be saved into the correct folder for DaVinci Resolve to recognize them as effects.
You can save and organize your Fusion Effects into separate subfolders underneath the paths above.
These subfolders will show up in the Effects section in the Edit page.
To see the effect in the Edit page Effects Library, you’ll need to quit DaVinci Resolve and
relaunch the application.
Once inside Fusion, use Fusion’s nodes to create the effect you want.
Save the Macro following the same steps you use for single clip effects. Enable any of the parameters
you want to control in the Edit page. To be able to switch the order of video layers within the effect,
make sure you have the Layer checkbox enabled for all the MediaIn nodes.
Once you’ve saved the Macro and relaunched DaVinci Resolve, to use the effect on multiple timeline
layers, you must create a Fusion clip. The Fusion clip should contain the same number of layers the
effect requires. The order of the Timeline layers, going from the bottom track to the top, matches the
MediaIn numbers. For instance, video track 1 will match the position and appearance of MediaIn1,
video track 2 matches MediaIn2 and so on. If you want to change how tracks map to MediaIn nodes,
you can change the Layer number in the Inspector, assuming you enabled the MediaIn Layer
checkbox when creating the Macro.
Changing Durations of a Template
After you make a template in Fusion, you may want to change its duration in the Edit or Cut page
Timeline. Changing the duration when animation is involved can be complicated, so there are two
Modifiers in Fusion that can help determine how keyframes react when the duration is updated in the
Edit or Cut page Timeline.
When creating Fusion templates for the Edit or Cut page in DaVinci Resolve, the Anim Curves Modifier
allows the keyframed animation you’ve created in Fusion to stretch and squish appropriately as the
transition, title, or effect’s duration changes on the Edit and Cut page Timelines.
When you use this template in the Edit page, you can drag clips from the Edit page Media Pool
and drop them in the Transition Inspector’s Clip Name field. They will then instantly update the
template with the chosen media.
The Clip Name parameters are checked for MediaIn2 and MediaIn1 nodes in the Macro Editor, allowing media drop zones for both incoming and outgoing sides of this Fusion transition.
When you restart DaVinci Resolve, the icon you created will be embedded in the template thumbnail
across all the Effects Libraries in the program.
Creating a Fusion Template Bundle requires using a specific directory structure, along with your
operating system’s file browser and .zip compression utility. The directory structure is listed below,
and you can always find a specific folder from within Fusion by right-clicking on any bin in the Effects
Library and selecting Show Folder.
To import a Fusion Template Bundle, do one of the following:
— Double-click a .drfx file in your OS. DaVinci Resolve will launch, and a dialog box will appear
asking if you want to install the template bundle.
— Drag the .drfx file from your OS directly into the Fusion page in DaVinci Resolve. A dialog box
will appear asking if you want to install the template bundle.
IMPORTANT: The Fusion Template Bundle contains all the templates in one file. It does not
uncompress them into separate template files again. Therefore if you delete the .drfx file,
all associated templates inside that bundle will be removed as well.
Chapter 7
Using Viewers
This chapter covers working with viewers in Fusion, including using
onscreen controls and toolbars, creating groups and subviews,
managing viewer Lookup Tables (LUTs), working with the 3D viewer,
and setting up viewer preferences and options.
Contents
Viewer Overview  182
Loading Nodes into Viewers  184
Zooming and Panning into Viewers  186
Flipbook Previews  186
Toolbars  190
Split Wipes between Buffers  192
Changing the Subview Type  194
Swapping the Subview with the Main View  194
Copying a Viewer’s POV to a Camera  202
Lighting and Shadows in 3D Viewers  203
Transparency in 3D Viewers  204
Grid  205
Vertex Normals  205
Quad View  206
Quad View Layouts  206
Using Quad Views for 2D Scenes  207
Guides  207
Frame Format Settings  208
Domain of Definition and Region of Interest  208
How Lookup Tables Work in Fusion  212
Types of Viewer LUTs  212
Using Viewer LUTs  214
Editing Viewer LUTs  215
LUT Processing Order  217
Applying Multiple LUTs  217
Saving Custom LUTs  218
LUT Files  219
Viewer Preferences and Settings  220
Viewer Settings  220
The Viewer Options Menu  220
Viewer Overview
Viewers in Fusion display the current frame of the current composition in a variety of ways to help you
see what you’re doing and evaluate the final result of your compositing artistry. Viewers display 2D
images, but they can also display a 3D environment using a 3D View as well as a special Quad viewer
to help you effectively work in three dimensions.
Additionally, you can expose “subviews” including color inspectors, magnifiers, waveforms,
histograms, and vectorscopes to help you analyze the image as you work.
To create a new floating display view, select Window > New View from the menu bar at the top of
the screen. The position and configuration of the new view can be saved in the layout preferences,
if required.
Video Output
When using DaVinci Resolve or Fusion Studio, if Blackmagic video hardware is present in the computer,
then you can select a node to preview directly on that display. While the video output can’t be used
to manipulate onscreen controls such as center crosshairs or spline control points, it’s extremely
valuable for evaluating your composition via the output format, and for determining image accuracy
using a properly calibrated display.
The video hardware is configured from the DaVinci Resolve and Fusion Studio preferences.
When a node is being viewed, a View Indicator button appears at the bottom left. This is the same
control that appears when you hover the pointer over a node. Not only does this control let you know
which nodes are loaded into which viewer, but it also exposes little round buttons for changing
which viewer they appear in.
Clearing Viewers
To clear an image from a viewer, click in the viewer to make it active; a light purple outline is displayed
around the active panel. With the viewer active, press the ` (accent) key. This key is usually found to the
left of the 1 key on U.S. keyboards. The fastest way to remove all images from all viewers is to make
sure none of the viewers is the active panel, and then press the accent key.
If you want all new compositions to open with a certain viewer layout, you can configure the layout
of the two primary viewers, and then use the Grab Document Layout button in the Fusion Global
Layout preferences to remember the layout for any new compositions. To save the position and size
of floating viewers, you use the Grab Program Layout button. Finally, if you want to have the floating
viewers opened automatically when you open Fusion, enable the Create Floating Views checkbox.
The amount of vertical space available for both viewers can be adjusted by dragging the horizontal
scrollbar between the viewers and the work area below them.
Flipbook Previews
As you build increasingly complex compositions, you may need to preview specific branches of your
node tree to get a sense of how the details you’re working on are shaping up. For this, you can
create targeted RAM previews at various levels of quality right in the viewer by creating a RAM
Flipbook. RAM Flipbook Previews are preview renders that exist entirely within RAM, allowing you to
render a node’s output at differing levels of quality for quick processing, so you can watch a
real-time preview.
2 When the Preview Render dialog opens, choose the quality, resolution, and motion blur settings
you want to use for the Flipbook Preview.
3 When you’ve chosen the settings you want to use, click Start Render.
The current frame range of the Time Ruler is rendered using the settings you’ve selected, and the
result is viewable in the viewer you selected or dragged into.
Once you’ve created a Flipbook Preview within a particular viewer, right-clicking that viewer presents
Flipbook-specific commands and options to Play, Loop, or Ping-Pong the Flipbook, to open it Full
Screen, to Show Frame Numbers, and to eliminate it.
TIP: If you want to create a Flipbook Preview and bypass the Render Settings dialog by
just using either the default setting or the settings that were chosen last, hold down Shift-
Option while you drag a node into the viewer. The Settings dialog will not appear, and
rendering the preview will start right away.
To scrub through a Flipbook frame-by-frame using the keyboard, do one of the following:
— Press the Left or Right Arrow keys to move to the previous or next frame.
— Hold Shift and press the Left or Right Arrow keys to jump back or forward 10 frames.
— Press Command-Left Arrow to jump to the first frame.
— Press Command-Right Arrow to jump to the last frame.
TIP: The mouse and keyboard shortcuts work in full-screen mode as well.
Settings
The Settings section of the Preview Render dialog includes three buttons that determine the overall
quality and appearance of your Flipbook Preview. These buttons also have a significant impact on
render times.
— HiQ: When enabled, this setting renders the preview in full image quality. If you need to see
what the final output of a node would look like, then you would enable the HiQ setting. If you are
producing a rough preview to test animation, you can save yourself time by disabling this setting.
— MB: The MB in this setting stands for Motion Blur. When enabled, this setting renders with motion
blur applied if any node is set to produce motion blur. If you are generating a rough preview and
you aren’t concerned with the motion blur for animated elements, then you can save yourself time
by disabling this setting.
— Some: When Some is enabled, only the nodes specifically needed to produce the image of the
node you’re previewing are rendered.
Size
Since RAM Flipbook Previews use RAM, it’s helpful to know how many frames you can render into RAM
before you run out of memory. The Flipbook Preview dialog calculates the currently available memory
and displays how many frames will fit into RAM. If you have a small amount of RAM in your computer
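The arithmetic behind that estimate can be sketched roughly: an uncompressed frame costs width × height × channels × bytes per channel, and the frame count is the available RAM divided by that cost. This is an assumption about how such an estimate works in general, not the dialog's exact formula:

```python
# Rough estimate of how many Flipbook frames fit in RAM.
# This mirrors the idea behind the dialog's estimate, not its exact formula.

def frames_that_fit(avail_ram_bytes, width, height, channels=4, bytes_per_channel=4):
    """Defaults assume 32-bit float RGBA; use bytes_per_channel=2 for 16-bit."""
    bytes_per_frame = width * height * channels * bytes_per_channel
    return avail_ram_bytes // bytes_per_frame

# Example: 8 GiB free, 1920x1080 float RGBA frames
print(frames_that_fit(8 * 1024**3, 1920, 1080))  # 258
```

Halving the color depth roughly doubles the number of frames that fit, which is one reason a lower-quality preview can play a longer range.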
Network
Network rendering is only available in Fusion Studio. For more information on network rendering,
see Chapter 4, “Rendering Using Saver Nodes,” in the Fusion Reference Manual.
Shoot On
Sometimes you may not want to render every single frame, but instead every second, third, or fourth
frame to save render time and get faster feedback. You can use the Step parameter to determine the
interval at which frames are rendered.
Frame Range
This field defaults to the current Render Range In/Out set in the Time Ruler to determine the start and
end frames for rendering. You can modify the range to render more or fewer frames.
Configurations
Once you’ve created a useful preview configuration, you can save it for later use by clicking the
Add button, giving it a name, and clicking OK.
Updating a Preview
This option is designed for the interactive frame-by-frame work of rotoscoping and painting. Right-
click over a preview in the viewer and choose Update from its contextual menu. When active, any
frames that are modified on the previewed node are automatically updated in the preview’s playback.
This lets you reserve the RAM for playback. You can keep it playing on a loop or ping-pong while you
work in another viewer.
Onscreen Controls
When it comes to adjusting images, the Control Panel provides very precise numerical values, but
sometimes visually positioning an element using onscreen controls can get you where you want to go
with less tweaking.
The controls shown in viewers are determined by which nodes are selected, not by the node displayed
in the viewer. For example, a downstream blur is easily viewed while manipulating the controls for a
selected polygon mask or merge. If multiple nodes are selected, the controls for every selected node
are shown simultaneously.
— Up and Down Arrow keys can be used to adjust the vertical position of an onscreen
control by small steps.
— Holding down the Command key while using the Up and Down Arrow keys reduces the scale of
each step by a factor of ten. Holding Shift increases the scale of each step by a factor of ten.
Toolbars
There are two toolbars in the viewer: a viewer toolbar, which always appears at the top of each
viewer and gives you control over what that viewer shows, and an optional node toolbar that appears
underneath that gives you contextual controls based on the node you’ve selected in the Node Editor.
Node Toolbars
In addition to the viewer toolbar, a node toolbar is displayed underneath it, at the top of the viewer
display area, whenever you select a node that exposes special controls. Examples of nodes that
expose a toolbar include text, masks, paths, paint strokes, and the 3D environment.
A/B Buffers
Each viewer has two buffers, each of which can contain images from different nodes, enabling easy
comparison of two different nodes within the same viewer by either toggling between buffers, or
via an adjustable split-wipe. Each buffer can be considered a complete and separate viewer within
the same viewer pane. The A buffer is always shown by default, so when you first load a node into a
viewer, the image loads into the A buffer.
TIP: Each buffer can be set to different display settings—for example, showing different
channels or different viewing LUTs, either applied to different nodes or applied to two
buffered versions of the same node.
4 (Optional) If you want to change the image that’s displayed on that side of the split, you can drag
new nodes onto either half of the viewer.
5 To turn off the wipe, click the Switch to Split Wipe View button again (or press /).
Even when you wipe, you can choose different display channels, view LUTs, or other display options
for each buffer individually by clicking on the half of the wipe you want to alter, and then choosing the
options you want that buffer to use. This allows easy comparison of different channels, LUTs, or other
viewer settings while wiping the same image, or different images.
Subviews
A subview is a “mini” viewer that appears within the main viewer. A subview is usually used to show
different information about the image.
The Subview menu with the Histogram subview displayed
The Subview drop-down menu and contextual menu show all the available subview types. Once you
choose an option from the list, that view will be displayed in the subview, and the Subview button will
show and hide it as you wish.
Navigator
The Navigator can only be used in a subview. It provides a small overview of the entire image, with
a rectangle that indicates the portion of the image that is actually visible in the main viewer. This is
useful when zooming in on an image in the main view.
Magnifier
The Magnifier can be used only in a subview. It shows a zoomed-in version of the pixels under the
cursor in the main viewer.
This is the only subview type that is not just a different view of the same node in the main viewer.
3D Image Viewer
The 3D Image Viewer is available when viewing a node from the 3D category.
A 3D Image Viewer as a subview
3D Histogram
The more advanced 3D Histogram Viewer shows the color distribution in an image within a 3D
cube. One advantage to a 3D Histogram is that it can accurately represent the out-of-range colors
commonly found in floating-point and high-dynamic-range images. It can also be used to look at
vector images like position, normal, velocity, and so on.
Color Inspector
The Color Inspector can only be used in a subview. The Color Inspector shows information about
the color channels of the pixel under the cursor. It will show all channels present, even the auxiliary
channels such as Z buffer, XYZ normals, and UV mapping channels.
Histogram
The Histogram Viewer is an analysis node that can be used to identify problems with the contrast and
dynamic range in an image. The graph shows the frequency distribution of colors in the image, including
out-of-range colors in floating-point images. The horizontal axis shows the colors from shadows to
highlights. The vertical axis shows the number of pixels in the image that occur at each level.
Image Info
The Image Info view can only be used in a subview. The Image Info tab shows a horizontal bar across
the top of the image with information about the frame size, pixel aspect, and color depth of the
viewed image.
The Image Info subview for viewing size, pixel aspect, and color depth information
Metadata
The content of this subview is based entirely on the amount of metadata in your image. Most Loaders
will give the color space and file path for the image. Much more information can be displayed if it
exists in the image.
The Metadata subview for viewing embedded metadata
Vectorscope
The Vectorscope Viewer duplicates the behavior of a specific type of video test equipment, displaying
a circular graph that helps to visualize the intensity of chrominance signals.
Waveform
The Waveform Viewer duplicates the behavior of a specific type of video test equipment, displaying a
line or bar graph that helps to visualize the voltage or luminance of a broadcast signal.
The 3D Viewer
Building a composite in 3D space has different requirements from traditional 2D compositing.
When a node from the 3D category or some particle systems is selected, a 3D Viewer is used to
display the scene. The 3D Viewer shows a representation of a composite in a true GPU-accelerated
3D environment.
For more information on 3D controls, see Chapter 25, “3D Compositing Basics,” in the Fusion
Reference Manual.
Another small change is that there’s a lower limit to the scale of a 3D scene. Continuing to zoom in
past this limit will instead move (“dolly”) the point of view forward. The mouse wheel will move forward
slowly, and the keyboard will move more quickly.
Critically, the 3D Viewer gives you additional control to rotate the viewer within the three dimensions
of the scene to better see your scene from different angles as you work.
TIP: These rotation controls can be used with the 3D Histogram subview as well.
Additionally, if you have a camera or spotlight in your scene, you can switch the viewer to face the
scene from the point of view of those objects.
The Camera3D’s controls will inherit the viewer’s position and angle values.
TIP: The Copy PoV To command uses the object’s own coordinate space; any
transformations performed downstream by another node are not taken into account.
POV Labels
As you switch the POV of the viewer, you can keep track of which POV is currently displayed via a text
label at the bottom-left corner of the viewer. Right-clicking directly on this label, or on the axis control
above it, acts as a shortcut to the Camera submenu, allowing you to easily choose another viewpoint.
A 3D scene using default lights (top), and the same scene with lighting turned on (bottom)
TIP: Attempting to load a Light node into a viewer all by itself will result in an empty
scene, with nothing illuminated. To see the effects of lights, you must view the Merge
3D node the light is connected to.
NOTE: The shadows shown in the 3D Viewer are always hard edged. Soft shadows are
available for output to the rest of your composition in the software renderer of the
Renderer3D node.
Transparency in 3D Viewers
Image planes and 3D objects are obscured by other objects in a scene depending on the X, Y, and
Z position coordinates of each object in 3D space. The default method used to determine which
polygons are hidden and which are shown based on these coordinates is called Z-buffering.
Z-buffering is extremely fast but not always accurate when dealing with multiple transparent layers in
a scene. Fortunately, there is another option for more complex 3D scenes with transparency: Sorted.
The Sorted method can be significantly slower in some scenes but will provide more accurate results
no matter how many layers of transparency happen to be in a scene.
The default behavior in the viewer is to use Z-buffering, but if your scene requires the Sorted method,
you can easily change this.
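The difference between the two methods can be sketched on a single pixel. This is a deliberately simplified illustration (real Z-buffering draws fragments in arbitrary order with a depth test; here it simply keeps the nearest one), but it shows why transparent layers need the back-to-front Sorted approach:

```python
# Each fragment is (depth, color, alpha); smaller depth = nearer the camera.

def z_buffer(fragments):
    """Keep only the nearest fragment -- fast, but transparency is lost."""
    return min(fragments, key=lambda f: f[0])[1]

def sorted_blend(fragments, background=0.0):
    """Sort back-to-front, then blend each layer over the accumulated result."""
    out = background
    for depth, color, alpha in sorted(fragments, key=lambda f: -f[0]):
        out = color * alpha + out * (1.0 - alpha)
    return out

frags = [(1.0, 1.0, 0.5),   # near layer: white, 50% transparent
         (2.0, 0.0, 1.0)]   # far layer: black, opaque
print(z_buffer(frags))      # 1.0 -- the near layer hides everything behind it
print(sorted_blend(frags))  # 0.5 -- the black layer shows through the white one
```

Sorting every transparent fragment back-to-front is what makes the Sorted method slower, but it is what produces the correct result no matter how many transparent layers overlap.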
The default grid of the 3D Viewer, with its origin at x = 0, y = 0, and z = 0
Vertex Normals
Normals indicate what direction each vertex of 3D geometry is facing, and they are used when
calculating lighting and texturing on an object. When viewing any kind of 3D geometry, including an
image plane or a full FBX mesh, you can display the normals for each object in a scene.
While there are four panes in the Quad view, they all show the same scene. When assigning views
within a Quad view, you can choose between displaying the Front, Left, Top, and Bottom orthographic
views and a Perspective view, or you can choose the view through any camera or spotlight that’s
present in the scene.
Guides
Guides are onscreen overlays used to help you compose elements within a boundary or along the
center vertical and horizontal axes. While guides are displayed in the viewer, they’re not rendered
into the scene. There are four commonly used guides that can be displayed, including Monitor Safety,
Safe Title, Center, and Film.
— Guide 1 contains four fields that specify the offset from the edges of the image for the left,
top, right, and bottom guides, in that order. As with all offsets in Fusion, this is a resolution-
independent number where 1 is the width of the full image and 0.5 is half the width of the image.
— Guide 2’s text box is used to set the aspect ratio of the projection area.
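Because these offsets are resolution independent, the same guide settings work at any frame size. A hypothetical helper showing the conversion the text describes, where 1 spans the full image width:

```python
# Converting a resolution-independent guide offset to pixels,
# where 1.0 is the full image width (per the description above).

def guide_to_pixels(offset, image_width):
    return offset * image_width

# A left guide offset of 0.25 on a 1920-pixel-wide image:
print(guide_to_pixels(0.25, 1920))  # 480.0 pixels in from the left edge
```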
Firstly, nodes will no longer be required to render portions of the image that will not be affected by
the node. This helps the renderer to optimize its performance. Secondly, Fusion can now keep track of
and apply a node’s effect to pixels that lie outside the visible portion of the image.
For example, consider the output of a Text+ node rendered against a transparent background. The
text occupies only a portion of the pixels in the image. Without Domain of Definition, you would be
required to process every pixel in the image needlessly. With a DoD, you are able to optimize effects
applied to the image, producing faster results and consuming less memory in the process.
The DoD is shown as two XY coordinates indicating the corners of an axis-aligned bounding box (in pixels)
For the most part, the DoD is calculated automatically and without the need for manual intervention.
For example, all the nodes in the Generator category automatically generate the correct DoD. For
nodes like Fast Noise, Mandelbrot, and Background, this is usually the full dimensions of the image. In
the case of Text+ and virtually all of the Mask nodes, the DoD will often be much smaller or larger.
The OpenEXR format is capable of storing the data window of the image, and Fusion will apply this as
the DoD when loading such an image through a Loader node and will write out the DoD through the
Saver node.
When using the Fusion page in DaVinci Resolve, clips from the Edit page timeline or Media Pool will
typically have the DoD default to the full image width of the source media. The exception is media
stored in OpenEXR format.
The DoD is established as soon as the image is created or loaded into the composition. From there,
it passes downstream, where viewers combine it with their Region of Interest in order to determine
exactly what pixels should be affected by the node. As you work, different nodes will automatically
shrink, expand, or move the DoD as they apply their effect to an image, causing the DoD to change
from node to node.
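Conceptually, the DoD is just an axis-aligned bounding box that nodes grow or shrink as they apply their effects, and that the viewer intersects with the RoI. A hypothetical sketch (the node behavior and numbers are illustrative only; Fusion's internal representation may differ):

```python
# The DoD as an axis-aligned bounding box (x1, y1, x2, y2) in pixels,
# and two common ways it changes as it moves through the node tree.

def expand_dod(dod, radius):
    """A filter such as a blur affects pixels beyond its input DoD."""
    x1, y1, x2, y2 = dod
    return (x1 - radius, y1 - radius, x2 + radius, y2 + radius)

def intersect(dod, roi):
    """The viewer only needs to render where the DoD and RoI overlap."""
    x1 = max(dod[0], roi[0]); y1 = max(dod[1], roi[1])
    x2 = min(dod[2], roi[2]); y2 = min(dod[3], roi[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

text_dod = (800, 400, 1120, 680)             # a Text+ covering part of the frame
blurred = expand_dod(text_dod, 20)           # (780, 380, 1140, 700)
print(intersect(blurred, (0, 0, 960, 540)))  # (780, 380, 960, 540)
```

If the intersection is empty, nothing in the region needs processing at all, which is where the speed and memory savings come from.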
When RoI is enabled and Show Region is selected from the menu, a rectangular RoI control appears in
the viewer. If this is the first time RoI has been enabled, it will be set to the full width and height of the
image. Otherwise, the last known position of the RoI for that view is used. However, if you want to set
the RoI to a custom area within the frame, you can do one of the following.
To reset the RoI to the full width and height of the current image, do one of the following:
— Choose Reset from the viewer menu next to the RoI button.
— Right-click anywhere within the viewer and choose Region > Reset Region from the contextual
menu or from the toolbar button menu.
— Disable the RoI control, which will also reset it.
The RoI improves not only rendering speed and memory use, but it can also reduce file I/O, since
Loaders and MediaIn nodes only load pixels from within the RoI, if one is specified. This does require
that the file format used supports direct pixel access. Cineon, DPX, and many uncompressed file
formats support this feature, as well as OpenEXR and TIFF in limited cases.
Please note that changes to the viewed image size or color depth will cause the pixels outside the RoI
to be reset to the image’s canvas color. This also happens when switching in and out of Proxy mode,
as well as during Proxy mode switching with Auto Proxy enabled. When the image size is maintained,
so are the last rendered pixel values outside the RoI. This can be useful for comparing changes made
within the RoI with a previous node state.
TIP: Right-clicking in a viewer and choosing Options > Show Controls for showing onscreen
controls will override the RoI, forcing renders of pixels for the entire image.
— The simplest form of a LUT is a 1D LUT. It accounts for one color channel at a time, so it can make
overall tonality changes but not very specific color changes.
— A 3D LUT looks at the red, green, and blue values of each pixel in combination. A 3D LUT
allows for large global changes as well as very specific color changes to be applied to images
very quickly.
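The distinction can be sketched with toy integer lookups. These are illustrations of the indexing only; real 3D LUTs store a coarse grid (such as 33×33×33) and interpolate between entries rather than enumerating every color:

```python
# 1D LUT: one curve applied to each channel on its own.
# Here, a simple 1.2x gain curve with clipping at 255.
lut_1d = [min(255, round(i * 1.2)) for i in range(256)]

def apply_1d(rgb, lut):
    # Each channel is mapped independently through the curve,
    # so only overall tonality can change.
    return tuple(lut[c] for c in rgb)

def apply_3d(rgb, cube):
    # All three channels index the table together, so one specific
    # color can be remapped without affecting any other color.
    r, g, b = rgb
    return cube[r][g][b]

print(apply_1d((100, 50, 200), lut_1d))  # (120, 60, 240)
```

Because a 3D table is indexed by the full RGB triplet, it can express cross-channel effects, like desaturating only the reds, that no combination of per-channel 1D curves can reproduce.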
Image LUTs
Image LUTs can be applied to each viewer. In fact, you can even apply separate Image LUTs for the A
and B buffers of a single viewer. These LUTs can only be applied to 2D images and not to 3D scenes.
Image LUTs are routinely used to get from one scene-referred color space to another. For example,
if you’re working with log-encoded media but want to see how the image will look in the final color
space, you can choose a LUT to make the image transform as a preview.
Buffer LUTs
The Buffer LUT is applied to the viewers regardless of contents, including 3D scenes, 3D materials, and
subview types. Only one Buffer LUT can be applied. If a 2D image is being displayed with an Image
LUT applied, then the Buffer LUT is applied to the result of the image LUT. Buffer LUTs are typically
used to simulate another output color space that’s unique to the display you’re using—for instance,
making a DCI-P3 projector show the image as it would look on an sRGB monitor.
When dealing with nonlinear files from many of today’s digital cinema cameras, a modern workflow
would be to convert everything to linear at the beginning of the node tree, then create your
composite, and then apply an Image LUT or Buffer LUT that matches the color space you want it to be
in for either grading in the Color page or for final output.
However, in more elaborate production pipelines, you may need to apply multiple LUTs consecutively.
Since the purpose of the View LUT is to provide an unchanging correction for the monitor or the file’s
color space, however, these splines cannot be animated.
Macro LUTs
Any macro node can also be used as a viewer LUT simply by saving the macro’s .setting file to the
correct Fusion directory.
For this to work, the macro must have one image input and one image output. Any controls exposed
on the macro will be available when the Edit option is selected for the LUT. For more information
about creating macros, see Chapter 6, “Node Groups, Macros, and Fusion Templates,” in the Fusion
Reference Manual.
LUT Presets
All LUTs available to DaVinci Resolve are also accessible to the Fusion page, which includes custom
LUTs you’ve installed, as well as preset LUTs that come installed with DaVinci Resolve, such as the
highly useful VFX IO category that includes a wide variety of miscellaneous to Linear and Linear to
miscellaneous transforms. All of these LUTs appear by category in the viewer LUT menu.
Fuse LUTs
Fuses are scriptable plugins that are installed with the application or that you create in Fusion. A fuse
named CT_ViewLUTPlugin can be applied as a LUT to a viewer. You can also script fuses that use
graphics hardware shaders embedded into the LUT for real-time processing. Since fuse LUTs require
shader-capable graphics hardware, they cannot be applied in software. For more information about
Fuses, see the Fusion Scripting Guide located on the Blackmagic Design website.
Buffer LUTs are often useful for applying monitor corrections, which do not usually change
between projects.
The Remove and Add Gamma checkboxes let you choose to do the gamut conversion with linear
or nonlinear gamma, or they let you simply remove or add the appropriate gamma values without
changing the color space.
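The reason those checkboxes matter can be shown with a quick sketch. Gamut conversions are 3×3 matrix operations that are only mathematically correct on linear values, so a typical conversion removes gamma, applies the matrix, and re-adds gamma. The matrix below is illustrative, not a real colorimetric transform, and the 2.2 exponent is a stand-in for whatever gamma the source uses.

```python
def remove_gamma(c, g=2.2):
    # Decode gamma-encoded values to linear.
    return [ch ** g for ch in c]

def add_gamma(c, g=2.2):
    # Re-encode linear values with display gamma.
    return [ch ** (1.0 / g) for ch in c]

def apply_matrix(c, m):
    # 3x3 gamut (primaries) conversion.
    return [sum(m[r][k] * c[k] for k in range(3)) for r in range(3)]

# Illustrative matrix only; rows sum to 1 so white maps to white.
M = [[0.90, 0.10, 0.00],
     [0.05, 0.90, 0.05],
     [0.00, 0.10, 0.90]]

def convert_gamut(c, linearize=True):
    # linearize=True mirrors "Remove Gamma -> matrix -> Add Gamma";
    # False applies the matrix directly to gamma-encoded values.
    if linearize:
        return add_gamma(apply_matrix(remove_gamma(c), M))
    return apply_matrix(c, M)
```

Comparing the two paths on mid-gray shows why the checkboxes exist: the results differ whenever gamma is nonlinear, and the linearized path is the colorimetrically correct one.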
When you select a node to be displayed, the image produced is processed before it is shown in the
viewers. The processing order is slightly different for 2D images and 3D scenes.
2D images first have the image LUT applied, and the result is composited over the checker underlay.
3D scenes are instead rendered with OpenGL.
Order of processing
For either 2D or 3D, the result may be drawn to an offscreen buffer where a Buffer LUT can be applied,
along with dithering, a full view checker underlay, and any stereo processing. The final result is then
drawn to the viewer and any onscreen controls are drawn on top.
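As a rough sketch of that order, with each stage reduced to a toy function on a single RGBA pixel (the real pipeline operates on whole buffers and is GPU-accelerated, so this is only an illustration of sequence, not of implementation):

```python
def over_checker(rgba, checker=0.5):
    # Composite a premultiplied pixel over the checker underlay using its alpha.
    r, g, b, a = rgba
    return (r + checker * (1 - a), g + checker * (1 - a), b + checker * (1 - a), 1.0)

def view_pixel(rgba, image_lut=None, buffer_lut=None):
    # 2D path: image LUT first, then checker composite, then the Buffer LUT.
    if image_lut:
        rgba = image_lut(rgba)
    rgba = over_checker(rgba)
    if buffer_lut:
        rgba = buffer_lut(rgba)
    return rgba

# A fully transparent pixel shows only the checker underlay.
result = view_pixel((0.0, 0.0, 0.0, 0.0))
```

Because the Buffer LUT comes last, it affects everything in the viewer, including the checker underlay, which matches its role as a display correction.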
A complete stacked LUT configuration can be saved to and loaded from a .viewlut file, as
described below.
LUT Settings
The most straightforward way to save a LUT you have created using the Fusion View LUT Editor is to
use the LUT > Save menu found in the viewer contextual menu. The settings are saved as an ASCII
file with the extension .viewlut in the LUTs folder. Any files with this extension found in that folder will
appear in the Image LUT menus for ease of loading. You can also load the settings that are not found
in the menu by choosing LUT > Load from the viewer’s contextual menu.
The Import LUT option will load LUT files back into the Curve Editor, or alternatively, if the file has been
saved in Fusion’s LUTs folder, it will appear in the LUT drop-down menu list.
TIP: This is one way to move LUTs between viewers or to and from the Color Curves node
or any other LUT Editor in Fusion.
This allows almost any combination of nodes to be used as a viewer LUT. This is the most flexible
approach but is also potentially the slowest. The LUT nodes must be rendered solely on the CPU,
whereas other methods are GPU-accelerated.
The LUT default settings found in the View panel of the Fusion Settings window
Viewer Settings
It is often preferable to switch between entirely different viewer configurations while working.
For example, while keying, the image may be in the main viewer, and the alpha channel may be in a
subview. Viewer settings toward the end of a project may consist of the histogram, vectorscope, and
waveform, as well as the image in a view set to Quad view.
Fusion provides the ability to quickly load and save viewer settings to help reduce the amount of effort
required to change from one configuration to another.
Show Controls
When onscreen controls are not necessary or are getting in the way of evaluating the image, you can
temporarily hide them using the Show Controls option. This option is toggled using Command-K.
Checker Underlay
The Checker Underlay shows a checkerboard beneath transparent pixels to make it easier to identify
transparent areas. This is the default option for 2D viewers. Disabling this option replaces the
checkerboard with black.
You can enable the Show Square Pixels option to override the aspect correction. Show Square Pixels
can also be toggled on and off using the 1:1 button in the viewer toolbar.
Gain/Gamma
Exposes or hides a simple pair of Gain and Gamma sliders that let you adjust the viewed image.
Especially useful for “gamma slamming” a composite to see how well it holds up with a variety of
gamma settings. Defaults to no change.
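As a sketch of what those two sliders do to viewed pixel values — the exact math Fusion uses isn't documented here, so treat this as the conventional gain/gamma formula rather than the application's implementation:

```python
def view_adjust(v, gain=1.0, gamma=1.0):
    # Conventional viewer adjustment: gain scales linearly,
    # gamma bends the midtones without moving black or white.
    return max(v * gain, 0.0) ** (1.0 / gamma)

unchanged = view_adjust(0.25)             # defaults leave the value alone
lifted = view_adjust(0.25, gamma=2.0)     # midtones rise; black and white stay put
```

"Gamma slamming" is simply sweeping the gamma value up and down to check that a composite's blacks, edges, and grain still match at every setting.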
360° View
Sets the Fusion page viewer to properly display spherical imagery in a variety of formats, selectable
from this submenu. Disable toggles 360 viewing on or off, while Auto, LatLong, Vert Cross, Horiz
Cross, Vert Strip, and Horiz Strip let you properly display different formats of 360º video.
Alpha Overlay
When you enable the alpha overlay, the viewer will show the alpha channel overlaid on top of the
color channels. This can be helpful when trying to see where one image stops and another begins in a
composite. This option is disabled by default.
Follow Active
Enabling the Follow Active option will cause the viewer to always display the currently active node in
the Node Editor. This option is disabled by default, so you can view a different node than what you
control in the Control Panel.
Show Labels
The Show Labels option lets you toggle the display of the text that sometimes accompanies onscreen
controls in the viewer without disabling the functions that are showing those overlays, and without
hiding the onscreen controls themselves.
Editing Parameters
in the Inspector
The Inspector is where you adjust the parameters of each node to
do what needs to be done. This chapter covers the various node
parameters and methods for working with the available controls.
This chapter covers methods for opening node parameters in the Inspector to edit them in different
ways according to the type of available controls.
— The Tools panel is where the parameters of selected nodes appear so you can edit them.
Inspector Height
A small arrow button at the far right of the UI toolbar lets you toggle the Inspector between full-height
and half-height views, depending on how much room you need for editing parameters.
In maximized height mode, the Inspector takes up the entire right side of the UI, letting you see
every control that a node has available, or creating enough room to see the parameters of two or
three pinned nodes all at once. In half-height mode, the top of the Inspector is aligned with the tops of
the viewers, expanding the horizontal space that’s available for the Node Editor.
— Auto Control Open: When enabled (the default), whichever node is active automatically opens
its controls in the Inspector. When disabled, selecting an active node opens that node’s Inspector
header in the Inspector, but the parameters remain hidden unless you click the Inspector header.
— Auto Control Hide: When enabled (the default), only selected nodes are visible in the Inspector,
and all deselected nodes are automatically removed from the Inspector to reduce clutter. When
disabled, parameters from selected nodes remain in the Inspector, even when those nodes are
deselected, so that the Inspector accumulates the parameters of every node you select over time.
— Auto Control Close Tools: When enabled (the default), only the parameters for the active
node can be exposed. When disabled, you can open the parameters of multiple nodes in the
Inspector if you want.
— Auto Controls for Selected: When enabled (the default), selecting multiple nodes opens multiple
control headers for those nodes in the Inspector. When disabled, only the active node appears in
the Inspector; multi-selected nodes highlighted in white do not appear.
When you select a single node so that it’s highlighted orange in the Node Editor, all of its parameters
appear in the Inspector. If you select multiple nodes at once, Inspector headers appear for each
selected node (highlighted in white in the Node Editor), but the parameters for the active node
(highlighted in orange) are exposed for editing.
Only one node’s parameters can be edited at a time, so clicking another node’s Inspector header
opens that node’s parameters and closes the parameters of the previous node you were working on.
This also makes the newly opened node the active node, highlighting it orange in the Node Editor.
While the Pin button is on, that node’s parameters remain open in the Inspector. If you select
another node in the Node Editor, that node’s parameters appear beneath any pinned nodes.
You can have as many pinned nodes in the Inspector as you like, but the more you have, the more
likely you’ll need to scroll up or down in the Inspector to get to all the parameters you want to edit.
To remove a pinned node from the Inspector, just turn off its Pin button in the Inspector header.
When you select multiple nodes at once, you’ll see multiple headers in the Inspector. By default, only
the parameters for the active node (highlighted orange in the Node Editor) can be opened at any
given time, although you can change this behavior in Fusion’s Preferences.
To turn nodes off and on: Each Inspector header has a toggle switch to the left
of its name, which can be used to enable or disable that node. Disabled nodes
pass image data from the previous upstream node to the next downstream node
without alteration.
To change the Inspector header name: The name of the node corresponding to
that Inspector header is displayed next. You can change the name by right-clicking
the Inspector header to expose contextual menu commands similar to those found
when you right-click a node in the Node Editor and choosing Rename. Alternatively,
you can click an Inspector header and press F2 to edit its name. A Rename dialog
appears, where you can enter a new name and click OK (or press Return).
To pin Inspector controls: Clicking the Pin button “pins” that node’s parameters
in the Inspector so they remain in place, even if you deselect that node. You can
have as many pinned nodes as you like in the Inspector, but the more you have,
the more likely you’ll be scrolling up and down the Inspector to navigate all the
available parameters.
To lock nodes: Clicking the Lock button locks that node so no changes
can be made to it.
Versioning Nodes
Each of the six Version buttons is capable of containing separate parameter settings for that node, making it easy to save
and compare up to six different versions of settings for each node. All versions are saved along with
the node in the Node Editor for future use.
An orange underline indicates the currently selected version, which is the version that’s currently
being used by your composition. To clear a version you don’t want to use any more, right-click that
version number and choose Clear from the contextual menu.
Parameter Tabs
Underneath the Inspector header is a series of panel tabs, displayed as thematic icons. Clicking one of
these icons opens a separate tab of parameters, which are usually grouped by function. Simple nodes,
such as the Blur node, consist of two tabs where the first contains all of the parameters relating to
blurring the image, and the second is the Settings tab.
The following controls are common to most nodes, although some are node-specific. For example,
Motion Blur settings have no purpose in a Color Space node.
For example, if you wanted to use the Transform node to affect only the green channel of an image,
you can turn off the Red, Blue, and Alpha checkboxes. As a result, the green channel is processed by
this operation, and the red, blue, and alpha channels are copied straight from the node’s input to the
node’s output, skipping that node’s processing to remain unaffected.
In these cases, the Common Control channel boxes are instanced to the channel boxes found
elsewhere in the node. Blur, Brightness/Contrast, Erode/Dilate, and Filter are examples of nodes that
all have RGBY checkboxes in the main Controls tab of the Inspector, in addition to the Settings tab.
TIP: The Apply Mask Inverted checkbox option operates only on effects masks, not on
garbage masks.
Multiply By Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the Mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., those set to 0) to
become black. This creates a premultiplied image.
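The operation described is a straight per-pixel multiply; a minimal sketch:

```python
def multiply_by_mask(rgb, mask):
    # Pixels outside the mask (mask == 0) go to black,
    # producing a premultiplied image.
    return [ch * mask for ch in rgb]

inside  = multiply_by_mask([0.8, 0.4, 0.2], 1.0)   # unchanged
edge    = multiply_by_mask([0.8, 0.4, 0.2], 0.5)   # half-strength at soft edges
outside = multiply_by_mask([0.8, 0.4, 0.2], 0.0)   # black
```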
Sample Controls
The Sample Controls are only displayed once the Use Object or Use Material checkbox is enabled.
These controls select which ID is used to create a mask from the Object or Material channels
saved in the image. You use the Sample button to grab IDs from the image in the viewer, the
same way you use the Color Picker to select a color, by holding down the left mouse button
on the Sample button, then dragging over to the viewer to the part of the image you want to
select. The image or sequence must have been rendered from a 3D software package with those
channels included.
Correct Edges
The Correct Edges checkbox is only displayed once the Use Object or Use Material checkbox
is enabled. When the Correct Edges checkbox is enabled, the Coverage and Background Color
channels are used to separate and improve the effect around the edge of the object. When
disabled (or no Coverage or Background Color channels are available), aliasing may occur on the
edge of the mask.
When Motion Blur is disabled, no additional controls are displayed. However, turning on Motion Blur
reveals four additional sliders with which you can customize the look of the motion blur you’re adding
to that node.
Quality
Quality determines the number of samples used to create the blur. The default quality setting
of 2 will create two samples on either side of an object’s actual motion. Larger values produce
smoother results but will increase the render time.
Shutter Angle
Shutter Angle controls the angle of the virtual shutter used to produce the Motion Blur effect.
Larger angles create more blur but increase the render times. A value of 360 is the equivalent of
having the shutter open for one whole frame exposure. Higher values are possible and can be
used to create interesting effects. The default value for this slider is 100.
Center Bias
Center Bias modifies the position of the center of the motion blur. Adjusting the value allows for
the creation of trail-type effects.
Sample Spread
Adjusting Sample Spread modifies the weight given to each sample. This affects the brightness of
the samples set with the Quality slider.
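Fusion doesn't document the exact sampling math, but the interaction of these sliders can be pictured with a sketch like the following, where Quality sets the sample count on either side of the current frame, Shutter Angle sets the overall width in frames (360° equals one whole frame, per the text above), and Center Bias shifts the sample window to create trails. The formula is a hypothetical illustration, not Fusion's implementation.

```python
def blur_sample_times(quality=2, shutter_angle=100.0, center_bias=0.0):
    # Hypothetical sketch: sample offsets (in frames) around the current frame.
    half_width = (shutter_angle / 360.0) / 2.0   # 360 degrees = one whole frame
    n = quality                                   # samples on either side
    times = []
    for i in range(-n, n + 1):
        t = (i / n) * half_width + center_bias * half_width
        times.append(t)
    return times

times = blur_sample_times()                    # 5 offsets centered on the frame
biased = blur_sample_times(center_bias=1.0)    # window shifted to trail behind
```

With the default bias of 0 the window straddles the frame; pushing the bias toward 1 moves every sample to one side, which is what produces the trail-type look described under Center Bias.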
Scripting
Scripting fields are present on every node and contain one or more editable text fields that can be
used to add scripts that process when that node is rendering. For more information on the contents
of this tab, please consult the Scripting documentation.
Comments
A Comments field is found on every node and contains a single text field that is used to add comments
and notes to that node. To enter text, simply click within the field to place a cursor, and begin typing.
When a note is added to a node, the comments icon appears in the Control Header and can be seen
in a node’s tooltip when the cursor is placed over the node in the Node Editor. The contents of the
Comments tab can be animated over time, if required.
Additional controls appear under this tab if the node is a Loader. For more information, see Chapter 43,
“Generator Nodes,” in the Fusion Reference Manual.
Clicking on the gutter to the left or right of the handle will increase or decrease the value. Holding
Command while clicking on the gutter will adjust the values in smaller increments. Holding Shift while
clicking will adjust the value in larger increments.
While slider controls use a minimum and maximum value range, entering a value in the edit box
outside that range will often expand the range of the slider to accommodate the new value. For
example, it is possible to enter 500 in a Blur Size control, even though the Blur Size sliders default
maximum value is 100. The slider will automatically adjust its maximum displayed value to allow entry
of these larger values.
If the slider has been altered from its default value, a small circular indicator will appear below the
gutter. Clicking on this circle will reset the slider to its default.
Thumbwheel
A Thumbwheel control is identical to a slider except it does not have a maximum or minimum value.
To make an adjustment, you drag the center portion left or right or enter a value directly into the
edit box. Thumbwheel controls are typically used on angle parameters, although they do have other
uses as well.
If the thumbwheel has been altered from its default value, a small circular indicator will appear
below the thumbwheel. Clicking on this circle will reset the thumbwheel to its default.
Range Controls
The Range controls are actually two separate controls, one for setting the Low Range value and one
for the High Range value. To adjust the values, drag the handles on either end of the Range bar. To
slide the high and low values of the range simultaneously, drag from the center of the Range bar. You
can also expand or contract the range symmetrically by holding Command and dragging either end of
the Range bar. You find Range controls on parameters that require a high and low threshold, like the
Matte Control, Chroma Keyer, and Ultra Keyer nodes.
TIP: You can enter floating-point values in the Range controls by typing the values in using
the Low and High numeric entry boxes.
Checkboxes
Checkboxes are controls that have either an On or Off value. Clicking on the checkbox control will
toggle the state between selected and not selected. Checkboxes can be animated, with a value of 0 for
Off and a value of 1.0 or greater for On.
Drop-Down Menus
Drop-down menus are used to select one option from a menu. Once the menu is open, choosing one
of the items will select that entry. When the menu is closed, the selection is displayed in the Inspector.
Button Arrays
Button arrays are groups of buttons that allow you to select from a range of options. They are
almost identical in function to drop-down menu controls, except that in the case of a button array it
is possible to see all of the available options at a glance. Often button arrays use icons to make the
options more immediately comprehensible.
The Color panel is extremely flexible and has four different techniques for selecting and
displaying colors.
TIP: Color can be represented by 0–1, 0–255, or 0–65535 by setting the range you want in
the Preferences > General panel.
Each operating system has a slightly different layout, but the general idea is the same. You can
choose a color from the swatches provided—the color wheel on macOS, or the color palette on
Windows. However you choose your color, you must click OK for the selection to be applied.
The Color Chooser
You also have access to the built-in color chooser, which includes sections for choosing grayscale
values, as well as the currently chosen hue with different ranges of saturation and value. A hue bar
and alpha bar (depending on the node) let you choose different values.
Picking Colors from an Image
If you are trying to match the color from an image in the viewer, you can hold down the cursor
over the Eyedropper, and then drag the pointer into the viewer. The pointer will change to an
Eyedropper, and a pop-up swatch will appear above the cursor with the color you are hovering
over and its values. When you are over the color you want, release the mouse button to set
the color.
The Eyedropper with color swatch
Gradient types include Radial, Linear, Angle, Reflect, and Square. Square draws the gradient by
using a square pattern when the starting point is at the center of the image.
Start and End Position
The Start and End Position controls have a set of X and Y edit boxes that are useful for fine-tuning
the start and end position of the gradient. The position settings are also represented by two
crosshair onscreen controls in the viewer, which may be more practical for initial positioning.
You can add, move, copy, and delete colors from the gradient using the Colors bar.
To delete a color stop from the Colors bar, do one of the following:
— Drag the color stop up past the Gradient Colors bar.
— Select the color stop, then click the red X button to delete it.
Interpolation Space
The Gradient Interpolation Method pop-up menu lets you select what color space is used to calculate
the colors between color stops.
Offset
When you adjust the Offset control, the position of the gradient is moved relative to the start and end
markers. This control is most useful when used in conjunction with the repeat and ping-pong modes
described below.
Once/Repeat/Ping-Pong
These three buttons are used to set the behavior of the gradient when the Offset control scrolls the
gradient past its start and end positions. The Once button is the default behavior, which keeps the
color continuous for offset. Repeat loops around to the start color when the offset goes beyond the
end color. Ping-pong repeats the color pattern in reverse.
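The three modes can be sketched as functions of a gradient coordinate t (0 at the start marker, 1 at the end), with the Offset added before the mode is applied. This is an illustrative model of the behavior described above, not Fusion's internal code.

```python
def gradient_coord(t, offset=0.0, mode="once"):
    t = t + offset
    if mode == "once":        # clamp: colors stay constant past the ends
        return min(max(t, 0.0), 1.0)
    if mode == "repeat":      # wrap around to the start color
        return t % 1.0
    if mode == "pingpong":    # reflect the pattern back and forth
        t = t % 2.0
        return 2.0 - t if t > 1.0 else t
    raise ValueError(mode)
```

Scrolling a quarter past the end (t = 1.25) shows the difference: Once holds the end color, Repeat jumps back near the start, and Ping-Pong mirrors back toward the end.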
To attach a modifier:
1 Right-click over the parameter to which you want to attach a modifier.
2 Make a selection from the Modifier submenu in the contextual menu.
Animating Parameters
in the Inspector
Fusion can keyframe most parameters in most nodes, in order to create animated effects such as
animated transforms, rotoscoping with splines, dynamically altering warping behaviors, and so on; the
list is endless. For convenience, a set of keyframing controls are available within the Inspector next to
each keyframable parameter. These controls are:
— A gray Keyframe button to the right of each keyframable parameter. Clicking this gray button creates
a keyframe at the current position of the playhead, and turns the button orange.
— When you add a keyframe to a parameter, moving to a new frame and changing the parameter
will automatically add a keyframe at the current position.
— Whenever the playhead is sitting right on top of a keyframe, this button turns orange. Clicking an
orange Keyframe button deletes the keyframe at that frame and turns the button gray again.
— Small navigation arrows appear to the right and left if there are more keyframes in those
directions. Clicking on navigation arrows to the right and left of keyframes jumps the playhead to
those keyframes.
Once you’ve keyframed one or more parameters, if Show Modes/Options has been enabled, the
node containing the parameters you keyframed displays a Keyframe badge, to show that node
has been animated.
TIP: If you change the default spline type from Bézier, the contextual menu will display the
name of the current spline type.
Attaching a Parameter to
an Existing Animation Curve
Multiple parameters can be connected to the same animation curve. This can be an invaluable
timesaver if you are identically animating different parameters in a node.
Connecting Parameters
It is often useful to connect two parameters together even without an animation curve. There are two
methods you can use.
TIP: Disabling the Auto Control Close Tools option in Fusion’s General preferences and then
selecting two nodes in the Node Editor will allow you to pick whip between two parameters
from different nodes.
The Expression field can further be used to add mathematical formulas to the value received from the
target parameter.
For more information on Pick Whipping and Expressions, see Chapter 12, “Using Modifiers, Expressions,
and Custom Controls,” in the Fusion Reference Manual.
User custom controls can be added or edited via the Edit Control dialog. Right-click the name of a
node in the Inspector (in the header bar) and choose Edit Control from the contextual menu. A new
window will appear, titled Edit Control.
In the Input attributes, you can select an existing control or create a new one, name it, define the type,
and assign it to a tab. In the Type attributes, you define the input controls, the defaults and ranges,
and whether it has an onscreen preview control. The Input Ctrl attributes box contains settings
specific to the selected node control, and the View Ctrl attributes box contains settings for the preview
control, if any.
We could use the Center input control, along with its preview control, to set an angle and distance
from directly within the viewer using expressions.
1 Right-click the label for the Length parameter, choose Expression from the contextual menu, and
then paste the following expression into the Expression field that appears:
-sqrt(((Center.X-.5)*(Input.XScale))^2+((Center.Y-.5)*(Input.YScale)*(Input.Height/Input.Width))^2)
2 Next, right-click the label for the Angle parameter, choose Expression from the contextual menu,
and then paste the following expression into the Expression field that appears:
atan2((Center.Y-.5)/(Input.OriginalWidth/Input.X), .5-Center.X) * 180 / pi
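The math in those two expressions is a polar conversion of the Center control's offset from the image midpoint, with an aspect/scale correction so distances are measured consistently. A plain-Python equivalent is sketched below; the scale and aspect factors are reduced to explicit arguments, and the atan2 orientation shown is one plausible reading of the expression's convention, so treat the specifics as illustrative.

```python
import math

def length_angle(cx, cy, xscale=1.0, yscale=1.0, aspect=1.0):
    # aspect = image height / width, mirroring the Height/Width term
    # in the Length expression above.
    dx = (cx - 0.5) * xscale
    dy = (cy - 0.5) * yscale * aspect
    length = -math.sqrt(dx * dx + dy * dy)   # negated, as in the Length expression
    angle = math.atan2(cy - 0.5, 0.5 - cx) * 180.0 / math.pi
    return length, angle

length, angle = length_angle(0.75, 0.5)   # a point right of center
```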
To hide the Length and Angle, we’ll run the UserControls script again. This time when we select the
Length and Angle IDs, we’ll choose Hide in the dialog. Press OK for each.
Finally, to change the options available in the Type, we have two options. We can hide the buttons and
use a checkbox instead, or we can change the MultiButton from four entries to two. Let’s try both.
To add the checkbox, run UserControls again, but this time instead of selecting an existing ID, we’ll
type Centered into the Name. This will set the name and the ID of our input to Centered. The Type
is set to Number, and the Page is set to Controls. Now in the Type Attributes, set the Input Ctrl to
be CheckboxControl. Press OK, and now we have our checkbox. To make the new control affect
the Type, add a SimpleExpression to the Type:
iif(Centered==1, 2, 0).
Once that’s done, we can use the UserControls to hide the Type control.
To make a new MultiButton, run the UserControl script, and add a new control ID, TypeNew.
You can set the Name to be Type, as the Names do not need to be unique, just the IDs. Set the
Type to Number, the Page to Controls, and the Input Ctrl to MultiButtonControl. In the Input Ctrl
attributes, we can enter the names of our buttons. Let’s do Linear and Centered. Type them in and
hit Add for each. Press OK, and we have our new buttons with the unneeded options removed.
To make this new control affect the original Type, add a SimpleExpression to the Type:
iif(TypeNew==0, 0, 2).
Once that’s done, we can use the UserControls to hide the original Type control.
Animating in Fusion’s
Keyframes Editor
This chapter covers how you can keyframe effects in Fusion’s
Inspector and how you can edit clips, effects, and keyframes in the
Keyframes Editor.
For convenience, a set of keyframing controls is available within the Inspector next to each
keyframable parameter. These controls are:
— A gray Keyframe button to the right of each keyframable parameter. Clicking this gray button creates
a keyframe at the current position of the playhead, and turns the button orange.
— Whenever the playhead is sitting right on top of a keyframe, this button turns orange. Clicking an
orange Keyframe button deletes the keyframe at that frame and turns the button gray again.
— Small navigation arrows appear to the right and left if there are more keyframes in those
directions. Clicking on navigation arrows to the right and left of keyframes jumps the playhead to
those keyframes.
Once you’ve keyframed one or more parameters, if Show Modes/Options has been enabled, the
node containing the parameters you keyframed displays a Keyframe badge to show that node has
been animated.
Once you’ve started keyframing node parameters, you can edit their timing in the Keyframes
Editor and/or Spline Editor.
— To adjust the timing of elements in a project, whether they’re clips or effects. You can trim, slide,
and extend clips, adjust the timing of an animation spline, or trim the duration of an effects node.
You can freely rearrange the order of nodes in the Timeline without affecting the layering order of
your composition. All compositing operations are handled in the Node Editor, while the Keyframes
Editor manages the timing of your composition.
— To create and/or edit keyframes that you’ve applied to effects in a track-based manner, you can
retime keyframes, add and delete keyframes, and even edit keyframe values
Collapse/Open All
A quick way to open or close all available keyframe tracks at
once is to use the Expand/Collapse Tool Controls commands
in the Keyframe Timeline Option menu.
The Playhead
As elsewhere in Fusion, the playhead is a red vertical bar that runs through the Timeline view to
indicate the position of the current frame or time. The Keyframes Editor playhead is locked to the
viewer playhead, so the image you’re viewing is in sync.
You must click on the playhead directly to drag it, even within the Timeline ruler (clicking and
dragging anywhere else in the Timeline ruler scales the Timeline). Additionally, you can jump the
playhead to a new location by holding down the Command-Option keys and clicking in the track area
(not the Timeline ruler).
For example, if you’re animating a blur, then the Key Frame row shows the frame each keyframe is
positioned at, and the Blur1BlurSize row shows the blur size at each keyframe. If you change the Key
Frame value of any keyframe, you’ll move that keyframe to a new frame of the Timeline.
TIP: Selecting a node’s name from the Timeline header also selects the node’s tile in the
Node Editor, with its controls displayed in the Inspector.
Trimming Segments
Trimming segments has different effects on Loaders, MediaIn and Effect nodes:
Furthermore, each keyframe track, whether open or closed, exposes a miniature curve overlay
that provides a visual representation of the rise and fall of keyframed values. This little overlay isn’t
directly editable.
The Drip1 segment has its keyframe tracks exposed, while the Text1 segment has
its keyframe tracks collapsed so they’re displayed within the segment.
To change the position of a keyframe using the toolbar, do one of the following:
— Select a keyframe, and then enter a new frame number in the Time Edit box.
— Choose T Offset from the Time Editor drop-down, select one or more keyframes, and
enter a frame offset.
— Choose T Scale from the Time Editor drop-down, select one or more keyframes,
and enter a scale factor.
— Show All, which shows all node layers in the current composition.
— Show None, which hides all layers.
— Show Tools at Current Time, which only displays node layers under the playhead.
— If you’ve created custom filters, they appear here as well, in alphabetical order.
Filters that you’ve created in the Timeline panel of the Fusion Settings window appear in the
Keyframes Editor Option menu.
To delete a filter:
1 Choose Create/Edit Filters from the Keyframes Editor Option menu to open the Timeline panel of
the Fusion Settings window. This is where you can delete Timeline filters.
2 Choose the filter you want to delete from the Filter pop-up menu.
3 Click the Delete button, and when a dialog asks if you really want to do that, click OK.
Selected Filtering
Choosing “Show only selected tools” from the Keyframes Editor Option menu filters out all segments
except for layers corresponding to selected nodes. This option can be turned on or off.
TIP: When “Show only selected tools” is enabled, you can continue to select nodes in the
Node Editor to update what’s displayed in the Keyframes Editor.
— You can use the Tree Item Order Selection menu to sort the tracks by an assigned number.
— You can use the Sort pop-up menu.
If you begin numbering nodes in the track header and change your mind or decide on a different
order, you can choose Restart to begin numbering again or choose Cancel to keep the current order.
— All Tools: Forces all tools currently in the Node Editor to be displayed in the Keyframes Editor.
— Hierarchy: Sorts with the most background layers at the top of the header, through to the most
foreground layers at the bottom, following the connections of the nodes in the Node Editor.
— Reverse: The opposite of Hierarchy, working backward from the last node in the Node Editor
toward the most background source node.
— Names: Sorts by the alphabetical order of the nodes, starting at the top with the beginning
of the alphabet.
— Start: Orders layers based on their starting point in the composition. Nodes that start earlier
in the Global project time are listed at the top of the header, while nodes that start later are
at the bottom.
— Animated: Restricts the Timeline to showing animated layers only. This is an excellent mode to
use when adjusting the timing of animations on several nodes at once.
The most important attribute of a marker is its position. To be useful, a marker must sit on the
frame you intended. Hovering the pointer over a marker displays a tooltip with its current frame
position. If the marker is on the wrong frame, you can drag it along the Time Ruler to reposition it.
Markers added to the Time Ruler are editable in the Fusion page, and the changes appear back in the
other DaVinci Resolve pages. Time Ruler markers can be added, moved, deleted, renamed, and given
descriptive notes from within Fusion’s Keyframes or Spline Editor.
NOTE: Markers attached to clips in the Edit page Timeline are visible on MediaIn nodes in
Fusion’s Keyframes Editor but not editable. They are not visible in the Spline Editor.
Jumping to Markers
Double-clicking a marker jumps the playhead to that marker’s position.
Renaming Markers
By default, a marker uses the frame number in its name, but you can give it a more descriptive name
to go along with the frame number, making it easier to identify. To rename a marker in Fusion, right-
click over the marker and choose Rename Guide from the contextual menu. Enter a name in the dialog
and click OK.
The Marker List shows all the current markers in the composition, listed according to their position in
time along with any custom name you’ve given them. If you double-click a marker’s name from the list,
the playhead jumps to the marker’s location.
There is a pair of checkboxes beside the names of each marker. One is for the Spline Editor, and one
is for the Keyframes Editor. By default, markers are shown in both the Spline Editor and Keyframes
Editor, but you can deselect the appropriate checkbox to hide the markers in that view.
Deleting Markers
You can delete a marker by dragging it up beyond the Time Ruler and releasing the mouse. You can
also use the marker’s contextual menu to choose Delete Marker.
Autosnap Points
When you drag keyframes or the edges of segments, often you want them to fall on a specific frame.
Autosnap restricts the placement of keyframes and segment edges to frame boundaries by default,
but you have other options found in the contextual menu. To configure autosnapping on keyframes
and segment edges, right-click anywhere within the Keyframes Editor and choose Options > Autosnap
Points from the contextual menu. This will display the Autosnap Points submenu with options for the
snapping behavior. The options are:
— None: None allows free positioning of keyframes and segment edges with subframe accuracy.
— Frame: Frame forces keyframes and segment edges to snap to the nearest frame.
— Field: Field forces keyframes and segment edges to snap to the nearest field,
which is 0.5 of a frame.
— Guides: When enabled, the keyframes and segment edges snap to markers.
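As a conceptual sketch (not Fusion's actual implementation), the first three snapping modes amount to rounding a dragged position to the nearest allowed increment:

```python
def autosnap(position, mode):
    """Sketch of the Autosnap Points modes; 'position' is a time in frames.

    This mirrors the behavior described above, not Fusion's internal code.
    """
    if mode == "none":
        return position                   # free placement, subframe accuracy
    if mode == "frame":
        return round(position)            # snap to the nearest whole frame
    if mode == "field":
        return round(position * 2) / 2    # snap to the nearest field (0.5 frame)
    raise ValueError(f"unknown mode: {mode}")

autosnap(10.3, "frame")   # 10
autosnap(10.3, "field")   # 10.5
```

Snapping to markers (the Guides option) would instead compare the dragged position against each marker's frame and choose the closest one.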
Autosnap Markers
When you click to create a new marker, the default behavior is that it will snap to the closest frame.
If you reposition the marker, it also snaps to the nearest frame as you drag. This behavior can be
changed in the Keyframes Editor’s contextual menu by choosing from the Options > Autosnap
Markers submenu. The options are:
To reveal the Spreadsheet Editor, click on the Spreadsheet button in the toolbar. The Spreadsheet will
split the Work Area panel and appear below the Keyframes Editor’s interface.
To edit an animation parameter in the Spreadsheet Editor, select the parameter in the Keyframes
Editor header. The keyframe row includes a box for each frame number that contains a keyframe.
The value of the keyframe is displayed in the cell below the frame number. Clicking on a cell allows you
to change the frame number the keyframe is on or the parameter’s value for that keyframe.
TIP: Entering a frame number using a decimal point (e.g., 10.25 or 15.75) allows you to set
keyframes on a subframe level to create more natural animations.
Inserting Keyframes
You can also add new keyframes to an animation by clicking in an empty keyframe cell and entering
the desired time for the new keyframe. Using the cell under the new keyframe, you can enter a value
for the parameter.
Line Size
The Line Size option controls the height of each Timeline segment individually. It is often useful to
increase the height of a Timeline bar, especially when editing or manipulating complex splines.
Keyframes displayed as bars (left), and keyframes displayed as Point Values (right).
Waveforms are displayed in the Keyframes Editor for all MediaIn nodes
TIP: Right-clicking a track in the Keyframes Editor and choosing All Line Size > Minimum/
Small/Medium/Large/Huge changes all the tracks and audio waveforms in the
Keyframes Editor.
Animating in Fusion’s
Spline Editor
This chapter covers how you can keyframe effects and control
animations in Fusion’s Spline Editor.
Contents
Spline Editor Overview ..... 266
Spline Editor Interface ..... 266
The Graph, Header, and Toolbar ..... 267
Navigating Around the Spline Editor ..... 269
Creating Animation Splines ..... 273
Working with Keyframes and Splines ..... 276
Showing Key Markers ..... 279
Copying and Pasting Keyframes ..... 279
Time and Value Editors ..... 281
Modifying Spline Handles ..... 282
Working with Filters ..... 285
Reshaping Splines Using the Toolbar ..... 287
Reversing Splines ..... 289
Importing and Exporting Splines ..... 295
The advantage of using splines to represent animation instead of keyframes as in the Keyframes
Editor is that splines allow you to manipulate the interpolation between keyframes. For example, if a
keyframe is set at a value of 1.0 on a parameter for frame 1, followed by a keyframe value of 10.0 for
frame 10, the values between keyframes are smoothly interpolated or calculated based on the shape
of the spline. Using the functions and controls in the Spline Editor, you have a fantastic amount of
control over that interpolation.
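To make the idea concrete, here is a minimal sketch of interpolating between the two keyframes in the example above. A straight line is the simplest case; Fusion's default Bézier splines shape this transition with handles, so the actual in-between values depend on the curve:

```python
def interpolate(frame, key_a, key_b):
    """Linearly interpolate a value between two (frame, value) keyframes.

    A sketch of the principle only; Bezier interpolation would curve
    the values between the same two endpoints.
    """
    (f0, v0), (f1, v1) = key_a, key_b
    t = (frame - f0) / (f1 - f0)   # 0.0 at the first key, 1.0 at the second
    return v0 + t * (v1 - v0)

# The example above: value 1.0 at frame 1, value 10.0 at frame 10.
interpolate(5.5, (1, 1.0), (10, 10.0))   # 5.5, halfway between the keys
```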
The Spline Editor can be open alongside the Node Editor or Keyframes Editor, or displayed separately
in order to take up the entire work area.
Graph
The graph is the largest area of the interface. It is here that you see and edit the animation splines.
There are two axes in the graph. The horizontal axis represents time, and the vertical axis represents
the spline’s value. A thin bar, called the playhead, runs vertically through the graph to represent the
current time as it does in the Timeline Editor. You can drag the playhead to change the current time,
updating the frame displayed in the viewers.
Playhead
The playhead is the thin red vertical bar that runs vertically through the Spline Editor graph and
represents the current time of the comp. You can drag the playhead to change the current time.
Status Bar
The status bar in the lower-right corner of the Fusion window continuously displays the pointer's
position along the graph's time and value axes.
Contextual Menus
There are two contextual menus accessible from the Spline Editor. The Spline contextual menu is
displayed by right-clicking over the graph, while the Guide contextual menu is displayed by right-
clicking on the Time Ruler above the graph.
Renaming Splines
The name of a spline in the header is based on the parameter it animates. You can change the name
of a spline by right-clicking on it in the header and choosing Rename Spline from the contextual menu.
The most obvious navigation methods to use are the scale sliders and buttons located in the upper
left of the Spline Editor panel.
The Zoom Height and Zoom Width sliders, Fit button, and Zoom to
Rectangle button can be used to navigate around the graph
Zoom Height and Zoom Width sliders let you change the height and width
of the graph area.
The Fit button attempts to rescale the view so that all currently active
splines fit within the graph.
— Choose Scale > Scale to Fit (Command-F) to fit all active splines into the graph area.
— Choose Scale > Scale to Rectangle (Command-R) to drag a bounding box around the area of the graph
you want centered and scaled. This has the same effect as clicking the Zoom to Rectangle button.
— Choose Scale > Default to reset the scaling of the graph area to default values.
— Choose Scale > Zoom In/Zoom Out to scale the graph area. This performs the same functions as
pressing the + and - keys on the keyboard.
— Choose Scale > Auto Fit to scale the graph dynamically so it fits all splines as you show and hide
them. If the scaling changes while Auto Fit is enabled, the graph area scrolls as you play the comp
so that all the keyframes stay in view.
— Choose Scale > Auto Scroll to scroll the graph area if the splines fall outside the graph horizontally
as you play.
— Choose Scale > Manual to disable all automatic attempts at showing splines in the graph.
— Choose Options > Fit Times to automatically scale along the X-axis to fit the selected spline.
All visible splines are taken into account, not just the newly selected spline. With this option off,
activating a new spline will not change the horizontal scale.
— Choose Options > Fit Values to automatically scale along the Y-axis to fit the selected spline.
All visible splines are taken into account, not just the newly selected spline. With this option off,
activating a new spline will not change the vertical scale.
Markers
Markers help identify important frames in a project. They may indicate a frame where a ray gun
shoots a beam in the scene, the moment that someone passes through a portal in the image, or any
other important event in the composite.
Markers added to the Timeline in the Cut, Edit, Fairlight, or Color page will appear in the Keyframes
Editor and Spline Editor of the Fusion page. They can also be added from the Keyframes Editor or the
Spline Editor while working in Fusion Studio or the Fusion page. Markers appear along the top of the
Spline Editor's Time Ruler (the horizontal axis). They are displayed as small blue shapes, and when a
marker is selected, a vertical line extends from it down through the graph.
NOTE: Markers attached to clips in the Cut, Edit, Color, or Fairlight pages Timeline are not
visible in Fusion’s Spline Editor.
Unselected markers appear as blue shapes along the top, while selected
markers display a vertical line running through the graph
To create a marker:
— Right-click in the Time Ruler above the graph and choose Add Marker.
If markers currently exist in the comp, they are automatically displayed in the Marker List,
regardless of whether they were added in the Keyframes Editor or the Spline Editor or any other
page in DaVinci Resolve. You can also add markers directly from the Marker List, which can be
helpful if you have multiple markers you need to add, and you know the rough timing.
Autosnap
To assist in precisely positioning keyframe control points along the horizontal (time) axis, you can
enable the Spline Editor’s Autosnap function. Right-clicking over a spline and choosing Options >
Autosnap Points provides a submenu with four options.
To create a spline:
— Right-click on the parameter to be animated in the Inspector, and choose Animate from the
contextual menu.
Selecting Animate from the contextual menu connects the parameter to the default spline type.
This is usually a Bézier Spline unless you change the default spline in the Defaults panel of the
Fusion Preferences.
— Bézier Spline: Bézier splines are the default curve type. Three points for each keyframe on the
spline determine the smoothness of the curve. The first point is the actual keyframe, representing
the value at a given time. The other two points represent handles that determine how smoothly
the curve for the segments leading in and out of the keyframe are drawn. Bézier is the most used
spline type because Bézier splines allow you to create combinations of curves and straight lines.
Bézier spline
B-spline
— Modify with > Cubic Spline: Cubic splines are similar to Bézier splines, in that the spline
passes through the control point. However, Cubic splines do not display handles and always
make the smoothest possible curve. In this way, they are similar to B-splines. This spline type
is almost never used.
Cubic spline
— Modify with > Natural Cubic Spline: Natural Cubic splines are similar to Cubic splines, except
that they change in a more localized area. Changing one control point does not affect other
tangents beyond the next or previous control points.
Adding Keyframes
Once you create one keyframe, additional keyframes are automatically added to a spline whenever
you move the playhead and change the value of that spline’s parameter. For example, if you change
the strength of an animated glow at frame 15, a keyframe with the new value occurs on frame 15.
In the Spline Editor, control points can also be added directly to a spline by clicking on the spline
where you want to add the new keyframe.
To hold a value between two keyframes choose from the Set Key Equal To submenu
Displacement paths are composed of locked and unlocked points. Whether a point is locked is
determined by how you added it to the polyline. Locked points on the spline have an associated point
in the viewer’s motion path; unlocked points do not have a corresponding point in the viewer’s motion
path. Each has a distinct behavior, as described below.
TIP: You can convert displacement splines to X and Y coordinates by right-clicking over the
motion path in the viewer and choosing Path#: Polyline > Convert to X/Y Path.
Locked Points
Locked points are the motion path equivalents of keyframes. They are created by moving the playhead
position and changing the parameter value. These points indicate that the animated object must be in
a specified position on a specified frame. Since these keyframes are only related to position along the
path, they can only be moved horizontally along the spline’s time axis.
The locked points appear as larger-sized lock icons in the Spline Editor. Each locked key has an
associated point on the motion path in the viewer.
Unlocked Points
Unlocked points are created by clicking directly on the spline in the Spline Editor. These points give
additional control over the acceleration along the motion path without adjusting the path itself.
Conversely, you can add unlocked points in the viewer to control the shape of the motion path without
changing the timing.
You can change an unlocked point into a locked point, and vice versa, by selecting the point(s), right-
clicking, and choosing Lock Point from the contextual menu.
For more information on motion paths and locked keyframes, see Chapters 70 and 72 in the
DaVinci Resolve manual or Chapters 8 and 10 in the Fusion Studio manual.
Moving Keyframes
You can freely move keyframes with the mouse, keyboard, or the edit point controls. Keyframes can
even pass over existing points as you move them. For instance, if a keyframe exists on frame 5 and
frame 10, the keyframe at frame 5 can be repositioned to frame 15.
The key markers show keyframes in the horizontal axis using the same color as the splines
There are two options in the graph’s contextual menu for copying keyframes. Choosing Copy Points
(Command-C) copies all selected points. Choosing Copy Value copies only the single point under the
pointer, even when multiple points are selected.
Alternatively, you can copy and paste keyframes by dragging them with the mouse. After you select
the points, hold down the Command key and drag the points along the spline to where you want
them pasted.
You can copy a single point’s value from a group of selected points. Since this process does
not deselect the selected set, you can continue picking out values as needed without having to
reselect points.
Keyframes can also be pasted with an offset, allowing you to duplicate a spline shape but increase the
values or shift the timing using an offset to X or Y.
TIP: You cannot copy and paste between different spline types. For instance, you cannot
copy from a Bézier spline and paste into a B-spline.
Time Editor
The Time Editor is used to modify the current time of the selected keyframe. You can change the Time
mode to enter a specific frame number, an offset from the current frame, or spread the keyframes
based on the distance (scale) from the playhead. You can select one of the three modes from the Time
mode drop-down menu.
Time
The number field shows the current frame number of the selected control point. Entering a
new frame number into the field moves the selected control point to the specified frame. If no
keyframes are selected or if multiple keyframes are selected, the field is empty, and you cannot
enter a time.
Time Offset
Selecting T Offset from the drop-down menu changes the mode of the number field to Time
Offset. In this mode, the number field moves the selected keyframes forward or backward in
time; you can enter a positive or negative offset. For example, entering an offset of 2 moves a
selected keyframe from frame 10 to frame 12. If multiple keyframes are selected, they all move
two frames forward from their current positions.
Time Scale
Selecting T Scale from the drop-down menu changes the mode of the number field to Time Scale.
In this mode, the selected keyframes’ positions are scaled based on the position of the playhead.
For example, if a keyframe is on frame 10 and the playhead is on frame 5, entering a scale of 2
moves the keyframe 10 frames forward from the playhead’s position, to frame 15. Keyframes on
the left side of the playhead would be scaled using negative values.
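The T Scale calculation can be sketched as scaling each keyframe's distance from the playhead (a conceptual model, not Fusion's code):

```python
def time_scale(key_frame, playhead, scale):
    """Move a keyframe by scaling its distance from the playhead."""
    return playhead + scale * (key_frame - playhead)

time_scale(10, 5, 2)   # 15, matching the example above
time_scale(3, 5, 2)    # 1: keys left of the playhead move further left
```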
Value
The number field shows the value of the currently selected keyframes. Entering a new number
into the field changes the value of the selected keyframe. If more than one keyframe is selected,
the displayed value is an average of the keyframes, but entering a new value will cause all
keyframes to adopt that value.
Value Offset
Choosing Offset from the drop-down menu sets the Value Editor to the Offset mode. In this
mode, the values of the selected keyframes are offset positively or negatively; you can enter
a positive or negative offset. For example, entering a value of -2 changes a value from 10 to 8.
If multiple keyframes are selected, all the keyframes have their values modified by -2.
Value Scale
Choosing Scale from the drop-down menu sets the Value Editor to the Scale mode. Entering
a new value causes the selected keyframes’ values to be scaled or multiplied by the specified
amount. For example, entering a value of 0.5 changes a keyframe’s value from 10 to 5.
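As a quick sketch of the two Value Editor modes described above:

```python
def value_offset(value, offset):
    """Offset mode: add a positive or negative amount to each keyframe value."""
    return value + offset

def value_scale(value, scale):
    """Scale mode: multiply each keyframe value by the entered amount."""
    return value * scale

value_offset(10, -2)   # 8, as in the Offset example
value_scale(10, 0.5)   # 5.0, as in the Scale example
```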
Enabling this option causes all the Bézier handles to be independent. This is the same as using the
Command key when moving a handle, except it is applied to all control points until it is disabled.
Reducing Points
When there are too many control points too close together on a spline, you can choose Reduce Points
to decrease their number, making it easier to modify the remaining points. The overall shape of the
spline is maintained as closely as possible while eliminating redundant points from the path.
Set the slider value as low as you can while the spline still closely resembles the shape of
your original spline.
TIP: When the value is 100, no points will be removed from the spline. Use smaller values to
eliminate more points.
A complex composition can easily contain dozens, if not hundreds, of animation curves. As a
composition grows, locating a specific spline can become more difficult. There are two ways to filter the
splines shown in the Spline Editor: display selected tools only or create a filter to show only certain tools.
— Show Only Selected Tool: You can choose to limit the splines displayed in the Spline Editor by
showing only the splines from selected tools. Choosing this option at the top of the Options menu
displays only the splines for tools currently selected in the Node Editor.
— Show All/None: The default behavior of the Spline Editor displays all the splines for all the nodes
with animated parameters. You can override this by enabling Show Only Selected Tools in the
Options menu. You can also disable the Show All setting by choosing Show None, in which case
the Spline Editor remains empty.
— Expose All Controls: The Expose All Controls option is a way of not filtering the parameters.
Choosing this option displays all parameters in the Spline Editor header for all nodes in the Node
Editor. It can be a fast way of activating one of the parameters and automatically adding an
animation spline for it if one does not exist.
With a large number of nodes displayed, each of which might have a large number of
parameters, this can clutter and slow down the interface. This option is most effective when
used in conjunction with the Show Only Selected Tool option to limit the number of nodes and
parameters displayed and yield optimum performance.
— Follow Active: The Follow Active option is located by right-clicking in the graph and choosing
Options > Follow Active. This option provides a way to filter the splines in the graph while not
filtering the header list of tools. Where the Show Only Selected Tool option hides other tools in
the header, the Follow Active option leaves the header displaying all the tools but automatically
enables only the splines of the Active tool.
To create a filter:
1 From the Options menu, choose Create/Edit Filters.
2 Click the New button to create a new filter and name the new filter in the dialog box.
3 Enable a checkbox next to the entire category or the individual tools in each category to
determine the tools included in the filter.
Enable each tool you want to keep in the Spline Editor when the filter is selected
The Invert All and Set/Reset All buttons apply global changes, toggling or setting the selected
states of all the checkboxes at once.
To switch the selection state of the categories when creating a filter list:
1 Click the Invert All button.
2 After configuring the custom filter, click the Save button to close the Settings dialog and save the filter.
— Active: When the checkbox is enabled with a check mark, the spline is displayed in the graph and
can be edited.
— Viewed: When the checkbox is enabled with a solid gray box, the spline is visible in the graph but
cannot be edited. It is read-only.
— Disabled: When the checkbox is clear, the spline is not visible in the graph and cannot be edited.
— Select All Tools: Choosing this option activates all splines for editing.
— Deselect All Tools: Choosing this option sets all spline checkboxes to disabled.
— Select One Tool: This option is a toggle. When Select One Tool is chosen from the menu, only
one spline in the header is active and visible at a time. Clicking on any spline’s checkbox will set
it to active, and all other splines will be cleared. When disabled, multiple splines can be active
in the header.
Selection Groups
It is possible to save the current selection state of the splines in the header, making selection groups
that can easily be reapplied when needed. To create a selection group, right-click over any parameter
in the header or in an empty area of the graph and choose Save Current Selection from the contextual
menu. A dialog will appear to name the new selection.
To reapply the selection group, choose the selection group by name from the Set Selection menu in the
same contextual menu. Other context menu options allow selection groups to be renamed or deleted.
Smooth
A smoothed segment provides a gentle keyframe transition in and out of the keyframe by slightly
extending the direction handles on the curve. This slows down the animation as you pass through the
keyframe. To smooth the selected keyframe(s), press Shift-S or click the toolbar’s Smooth button.
Smooth interpolation
between keyframes
Linear
A linear segment effectively takes the shortest route between two control points, which is a straight
line. To make the selected keyframe(s) linear, press Shift-L or click the Linear button in the toolbar.
Step In causes the value of the previous keyframe to hold, then jump straight to the value of the next keyframe.
Step Out causes the value of the selected keyframe to hold right up to the next keyframe.
Step In and Step Out modes can be set for selected keyframes by clicking on the toolbar buttons for
each mode, or by right-clicking and choosing the appropriate option from the contextual menu. The
keyboard shortcuts I and O can also be used to enable Step In and Step Out on selected keyframes.
Reversing Splines
Reverse inverts the horizontal direction of a segment of an animation spline. To apply it, select
a group of points on a spline and click the Reverse button, right-click and choose Reverse from
the contextual menu, or press the V key. The selected points are immediately mirrored
horizontally in the graph. Points surrounding the reversed selection may also be affected.
Looping Splines
It is often useful to repeat an animated section, either infinitely or for a specified number of times,
such as is required to create a strobing light or a spinning wheel. Fusion offers a variety of ways to
repeat a selected segment.
Set Loop
To repeat or loop a selected spline segment, select the keyframes to be looped. Select Set Loop
from the contextual menu or click on the Set Loop button in the toolbar. The selected section of the
spline repeats forward in time until the end of the global range, or until another keyframe ends the
repeating segment.
Ping-Pong
The Ping-Pong Loop mode repeats the selected segment, reverses each successive loop, and then
repeats. Ping-pong looping can be enabled on the selected segments from the context menu or
the toolbar.
Relative Loop
The Relative Loop mode repeats the segment like the Loop, but each repetition adds upon the last
point of the previous loop so that the values increase steadily over time.
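The three loop modes above can be summarized with a small sketch. This is a simplified model (assuming a segment defined on [t0, t1] and sampled as a function of time), not Fusion's implementation:

```python
def loop_value(spline, t, t0, t1, mode):
    """Evaluate a looped spline segment at time t.

    spline: a callable mapping a frame to a value, defined on [t0, t1].
    mode: 'loop', 'pingpong', or 'relative'.
    """
    length = t1 - t0
    n, local = divmod(t - t0, length)   # repetition count and position within it
    n = int(n)
    if mode == "loop":
        return spline(t0 + local)
    if mode == "pingpong":
        # every other repetition plays in reverse
        return spline(t0 + (length - local if n % 2 else local))
    if mode == "relative":
        # each repetition builds on the end value of the previous one
        gain = spline(t1) - spline(t0)
        return spline(t0 + local) + n * gain
    raise ValueError(f"unknown mode: {mode}")
```

For a segment that ramps from 0 to 10 over frames 0 through 10, sampling frame 15 returns 5 in Loop mode but 15 in Relative Loop mode, since the second pass starts where the first one ended.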
Looping Backward
You can choose Set Pre-Loop by right-clicking in the graph area and choosing it from the contextual
menu. This option contains the same options for looping as the Loop option buttons in the toolbar,
except that the selected segment is repeated backward in time rather than forward.
Gradient Extrapolation
You can choose Gradient Extrapolation by right-clicking in the graph area and choosing it from the
contextual menu. This option continues the trajectory of the last two keyframes.
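In other words, Gradient Extrapolation extends the line through the last two keyframes. A sketch of the idea (a hypothetical helper, not Fusion's code):

```python
def gradient_extrapolate(frame, second_last_key, last_key):
    """Continue the slope defined by the last two (frame, value) keyframes."""
    (f0, v0), (f1, v1) = second_last_key, last_key
    slope = (v1 - v0) / (f1 - f0)
    return v1 + slope * (frame - f1)

# Keys at (8, 4.0) and (10, 5.0) rise 0.5 per frame, so frame 20 reaches 10.0.
gradient_extrapolate(20, (8, 4.0), (10, 5.0))   # 10.0
```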
When you have more than one keyframe selected on the spline, enabling Time Stretch surrounds the
outer keyframes with two vertical white bars. Drag these bars back and forth to stretch or squash the
timing of the spline segments within them.
TIP: If no keyframes are selected when you enable Time Stretch, drag a rectangle to set the
boundaries of the Time Stretch.
A white rectangle outlines the selected points when the mode is enabled. To scale, skew, or stretch the
spline, drag on any of the control points located around the box. To move all the keyframes, drag on
the box edges.
TIP: If no points are selected, or if you want to select a new group of keyframes, you can
drag out a new rectangle at any time.
The Ease In/Out controls appear above the graph area. You can drag over the number fields to adjust
the length of the direction handles or enter a value in the fields.
Clicking the Lock In/Out button will collapse the two sliders into one, so any adjustments apply to both
direction handles.
To export a spline:
1 Select the active spline in the Spline Editor.
2 Right-click on the spline in the graph area to display the contextual menu and select Export.
3 Choose from three format options in the submenu.
4 Enter a name and location in the file browser dialog, and then click Save.
Exporting a spline gives you three options. You can export the Samples, Key Points, or All Points.
Samples adds a control point at every frame to create an accurate representation of the spline.
Key Points replicates the control point positions and values on the spline using linear interpolation.
All Points exports the spline as you see it in the Spline Editor, using the same position, value, and
interpolation.
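For example, a Samples-style export effectively evaluates the spline once per frame, so any application can reproduce the curve without knowing its interpolation. A sketch of that idea:

```python
def sample_spline(spline, start, end):
    """Return one (frame, value) pair per frame, Samples-export style.

    spline: any callable mapping a frame to a value (a stand-in for
    Fusion's actual spline evaluation).
    """
    return [(f, spline(f)) for f in range(start, end + 1)]

sample_spline(lambda f: f * 0.5, 0, 3)   # [(0, 0.0), (1, 0.5), (2, 1.0), (3, 1.5)]
```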
To import a spline:
1 Add an animation spline for the parameter.
2 In the Spline Editor, right-click on the animation spline and select Import Spline from the
contextual menu.
3 In the File Browser dialog, select the spline curve .spl file, and then click Open.
Importing a new curve will replace any existing animation on the selected spline.
Animating with
Motion Paths
Layers and 3D objects can move along a designated spline shape to
create motion path animations. This chapter discusses how you can
create, edit, and use motion paths in Fusion.
Contents
Animating Using Motion Paths ..... 297
XY Path ..... 305
The following nodes have parameters that can be animated using path modifiers to move an image
around the composition. These include, but are not limited to:
The following nodes have parameters that can be animated using paths to alter the direction of a
visual effect. These include, but are not limited to:
— Directional Blur: Center X/Y can be animated to change the direction of the blur.
— Hot Spot: Primary Center X/Y can be animated to move the hot spot around.
— Rays: Center X/Y can be animated to change the angle at which rays are emitted.
— Polygon/BSpline/Ellipse/Rectangle/Triangle mask: Center X/Y can be animated to
move the mask.
— Corner Positioner: Top Left/Top Right/Bottom Left/Bottom Right X/Y can be animated to move
each corner of the corner-pinned effect.
— Vortex: Center X/Y can be animated to move the center of the warping effect.
— A Polyline path is generated by applying the path modifier. It uses two splines to define the path;
one for the shape of the path displayed in the viewer, and a Displacement spline for the speed of
the object along the path, displayed in the Spline Editor. The Polyline path is the default type of
path modifier, and most documentation in this chapter assumes that this type is used.
— The XY path modifier employs a spline for the X position of the point and another
for the Y position. The XY path modifier is explained in detail later in this chapter.
— 3D motion paths pertain only to positional controls within 3D scenes.
Polyline Path
Polyline paths are the easiest motion paths to work with. You can use the spline shape in the viewer
to control the shape of the path, while a single Displacement curve in the Spline Editor is used to
control the acceleration along the path. The most obvious way to create a Polyline motion path is by
keyframing the Center X/Y parameter of a Transform node in the Inspector.
To create a Polyline motion path using the Center X/Y parameter in the Inspector:
1 Position the playhead on the frame where the motion will begin.
2 In the Inspector, click the gray Keyframe button to the right of the Center X and Y parameters.
This action applies the path modifier in the Modifiers tab in the Inspector.
Keyframing Center X/Y is not the only way to apply the path modifier. An alternative method is to apply
the path modifier to the Center X/Y parameter either in the Inspector or using the coordinate control
in the viewer.
The object now has a path modifier applied, so without setting a keyframe you can drag the
object to begin creating a motion path in the viewer.
A final alternative method for creating a motion path is to draw a spline shape first and then connect
a path modifier to the spline. Using any of Fusion’s spline tools, you can draw the shape of the path
and then connect the path modifier to the published spline. Once the path modifier and the published
spline are connected, you can keyframe the Displace parameter to move an image along the path. This
method is useful when you want to use a paint stroke or mask shape as a motion path.
1 Select one of Fusion's spline tools, such as the Polygon tool, and draw the shape of the path
in the viewer.
2 When done drawing the shape, click the Insert and Modify button in the viewer toolbar to leave
the mask shape as an open spline.
3 At this point you can select any of the control points along the spline and press Shift-S to make
them smooth or Shift-L to make them linear.
All mask polylines have animation enabled by default, but that is usually not desirable for a motion
path. You will need to remove this keyframe animation if you are using a mask shape.
4 At the bottom of the Inspector, right-click on the “Right-click here for shape animation” label and
choose Remove Polygon1Polyline.
5 Right-click at the bottom of the Inspector again and select Publish to give other nodes access to
this spline shape. (For a paint stroke, you will need to make the Stroke editable first by clicking the
Make Editable button in the Stroke Controls.)
This enables the Modifiers tab with the Published Polyline modifier. This published spline can be
used to define the shape of splines in other nodes.
6 Connect a Transform node to the image you want to have follow the path.
7 Right-click over the Center X/Y parameter in the Inspector and choose Path.
This adds a path modifier into the Modifiers tab.
8 In the Inspector, click the Modifiers tab and double-click the Path1 heading to open its parameters.
The Displacement parameter already has a keyframe on it automatically. You’ll want to remove that
so you can set your own.
9 Click the red keyframe button to the right of the Displacement parameter to remove it.
10 At the bottom of the Modifiers tab, right-click on “Right-click here for shape animation” and
choose Connect To > Polygon1Polyline.
11 To quickly see where your object has gone, drag the Displacement slider back and forth.
12 You may want to use the Size parameter to adjust the size of the overall path.
The Displacement slider is meant to be keyframed for animating the object along the path.
You can use the path modifier controls in the Inspector to change the position, size, and rotation
of the entire path shape. The Displacement parameter, represented as a spline in the Spline
Editor, determines the object's position along the path, while the Heading Offset controls its
orientation along the path.
The Displacement curve of a Poly path represents the acceleration of an object on a path.
Smaller values are closer to the beginning of a path, while larger values are increasingly closer to the
end of the path.
The curved shape of the path does not define how fast the bee moves. The speed of the bee at any
point along the path is a function of the Displacement parameter, which you can control either in
the Modifiers tab or in the Spline Editor.
After the initial animation is set, you can use the Displacement curve in the Spline Editor to adjust
the timing.
Dragging a control point on the Displacement curve up or down changes the object's location on
the path at that frame while maintaining the timing between points.
TIP: Holding down the Option key while clicking on the spline path in the viewer will add a
new point to the spline path without adding a Displacement keyframe in the Spline Editor.
This allows you to refine the shape of the path without changing the speed along the path.
In our example comp, the bee now auto-orients as the path descends and rises.
The Transform’s angle parameter connected to the path modifier’s Heading parameter
XY Path
At first glance, XY paths work like Polyline paths. To create the path once the modifier is applied,
position the playhead and drag the onscreen control where you want it. Then position the playhead
again and move the onscreen control to its new position. The difference is that the control points
are only there for spatial positioning; there is no Displacement parameter for controlling
temporal positioning.
TIP: An XY path and a Polyline path can be converted into each other from the contextual
menu. This gives you the ability to change methods to suit your current needs without
having to redo the animation.
The advantage of the XY path modifier is that you can explicitly set an XY coordinate at a specific time
for more control.
XY Path displays X and Y curves in the Spline Editor but does not include a Displacement control.
Locked Points
Locked points are the motion path equivalents of keyframes. They are created by changing the
playhead position and moving the animated control. These points indicate that the animated control
must be in the specified position at the specified frame.
Locked points are displayed as larger hollow squares in the viewer. Each locked point has an
associated point on the path's Displacement curve in the Spline Editor.
Deleting a locked point from the motion path will change the overall timing of the motion.
Moving the playhead and repositioning the bee adds a second locked point
At a value of 0.0, the control will be located at the beginning of the path. When the value of the
Displacement spline is 1.0, the control is located at the end of the path.
8 Select the keyframe at frame 45 in the Displacement spline and drag it to frame 50.
The motion path is now 50 frames long, without making any changes to the motion path’s shape.
If you try to change this point's value from 1.0 to 0.75, you will find that it cannot be done:
because this point is the last one in the animation, its value in the Displacement spline must be 1.0.
9 Position the playhead on frame 100 and move the bee center to the upper-left corner of the screen.
Moving locked points changes the duration of a motion path without changing its shape
This will create an additional locked point and set a new ending for the path.
Unlocked points do not have corresponding points on the path’s Displacement spline. They appear in
the viewer as smaller, solid square points.
Unlocked points added to the motion path are not displayed on the Displacement spline
You can add unlocked points to the Displacement spline as well. Additional unlocked points in the
Spline Editor can be used to make the object’s motion pause briefly.
Knowing the difference between locked and unlocked points gives you independent control over
the spatial and temporal aspects of motion paths.
A useful example of this technique would be animating the path of a bee in flight. A bee often flies
in a constant figure eight pattern while simultaneously moving forward. The easy way of making this
happen involves two paths working together.
In the following example, the bee would be connected to a first path in a Transform node, which
would be a figure eight of the bee moving in place. This first path’s center would then be connected to
another path defining the forward motion of the bee through the scene via a second Transform node.
2 Add a polyline mask and create a smooth curve spline that travels across the screen.
3 At the bottom of the Inspector, right-click over the “Right-click here for shape animation” label and
choose Remove Polygon Polyline to remove the auto-animation behavior.
4 Right-click over the label again and choose Publish.
5 Select the object’s Transform node and click the Modifiers tab.
6 Right-click over the Path1 Center X/Y parameter and choose Path.
7 At the bottom of Path2, choose Connect To > Polygon: Polyline.
8 Keyframe the Path2 Displacement to move the object along the second path.
When pasting a path, the old motion path will be overwritten with the one from the clipboard.
Right-click on the desired path and select Record from the contextual menu. This displays a submenu
of available data that may be recorded.
Use the Record Time option in conjunction with the Draw Append mode to create complex motion
paths that will recreate the motion precisely as the path is drawn.
The time used to record the animation may not suit the needs of a project precisely. Adjust the path’s
Displacement spline in the Spline Editor to more correctly match the required motion.
Native Format
To save a polyline shape in Fusion’s native ASCII format, right-click on the header of the Mask node in
the Inspector and select Settings > Save As from the contextual menu. Provide a name and path for
the saved file and select OK to write a file with the .setting extension. This file will save the shape of a
mask or path, as well as any animation splines applied to its points or controls.
To load the saved setting back into Fusion, you first create a new polyline of the same type, and
then select Settings > Load from the mask’s context menu or drag the .setting file directly into the
Node Editor.
If you want to move a polyline from one composition to another, you can also copy the
node to the clipboard, open your second composition, and paste it from the clipboard into the
new composition.
Chapter 12
Using Modifiers, Expressions, and Custom Controls
Some of the most powerful aspects of Fusion are the different
ways it allows you to go beyond the standard tools delivered
with the application.
Contents
The Contextual Menu for Parameters in the Inspector������������������������ 1461
FusionScript�������������������������������������������������������������������������������������������������������������� 1471
Fusion Fundamentals | Chapter 12 Using Modifiers, Expressions, and Custom Controls 315
The Contextual Menu for
Parameters in the Inspector
Most of the features in this chapter are accessed via a contextual menu that appears when you
right-click most parameters in the Inspector. Different contextual menus are available depending on
where in the Inspector you right-click. Specifically, right-clicking over a parameter's name or slider
displays a feature-rich contextual menu from which you can add animation, expressions, and modifiers
that extend functionality, as well as publish and link parameters so you can adjust multiple
controls simultaneously.
Using Modifiers
Parameters can be controlled with modifiers, which are extensions to a node’s toolset. Many modifiers
can automatically create animation that would be difficult to achieve manually. Modifiers can be as
simple as keyframe animation or linking the parameters to other nodes, or modifiers can be complex
expressions, procedural functions, external data, third-party plugins, or fuses.
A modifier’s controls are displayed in the Modifiers tab of the Inspector. When a selected node has a
modifier applied, the Modifiers tab will become highlighted as an indication. The tab remains grayed
out if no modifier is applied.
Modifiers appear with header bars and header controls just like the tools for nodes. A modifier’s title
bar can also be dragged into a viewer to see its output.
Publishing a Parameter
The Publish modifier makes the value of a parameter available, so that other parameters can connect
to it. This allows you to simultaneously use one slider to adjust other parameters on the same or
different nodes. For instance, publishing a motion path allows you to connect multiple objects to the
same path.
Publish a parameter in order to
link other parameters to it
Once a parameter is published, you can right-click another parameter and choose Connect To >
[published parameter name] from the contextual menu. The two values are linked, and changing the
parameter value of one in the Modifiers tab changes the other.
Using the pick whip between two parameters provides similar linking behavior with more
flexibility. Pick whipping between parameters is covered later in this chapter.
— CoordTransform Position: Calculates the current 3D position of a given object even after
multiple 3D transforms have repositioned the object through the node tree hierarchy.
— Cubic Spline: Adds a Cubic spline to the Spline Editor for animating the selected parameter.
— Expression: Allows you to add a variable or a mathematical calculation to a parameter, rather
than a straight numeric value. The Expression modifier provides controls in the Modifiers tab,
giving you more room and parameters than a SimpleExpression.
— From Image: This modifier takes color samples of an image along a user-definable line and
creates a gradient from those samples.
— Gradient Color Modifier: Creates a customized gradient and maps it into a specified time range
to animate a value.
— KeyStretcher: Used to stretch keyframes in a Fusion Title template when trimming the template
in the Edit page or Cut page timeline.
— MIDI Extractor: Modifies the value of a parameter using the values stored in a MIDI file.
— Natural Cubic Spline: Adds a Natural Cubic spline to the Spline Editor for animating
the selected parameter.
— Offset (Angle, Distance, Position): The three Offset modifiers are used to create variances,
or offsets, between two positional values. For instance, when this modifier is added to a size
parameter, you can change the size of an object using the distance between two onscreen
controls (position and offset).
— Path: Produces two splines to control the animation of an object: an onscreen motion path
(spatial) and a Time spline visible in the Spline Editor (temporal).
— Perturb: Generates smoothly varying random animation for a given parameter.
— Probe: Auto-animates a parameter by sampling the color or luminosity of a specific pixel or
rectangular region of an image.
— Publish: The first step in linking two non-animated parameters is to use the Publish modifier to
publish a parameter. That allows other parameters to use the Connect To submenu and link to the
published parameter.
— Resolve Parameter: Allows you to modify the duration of a Fusion transition template from the
Edit page Timeline. The Resolve Parameter Modifier is applied to any animated parameter instead
of keyframing the transition.
— Shake: Similar to Perturb, Shake generates smoothly varying random animation
for a given parameter.
— Track: Attaches a single point tracker to the selected parameter. The tracker can then track an
object onscreen to animate the parameter. This is quicker and more direct than using the normal
Tracker node; however, it offers less flexibility since the resulting tracker is only a single point and
can only be used for the selected parameter.
— Vector Result: Similar to the Offset modifier, Vector Result is used to offset position parameters
using origin, distance, and angle controls to create a vector. This vector can then be used to adjust
any other parameter.
— XY Path: Produces an X and Y spline in the Spline Editor to animate the position of an object.
For more information on all modifiers available in Fusion, see Chapter 62, “Modifiers,” in the Fusion
Reference Manual.
Performing Calculations
in Parameter Fields
You can enter simple mathematical equations directly in a number field to calculate a desired
value. For example, typing 2.0 + 4.0 in most number fields will result in a value of 6.0. This can be
helpful when you want a parameter to be the sum of two other parameters or a fraction of the
screen resolution.
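The same arithmetic behaves the way it would in any calculator; as a quick sketch (the specific values here are illustrative), these are the kinds of entries a number field will evaluate, written as plain Lua:

```lua
-- Illustrative arithmetic of the kind a number field evaluates on Return:
print(2.0 + 4.0)   -- evaluates to 6.0
print(1920 / 2)    -- half of an HD frame width: 960
print(0.5 * 0.25)  -- a fraction of a normalized parameter: 0.125
```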
Using SimpleExpressions
SimpleExpressions are a special type of script placed alongside the parameter they control.
They are useful for setting up simple calculations, building unidirectional parameter
connections, or a combination of both. You add a SimpleExpression by entering an equals sign (=)
directly in the number field of the parameter and then pressing Return.
An empty field will appear below the parameter, and a yellow indicator will appear to the left. The
current value of the parameter will be entered into the number field. Using Simple Expressions, you
can enter a mathematical formula that drives the value of a parameter or even links two different
parameters. This helps when you want to create an animation that is too difficult or impossible to set
up with keyframing. For instance, to create a pulsating object, you can use the sine and time functions
on a Size parameter. Dividing the time function can slow down the pulsing while multiplying it can
increase the rate.
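As a sketch of that pulsating example, a SimpleExpression like the following could be entered on a Size parameter (the scale and offset values are illustrative, not prescribed):

```lua
-- Hypothetical SimpleExpression for a Size parameter: a pulsating value.
-- sin() swings between -1 and 1, so scale and offset it to keep Size positive.
-- Dividing time slows the pulse; multiplying time speeds it up.
0.1 + 0.05 * sin(time / 5)
```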
Inside the SimpleExpression text box, you can enter one-line scripts in Lua with some Fusion-specific
shorthand. Some examples of Simple Expressions and their syntax include:
Expression: Merge1:GetValue("Blend", time-5)
Description: Returns the value from another input, but sampled at a different frame, in this case
five frames before the current one.
Expression: "\n from the comp "..ToUNC(comp.Filename)
Description: To get a new line in the Text, \n is used. Various attributes of the comp can be
accessed with the comp variable, such as the filename, expressed here as a UNC path.
TIP: When working with long SimpleExpressions, it may be helpful to drag the Inspector
panel out to make it wider or to copy/paste from a text editor or the Console.
After setting an expression that generates animation, you can open the Spline Editor to view the
values plotted out over time. This is a good way to check how your SimpleExpression evaluates
over time.
A sine wave in the Spline Editor, generated by the expression used for Text1: Size
For more information about writing Simple Expressions, see the Fusion Studio Scripting Guide, and
the official Lua documentation.
SimpleExpressions can also be created and edited within the Spline Editor. Right-click on the
parameter in the Spline Editor and select Set SimpleExpression from the contextual menu. The
SimpleExpression will be plotted in the Spline Editor, allowing you to see the result over time.
Removing SimpleExpressions
To remove a SimpleExpression, right-click the name of the parameter, and choose Remove Expression
from the contextual menu.
Custom controls can be added or edited via the Edit Control dialog, which you access by right-clicking
over the node’s name in the Inspector and choosing Edit Controls from the menu.
In the Edit Control dialog, you use the ID menu to select an existing parameter or create a new one.
You can name the control and define whether it is a text field, number field, or a point using the Type
attributes list.
You use the page list to assign the new control to one of the tabs in the Inspector. There are also
settings to determine the defaults and ranges, and whether it has an onscreen preview control. The
Input Ctrl box contains settings specific to the selected Type, and the View Ctrl attributes box contains
a list of onscreen preview controls to be displayed, if any.
All changes made using the Edit Controls dialog are stored in the current tool instance, so they can
be copied and pasted to other nodes in the comp. However, to keep these changes for other comps, you
must save the node settings and add them to the Bins in Fusion Studio or to your favorites.
Let’s say we wanted a more interactive way of controlling a linear blur in the viewer, rather than using
the Length and Angle sliders in the Inspector. Using a SimpleExpression, we’ll control the length and
angle parameters with the Center parameter’s onscreen control in the viewer. The SimpleExpression
would look something like this:
For Length:
sqrt(((Center.X-.5)*(self.Input.XScale))^2 + ((Center.Y-.5)*(self.Input.YScale)*(self.Input.Height/self.Input.Width))^2)
For Angle:
atan2(.5-Center.Y, .5-Center.X) * 180 / pi
This admittedly somewhat advanced function does the job fine. Dragging the onscreen control
adjusts the angle and length of the directional blur. However, the names of the parameters are now
confusing. The Center parameter doesn't function as the center anymore; it is the direction and
length of the blur. It should be named "Blur Vector" instead. You no longer need to edit the Length and Angle
controls, so they should be hidden away, and since this is only for a linear blur, we don’t need the
Type menu to include Radial or Zoom. We only need to choose between Linear and Centered. These
changes can easily be made in the Edit Controls dialog.
The new Blur Vector parameter now appears in the Inspector. The internal ID of the control is still
Center, so our SimpleExpressions did not change.
Finally, to remove Radial and Zoom options from the Type menu:
1 In the Edit Control dialog, select the Type from the ID list.
2 Select Controls from the Page list.
3 Select Radial from the Items list and click Del to remove it.
4 Select Zoom from the Items list and click Del to remove it.
5 Click OK.
The Type menu now includes only two options.
If you want to replace the Type menu with a new checkbox control, you can do that by creating a new
control and a very short expression.
To make this new checkbox affect the original Type menu, you’ll need to add a SimpleExpression to
the Type:
iif(TypeNew==0, 0, 2)
The iif() function is a conditional expression in Fusion's expression syntax: it evaluates a
condition and returns the first value if the condition is true, or the second if it is false.
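As an illustration of the semantics only (this is a plain-Lua sketch, not Fusion's actual implementation), iif() behaves like an inline if/else:

```lua
-- Sketch of what iif(cond, a, b) evaluates to: a when cond is true, b otherwise.
local function iif(cond, a, b)
  if cond then return a else return b end
end

print(iif(0 == 0, 0, 2))  -- condition true: returns the first value, 0
print(iif(1 == 0, 0, 2))  -- condition false: returns the second value, 2
```

So in the expression above, an unchecked TypeNew (value 0) selects Type 0, and any other state selects Type 2.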
FusionScript
Scripting is an essential means of increasing productivity. Scripts can create new capabilities or
automate repetitive tasks, especially those specific to your projects and workflows. Inside Fusion,
scripts can rearrange nodes in a comp, manage caches, and generate multiple output files for
delivery. They can connect Fusion with other apps to log artist time, send emails, or update webpages
and databases.
FusionScript is the blanket term for the scripting environment in Fusion. It includes support for Lua as
well as Python 2 and 3 for some contexts. FusionScript also includes libraries to make certain common
tasks easier to do with Lua and Python within Fusion.
You can run interactive scripts in various situations. Common scripts include:
— Utility Scripts, using the Fusion application context, are found under the File > Scripts menu.
— Comp Scripts, using the composition context, are found under the Script menu or entered
into the Console.
— Tool Scripts, using the tool context, are found in the Tool’s context menu > Scripts.
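As a small example of a Comp Script written in Lua (the tool names printed will depend on your composition; comp:GetToolList() is part of the standard FusionScript API, and the comp variable is provided automatically in the composition context):

```lua
-- A minimal Comp Script: list every tool in the current composition.
-- Run it from the Script menu or paste it into the Console.
local tools = comp:GetToolList(false)  -- false = all tools, not just selected
for i, tool in ipairs(tools) do
    print(i, tool.Name)
end
```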
Other script types are available as well, such as Startup Scripts, Scriptlibs, Bin Scripts, Event Suites,
Hotkey Scripts, Intool Scripts, and SimpleExpressions. Fusion Studio also allows external and
command-line scripting, as well as network rendering Job and Render node scripting.
FusionScript also forms the basis for Fuses and ViewShaders, which are special scripting-based
plugins for tools and viewers that can be used in both Fusion and Fusion Studio.
For more information about scripting, see the Fusion Scripting Documentation, accessible from the
Documentation submenu of the Help menu.
Chapter 13
Bins
This chapter covers the bin system in Fusion Studio. Bins allow for
the storage and organization of clips, compositions, tool settings,
and macros, similar to the Media Pool and Effects Library in
DaVinci Resolve. It includes a built-in Studio Player for creating a
playlist of multiple shots and their versions.
Bins can be used in a server configuration for organizing shots and collaborating with other team
members across the studio.
Contents
Bins Overview������������������������������������������������������������������������������������������������������������ 328
Permissions������������������������������������������������������������������������������������������������������������������ 345
Bins Interface
The Bins window is separated into two panels. The sidebar on the left is a list of the bins, while the
Content panel on the right displays the selected bin’s contents.
The sidebar organizes content into bins, or folders, using a hierarchical list view. These folders can be
organized however they suit your workflow, but standard folders are provided for Clips, Compositions,
Favorites, Projects, Reels, Settings, Templates, and Tools. The Tools category is a duplicate of all the
tools found in the Effects Library. The Tools bin is a parent folder, and parent folders contain subfolders
that hold the content. For instance, Blurs is a subfolder of the Tools parent folder. Parent folders can
be identified by the disclosure arrow to the left of their name.
When you select a folder from the sidebar, the contents of the folder are displayed in the Contents
panel as thumbnail icons.
Each bin in the sidebar can be set to List view or Icon view independently of each other. So while you
may have some bins you want to see as a list, others may be easier to view as icons.
You can also click the New Folder icon in the toolbar.
TIP: You cannot undo removing a folder from the Bins window.
TIP: Unsupported files like PDFs can also be saved in the bin, and the application that
supports it will launch when you double-click the file. It’s a useful location for scripts
and notes.
If you have an operating system file browser window open, you can drag files directly into a bin as
well. When adding an item to the bins, the files are not copied. A link is created between the content
and the bins, but the files remain in their original location.
Tool Settings
If you want to add a node with custom settings to a bin, first save the node as a setting by right-
clicking over it in the Node Editor and choosing Settings > Save As. Once the setting is saved, you can
add it to a bin by dragging it from a File Browser into the Bins window.
To ignore the image sequence and import only a single frame, hold Shift when you drag the frame into
a bin. This can be useful when trying to import a single still image from a series of still shots taken
with a DSLR. The numbers may be sequential, but you just need one still image from the series.
Media
Dragging media into the Node Editor from a bin window creates a new Loader that points to the
media on disk. Still files or photos are automatically set to loop.
Compositions
To add a composition, you must right-click and choose Open. Dragging a comp item onto an open
composition will have no effect. When a composition is added, it is opened in a new window. It is not
added to the existing composition.
Settings and macros work a bit differently than tools. They can only be added to the Node Editor
by dragging and dropping. Dragging a setting or macro allows you to place it in the Node Editor
unconnected or, if you drag it over a connection line, inserted between two existing tools.
You can choose Shuttle mode by right-clicking over the clip’s thumbnail in the bin and choosing Scrub
Mode > Shuttle.
Stamp Files
Stamp files are low-resolution, local proxies of clips, which are used for playback of clips stored on a
network server, or for very large clips.
The Status bar at the top of the Bins window shows the progress of the stamp creation. Since the
stamp creation is a background process, you can queue other stamps and can continue working with
the composition.
The Studio Player includes a large viewer, a Timeline, and a toolbar along the bottom.
Once you have the clip open in the Studio Player, you can click the Play button in the toolbar at the
bottom of the window.
To close the current clip in the Studio Player and return to the bins:
— Click the three-dot Options menu in the lower-left corner and choose Close.
Creating a Reel
A reel is a playlist or clip list that is viewed either as storyboard thumbnails or a timeline. In the bin,
you create a new reel item to hold and play back multiple clips. The thumbnail for a reel appears with a
multi-image border around it, making it easier to identify in the Bin window.
Once created and named, the reel appears in the current bin. Double-clicking the reel opens the
Studio Player interface along the bottom of the Bin window. An empty Studio Player appears in the top
half of the window.
The toolbar across the bottom of the interface has various controls for setting a loop, showing and
adjusting color, playback transport controls, collaboration sync, guide overlays, and frame number
and playback speed in fps.
The toolbar along the bottom of the Studio Player includes controls to customize playback.
Inserting Shots
Clips or comps from the bin can be dragged to the storyboard area of the reel to add and organize
a playlist.
Versions of a shot appear as stacked icons in the storyboard reel. The number of stacks in
the icon indicates the number of versions included with that clip. The current version and the total
number of versions are indicated by the number in the lower-right corner of the icon. In the example
below, the first shot has three versions, the second shot has one version, and the last clip has
two versions.
Clip versions are visible by the number in the lower right and
graphically represented by the number of stacked icons.
Version Menu
You can choose which version to view by right-clicking over the clip in the storyboard and selecting it
from the Version > Select menu.
The Version menu also includes options to move or rearrange the order of the clip versions as well as
remove a version, thereby deleting it from the stack.
The Shot menu lets you modify the clip in the player.
When notes are added, they are time-, date-, and name-stamped. The name comes from the
bin login name and computer name.
The Audio menu option can import an audio .wav file that will play back along with the
selected clip.
Options Menu
The three-dot Options menu in the lower left of the interface displays a menu that can be used to
switch between the viewer and the bin in the top half of the window. It is also used to clear the
memory used in playback by selecting Purge Cache.
Selecting Reel > Notes opens the Notes dialog to add annotations text to the entire reel project.
The Reel > Export option saves the reel to disk as an ASCII readable format so it can be used
elsewhere or archived.
Guides
You can assign customizable guide overlays to three Guide buttons along the bottom of the Studio
Player. Fusion includes four guides to choose from, but you can add your own using the XML Guide
format and style information provided at the end of this chapter. You assign a customizable guide to
one of the three Guide buttons by right-clicking over a button and selecting a guide from the list. To
display the guide, click the assigned button.
Guides are a simple XML formatted text document saved with the .guide extension, as defined below.
This makes it easy to create and share guides.
Guide
{
Name = "10 Pixels",
Elements =
{
HLine { Y1="10T" },
HLine { Y1="10B" },
VLine { X1="10L" },
VLine { X1="10R" },
},
}
Guide
{
Name = "Safe Frame",
Elements =
{
HLine { Y1="10%", Pattern = 0xF0F0 },
HLine { Y1="90%", Pattern = 0xF0F0 },
HLine { Y1="95%" },
HLine { Y1="5%" },
VLine { X1="10%", Pattern = 0xF0F0 },
VLine { X1="90%", Pattern = 0xF0F0 },
VLine { X1="95%" },
VLine { X1="5%" },
HLine { Y1="50%", Pattern = 0xF0F0, Color = { R = 1.0, G = 0.75, B = 0.05, A = 1.0 } },
VLine { X1="50%", Pattern = 0xF0F0, Color = { R = 1.0, G = 0.75, B = 0.05, A = 1.0 } },
},
}
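The coordinate values in the guide examples above use a compact notation: a value ending in % is a fraction of the frame, while a pixel value can carry a T, B, L, or R suffix to anchor it to the top, bottom, left, or right edge. The following Python sketch shows one way such values could be resolved to normalized 0 to 1 coordinates; the anchor semantics are inferred from the examples and are an assumption, not an official specification.

```python
def resolve_guide_value(value, frame_size):
    """Resolve a guide coordinate such as "10%", "10T", or "10R" to a 0-1 fraction.

    Assumed semantics (inferred from the guide examples, not documented):
    a trailing T or L measures pixels from the top/left edge, and B or R
    measures pixels from the bottom/right edge.
    """
    value = value.strip()
    if value.endswith("%"):
        return float(value[:-1]) / 100.0
    anchor = value[-1].upper()
    if anchor in ("T", "L"):              # pixels from the top or left edge
        return float(value[:-1]) / frame_size
    if anchor in ("B", "R"):              # pixels from the bottom or right edge
        return 1.0 - float(value[:-1]) / frame_size
    return float(value) / frame_size      # bare pixel value

# "10T" on a 1080-pixel-high frame sits 10 pixels from the top edge.
print(resolve_guide_value("10T", 1080))
print(resolve_guide_value("90%", 1080))  # 0.9
```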
Guide Styles
The style of a guide is defined by a set of properties that appear in the format shown below:
— HLine: Draws a horizontal line and requires a Y-value, which is measured from the top of the
screen. The Y-value can be given either in percent (%) or in absolute pixels (px).
— VLine: Draws a vertical line and requires an X-value, which is measured from the left of the screen.
The X-value can be given either in percent (%) or in absolute pixels (px).
— Pattern: The Pattern value is made up of four hex digits and determines the visual appearance of
the line. For example, the solid lines in the guides above use no Pattern, while Pattern = 0xF0F0
produces a dashed line.
— Color: The Color value is composed of four groups of two hex values each. The first three groups
define the RGB colors; the last group defines the transparency. For instance, the hex value for
pure red would be #FF000000, and pure lime green would be #00FF0000.
— FillMode: Applies to rectangles only and defines whether the inside or the outside of the
rectangle should be filled with a color. Leave this value out to have just a bounding rectangle
without any fill.
FillMode = ("None"|"Inside"|"Outside")
— FillColor: Applies to rectangles only and defines the color of the filled area specified by FillMode.
FillColor = "FF000020"
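As a worked example of the Color and Pattern properties described above, the hypothetical helpers below pack RGBA fractions into the eight-digit hex string and expand a Pattern value into its bit sequence. The draw/skip interpretation of the pattern bits is an assumption based on the dashed 0xF0F0 examples.

```python
def guide_color(r, g, b, a):
    """Pack RGBA fractions (0.0-1.0) into an 8-hex-digit string such as FillColor."""
    return "".join(f"{round(c * 255):02X}" for c in (r, g, b, a))

def pattern_bits(pattern):
    """Expand a 16-bit Pattern value into a draw/skip string.

    Assumed semantics: 1 bits draw a pixel, 0 bits skip one.
    """
    return "".join("-" if (pattern >> i) & 1 else " " for i in range(15, -1, -1))

print(guide_color(1.0, 0.0, 0.0, 0.125))  # FF000020, matching the FillColor example
print(pattern_bits(0xF0F0))               # "----    ----    ", a dashed line
```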
Then add a User name and Password if one is needed to access the server.
The Library field lets you name the bins. So if you want to create a bin for individual projects, you
would name it in the Library field, and each project would get its own bin.
The Application field allows larger studios to specify some other program to serve out the
bin requests.
Once you’ve finished setting up the bin server information and clicked Save in the Preferences
window, you can open the Bins window to test your bin server. Opening the Bins window is the first
time your connection to the server will be tested. If it cannot connect, the bin server will still be listed,
with access denied or unavailable marked next to the name on the bins sidebar.
Permissions
Unlike other directories on a server, your access to bins on a network is stored in the bin document.
The bins themselves contain all the users and passwords in plain text, making it easy for someone to
administer the bins.
The Sync button is a three-way toggle: Off, Slave, and Master. When Sync is enabled, the transport
controls either drive playback (Master) or follow the master controller (Slave).
Fusion Connect
This chapter goes into detail on how to use the Fusion Connect
AVX2 plug-in with an Avid Media Composer editing system.
The Fusion Connect AVX plug-in is only available with Fusion Studio.
Contents
Fusion Connect Overview .......................................... 348
Edit Effect Also Launches Fusion .............................. 351
Create New Version .................................................... 352
Fields and Variables .................................................... 358
Fusion can be started automatically by the plug-in if Fusion is installed on the same system, or it can
be used on remote computers to modify the composition.
System Requirements
Fusion Connect has the following requirements:
Once the layer count is selected, Fusion Connect will be applied to the Timeline.
— Select the layer count equal to the number of video track layers you want to ingest into Fusion.
— Filler can be used as a layer.
— Fusion Connect will allow a maximum of eight layers.
You can use the Avid dialog boxes or smart tools to adjust the length and offset of
the transition to Start, Center, End, or Custom.
When using Fusion on the same computer as Media Composer, there is no need to export the clips
explicitly by checking the Export Clips checkbox. Without this option enabled, Fusion Connect saves
the source frames each time images are displayed, scrubbed, or played back from the Timeline.
Depending on your Media Composer Timeline settings, these interactively exported images might
be half-resolution based on Avid proxy settings. When scrubbing around the Timeline, only a few
frames—namely, those that are fully displayed during scrubbing—might be written to disk.
TIP: Set your Timeline Video Quality button to Full Quality (green) and 10-bit color bit
depth. If the Timeline resolution is set to Draft Quality (green/yellow) or Best Performance
(yellow), Fusion receives subsampled, lower-resolution images.
Edit Effect
After exporting the clips, the Edit Effect button performs three subsequent functions:
— Creates a Fusion composition, with Loaders, Savers, and Merges (for layers), or a Dissolve
(for transitions). This function is only performed the first time a comp is created when the
Fusion Connect AVX2 plug-in is applied.
The path settings field in the Effects Editor updates to show the current location. If you apply Fusion
Connect to another clip in the Timeline, the last location is remembered.
Any changes that are rendered in the copy will be written to a new folder and become another version
of the rendered result played on the Avid Timeline. Previous versions of the comp and their rendered
results are accessible using the Version slider.
Version
This slider selects which version of the comp is used in the Media Composer Timeline. It can be used
to interactively switch from one version to the other in order to compare the results.
— In the manual workflow, Fusion Studio is not required to be installed on the Avid system itself but
can reside on any other computer.
— For Auto-Render, Fusion Studio must be installed on the local computer.
The following diagram shows typical workflows for manual and automatic renders.
— Manual render: Creates a comp file when first clicked, but will not overwrite this file when clicked
again. Attempts to launch Fusion and load the comp. If Fusion is not installed locally, the comp can
be accessed manually from a different machine via the network.
— Auto-Render: Creates Fusion RAW files as Export Clip would do. Also creates a comp file when
first clicked, but will not overwrite this file when clicked again. Launches Fusion and loads the
comp. In this case, both the Fusion comp and the Avid clip
In all three node tree layouts outlined above, there will also be a Saver node. The Saver node is
automatically set to render to the directory that is connected back to the Media Composer Timeline
with the correct format. If for some reason the file format or file path are changed in the Saver node,
the Fusion Connect process will not render correctly.
TIP: Due to the design of the AVX2 standard, hidden embedded handles are not supported.
To add handles, prior to exporting to Fusion, increase the length of your clip in the Media
Composer Timeline to include the handle length.
Fusion Node Editor representations of a single clip segment effect in the Media Composer
Timeline (left), a multi-layered composite (center), and the transition (right)
TIP: If segments in the Avid Timeline have different durations, move the longest clip to the
top layer and apply Fusion Connect to that clip. This will ensure all clips are brought into
Fusion without being trimmed to the shortest clip.
Directory Structure of Fusion Connect Media
Fusion Connect creates a logical folder structure that is not affiliated with the Avid Media Files folder
but rather the Fusion Connect AVX2 plug-in. Based on data gathered during the AVX application to
the Timeline, a logical folder hierarchy is automatically created based on Avid projects, sequences, and
clips. This structure allows for multiple instances of Fusion Studio to access the media and multiple
instances of the AVX to relate to a single Fusion comp. In a re-edit situation, media is never duplicated
but is overwritten to avoid multiple copies of identical media.
Avid SequenceName
    Bob_v01.comp
    Charlie_v01.comp
    Bob
    Charlie
        Avid
            Charlie (directory named after the exported clip)
                Charlie_0000.raw (image sequence named after the exported clip)
            Dave (optional directory for a second source clip used by Charlie_v01.comp)
                Dave_0000.raw
            xyz_name (optional directory named after the exported clip)
                xyz_0000.raw
        Fusion
            Render_v01 (directory named "Render_" plus the version number of the Fusion comp)
                Charlie_0000.raw (rendered image sequence named after the Avid clip)
            Render_v02
If you apply the effect to a transition, the naming behavior might be somewhat different.
By default, Media Composer refers to the two clips of a transition as "Clip_001" and "Clip_002". Based
on the naming convention, Fusion Connect will create folders with matching names. If such folders
already exist, because another transition has already used up "Clip_001" and "Clip_002", the numbers
will be incremented automatically.
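The folder auto-increment behavior can be sketched as follows; this is a hypothetical illustration of the naming rule, not the actual Fusion Connect implementation:

```python
import os

def next_clip_folder(base_dir, stem="Clip"):
    """Return the first unused folder name of the form Clip_001, Clip_002, ..."""
    n = 1
    while os.path.exists(os.path.join(base_dir, f"{stem}_{n:03d}")):
        n += 1
    return f"{stem}_{n:03d}"
```

If Clip_001 and Clip_002 already exist, the next transition would receive Clip_003.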
Fusion Connect AVX creates folder structures in the OS to save media and
Fusion compositions. Those names are reflected in the Timeline.
You will notice that the Fusion Connect icon is a green dot (real-time) effect. If your hardware is fast
enough, the results that populate the plug-in will play in real time. However, it’s recommended that
you render the green dot effect, which will force an MXF precompute to be created to guarantee
real-time playback.
Default paths can be configured using variables, as on Windows, but for added convenience you can
enter any desired path defaults directly into the fields in the dialog, without using
environment variables.
Fusion Connect can define the user variables directly in the Fusion Connect plug-in. Click the
Configure Path Defaults button to launch the path defaults dialog editor. In the Options section of
the Fusion Connect AVX2 plug-in, click the triangle to reveal the path details.
Project Name: $DRIVE (CONNECT_DRIVE), the drive or folder for all Connect projects
Sequence Path: $GROUP, the unique name of this Connect instance
User Variables
Click on the link that says “Edit environment variables for your account.”
System Variables
Click on the link that says “Edit the system environment variables.”
User Variables
For system-wide operations, place the environment variable in ~/.bash_profile
TIP: System variables control the environment throughout the operating system, no
matter which user is logged in.
User variables always overrule system variables; if control for a specific function is duplicated in
both, the user variable wins.
System Variables
For system-wide operations, place the environment variable in /etc/profile
TIP: If you type directly in Fusion Connect’s Path Editor, you do not have to type the
variable, just the value. You also can make modifications without having to restart
the Media Composer! The only caveat is that in order to remove a variable, you must
exit Media Composer and clear the environment variable in the Windows interface or
macOS Terminal and restart the Media Composer.
— $DRIVE: Forces the directory to the drive where the Avid media is stored.
— $PROJECT: Forces a directory based on the Avid project name for which the media was
digitized/imported or AMA linked.
— $SEQUENCE: Forces a directory based on the Avid sequence name for which the media was
digitized/imported or AMA linked.
Here is an example of how a variable can be set up to support project and sequence names within
your directory.
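Substituting these variables into a path template can be sketched like this; the template string and the simple textual substitution are illustrative assumptions, not Fusion Connect's own code:

```python
def expand_path(template, values):
    """Substitute $DRIVE / $PROJECT / $SEQUENCE style variables in a path template."""
    # Replace longer names first so $SEQUENCE is not partially matched by a shorter name.
    for name in sorted(values, key=len, reverse=True):
        template = template.replace(f"${name}", values[name])
    return template

# Hypothetical example values; the layout mirrors the variables described above.
print(expand_path("$DRIVE/FusionConnect/$PROJECT/$SEQUENCE",
                  {"DRIVE": "E:", "PROJECT": "MyProject", "SEQUENCE": "Reel1"}))
# E:/FusionConnect/MyProject/Reel1
```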
Preferences
This chapter covers the various options that are
available from the Fusion Preferences Window.
Contents
Preferences Overview ................................................ 362
Script ............................................................................ 387
In DaVinci Resolve, to open the Fusion Preferences window on macOS, Windows, or Linux, switch to
the Fusion page and choose Fusion > Fusion Settings.
In Fusion Studio, to open the Fusion Preferences window, do one of the following:
— On macOS, choose Fusion Studio > Preferences.
— On Windows, choose File > Preferences.
— On Linux, choose File > Preferences.
The Global preferences are used to set options specific to Fusion’s overall behavior as well as defaults
for each new composition. The Composition preferences in Fusion Studio can further modify the
currently open composition without affecting the Global preferences or any other composition that is
open but not displayed.
3D View
The 3D View preferences offer control over various parameters of the 3D Viewers, including grids,
default ambient light setup, and Stereoscopic views.
Defaults
The Defaults preferences are used to select default behavior for a variety of options, such as
animation, global range, timecode display, and automatic tool merging.
Flow
You use the Flow preferences to set many of the same options found in the Node Editor’s contextual
menu, like settings for Tile picture, the Navigator, and pipe style.
General
The General preferences contain options for the general operation, such as auto save, and gamma
settings for color controls.
Path Map
Path Map preferences are used to configure virtual file path names used by Loaders and
Savers as well as the folders used by Fusion to locate comps, macros, scripts, tool settings, disk
caches, and more.
Script
The Script preferences include a field for passwords used to execute scripts externally, programs to
use for editing scripts, and the default Python version to use.
Splines
Options for the handling and smoothing of animation splines, Tracker path defaults, onion-skinning,
roto assist, and more are found in the Splines preference.
Timeline
The Timeline preferences are where you create and edit Timeline/Spline filters and set default options
for the Keyframes Editor.
Tweaks
The Tweaks preferences handle miscellaneous settings for modifying the behavior when loading
frames over the network and queue/network rendering.
User Interface
These preferences set the appearance of the user interface window and how the Inspector is
displayed.
View
The View preferences are used to manage settings for viewers, including default control colors,
Z-depth channel viewing ranges, default LUTs, padding for fit, zoom, and more.
VR Headsets
The VR Headsets preferences allow configuration of any connected Virtual Reality headsets, including
how stereo and 3D scenes are viewed.
Import
The Import settings contain options for EDL Import that affect how flows are built using the data
from an EDL.
3D View
The 3D View preferences contain settings for various defaults in the 3D Viewers, including grids,
default ambient light setup, and Stereoscopic views.
Grid
The Grid section of the 3D View preferences configures how the grid in 3D Viewers is drawn.
— Grid Antialiasing: Some graphics hardware and drivers do not support antialiased grid
lines, causing them to sort incorrectly in the 3D Viewer. Disabling this checkbox will disable
antialiasing of the grid lines. To turn off the grid completely, right-click in a 3D Viewer and choose
3D Options > Grid.
— Size: Increasing the Size value will increase the number of grid lines drawn. The units used for the
spacing between grid lines are not defined in Fusion. A “unit” is whatever you want it to be.
— Scale: Adjusting the overall scaling factor for the grid is useful, for example, if the area of the grid
appears too small compared to the size of your geometry.
Perspective Views
— Near Plane/Far Plane: These values set the nearest and furthest point any object can get to or
from the camera before it is clipped. The minimum setting is 0.05. Setting Near Plane too low and
Far Plane too far results in loss of depth precision in the viewer.
— Eye Separation/Convergence/Stereo Mode: This group of settings defines the defaults when
stereo is turned on in the 3D Viewer.
Orthographic Views
Similar to the Perspective Views, the Orthographic Views (front, top, right, and left views) section sets
the nearest and furthest point any object can get to or from the viewer before clipping occurs.
Fit to View
The Fit to View section has two value fields that manage how much empty space is left around objects
in the viewer when the F key is pressed.
— Fit Selection: Fit Selection determines the empty space when one or more objects are selected
and the F key is pressed.
— Fit All: Fit All determines the empty space when you press F with no objects selected.
Default Lights
These three settings control the default light setup in the 3D Viewer.
The default ambient light is used when lighting is turned on and you have not added a light to the
scene. The directional light moves with the camera, so if the directional light is set to “upper left,” the
light appears to come from the upper-left side of the image/camera.
AVI
The AVI preference is only available in Fusion Studio on Windows. It configures the default AVI codec
settings when you select AVI as the rendering file format in the Saver node.
— Compressor: This drop-down menu displays the AVI codecs available from your computer. Fusion
tests each codec when the application opens; therefore, some codecs may not be available if the
tests indicate that they are unsuitable for use within Fusion.
— Quality: This slider determines the amount of compression to be used by the codec. Higher
values produce clearer images but larger files. Not all codecs support the Quality setting.
— Key Frame Every X Frames: When checked, the codec creates keyframes at specified intervals.
Keyframes are not compressed in conjunction with previous frames and are, therefore, quicker to
seek within the resulting movie. Not all codecs support the keyframe setting.
— Limit Data Rate To X KB/Second: When checked, the data rates of the rendered file are limited to
the amount specified. Not all codecs support this option. Enter the data rate used to limit the AVI
in kilobytes (kB) per second, if applicable. This control does not affect the file unless the Limit Data
Rate To option is selected.
Default Animate
The Default Animate section is used to change the type of modifier attached to a parameter when
the Animate option is selected from its contextual menu. The default option is Nothing, which uses a
Bézier spline to animate numeric parameters and a path modifier for positional controls.
— Number With and Point With: Drop-down lists are used to select a different modifier for the
new default. For example, change the default type used to animate position by setting the Point
with the drop-down menu to XY Path.
Choices shown in this menu come from installed modifiers that are valid for that type of
parameter. These include third-party plug-in modifiers, as well as native modifiers installed
with Fusion.
Auto Tools
The Auto Tools section determines which tools are added automatically for the most common
operations of the Background tools and Merge operations.
— Background: When set to None, a standard Background tool is used; however, the drop-down
menu allows you to choose from a variety of tools including 2D and 3D tools to customize the
operation to your workflow.
Global Range
Using the Start and End fields, you can define the Global Start and End frames used when creating
new compositions.
Time Code
You use this option to determine whether new compositions will default to showing SMPTE Time Code
or frames (Feet + Frames) to represent time.
Flow
Many of the same options found in the Node Editor’s contextual menu, like settings for Tile Picture,
the Navigator, and Pipe Style, are found in this category.
Force
The Force section can set the default to display pictures in certain tool tiles in the Node Editor rather
than showing plain tiles. The Active checkbox sets pictures for the actively selected tool, the All
checkbox enables pictures for all tiles, and the Source and Mask checkbox enables tile pictures for just
Source and Mask tools.
— Show Modes/Options: Enabling this option will display icons in the tool tile depicting various
states, like Disk Caching or Locked.
— Show Thumbnails: When this checkbox is selected, tool tiles set to show tile pictures will
display the rendered output of the tool. When the checkbox is cleared, the default icon for the tool
is used instead.
Options
The Options section includes several settings that control or aid in the layout and alignment of tools in
the Node Editor.
— Arrange to Grid: This enables a new node tree’s Snap to Grid option to force the tool layout to
align with the grid marks in the flow.
— Arrange to Connected: Tools snap to the vertical or horizontal positions of other tools
they are connected to.
— Auto Arrange: This option enables the Node Editor to shift the position of tools as needed to
make space when inserting new tools or auto-merging layers.
— Show Grid: This enables or disables the display of the Node Editor’s background grid.
— Auto Remove Routers: Pipe Routers or “elbow nodes” in the Node Editor are considered
“orphaned” if the tools connected to either the input or output are deleted. When this option is
enabled, Orphaned Routers are automatically deleted.
— Pipes Always Visible: When enabled, the connection lines between tools are drawn over the top
of the tool tiles.
— Keep Tile Picture Aspect: Enabling this option forces tool tile thumbnail pictures to preserve the
aspect of the original image in the thumbnail.
— Full Tile Render Indicators: Enabling this checkbox causes the entire tile to change color
when it is processing. This can make it easier to identify which tools are processing in a large
composition. The coloring itself will form a progress bar to alert you to how close slower tools are
to finishing their process.
— Show Instance Links: This option is used to select whether Instance tools will show links,
displayed as green lines, between Instance tools.
— Navigator: The Navigator is a small square overview of the entire composition. It is used to
quickly navigate to different parts of a node tree while you are zoomed in. The checkboxes in this
section determine when the Navigator is displayed, if at all.
— On: The Navigator will always be visible.
— Off: The Navigator will always be hidden.
— Auto: The Navigator will only be visible when the Node Editor’s contents exceed the currently
visible Work area.
— Pipe Style: This drop-down menu selects which method is used to draw connections between
tools. The Direct method uses a straight line between tools, and Orthogonal uses horizontal and
vertical lines.
Group Opacity
This slider controls the opacity of an expanded group’s background in the Node Editor.
Frame Format
Frame Format preferences allow you to select the resolution and frame rate for the nodes that
generate images, like Background, Fast Noise, and Text+. It also sets the color bit depth for final
renders, previews, and interactive updates in the viewer. The color bit depth settings only apply to
Fusion Studio. Rendering in DaVinci Resolve always uses 32-bit float.
Use the Edit boxes to change any of the default settings. To create a new setting, click the New
button, enter a name for the setting in the dialog box that appears, and then enter the parameters.
Settings
The Settings section defines the format that is selected in the Default Format menu. You can modify
an existing format or create a new one.
— Width/Height: When creating a new format for the menu or modifying an existing menu item,
you specify the Width or Height in pixels of the format using these fields.
— Frame Rate: Enter or view the frames per second played by the format. This sets the default
Frame Rate for previews and final renders from the Saver tool. It also sets the playback for the
comp itself, as well as the frame to time code conversion for tools with temporal inputs.
— Has Fields: When this checkbox is enabled, any Creator or Loader tool added to the Node Editor
will be in Fields process mode.
— Film Size: This field is used to define how many frames are found in one foot of film. The value is
used to calculate the display of time code in Feet + Frames mode.
— Aspect Ratio: These two fields set the pixel aspect ratio of the chosen frame format.
— Guide 1: The four fields for Guide 1 define the left, top, right, and bottom guide positions for
the custom guides in the viewer. To change the position of a guide, enter a value from 0 to 1.
The bottom-left corner is always 0/0, the top-right corner is always 1/1. If the entered value’s
aspect does not conform to the frame format as defined by the Width and Height parameters,
an additional guide is displayed onscreen. The dotted line represents the image aspect centered
about Guide 1’s Center values.
— Guide 2: This setting determines the image aspect ratio in respect to the entire frame format
width and height. Values higher than 1 cause the height to decrease relative to the width. Values
smaller than 1 cause height to increase relative to width.
— New: You use the New button to create a new default setting in the drop-down menu. Once you
click the button, you can name the setting in the dialog box that appears.
— Copy: The Copy button copies the current setting to create a new one for customization.
— Delete: The Delete button will remove the current setting from the default drop-down list.
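The Film Size setting above drives the Feet + Frames display. As a worked example, converting an absolute frame number can be sketched as follows; the exact display formatting is an assumption for illustration (35mm film is commonly counted at 16 frames per foot):

```python
def feet_and_frames(frame, frames_per_foot=16):
    """Convert an absolute frame count to a Feet+Frames string.

    The "feet+frames" formatting here is illustrative; Fusion's actual
    display format may differ.
    """
    feet, frames = divmod(frame, frames_per_foot)
    return f"{feet}+{frames:02d}"

print(feet_and_frames(100))  # 6+04: 100 frames = 6 feet and 4 frames at 16 frames/foot
```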
Color Depth
The three menus in the Color Depth section are used to select the color mode for processing preview
renders, interactive renders, and full (final) renders. Processing images at 8-bit is the lowest color
depth and is rarely sufficient for final work these days but is acceptable for fast previews. 16-bit color
has much higher color fidelity but uses more system resources. 16-bit and 32-bit float per channel
uses even more system resources and is best for digital film and HDR rendered images.
Generally, these options are ignored by the composition unless a Loader or Creator tool’s Color Depth
control is set to Default.
Usability
Usability has a number of project, Node Editor, and user interface settings that can make the
application easier to work with, depending on your workflow.
— Auto Clip Browse: When this checkbox is enabled, the File Browser is automatically displayed
when a new Loader or Saver is added to the Node Editor.
— New Comp on Startup: When checked, a new, empty project is created each time Fusion Studio is
launched. This has no effect in DaVinci Resolve’s Fusion page.
— Summarize Load Errors: When loading node trees or “comps” that contain unknown tools
(e.g., comps that have been created on other computers with plugins not installed on the current
machine), the missing tools are summarized in the console rather than a dialog being presented
for every missing tool.
— Save Compressed Comps: This option enables the saving of compressed node trees, rather than
ASCII based text files. Compressed node trees take up less space on disk, although they may take
a moment longer to load. Node trees containing complex spline animation and many paint strokes
can grow into tens of megabytes when this option is disabled. However, compressed comps
cannot be edited with a text editor unless saved again as uncompressed.
— Show Video I/O Splash: This toggles whether the Splash image will be displayed over the video
display hardware. This only applies to Fusion Studio.
Auto Save
The Auto Save settings only apply to Fusion Studio. To set auto backups for the Fusion page in
DaVinci Resolve, use the DaVinci Resolve Project Load and Save Preferences.
When Auto Save is enabled in Fusion Studio, comps are automatically saved to a backup file at regular
intervals defined by the Delay setting. If a backup file is found when attempting to open the comp, you
are presented with the choice of loading either the backup or the original.
If the backup comp is opened from the location set in the Path Map preference, saving the backup
will overwrite the original file. If the backup file is closed without saving, it is deleted without
affecting the original file.
— Save Before Render: When enabled, the comp is automatically saved before a preview or final
render is started.
— Delay: This preference is used to set the interval between Auto Saves. The interval is set using
mm:ss notation, so entering 10 causes an Auto Save to occur every 10 seconds, whereas entering
10:00 causes an Auto Save every 10 minutes.
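The Delay notation follows a small rule: a bare number is seconds, and mm:ss gives minutes and seconds. This Python sketch is only an illustration of that rule, not Fusion's own parser:

```python
def delay_seconds(text):
    """Parse an Auto Save delay: "10" means 10 seconds, "10:00" means 10 minutes."""
    if ":" in text:
        minutes, seconds = text.split(":")
        return int(minutes) * 60 + int(seconds)
    return int(text)

print(delay_seconds("10"))     # 10
print(delay_seconds("10:00"))  # 600
```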
Proxy
— Update All, Selective, No Update: The Update mode button is located above the toolbar. You
can use this preference to determine the default mode for all new comps. Selective is the usual
default. It renders only the tools needed to display the images in the Display view. All will render all
tools in the composition, whereas None prevents all rendering.
— Standard and Auto: These sliders designate the default ratio used to create proxies when the
Proxy and Auto Proxy modes are turned on. These settings do not affect the final render quality.
Even though the images are being processed smaller than their original size, the image viewing scales
in the viewers still refer to original resolutions. Additionally, image processing performed in Proxy
Scale mode may differ slightly from full-resolution rendering.
The Proxy and Auto Proxy size ratios may be changed from within the interface itself by right-clicking on
the Prx and APrx buttons above the toolbar and selecting the desired value from the contextual menu.
GPU
In Fusion Studio, the GPU preference is used to specify the GPU acceleration method used for
processing, based on your computer platform and hardware capabilities. It is also used for enabling
caching and debugging GPU devices and tools.
Options
The GPU options include radio buttons to select whether the GPU is used when processing and, if so,
which computer framework is used for communicating with the GPU.
— GPU Tools: This preference has three settings: Auto, Disable, and Enable. When set to Disable, no
GPU acceleration is used for tools or third-party plugins. Fuses may still require GPU acceleration.
If Enable is selected, GPU acceleration is available for tools and plugins, if appropriate
drivers are installed.
— API: The API setting selects the GPU processing method to use.
— Device: The Device setting determines which GPU hardware to use in the case of multiple GPUs.
The Auto setting gives priority to GPU processing; however, if it is unavailable, Fusion uses the
platform default. Currently, both the AMD and CPU options require either the AMD Catalyst 10.10
Accelerated Parallel Processing (APP) technology Edition driver or the ATI Stream SDK 2.1 or later
to be installed. The Select setting allows you to choose the device explicitly.
— Verbose Console Messages: Enabling this option causes information to be shown in the Console.
For example, Startup Logs, Compiler Warnings, and Messages.
— OpenGL Sharing: Enabling this option shares system RAM with onboard GPU RAM to create a
larger, but slower, OpenGL memory pool.
— Clear Cache Files: This option will clear already compiled GPU code and then
recompile the kernels.
Layout
The Layout preferences are only available in Fusion Studio. To save a Layout in DaVinci Resolve’s
Fusion page, use the Workspace > Layout Presets menu. The Layout options are used to control the
layout, size, and position of various windows in Fusion’s interface at startup or when a comp is created.
There are a lot of options, but in practice, you simply organize the interface the way you prefer it
on startup and when a new composition is created, then open this Preferences panel and click on
the three buttons to grab the Program Layout, the Document Layout and the Window Settings.
— Grab Program Layout: Pressing this button stores the application’s overall current
position and size.
— Run Mode: This menu is used to select the application’s default mode at startup.
You choose between a Maximized application window, a Minimized application, or a Normal
application display.
— Use the Following Position and Size: When checked, the values stored when Grab Program
Layout was selected will be used when starting Fusion Studio.
— Create Floating Views: When checked, the position and size of the floating viewers will be saved
when the Grab Program Layout button is used.
Document Layout
The Document Layout is used to save the layout of panels and windows for the current Fusion comp.
— Recall Layout Saved In Composition: When checked, all Document Layout settings in the
controls below will be recalled when a saved composition is loaded.
— Grab Document Layout: Pressing this button stores the entire interface setup, including all the
internal positions and sizes of panels and work areas.
— Window: When multiple windows on the same composition are used, this menu is used to select
the window to which the Window Settings will apply.
Window Settings
Rather than saving entire comp layouts, you can save position and size for individual floating windows
and panels within a comp using the Window Settings.
— Automatically Open This Window: When checked, the selected window will automatically be
opened for new flows.
— Grab Window Layout: Pressing this button stores the size and position of the selected window.
— Run Mode: Select the default run mode for the selected window. You can choose between a
Maximized window, a Minimized window, or a Normal window display.
— Use Grabbed Position and Size: When checked, the selected window will be created using the
stored position and size.
Defaults
The Defaults section includes two settings to determine how color depth and aspect ratio are handled
for Loaders.
— Loader Depth: The Loader Depth defines how color bit depth is handled when adding a Loader.
Choosing Format means that the correct bit depth is automatically selected, depending on the file
format and the information in the file’s header. Choosing Default sets the bit depth to the value
specified in the Frame Format preferences.
Cache
The Cache preferences allow you to control how disk caching operates in Fusion. You can set how and
where the cache is generated, when the cache is removed, how the cache reacts when source files are
not available, as well as many other cache related options. This is not to be confused with RAM cache,
which is controlled in the Memory preferences.
— If Original File Is Missing: This setting provides three options to determine the caching behavior
when the original files can’t be found. The Fail option behaves exactly as the Default Loader in
Fusion. The Loader will not process, which may cause the render to halt. The Load Cache option
loads the cache even though no original file is present. The Delete Cache option clears missing
files from the cache.
— Cache Location: For convenience, this is a copy of the LoaderCache path set in the
Path Maps preferences.
— Explore: This button opens the LoaderCache path in a macOS Finder window
or a Windows Explorer window.
— Clear All Cache Files: This button deletes all cached files present in the LoaderCache path.
Memory
The Memory preferences are only available in Fusion Studio. To control Fusion’s memory when using
the Fusion page in DaVinci Resolve, open DaVinci Resolve’s Memory and GPU preferences.
Occasionally, it will be necessary to adjust the Memory preferences in order to make the best use of
available memory on the computer. For example, some people prefer a higher cache memory for
faster interactive work, but for final renders the cache memory is often reduced, so there’s more
memory available for simultaneous processing of tools or multiple frames being rendered at once.
Caching Limits
The Caching Limits include options for Fusion’s RAM cache operation. Here, you can determine how
much RAM is allocated to the RAM cache for playing back comps in the viewer.
— Limit Caching To: This slider is used to set the percentage of available memory used for
the interactive tool cache. Available memory refers to the amount of memory installed
in the computer.
When the interactive cache reaches the limit defined in this setting, it starts to remove lower
priority frames in the cache to clear space for new frames.
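The eviction behavior just described can be pictured as a small priority-based cache: when adding a frame would exceed the limit, the lowest-priority entries are dropped first. The sketch below is a generic illustration only, not Fusion's implementation; all names are invented:

```python
import heapq

class FrameCache:
    """Toy RAM cache: evicts lowest-priority frames when over the limit."""
    def __init__(self, limit):
        self.limit = limit   # maximum number of cached frames
        self.heap = []       # (priority, frame_id) min-heap, lowest first
        self.frames = {}     # frame_id -> frame data

    def add(self, frame_id, data, priority):
        # Remove lower-priority frames until there is room for the new one.
        while len(self.frames) >= self.limit and self.heap:
            _, evict_id = heapq.heappop(self.heap)
            self.frames.pop(evict_id, None)
        heapq.heappush(self.heap, (priority, frame_id))
        self.frames[frame_id] = data

cache = FrameCache(limit=2)
cache.add("f1", b"...", priority=1)   # lowest priority
cache.add("f2", b"...", priority=5)
cache.add("f3", b"...", priority=9)   # evicts f1
print(sorted(cache.frames))           # ['f2', 'f3']
```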
Interactive Render
The Interactive Render option allows you to optimize Fusion’s processing based on the amount of
RAM you have installed in your system.
— Simultaneous Branching: When checked, more than one tool will be processed at the same time.
Disable this checkbox if you are running out of memory frequently.
Final Render
These settings apply to memory usage during a rendering session, either preview or final, with no
effect during an interactive session.
— Render Slider: This slider adjusts the number of frames that are rendered at the same time.
— Simultaneous Branching: When checked, more than one branch of a node tree will be
rendered at the same time. If you are running low on memory, turn this off to increase
rendering performance.
To re-enable editing of the master name and IP, create the environment variable
FUSION_NO_MANAGER and set the value to True. Check your operating system user guide for how
to create environment variables.
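For instance, on macOS or Linux the variable can be created in a shell profile; the Windows equivalent is shown as a comment. (The variable name comes from the paragraph above; only the shell syntax is illustrative.)

```shell
# Create the FUSION_NO_MANAGER environment variable (macOS/Linux shells).
# Add this line to ~/.zshrc or ~/.bashrc to make it persistent.
export FUSION_NO_MANAGER=True

# Windows equivalent (Command Prompt; persists for the current user):
#   setx FUSION_NO_MANAGER True

echo "FUSION_NO_MANAGER=$FUSION_NO_MANAGER"
```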
General
The General preferences are designed with the most used options at the top in the General section.
These options determine in what capacity the system is used during network rendering.
— Make This Machine a Render Master: When enabled, Fusion will accept network render
compositions from other computers and manage the render. It does not necessarily mean that
this computer will be directly involved in the render, but it will submit the job to the render nodes
listed in the Render Manager dialog.
Email Notification
You can use the Email Notification section to set up who gets notified with status updates regarding
the render jobs and the network.
— Notify Options: These checkboxes cause emails to be sent when certain render events take
place. The available events are Queue Completion, Job Done, and Job Failure.
— Send Email to: Enter the address or addresses to which notifications should be sent. You
separate multiple addresses with a semicolon.
— Override Sender Address: Enter an email address that will be used as the sender address. If this
option is not selected, no sender address is used, which may cause some spam filters to prevent
the message from being delivered to the recipient.
Server Settings
This section covers Clustering and Network Rendering. For more information on these settings and
clustering, see Chapter 4, “Rendering Using Saver Nodes,” in the Fusion Reference Manual.
Path Maps
Path Maps are virtual paths used to replace segments of file paths with variables. For example,
define the path ‘movie_x’ as actually being in X:\Shows\Movie_X. Using this example, Fusion would
understand the path ‘movie_x\scene_5\scan.000.cin’ as actually being
X:\Shows\Movie_X\scene_5\scan.000.cin.
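Conceptually, a path map is a simple prefix substitution. A minimal sketch in Python, using the map name from the example above (this is illustrative only, not Fusion's actual implementation):

```python
def expand(path, path_maps):
    """Replace a leading 'MapName\\' prefix with its mapped location."""
    for name, target in path_maps.items():
        prefix = name + "\\"
        if path.lower().startswith(prefix.lower()):
            return target.rstrip("\\") + "\\" + path[len(prefix):]
    return path  # no map matched; leave the path untouched

maps = {"movie_x": "X:\\Shows\\Movie_X"}
print(expand("movie_x\\scene_5\\scan.000.cin", maps))
# X:\Shows\Movie_X\scene_5\scan.000.cin
```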
For Fusion Studio, there are two main advantages to virtual path maps instead of actual file paths.
One is that you can easily change the path to media connected to Loaders (for example, when moving
a comp from one drive to another), without needing to make any changes in the composition. The
other advantage is when network rendering, you can bypass the different OS filename conventions.
— Enable Reverse Mapping of Paths Preferences: This checkbox is at the top of the Path Map
settings. When enabled, Fusion uses the built-in path maps for entries in the path’s settings
when applying mapping to existing filenames. The main benefit is for Fusion Studio. Enabling
this checkbox causes Loaders to automatically use paths relative to the location of the saved
composition when they are added to the Node Editor. For more information on using relative
paths for Loaders, see Chapter 44, “I/O Nodes,” in the Fusion Reference Manual.
As with other preferences in Fusion Studio, path maps are available in both Global and Composition
preferences. Global preferences are applied to all new compositions, while Composition path maps
are only saved with the current composition. Composition path maps will override Global path maps
with the same name.
— System Path Maps: The operating system determines system path maps, and they define
Fusion’s global locations. You can override specific System path maps using the Defaults or
current Composition Path Map settings. If you change your mind at a later time, you are always
able to return to Fusion’s “factory” defaults using the System path maps. There are several
top-level path maps established in the System Path Map settings.
— AllData: The folder where Fusion saves all shared application data.
— AllDocs: The folder where Fusion saves the public/shared document folder.
— AllLUTs: The nested LUTs path in the Defaults section, where Fusion saves LUTs.
— Fusion: The folder where the Fusion Studio app is installed. For example, if you open Fusion from
C:\Program Files\Fusion, then the path Fusion:\Help refers to C:\Program Files\Fusion\Help. If
you instead used a copy of Fusion found in \\post-server\fusion\16, then Fusion:\Help would
expand to \\post-server\fusion\16\Help.
— FusionLibs: The Fusion libraries used for the application.
— Profile: The folder where default Fusion preferences file is saved.
— Profiles: The folder where Fusion individual user preferences are saved.
— Programs: The location of Fusion Studio or DaVinci Resolve.
— SystemFonts: The folder where the OS saves fonts that appear for Text+ and Text 3D nodes.
— Temp: The system’s temporary folder.
— Default Path Maps: The Defaults are user-editable path maps. They can reference the System
paths as part of their own definitions. For instance, the Temp folder is defined in the System path
and used by the Default DiskCache path map to refine the nested location (Temp:DiskCache).
Default path maps can also redirect paths without using the Global System path maps. After you
change a Default, the updated setting can be selected in the Preferences window, and a Reset
button at the bottom of the Preferences window will return the modified setting to the System default.
— AutoSaves: This setting determines the Fusion Comp AutoSave document’s location, set in
the Fusion General preferences.
— Bins: Sets the location of Fusion Studio bins. Since the bins use pointers to the content, the
content is not saved with the bin. Only the metadata and pointers are saved in the bins.
— Brushes: Points Fusion to the folder that contains custom paintbrushes.
— Comps: The folder where Fusion Studio compositions are saved. On macOS or Windows, the
default location is in Users/YourUserName/Documents/Blackmagic Design/Fusion.
— Config: Stores Configuration files used by Fusion Studio during its operation.
— Defaults: Identifies the location of node default settings so they can be restored if overwritten.
— DiskCache: Sets the location for files written to disk when using the Cache to Disk feature.
This location can be overridden in the Cache to Disk window.
— Edit templates: The location where Fusion macros are saved in order to appear as templates
in the DaVinci Resolve Effects Library.
— Filters: Points to a folder containing Convolution filters like sharpen, which can be used for
the Custom Filter node.
— Fonts: The default path map for Fonts points to the operating system fonts folders. Changing
this will change the fonts that are available in the Text+ and Text 3D nodes, as well as any Fusion
Title Template in DaVinci Resolve. This path map does not affect the five additional Edit page
titles (L Lower 3rd, R Lower 3rd, M Lower 3rd, Scroll, and Text).
— Fuses: Points to a folder containing Fusion Fuses plugins.
— FusionTemplates: Location where Fusion macros are saved in order to appear as templates
in Fusion’s Effects Library.
— Guides: Location where custom viewer guide overlays are stored.
— Help: Identifies where Fusion Studio PDF files are located.
— Layouts: Location where Fusion Studio custom window layouts are saved.
— Libraries: Points to a support folder where custom Effects Library items can be stored.
— LoaderCache: The Fusion Studio Loader preferences allow the Loader to cache when
reading from a slow network. This path map points to the local drive location for that cache.
— LuaModules: Location for Lua Scripting modules.
— LUTs: Points to a folder containing Look Up Tables (LUTs).
— Macros: Points to the location for user-created macros. The macros saved to this location appear
in the macros category of the Effects Library and in the right-click Edit Macro contextual menu.
— User Path Maps: User paths are new paths that you have defined that do not currently exist in
the Defaults settings.
— Comp: Refers to the folder where the current composition is saved. For instance, saving media
folders within the same folder as your Fusion Studio comp file is a way to use relative file paths
for Loaders instead of actual file paths.
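Resolving a Comp-relative path amounts to joining it onto the folder that holds the saved comp file. A hedged sketch of that idea follows — Fusion performs this mapping internally, and the function and file names here are invented for illustration:

```python
import os

def resolve_comp_path(virtual_path, comp_file):
    """Resolve a 'Comp:'-style relative path against the saved comp's folder.
    Illustrative only -- Fusion does this mapping itself."""
    comp_dir = os.path.dirname(os.path.abspath(comp_file))
    if virtual_path.startswith("Comp:"):
        relative = virtual_path[len("Comp:"):].lstrip("\\/")
        return os.path.join(comp_dir, relative)
    return virtual_path  # not Comp-relative; pass through unchanged

print(resolve_comp_path("Comp:/media/clip.0000.exr", "/shows/movie_x/shot_05.comp"))
# /shows/movie_x/media/clip.0000.exr (on POSIX systems)
```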
Preview
In the Preview preferences, you configure the creation and playback options for preview renders.
Options
— Render Previews Using Proxy Scaling: When checked, this option scales down the images to the
preview size for the Loader and Creator tools. This causes much faster rendering. If this option is
disabled, frames will be rendered at full size and are then scaled down.
— Skip Frames to Maintain Apparent Framerate: When checked, frames are skipped during
playback of Flipbooks and file sequences to maintain the frame rate setting.
— Show Previews for Active Loaders: This setting determines whether the preview playback
controls are shown below the Inspector when a Loader with a valid file is activated.
— Show Previews for Active Savers: This setting determines whether the preview playback
controls below the Inspector are shown when a Saver with a valid file is activated.
— Display File Sequences On: This setting determines which viewer or external monitor is used for
the interactive and file sequence playbacks as well as for the scrubbing function in the bins.
— Compressor: This drop-down menu displays the QuickTime codecs available from your computer.
Fusion tests each codec when the program is started; therefore, some codecs may not be
available if the tests indicate that they are unsuitable for use within Fusion.
— Quality: This slider is used to determine the amount of compression to be used by the codec.
Higher values produce clearer images but larger files. Not all codecs support the Quality setting.
— Key Frame Every X Frames: When checked, the codec will create key frames at specified
intervals. Key frames are not compressed in conjunction with previous frames and are, therefore,
quicker to seek within the resulting movie. Not all codecs support the key frame setting.
— Limit Data Rate To X KB/Second: When checked, the data rate of the rendered file will be
limited to the amount specified. Not all codecs support this option. Enter the data rate used to
limit the QuickTime movie in kilobytes (KB) per second, if applicable. This control has no effect
if the Limit Data Rate To option is not selected.
Script
The preferences for Scripting include a field for passwords used to execute scripts from the command
line and applications for use when editing scripts.
Login
There are three login options for running scripts outside of the Fusion application.
— No Login Required to Execute Script: When enabled, scripts executed from the command line,
or scripts that attempt to control remote copies of Fusion, do not need to log in to the workstation
in order to run.
— Specify Custom Login: If a username and password are assigned, Fusion will refuse to process
incoming external script commands (from FusionScript, for example), unless the Script first logs in
to the workstation. This only affects scripts that are executed from the command line, or scripts
that attempt to control remote copies of Fusion. Scripts executed from within the interface do not
need to log in regardless of this setting. For more information, see the Scripting documentation.
— Use Windows Login Validation: When using Fusion on Windows, enabling this option verifies
the user name and password (also known as credentials) with the operating system before
running the script.
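Command-line script execution goes through Fusion's fuscript utility. The sketch below writes a trivial script and shows a hypothetical invocation — the `-l py3` language flag and install paths vary by version and platform, so verify both against your own Fusion Studio install:

```shell
# Create a trivial Python script intended for command-line execution.
cat > hello_fusion.py <<'EOF'
# When launched via fuscript, scripts can control a local or remote Fusion.
print("hello from a command-line Fusion script")
EOF

# Hypothetical invocation (requires Fusion Studio installed and on PATH;
# the '-l py3' flag is an assumption to verify for your version):
#   fuscript -l py3 hello_fusion.py
```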
Options
— Script Editor: Use this preference to select an external editor for scripts. This preference is used
when selecting Scripts > Edit.
Python Version
— Two options are presented here for selecting the version of Python that you plan on using
for your scripts.
Spline Editor
— Independent Handles: Enabling this option allows the In or Out direction handle on newly
created key frames to be moved independently without affecting the other. This option is also
available via the Options submenu when right-clicking in the Spline Editor graph.
— Follow Active: The Spline Editor focuses on the currently active tool. This option is also available
via the Options submenu when right-clicking in the Spline Editor graph.
— Show Key Markers: Small colored triangles will be displayed at the top of the Spline Editor
Time Ruler to indicate key frames on active splines. The colors of the triangles match the colors
of the splines. This option is also available via the Show submenu when right-clicking in the Spline
Editor graph.
— Show Tips: Toggles whether tooltips are displayed.
Splines
Options for the handling and smoothing of animation splines, tracker path defaults, and rotoscoping
are found in the Splines preferences.
— Autosmooth: Automatically smooths out any newly created points or key frames on the splines
selected in this section. You can choose to automatically smooth animation splines, B-Splines,
polyline matte shapes, LUTs, paths, and meshes.
— B-Spline Modifier Degree: This setting determines the degree to which the line segments
influence the resulting curvature when B-Splines are used in animation. Cubic B-Splines determine
a segment through two control points between the anchor points, and Quadratic B-Splines
determine a segment through one control point between the anchor points.
— B-Spline Polyline Degree: This setting is like the one above but applies to B-Splines
used for masks.
— Tracker Path Points Visibility: This setting determines the visibility of the control points on
tracker paths. You can show them, hide them, or show them when your cursor hovers over the
path, which is the default behavior.
— Tracker Path: The default tracker creates Bézier-style spline paths. Two other options in this
setting allow you to choose B-Spline or XY Spline paths.
— Polyline Edit Mode on Done: This setting determines the state of the Polyline tool after you
complete the drawing of a polyline. It can either be set to modify the existing control points on the
spline or modify and add new control points to the spline.
— Onion Skinning: The Onion Skinning settings determine the number of frames displayed while
rotoscoping, allowing you to preview and compare a range of frames. You can also adjust whether
the preview shows only frames before the current frame, only frames after it, or a split
between the two.
Filter/Filter to Use
The Filter menu populates the hierarchy area below the menu with the selected filter so you can
edit it. The Filter to Use menu selects the default filter setting located in the Keyframes Editor
Options menu.
Timeline Options
The Timeline Options configure which options in the Keyframe Editor are enabled by default. A series
of checkboxes correspond to buttons located in the Timeline, allowing you to determine the states
of those buttons at the time a new comp is created. For more information on the Keyframes Editor
functions, see Chapter 9, “Animating in Fusion’s Keyframes Editor,” in the Fusion Reference Manual.
Tweaks
The Tweaks preferences handle a collection of settings for fine-tuning Network rendering in Fusion
Studio and graphics hardware behavior.
— Maximum Missed Heartbeats: This setting determines the maximum number of times the
network is checked before terminating the communication with a Render node.
— Heartbeat Interval: This sets the time between network checks.
— Load Composition Timeout: This timeout option determines how long the Render Manager will
wait for a composition to load before moving on to another task.
— Last Slave Restart Timeout: This timeout option determines how long the Render Manager will
wait for a render slave to respond before using another render slave.
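The heartbeat settings above amount to counting consecutive failed network checks and dropping the node once the maximum is reached. A generic sketch of that logic (not Fusion's code; all names are invented):

```python
import time

def monitor(node_ping, heartbeat_interval=1.0, max_missed=3):
    """Drop a render node after `max_missed` consecutive failed checks."""
    missed = 0
    while missed < max_missed:
        if node_ping():          # one heartbeat: did the node answer?
            missed = 0           # any success resets the counter
        else:
            missed += 1
        time.sleep(heartbeat_interval)
    return "terminated"

# Simulated node that answers once, then goes silent.
pings = iter([True, False, False, False])
print(monitor(lambda: next(pings), heartbeat_interval=0))  # terminated
```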
File I/O
The File I/O options are used to control performance when reading frames or large media files
from both direct-attached and network-attached storage.
— I/O Canceling: This option enables a feature of the operating system that allows queued
operations to be canceled when the function that requested them is stopped. This can improve
the responsiveness, particularly when loading large images over a network.
Enabling this option will specifically affect performance while loading and accessing formats that
perform a large amount of seeking, such as the TIFF format.
This option has not been tested with every hardware and OS configuration, so it is recommended
to enable it only after you have thoroughly tested your hardware and OS configuration using drive
loads from both local disks and network shares.
— Enable Direct Reads: Enabling this checkbox uses a more efficient method when loading a large
chunk of contiguous data into memory by reducing I/O operations. Not every operating system
employs this ability, so it may produce unknown behavior.
— Read Ahead Buffers: This slider determines the number of 64K buffers that are used to read
ahead in a file I/O operation. The more buffers, the more efficiently frames load from disk,
but the less responsive Fusion will be to interactive changes that require disk access.
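The trade-off can be pictured as plain chunked reading: fetching more 64K buffers per pass means fewer I/O calls, at the cost of more speculative reading. A generic sketch, not Fusion's implementation:

```python
def read_ahead(f, buffers=8, buffer_size=64 * 1024):
    """Read up to `buffers` consecutive 64K chunks in one pass.
    More buffers => fewer passes per file, but more speculative I/O."""
    return f.read(buffers * buffer_size)

# Usage sketch: write a 100,000-byte file, then read it back in one pass.
with open("frame.bin", "wb") as f:
    f.write(b"\x00" * 100_000)

with open("frame.bin", "rb") as f:
    chunk = read_ahead(f, buffers=8)
print(len(chunk))  # 100000 -- the whole file fits within 8 x 64K bytes
```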
Area Sampling
The Area Sampling options allow you to fine-tune the RAM usage on Render nodes by trading off
speed for lower RAM requirements.
— Automatic Memory Usage: This checkbox determines how area sampling uses available
memory. Area sampling is used for Merges and Transforms. When the checkbox is enabled
(default), Fusion will detect available RAM when processing the tool and determine the
appropriate trade-off between speed and memory.
If less RAM is available, Fusion will use a higher proxy level internally and take longer to render.
The quality of the image is not compromised in any way, just the amount of time it takes to render.
In node trees that deal with images larger than 4K, it may be desirable to override the automatic
scaling and fix the proxy scale manually. This can preserve RAM for future operations.
— Pre-Calc Proxy Level: Deselecting the Automatic Memory will enable the Pre-Calc Proxy Scale
slider. Higher values will use less RAM but take much longer to render.
OpenGL
— Disable View LUT Shaders: OpenGL shaders can often dramatically accelerate View LUTs, but
this can occasionally involve small trade-offs in accuracy. This setting will force Fusion to process
LUTs at full accuracy using the CPU instead. Try activating this if View LUTs do not seem to be
giving the desired result.
— Use Float16 Textures: If your graphics hardware supports 16-bit floating-point textures,
activating this option will force int16 and float32 images to be uploaded to the viewer as float16
instead, which may improve playback performance.
— Texture Depth: Defines in what depth images are uploaded to the viewer.
— Auto: The Auto option (recommended) lets Fusion choose the best balance of performance
and capability.
— int8: Similar to the Use Float16 Textures switch, this option can be used to force images to be
uploaded to the Display View as int8, which can be faster but gives less range for View LUT
correction.
— Native: The Native option uploads images at their native depth, so no conversion is done.
— Image Overlay: The Image Overlay is a viewer control used with Merge and Transform tools
to display a translucent overlay of the transformed image. This can be helpful in visualizing the
transformation when it is outside the image bounds but may reduce performance when selecting
the tool if cache memory is low. There are three settings to choose from: None, Outside, and All.
— None: This setting never displays the translucent overlay or controls, which can reduce the
need for background renders, in some cases resulting in a speed up of the display.
— Outside: This will display only those areas of the control that are outside the bounds of the
image, which can reduce visual confusion.
— All: Displays all overlays of all selected tools.
— Smooth Resize: This setting can disable the viewer’s Smooth Resize behavior when displaying
floating-point images. Some older graphics cards are not capable of filtering floating-point
textures or may be very slow. If Smooth Resize does not work well with float images, try setting
this to flt16 or int.
— Auto Detect Graphics Memory (MB): Having Fusion open alongside other OpenGL programs
like 3D animation software can lead to a shortage of graphics memory. In those cases, you can
manually reduce the amount of memory Fusion is allowed to use on the card. Setting this too low
or too high may cause performance problems or data loss.
— Use 10-10-10-2 Framebuffer: If your graphics hardware and monitor support 30-bit color (Nvidia
Quadro/AMD Radeon Pro, and some Nvidia GeForce/AMD Radeon), this setting will render viewers
with 10 bits per primary accuracy, instead of 8 bits. Banding is greatly reduced when displaying 3D
renders or images deeper than 8-bit.
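The benefit of the 10-10-10-2 layout is arithmetic: 10 bits per primary gives 1,024 levels per channel instead of 256, which is why banding is reduced. One possible bit packing is sketched below — the actual component order varies by graphics API, so treat this layout as an assumption:

```python
def pack_10_10_10_2(r, g, b, a):
    """Pack 10-bit R, G, B and a 2-bit alpha into one 32-bit word.
    One possible layout; real APIs define their own component order."""
    assert 0 <= r < 1024 and 0 <= g < 1024 and 0 <= b < 1024 and 0 <= a < 4
    return (a << 30) | (b << 20) | (g << 10) | r

# Full red, opaque alpha: all ten red bits set, both alpha bits set.
print(hex(pack_10_10_10_2(1023, 0, 0, 3)))  # 0xc00003ff
```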
Appearance
When enabled, the Use Gray Background Interface checkbox will change the color of the background
in Fusion’s panels to a lighter, more neutral shade of gray.
Controls
This group of checkboxes manages how the controls in the Inspector are displayed.
— Auto Control Open: When disabled, only the header of the selected node is displayed in the
Inspector. You must double-click the header to display the parameters. When enabled, the
parameters are automatically displayed when the node is selected.
— Auto Control Hide: When enabled, only the parameters for the currently active tool (red outline)
will be made visible. Otherwise, all tool headers will be visible and displayed based on the Auto
Control Open setting.
Video Monitoring
This setting is only available in Fusion Studio. Control over video hardware for the Fusion Page is
done in the DaVinci Resolve preferences. The Video Monitoring preferences are used to configure
the settings of Blackmagic Design capture and playback products such as DeckLink PCIe cards and
UltraStudio i/O units.
Video Output
This group of drop-down menus allows you to select the type of video I/O device you have installed,
the output resolution, and the pixel format. These settings have nothing to do with your rendered
output; it is only for your display hardware.
The Output HDR over HDMI settings are used to output the necessary metadata when sending high
dynamic range signals over HDMI 2.0a and have it correctly decoded by an HDR-capable video display.
The Auto setting detects the image’s values and outputs HDR. This will not affect non-HDR images.
The Always setting forces HDR on all the time. This can be useful when checking non-HDR and
HDR grades.
When Auto or Always is selected, you can then set the “nit” level (slang for cd/m²) to whatever peak
luminance level your HDMI-connected HDR display is capable of.
Stereo Mode
This group of settings configures the output hardware for displaying stereo 3D content.
Control Colors
The Control Colors setting allows you to determine the color of the active/inactive onscreen controls.
VR Headsets
The VR Headsets preferences allow configuration of any connected Virtual Reality headsets, including
how stereo and 3D scenes are viewed.
Headset Options
The Headset options are used to select the type of VR headset you are using to view the composite as
well as the video layout of the 360° view.
API
— Disabled: Disabled turns off and hides all usage of headsets.
— Auto: Auto will detect which headset is plugged in.
— Oculus: Oculus will set the VR output to the Oculus headset.
— OpenVR: OpenVR will support a number of VR headsets like the HTC Vive.
Stereo
Similar to normal viewer options for stereo 3D comps, these preferences control how a
stereo 3D comp is displayed in a VR headset.
Mode
— Mono: Mono will output a single non-stereo eye.
— Auto: Auto will detect the method with which the stereo images are stacked.
— Vstack: Vstack stereo images are stacked vertically, left on top and right at the bottom.
— Hstack: Hstack stereo images are stacked horizontally, left and right side by side.
— Swap Eyes: Swaps the eyes if stereo is reversed.
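The Vstack and Hstack layouts amount to cutting a single stacked frame in half. A generic sketch using nested lists as stand-in images (not Fusion's code; the function names are invented):

```python
def split_vstack(frame):
    """Split a vertically stacked stereo frame: left eye on top, right below."""
    h = len(frame) // 2
    return frame[:h], frame[h:]

def split_hstack(frame):
    """Split a horizontally stacked stereo frame: left eye | right eye."""
    w = len(frame[0]) // 2
    return [row[:w] for row in frame], [row[w:] for row in frame]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
left, right = split_hstack(frame)
print(left, right)  # [[1, 2], [5, 6]] [[3, 4], [7, 8]]
```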
3D
Similar to normal viewer options for 3D comps, these preferences control how a 3D comp is displayed
in a VR headset.
Lighting
— Disabled: Lighting is off in the VR view.
— Auto: Auto will detect if lighting is on in the view.
— On: Forces lighting on in the VR view.
Sort Method
— Z Buffer: Z-buffer sorting is the fast OpenGL method of sorting polygons.
— Quick Sort: Quick Sort will sort the depth of polygons to get better transparency rendering.
— Full Sort: Full Sort will use a robust sort-and-render method to render transparency.
— Shadows: Shadows can be on or off.
— Show Matte Objects: Makes matte objects visible or invisible in the view.
Users List
The Users List is a list of the users and their permissions. You can select one of the entries to edit their
settings using the User and Password edit boxes.
— Add: The Add button is used to add a new user to the list by entering a username and password.
— Remove: Click this button to remove the selected entry.
User
This editable field shows the username for the selected Bin Server item. If the username is unknown,
try “Guest” with no password.
Password
Use this field to enter the password for the Bin user entered in the Users list.
— Read: This will allow the user to have read-only permission for the bins.
— Create: This will allow the user to create new bins.
— Admin: This gives the user full control over the bins system.
— Modify: This allows the user to modify existing bins.
— Delete: This allows the user to remove bins.
Bins/Server
These preferences are used to add Bin Servers to the list of bins Fusion will display in the Bins dialog.
Servers
This dialog lists the servers that are currently in the connection list. You can select one of the entries
to edit its settings.
User
This editable dialog shows the username for the selected Bin Server item.
Password
Use this field to enter the password for the server entered in the Server list.
Library
The Library field lets you name the bins. If you wanted to create a bin for individual projects, you
would name it in the Library field, and each project would get its own bin.
Application
The Application field allows larger studios to specify some other program to serve out the
Bin requests.
Bins/Settings
These preferences are used to control the default behavior of bins.
Stamp Format
This drop-down list determines whether the Stamp thumbnails will be saved as compressed or
uncompressed.
Options
— Open Bins on Startup: When Open Bins on Startup is checked, the bins will open automatically
when Fusion is launched.
— Checker Underlay: When the Checker Underlay is enabled, a checkerboard background is used
for clips with alpha channels. When disabled, a gray background matching the Bin window is used
as the clip’s background.
EDL Import
The EDL Import options are used to determine how compositions are created from imported
CMX‑formatted EDL files.
— Loader Per Clip: A Loader will be created for each clip in the EDL file.
— A-B Roll: A node tree with a Dissolve tool will be created automatically.
— Loader Per Transition: A Loader with a Clip list will be created, representing the imported EDL list.
Customization
The following section covers the customization of preferences that are not technically part of
the Preferences window. Using Fusion Studio’s Hotkey Manager window, you can customize the
keyboard shortcuts, making the entire process of working in Fusion not only faster but potentially
more familiar if you are migrating from another software application. You can also customize Fusion
with environment variables to switch between different preferences files, allowing different working
setups based on different users or job types. Both of these customization options are only available in
Fusion Studio.
Shortcuts Customization
Keyboard shortcuts can be customized in Fusion Studio. You can access the Hotkey Manager by
choosing Customize HotKeys from the View menu.
Fusion has active windows to focus attention on those areas of the interface, like the Node Editor,
the viewers, and the Inspector. When selected, a gray border line will outline that section. The
shortcuts for those sections will work only if the region is active. For example, Command-F performs a different action depending on whether the Node Editor or a viewer is active.
On the right is a hierarchy tree of each section of Fusion and a list of currently set hotkeys. By
choosing New or Edit, another dialog will appear, which will give specific control over that hotkey.
Creating a new hotkey prompts you for the key combination to press, and the Edit Hotkey dialog
appears, where the Action can be defined at top right: pressed, repeated, or released. The Name
and abbreviated Short Name can be set, as can the Arguments of the action.
Customizing Preferences
Fusion Studio’s preferences configure Fusion’s overall application default settings and settings for
each new composition. Although you access and set these preferences through the Preferences
window, Fusion saves them in a simple text format called Fusion.prefs.
These default preferences are located in a \Profiles\Default folder and shared by all Fusion users on
the computer. However, you may want to allow each user to have separate preferences and settings,
and this requires saving the preferences to different locations based on a user login.
Changing the saved location of the preferences file requires the use of environment variables.
Typically, all users share the same preferences. If you want each user to save separate preferences
within their home folder, you must create another environment variable with the name
FUSION_PROFILE (e.g., FUSION_PROFILE=jane). Using this second environment variable, Fusion will look for
the preferences in the PROFILE_DIR of the user profile. Using a login script, you can make sure
FUSION_PROFILE is set to the name of the logged-in user.
FUSION_MasterPrefs must contain the full path to at least one preferences file. If you have multiple
preferences paths, separate them using semicolons. Fusion does not write to these prefs files, and
they may contain a subset of all available settings. Settings in these files are applied only where local
prefs do not already exist, unless you set the Locked flag.
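As a sketch, on Linux or macOS both environment variables could be set in a login script like the following. The profile name and file paths here are examples, not Fusion defaults:

```shell
#!/bin/sh
# Example only: give each user their own Fusion profile and point Fusion
# at shared, read-only master preferences files.

# Name the profile after the logged-in user so each user gets
# separate preferences.
export FUSION_PROFILE="$USER"

# One or more master prefs files; multiple paths are separated
# by semicolons. These paths are illustrative.
export FUSION_MasterPrefs="/opt/studio/fusion/Master.prefs;/opt/studio/fusion/Site.prefs"
```

Running this from a login script means the variables are already in the environment when Fusion Studio launches.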
Locking Preferences
If the line “Locked = true,” appears in the main table of a master file, all settings in that file are locked
and override any other preferences. Locked preferences cannot be altered by the user.
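Since Fusion.prefs is stored as a simple Lua-style table, a locked master preferences file might look like this hypothetical fragment (the key names below are illustrative, not a complete or authoritative prefs listing):

```lua
-- Hypothetical master prefs fragment. "Locked = true," in the main table
-- makes every setting in this file override local preferences.
{
Locked = true,
Global = {
    Paths = {
        Map = {
            -- Example path mapping only; adjust for your site.
            ["Temp:"] = "/var/tmp/fusion",
        },
    },
},
}
```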
Controlling Image Processing and Resolution
This chapter covers the overall image-processing pipeline.
It discusses color bit-depth and how to control the output
resolution in a resolution-independent environment.
TIP: The decoding or debayering of RAW files occurs prior to all other operations, and as
such, any RAW adjustments will be displayed correctly in the Fusion page.
This means you have access to the entire source clip in the Fusion page, but the render range is set
to match the duration of the clip in the Timeline. You also use the full resolution of the source clip,
even if the Timeline is set to a lower resolution. However, none of the Edit or Cut page Inspector
adjustments carry over into the Fusion page, with the exception of the Lens Correction adjustment.
When you make Zoom, Position, Crop, or Stabilization changes in the Edit or Cut page, they are not
visible in the Fusion page. The same applies to any Resolve FX or OpenFX third-party plugins. If you
add these items to a clip in the Edit or Cut page, and then you open the Fusion page, you won’t
see them taking effect. All Edit and Cut page timeline effects and Inspector adjustments, with the
exception of the Lens Correction adjustment, are computed after the Fusion page but before the
Color page. If you open the Color page, you’ll see the Edit and Cut page transforms and plugins
applied to that clip, effectively as an operation before the grading adjustments and effects you apply
in the Color page Node Editor.
With this in mind, the order of effects processing in the different pages of DaVinci Resolve can be
described as follows:
TIP: Retiming applied to the clip in the Edit page Timeline is also not carried over into the
Fusion page.
— The Edit page source viewer: Always shows the source media, unless you’re opening a
compound clip that’s been saved in the Media Pool. If Resolve Color Management is enabled, then
the Edit page source viewer shows the source media at the Timeline color space and gamma.
— The Edit page Timeline viewer: Shows clips with all Edit page effects, Color page grades,
and Fusion page effects applied, so editors see the program within the context of all effects
and grading.
— The Fusion page viewer: Shows Media Pool source clips at the Timeline color space and gamma,
but no Edit page Inspector adjustments or Resolve FX effects and no Color page grades.
— The Color page viewer: Shows clips with all Edit page effects, Color page grades, and
Fusion page effects applied.
TIP: The output of the Fusion page is placed back into the Edit page Timeline based
on DaVinci Resolve’s Image Sizing setting. By default, DaVinci Resolve uses an image
sizing setting called Scale to Fit. This means that even if the Fusion page outputs a
4K composition, it conforms to 1920 x 1080 if that is what the project or a particular
Timeline is set to. Changing the image sizing setting in DaVinci Resolve’s Project Settings
affects how Fusion compositions are integrated into the Edit page Timeline.
These four nodes are located in the Transform category of the Effects Library. Resize is also located in
the toolbar.
— Crop: Sets the output resolution of the node using a combination of X and Y size along with X and
Y offset to cut the frame down to the size you want. Crop removes pixels from the image, so if you
later use a Transform node and try to move the image, those pixels are not available.
— Letterbox: Sets the output resolution of the node by adding horizontal or vertical black edges
where necessary to format the frame size and aspect ratio.
— Resize: Sets the output resolution of the node using absolute pixels.
— Scale: Sets the output resolution of the node using a relative percentage of the current input
image size.
TIP: To change resolution and reposition a frame without changing the pixel resolution of a
clip, use the Transform node.
Often, it’s easiest to control the comp resolution right at the start by connecting a node with
the desired output resolution to the orange background input on the Merge node. A
Background node is often used in this situation because it consumes minimal system resources.
The Background node sets the output size, and the foreground image is cropped if it is larger.
The order of sizing effects in the different pages of DaVinci Resolve can be described as follows:
16-bit integer color depth doubles the amount of precision, eliminating problems with banding.
Although you can select 16-bit integer processing for an 8-bit clip, it does not reduce banding that
already exists in the original file. Still, it can help when adding additional effects to the clip. This
sounds like the best solution until you realize that many digital cameras like Blackmagic Design URSA
Mini Pro and others record in formats that can capture over-range values with shadow areas below 0.0
and super highlights above 1.0, which are truncated in 16-bit integer.
The 16-bit float color depth sacrifices a small amount of the precision from standard 16-bit integer
color depth to allow storage of color values less than 0 and greater than 1.0. 16-bit float, sometimes
called half-float, is most often found in the OpenEXR format and contains more than enough
dynamic range for most film and HDR television purposes yet requires significantly less memory and
processing time than is required for full float, 32-bit images.
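The storage difference can be illustrated in a few lines of Python. This is a generic sketch of 16-bit integer versus half-float storage, not Fusion's internals; Python's standard `struct` module supports the IEEE 754 half-precision format directly:

```python
import struct

# A 16-bit integer channel holds only 0..65535 (0.0..1.0 after
# normalization); anything outside that range must be clipped on storage.
def store_uint16(value):
    """Normalize a 0.0-1.0 float into 16-bit integer storage, clipping over-range values."""
    clipped = min(max(value, 0.0), 1.0)
    return round(clipped * 65535)

# A 16-bit float ("half float") occupies the same two bytes but keeps
# values below 0.0 and above 1.0.
def store_half(value):
    """Round-trip a value through IEEE 754 half-precision storage."""
    packed = struct.pack('<e', value)     # '<e' = little-endian half float
    return struct.unpack('<e', packed)[0]

# An over-range super highlight at 2.0 and a shadow below 0.0:
print(store_uint16(2.0))    # clipped to 65535 (pure white)
print(store_uint16(-0.25))  # clipped to 0 (pure black)
print(store_half(2.0))      # survives as 2.0
print(store_half(-0.25))    # survives as -0.25
```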
Fusion Studio automatically uses the color depth that makes the most sense for each file format.
For example, if you read in a JPEG file from disk, then the color depth for the Loader is set to 8 bits
per channel. Since the JPEG format is an 8-bit format, loading the image at a greater color depth
would generally be wasteful. If a 16-bit TIFF is loaded, the color depth is set to 16 bits. Loading a DPX
file defaults to 32-bit float, whereas OpenEXR generally defaults to 16-bit float. However, you can
override the automatic format color depth using the settings found in the Import tab of the Loader
node’s Inspector. The Loader’s Inspector, as well as the Inspector for images generated in Fusion (i.e.,
text, gradients, fast noise, and others), has a Depth menu for 8-bit, 16-bit integer, 16-bit float, and
32‑bit float.
To improve performance as you work on your comp, you can set the Interactive and Preview depth to
8 bits per channel, while final renders can be set to 16-bit integer. However, if your final render output
is 16-bit float or 32-bit float, you should not use the integer options for the interactive setting. The
final results may look significantly different from interactive previews set to integer options.
TIP: When working with images that use 10-bit or 12-bit dynamic range or greater, like
Blackmagic RAW or Cinema DNG files, set the Depth menu in the Inspector to 16-bit float or
32-bit float. This preserves highlight detail as you composite.
Greater Accuracy
Using 16- or 32-bit floating-point processing prevents the loss of accuracy that can occur when using
8- or 16-bit integer processing. The main difference is that integer values cannot store fractional or
decimal values, so rounding occurs in all image processing. Floating-point processing allows decimal
or fractional values for each pixel, so pixel values do not need to be rounded to the nearest
integer. As a result, color precision remains virtually perfect, regardless of how many operations are
applied to an image.
If you have an 8-bit pixel that has a red value of 200 (bright red) and a Color Gain tool is used to double
the brightness of the red channel, the result is 200 x 2, or 400. However, 8-bit color values are limited
to a range of 0 through 255, so the pixel’s value is clipped to 255, or pure red. If the brightness is then
halved, the result is half of 255, or 127 (rounded down), instead of the original value of 200.
When processing floating-point colors, pixel values brighter than white or darker than black are
maintained. There is no value clipping. The pixel is still shown in the viewer as pure red, but if float
processing is used instead of 8-bit, the second operation where the gain was halved would have
restored the pixel to its original value of 200.
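The arithmetic above can be sketched in a few lines of Python. This is an illustration of the clipping and rounding behavior, not Fusion's actual implementation:

```python
def gain_8bit(value, gain):
    """Apply a gain to an 8-bit channel value: integer math clips to
    0-255 and drops any fractional part."""
    return min(max(int(value * gain), 0), 255)

def gain_float(value, gain):
    """Apply a gain in floating point: no clipping, no rounding."""
    return value * gain

# 8-bit: doubling bright red (200) clips to 255, so halving cannot recover it.
doubled = gain_8bit(200, 2.0)       # 255, not 400
restored = gain_8bit(doubled, 0.5)  # 127; the original 200 is lost

# Float: the over-range value 400.0 is preserved, so halving restores
# the original 200.0 exactly.
f_doubled = gain_float(200.0, 2.0)     # 400.0
f_restored = gain_float(f_doubled, 0.5)  # 200.0
```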
Enabling this display mode rescales the color values in the image so that the brightest color in the
image is remapped to a value of 1.0 (white), and the darkest is remapped to 0.0 (black).
The 3D Histogram subview can also help visualize out-of-range colors in an image. For more
information, see Chapter 7, “Using Viewers,” in the Fusion Reference Manual.
For example, there may be files that contain out-of-range alpha values. Since the alpha channel
represents the opacity of a pixel, it makes little sense to be more than completely transparent or
more than fully opaque, and compositing such an image may lead to unexpected results. To easily
clip alpha values below 0 and above 1, add a Brightness/Contrast tool, set to Clip Black and Clip
White, with only the Alpha checkbox selected.
Alternatively, you can clip the range by adding a Change Depth node and switching to 8-bit or
16-bit integer color depths.
Managing Color for Visual Effects
This chapter discusses LUTs, color space conversions, and the value
of compositing with linear gamma while previewing the image in
the viewer using the gamma of your choice.
Each capture device records images using a nonlinear tonal curve or gamma curve to compensate
for this difference. Specifically, Rec. 709 HD gamma curves are designed so that when shown on
HD displays, the images have built-in compensation for the display. The result is that HD images on
HD displays appear normal to us.
Digital cinema cameras have taken the concept of gamma curves further. They use gamma curves as
a way to maximize the bit depth of an image and store a wider dynamic range. Digital cinema cameras’
gamma curves (often collectively referred to as log gamma) give more attention to the darker mid-
tones, where the human eye is most sensitive. This allows them to save images with brighter highlights
and more detail in shadows.
A Rec. 709 HD gamma curve (left) and a nonlinear, or log gamma, curve (right)
The problem is that these images do not look normal on any monitor. Clips recorded with a log
gamma curve typically have a low-contrast, low-saturation appearance when viewed on an sRGB
computer display or Rec. 709 HD video monitor. This problem is easy to fix using a LookUp Table, or
LUT. A LUT is a form of gamma and color correction applied to the viewer to normalize how the image
is displayed on your screen.
A clip displayed with a nonlinear, log gamma curve (left) and corrected in the viewer using a LUT (right)
You can see a more practical example when you apply filtering effects, such as a blur, to an image
with any gamma setting. The image probably looks fine. However, if you convert the image to a linear
gamma first and then apply the blur, the images (especially those with extremely bright areas) are
processed with greater accuracy, and you should notice a different and superior result.
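The reason filtering in linear gamma looks better can be shown with simple math. The sketch below uses a pure power curve as a stand-in for a transfer function (real curves such as Rec. 709 are piecewise, so this is a simplified illustration) and simulates the smallest possible "blur": averaging two pixels.

```python
# Convert between display-referred (encoded) and linear values using a
# simple pure power curve. GAMMA = 2.4 mirrors the gamma value discussed
# in this chapter; real transfer functions are more complex.
GAMMA = 2.4

def to_linear(encoded):
    return encoded ** GAMMA

def to_encoded(linear):
    return linear ** (1.0 / GAMMA)

# A minimal "blur": average one bright pixel and one dark pixel.
bright, dark = 0.9, 0.1

# Averaging the encoded values directly:
naive = (bright + dark) / 2  # 0.5

# Averaging in linear light, then re-encoding:
linear_avg = (to_linear(bright) + to_linear(dark)) / 2
correct = to_encoded(linear_avg)

# 'correct' is noticeably brighter than 'naive', because light intensities
# add linearly; filtering encoded values underestimates bright areas.
```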
Introducing Color
Management in Fusion
Images loaded into Fusion by default are not color managed. The image is displayed directly from
the file to the viewer without any interpretation or conversion. However, Fusion includes nodes that
convert the output of each image to linear gamma at the beginning of your composite. The same
nodes can convert from linear back to your desired output gamma at the end of your composite, just
prior to the Saver or MediaOut node.
TIP: 3D rendered CGI images are often generated as EXR files with linear gamma, and
converting them is not necessary. However, you should check your specific files to make
sure they are using linear gamma.
Fusion includes several kinds of nodes you can use to convert the image out of each MediaIn or
Loader node to linear gamma at the beginning of your composite, and then convert from linear back
to your desired output gamma at the end of your composite. These include:
— CineonLog node: The CineonLog node, found in the Film category of the Effects Library,
performs a conversion from any of the formats in the Log Type menu to linear, and also reverses
the process, adding log gamma back to a clip. This is most often used for images coming from
common digital cinema cameras like BlackMagic Design, Arri, or Red. The CineonLog node is
added directly after a MediaIn or Loader node. The Mode menu chooses the direction of the
conversion to or from linear.
— Gamut node: When converting media to linear gamma, set the Source Space menu to the color space of your
source material. For instance, if your media is full 1080 HD ProRes, then choose ITU-R BT.709
(scene) for gamma of 2.4. Then, enable the Remove Gamma checkbox if it isn’t already enabled, to
use linear gamma.
When converting from linear gamma for output, you insert the Gamut node before your output
node, which is a Saver in Fusion Studio or a MediaOut node in DaVinci Resolve’s Fusion page.
Make sure the Source Space menu is set to No Change, and set the Output Space to your output
color space. For instance, if your desired output is full 1080 HD, then choose either sRGB or ITU-R
BT.709 (scene) for gamma of 2.4. Then, enable the Add Gamma checkbox if it isn’t already enabled,
to format the output of the Gamut node for your final output.
— MediaIn and Loader nodes: MediaIn and Loader nodes have Source Gamma Space controls in
the Inspector that let you identify and remove the gamma curve without the need to add another
node. If your files include gamma curve metadata like RAW files, the Auto setting for the Curve
Type drop-down menu reads the metadata and uses it when removing the gamma curve. When
using intermediate files or files that do not include gamma curve metadata, you can choose either
a log gamma curve by choosing Log from the Curve Type menu or a specific color space using
the Space option from the menu. Clicking the Remove Curve checkbox then removes the gamma
curve, converting the image to linear gamma.
— FileLUT node: The FileLUT node, found in the LUT category of the Effects Library, lets you do a
conversion using any LUT you want, giving you the option to manually load LUTs in the ALUT3,
ITX, 3DL, or CUBE format to perform a gamma and gamut conversion. Although LUTs are very
commonly placed at the end of a node tree for final rendering, you’ll get more accurate gamma
and color space conversions using the Gamut and CineonLog nodes to convert the output of your
MediaIn and Loader nodes to linear.
A clip displayed with a nonlinear, log gamma curve (left) and the clip transformed to linear gamma (right)
It would be impossible to work if you couldn’t view the image as it’s supposed to appear within the
final gamut and gamma you’ll be outputting. For this reason, each viewer has a LUT menu that lets
you enable a “preview” color space and/or gamma conversion, while the node tree is processing
correctly in linear gamma.
To preview the images in the viewer using sRGB or Rec. 709 color space:
1 Enable the LUT button above the viewer.
2 From the Viewer LUT drop-down menu, choose either a Gamut View LUT, or a LUT from the VFX IO
category that transforms linear to Rec. 709 or sRGB.
Whether you use the Gamut View LUT or a LUT for your specific monitor calibration, you can save the
viewer setup as the default.
For every comp, the viewer will now be preconfigured based on the saved defaults.
For more information on Viewer LUTs, see Chapter 7, “Using Viewers,” in the Fusion Reference Manual.
To override the input color space for differently recorded clips in the Media Pool:
1 Enable DaVinci YRGB Color Management as explained above.
2 Save and close the Settings dialog.
3 In the Media Pool, select the clip or clips you want to assign a new Input Color space.
4 Right-click one of the selected clips.
5 Choose the Input Color Space that corresponds to those clips from the contextual menu.
Using RCM eliminates a few steps, since the input color space math used to transform the source
preserves all wide-latitude image data, making highlights easily retrievable without any extra steps.
With RCM enabled, there is no need to insert CineonLog or Gamut nodes while in the Fusion page.
The transforms from and to linear are done automatically based on the RCM settings. Switching to
the Fusion page converts the images to linear and enables the LUT button in the viewers with the
Managed LUT selected. The Managed LUT uses the RCM settings to take a linear image and display it
based on the RCM output color space.
For more information on Resolve Color Management, see Chapter 9, “Data Levels, Color Management,
and ACES” in DaVinci Resolve Reference Manual.
ACES works by assigning an IDT (Input Device Transform) to every camera and acquisition device.
The IDT specifies how media from that device is converted into the ACES color space. At the end of
the pipeline, an ODT (Output Device Transform) is applied to convert the image data from ACES color
space into the gamut of your final output.
Similar to setting up RCM, DaVinci Resolve’s color management project settings can be configured
for ACES, which carries through the Edit, Fusion, and Color pages.
NOTE: When using Fusion Studio, the OpenColorIO (OCIO) framework is used for ACES
color management.
The Color Science drop-down menu in the Color Management panel of the Project Settings is used to
set up the ACES color management in DaVinci Resolve.
When ACES is enabled, IDT and ODT are used to identify input and output devices.
— Color Science: Using this drop-down menu, you can choose either ACEScct or ACEScc color
science. This is primarily a personal preference, since the two are mostly identical, although the
shadows respond differently to grading operations. In the Fusion page, images are automatically
converted to linear, so the choice matters mainly to whoever does the grading.
— ACES Version: When you’ve chosen one of the ACES color science options, this menu becomes
available to let you choose which version of ACES you want to use.
— ACES Input Device Transform: This menu lets you choose which IDT (Input Device Transform)
to use for the dominant media format in use.
— ACES Output Device Transform: This menu lets you choose an ODT (Output Device Transform)
with which to transform the image data to match your required deliverable.
— Process Node LUTs In: This menu lets you choose how you want to process LUTs in the
Color page and does not affect the Fusion page.
— Disable tone mapping for Fusion conversion: Checking this box will remove any tone mapping
from the ACES color management.
For more information on ACES within DaVinci Resolve, see Chapter 9, “Data Levels, Color Management,
and ACES” in DaVinci Resolve Reference Manual.
OpenColorIO (OCIO) is an open-source color management framework for visual effects and computer
animation. OCIO is compatible with the Academy Color Encoding Specification (ACES). Three
OCIO nodes located in the Color category of the Effects Library allow you to use OCIO color space
transforms in Fusion.
— OCIO CDL Transform node allows you to create, save, load, and apply a Color Decision List (CDL) grade.
— OCIO Color Space allows sophisticated color space conversions based on an OCIO config file.
— OCIO File Transform allows you to load and apply a variety of LookUp Tables (LUTs).
Using OCIO for converting MediaIn or Loader nodes to linear gamma is based on the OCIO Color
Space node. Placing the OCIO Color Space node directly after a Loader (or MediaIn in DaVinci Resolve)
displays the OCIO Source and Output controls in the Inspector.
Clicking the Browse button in the Inspector will allow you to navigate to the downloaded config
file. From the download, locate the ACES 1.0.3 or later folder and select the file config.ocio.
The Source menu is used to choose the color profile for your Loader or MediaIn node. The default
raw setting shows an unaltered image, essentially applying no color management to the clip. The
selection you make from the menu is based on the recording profile of your media.
The Output menu is set based on your deliverables. When working in Fusion Studio, typically the
Output selected is ACEScg, to work in a scene linear space.
By default, the same standard options are available in the View LUT. However, clicking the Browse
button allows you to load the same config file you loaded into the OCIO Color Space node. Once
loaded, all the expanded OCIO options are available. If you selected the OCIO Color Space node
TIP: If your monitor is calibrated differently, you will need to select a LUT that matches your
calibration.
Whether you use the OCIO Color Space LUT or a LUT for your specific monitor calibration, you can
save the viewer setup as the default.
To save the OCIO ColorSpace LUT setup as the default viewer setup:
— Right-click in the viewer, and then choose Settings > Save Defaults. Now, for every comp, the
viewer is preconfigured based on the saved defaults.
Understanding Image Channels
This chapter seeks to demystify how Fusion handles image
channels and, in the process, show you how different nodes
need to be connected to get the results you expect.
It also explains the mysteries of premultiplication, and presents a full explanation of how Fusion
is capable of using and even generating auxiliary data.
If you’re new to compositing, or you’re new to the Fusion workflow, you ignore this chapter at your
peril, as it provides a solid foundation to understanding how to predictably control image data as you
work in this powerful environment.
Alpha Channels
An alpha channel is an embedded fourth channel that defines different levels of transparency in an
RGB image. Alpha channels are typically embedded in RGB images that are generated from computer
graphics applications. In Fusion, white denotes solid areas, while black denotes transparent areas.
Grayscale values range from more opaque (lighter) to more transparent (darker).
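How these alpha values behave when layering images can be sketched with the standard "over" compositing equation. This is a simplified, single-value illustration of straight (non-premultiplied) alpha blending, not Fusion's actual Merge implementation:

```python
def over(fg_color, fg_alpha, bg_color):
    """Composite one straight (non-premultiplied) foreground channel value
    over a background value, using the foreground's alpha."""
    return fg_color * fg_alpha + bg_color * (1.0 - fg_alpha)

# White alpha (1.0): the foreground is solid and hides the background.
solid = over(0.8, 1.0, 0.2)        # 0.8

# Black alpha (0.0): the foreground is transparent; only the background shows.
transparent = over(0.8, 0.0, 0.2)  # 0.2

# Mid-gray alpha (0.5): a 50/50 blend of foreground and background.
blended = over(0.8, 0.5, 0.2)      # 0.5
```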
If you’re working with an imported alpha channel from another application for which these
conventions are reversed, never fear. Every node capable of using an alpha channel is also capable of
inverting it.
Single-Channel Masks
While similar to alpha channels, mask channels are single channel images, external to any RGB image
and typically created by Fusion within one of the available Mask nodes. Mask nodes are unique in
that they propagate single-channel image data that defines which areas of an image should be solid
and which should be transparent. However, masks can also define which parts of an image should be
affected by a particular operation, and which should not. Mask channels are designed to be connected
to specific mask inputs of nodes including Effect Mask, Garbage Mask, and Solid Mask inputs.
Auxiliary Channels
Auxiliary channels (covered in more detail later in this chapter), describe a family of special-purpose
image data that typically expose 3D data in a way that can be used in 2D composites. For example,
Z-Depth channels describe the depth of each pixel in an image along a Z axis, while an XYZ
Normals channel describes the orientation (facing up, facing down, or facing to the left or right) of
each pixel in an image. Auxiliary channel data is generated by rendering 3D images, so it usually
accompanies or is embedded with RGB images generated by 3D modeling and animation applications.
The reason to use auxiliary data is that 3D rendering is computationally expensive and time-
consuming, so outputting descriptive information about a 3D image that’s been rendered empowers
compositing artists to make sophisticated alterations in 2D. You can add motion blur, perform
relighting, and composite with depth information faster than re-rendering the 3D source material
over and over.
TIP: You can view any of a node’s channels in isolation using the Color drop-down menu
in the viewer. Clicking the Color drop-down menu reveals a list of all channels within the
currently selected node, including red, green, blue, or auxiliary channels.
In the following example, the two MediaIn nodes output RGB data. However, the Delta Keyer creates
an alpha channel and combines it with MediaIn2’s RGB image. The RGBA output of the Delta Keyer becomes
the foreground image that the Merge node can use to create a two-layer composite.
NOTE: Node trees shown in this chapter may display MediaIn nodes found in
DaVinci Resolve’s Fusion page; however, Fusion Studio Loader nodes are interchangeable
unless otherwise noted.
Running multiple channels through single connection lines makes Fusion node trees simple to read,
but it also means you need to keep track of which nodes process which channels to make sure that
you’re directing the intended image data to the correct operations.
When connecting nodes, a node’s output carries the same channels no matter how many times the
output is “branched.” You cannot send one channel out on one branch and a different channel out on
another branch.
For example, the MatteControl node has a background input and a foreground input, both of which
accept RGBA channels. However, it also has SolidMatte, GarbageMatte, and EffectsMask inputs
that accept alpha or mask channels to modify the transparency of the Node’s output. If you want to
perform the extremely common operation of using a MatteControl node to create an alpha channel
using a Polygon node for rotoscoping an image, you need to make sure that you connect the Polygon
node to the GarbageMatte input to obtain the correct result. The GarbageMatte input is automatically
set to alter the alpha channel of the foreground image. If you connect to any other input, your Polygon
mask may not produce expected results.
In another example, the DeltaKeyer node has a primary input (labeled “Input”) that accepts RGBA
channels, but it also has three Matte inputs. These SolidMatte, GarbageMatte, and EffectsMask inputs
on the Delta Keyer accept alpha or mask channels to modify the matte being extracted from the
image in different ways.
If you position your pointer over any node’s input or output, a tooltip appears in the Tooltip bar at
the bottom of the Fusion window, letting you know what that input or output is for, to help guide you
to using the right input for the job. If you pause for a moment longer, another tooltip appears in the
Node Editor itself.
Side by side, dropping a connection on a node’s body to connect to that node’s primary input
Side by side, dropping a connection on a specific node input, note how the inputs
rearrange themselves afterwards to keep the node tree tidy-looking
TIP: If you hold the Option key down while you drag a connection line from one node onto
another, and you keep the Option key held down while you release the pointer’s button to
drop the connection, a menu appears that lets you choose which specific input you want to
connect to, by name.
In other cases, connecting the wrong image data to the wrong node input won’t give you an error;
it simply fails to produce the result you were expecting, requiring you to troubleshoot the
composition. If this happens to you, check the Fusion Effects section of this manual to see if the node
you’re trying to connect to has any limitations as to how it must be attached.
TIP: This chapter tries to cover many of the easy-to-miss exceptions to node connection
that are important for you to know, so don’t skim too fast.
When you first connect any node’s output to a multi-input node, you usually want to connect the
background input first. This is handled for you automatically when you first drop a connection line
onto the body of a new multi-input node. The orange-colored background input is almost always
connected first (the exception is Mask nodes, which always connect to the first available Mask input).
This is good because you want to get into the habit of always connecting the background input first.
TIP: The only node to which you can safely connect the foreground input prior to the
background input is the Dissolve node, which is a special node that can be used to either
dissolve between two inputs, or automatically switch between two inputs of unequal duration.
Because each Fusion node has a specific function, they’re categorized by type to make it easier to
keep track of which nodes require what types of image channels as input, and what image data you
can expect each node to output. These general types are described here.
Because these are sources of images, both kinds of nodes can be attached to a wide variety of other
nodes for effects creation besides just 2D nodes. For example, you can also connect MediaIn nodes to
Image Plane 3D nodes for 3D compositing, or to pEmitter nodes set to “Bitmap” for creating different
particle systems. Green Generator nodes can be similarly attached to many different kinds of nodes.
Shape nodes are also green, although they must be attached to a specialized set of gray modifier
and render nodes (all of which begin with the letter “s” and appear in the Shape category of the
Effects Library).
Additionally, some 2D nodes such as Fog and Depth Blur (in the Deep Pixel category) accept and use
auxiliary channels such as Z-Depth to create different perspective effects in 2D.
TIP: Two 2D nodes that specifically don’t process alpha channel data are the Color
Corrector node and the Gamut node. The Color Corrector node lets you color correct a
foreground layer to match a background layer without affecting an alpha channel. The
Gamut node lets you perform color space conversions to RGB data from one gamut to
another without affecting the alpha channel.
2D nodes also typically operate upon all channel data routed through that node. For example, if
you connect a node’s output with RGBA and XYZ Normals channels to the input of a Vortex node,
all channels are equally transformed by the Size, Center, and Angle parameters of this operation,
including the alpha and XYZ normals channels, as seen in the following screenshot.
(Left) The Normal Z channel output by a rendered torus, (Right) The Normal Z channel after the output is
connected to a Vortex node; note how this auxiliary channel warps along with the RGB and A channels
This is appropriate because in most cases, you want to make sure that all channels are transformed,
warped, or adjusted together. You wouldn’t want to shrink the image without also shrinking the alpha
channel along with it, and the same is true for most other operations.
On the other hand, some nodes deliberately ignore specific channels when it makes sense. For
example, the Color Corrector and Gamut nodes, both of which are designed to alter RGB data
specifically, do not affect auxiliary channels. This makes them convenient for color-matching
foreground and background layers you’re compositing, without worrying that you’re altering the
depth information accompanying that layer.
TIP: If you’re doing something exotic and you actually want to operate on a channel that’s
usually unaffected by a particular node, you can always use the Channel Booleans node
to reassign the channel. When doing this to a single image, it’s important to connect that
image to the background input of the Channel Booleans node, so the alpha and auxiliary
channels are properly handled.
For example, if you wanted to use the Transform node to affect only the green channel of an image,
you can turn off the Red, Blue, and Alpha buttons. As a result, only the green channel is processed by
this operation, and the red, blue, and alpha channels are copied straight from the node's input to the
node's output, bypassing that node's processing to remain unaffected.
Transforming only the green color channel of the image with a Transform effect
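The pass-through behavior of these channel buttons can be sketched in a few lines of Python. This is a conceptual illustration only; the dict-based pixel and the doubling "effect" are hypothetical, not Fusion's internals.

```python
# A sketch of the per-channel enable buttons: enabled channels are processed by
# the effect, disabled channels are copied from input to output untouched.
# The dict-based pixel and the doubling "effect" are hypothetical.

def apply_per_channel(pixel, effect, enabled):
    return {ch: effect(v) if ch in enabled else v for ch, v in pixel.items()}

# Process only the green channel:
out = apply_per_channel({"r": 0.1, "g": 0.2, "b": 0.3, "a": 1.0},
                        lambda v: v * 2.0, enabled={"g"})
# out["g"] is doubled; r, b, and a pass through unchanged.
```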
Blur, Brightness/Contrast, Erode/Dilate, and Filter are examples of nodes that all have RGBA buttons in
the main Controls tab of the Inspector, in addition to the Settings tab.
In the case of extracting an alpha matte from a green screen image, you typically connect the image’s
RGB output to the “Input” input of a Keyer node such as the Delta Keyer, and you then use the keyer’s
controls to extract the matte. The Keyer node automatically inserts the alpha channel that’s generated
alongside the RGB channels, so the output is automatically RGBA. Then, when you connect the keyer’s
output to a Merge node to composite it over another image, the Merge node automatically knows to
use the embedded alpha channel coming into the foreground input to create the desired composite,
as seen in the following screenshot.
A simple node tree for keying; note that only one connection links the DeltaKeyer to the Merge node
Rotoscoping, or manually drawing a mask shape using a Polygon or other Mask node is another
technique used to create the matte channel. There are many ways to configure the node tree for this
task, but the simplest setup is just to connect a Polygon or B-Spline mask node to the Effect Mask
input of a MediaIn or Loader node.
TIP: When rotoscoping, it is best to leave the Mask node disconnected from the image
while you draw the shape. This allows you to view the MediaIn node while drawing. Connect
the Mask node once you have finished drawing the shape.
In both cases, you can see how the node tree’s ability to carry a single channel or multiple
channels of image data over a single connection line simplifies the compositing process.
Auxiliary channels, on the other hand, are handled in a much more specific way. When you
composite two image layers using the Merge node, auxiliary channels only propagate through
the image that’s connected to the background input. The rationale for this is that in most CGI
composites, the background is most often the CG layer that contains auxiliary channels, and the
foreground is a live-action green screen plate.
Since most compositions use multiple Merge nodes, it pays to be careful about how you connect
the background and foreground inputs of each Merge node to make sure that the correct
channels flow properly.
TIP: Merge nodes are also capable of combining the foreground and background inputs
using Z-Depth channels via the "Perform Depth Merge" checkbox, in which case each
pair of pixels is compared; which pixel appears in front depends on its Z-Depth rather
than on which input it's connected to.
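That per-pixel comparison can be sketched as follows, assuming smaller Z-Depth values mean nearer to the camera (a convention that varies between renderers).

```python
# Hypothetical per-pixel depth merge. Each pixel is a tuple (r, g, b, a, z).
# Assumption: smaller z means closer to the camera (conventions vary).

def depth_merge(fg_pixel, bg_pixel):
    # The nearer pixel wins, regardless of which input it came from.
    return fg_pixel if fg_pixel[4] < bg_pixel[4] else bg_pixel

near = (1.0, 0.0, 0.0, 1.0, 2.0)  # red, 2 units from camera
far = (0.0, 1.0, 0.0, 1.0, 5.0)   # green, 5 units from camera
assert depth_merge(near, far) == depth_merge(far, near) == near
```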
— Channel Boolean: This is a 3D node used to remap and modify channels of 3D materials using a
variety of simple pre-defined math operations.
— Channel Booleans: Used to shuffle or rearrange RGBA and auxiliary channels within a single
input image, or among two input images, to create a single output image. If you only connect
a single image to this node, it must be connected to the background input to make sure
everything works.
— Copy Aux: The Copy Aux node is used to remap channels between RGBA channels and
auxiliary data channels in a single 2D image. The Copy Aux node is mostly a convenience
node, as the copying can also be accomplished with more effort (and flexibility) using a
Channel Booleans node.
— Matte Control: Designed to do any combination of the following: (a) re-combining mattes, masks,
and alpha channels in various ways, (b) modifying alpha channels using dedicated matte controls,
and (c) copying alpha channels into the RGB stream of the image connected to the background
input in preparation for compositing. You can copy specific channels from the foreground input to
the background input to use as an alpha channel, or you can attach masks to the garbage matte
input to use as alpha channels as well.
Understanding Premultiplication
Now that you understand how to direct and recombine RGB images and alpha channels in Fusion,
it’s time to go more deeply into alpha channels to make sure you always combine RGB and alpha
channels correctly for each operation you perform in your composite. This might seem simple, but
small mistakes are easy to make and can result in unsightly artifacts. This is arguably one of the most
confusing areas of visual effects compositing, so don’t skip this section.
When alpha channel and RGB pixels are both contained within a media file, such as a 3D rendered
animation that contains RGB and transparency, or a motion graphics movie file with transparency
baked in, there are two different ways they might be combined, and it’s important to know which
is in use.
The term "premultiplied alpha" has historically been used by editors, visual effects artists,
and motion graphics designers, but it's imprecise. The alpha channel itself is not multiplied. The R,
G, and B channels are multiplied by the alpha. In the end, the alpha channel stays the same, but the
values contained in the R, G, and B channels are modified.
Non-premultiplied images, sometimes called “straight” alpha channels, have RGB channels that are
unaltered (not multiplied) by the alpha channel. The result is that the RGB image has no anti-aliased
edges and no semi-transparency. It’s usually obvious where the RGB image ends and the alpha matte
begins. The image below is an example of the ragged edges seen in the RGB channels when using a
non-premultiplied alpha channel. But notice the smooth semi-transparent edges found in the alpha.
A detailed view of a non-premultiplied RGB image (left) and its alpha channel (right)
A premultiplied alpha channel means the RGB pixels are multiplied by the alpha channel. This method
guarantees that the RGB image pixels include semi-transparency where needed, like anti-aliased
edges. Most computer-generated images are premultiplied for convenience, because they’re easier to
review smoothly without actually being placed inside of a composite.
A detailed view of a premultiplied image (left) and its alpha channel (right)
On the other hand, it is always preferred to color correct a non-premultiplied RGBA image, because
you don’t want to alter the pixel values of an image after the RGB channels have been multiplied by
the alpha channel.
— RGB pixel value x 0 = 0: The black transparent areas of an alpha channel have a pixel value
of 0. When you take the value of an RGB pixel and multiply it by 0 (n x 0 = 0) then by the laws of
multiplication, the RGB value becomes 0, or fully transparent.
— RGB Pixel value x 1 = RGB Pixel: The solid or opaque white areas have a value of 1.0. When you
take the value of an RGB pixel and multiply it by 1 (n x 1 = n), then the RGB value stays the same,
fully opaque.
— RGB Pixel value x 0.3 = A different color: Along the edges of an alpha channel are gray pixels,
indicating semi-transparency. These semi-transparent pixels have a value falling somewhere
between 1.0 and 0.0. To apply the alpha channel’s anti-aliased edges to the RGB channels, you
multiply the pixel values. The multiplication process mixes some percentage of the transparent
pixels (black) with the RGB pixels. Although this is desirable for good anti-aliased edges, once
it is done you cannot color correct the image without altering the smooth semi-transparency
you created.
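The three multiplication cases above can be sketched in a few lines of Python; the pixel values here are hypothetical, with all channels normalized to the 0.0–1.0 range.

```python
# A minimal sketch of premultiplication for one hypothetical RGBA pixel.

def premultiply(r, g, b, a):
    # Only the color channels are multiplied; the alpha itself is unchanged.
    return (r * a, g * a, b * a, a)

premultiply(0.8, 0.5, 0.2, 0.0)  # fully transparent: RGB becomes (0.0, 0.0, 0.0)
premultiply(0.8, 0.5, 0.2, 1.0)  # fully opaque: RGB is unchanged
premultiply(0.8, 0.5, 0.2, 0.3)  # semi-transparent edge: RGB is mixed toward black
```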
If you are compositing with a non-premultiplied alpha, you can fix these bright edges by changing the
Merge to perform a Subtractive merge in the Inspector.
TIP: When an RGB image and a Mask node are combined using, for instance, a Matte
Control node, and the RGB image is not multiplied by the mask in the Matte Control,
the checkerboard background pattern in the viewer will appear only semi-transparent
when it should be fully transparent.
For this reason, the rule is always to divide the semi-transparent pixels before performing any color
correction on an image with an alpha channel. You can do this by turning on the Pre-Divide/Post Multiply
checkbox in any node that performs color correction. Alternatively, you can use the Alpha Divide
and Alpha Multiply nodes to do the same thing. These methods are covered in more detail later in
this chapter.
Controlling Premultiplication
in Color Correction Nodes
Most nodes that require you to explicitly deal with the state of premultiplication of an RGBA image
have a “Pre-Divide, Post-Multiply” checkbox. This includes simple color correction nodes such as
Brightness Contrast and Color Curves, as well as the Color Correct node, which has the “Pre-Divide/
Post-Multiply” checkbox in the Options panel of its Inspector settings.
This checkbox allows you to connect an RGBA premultiplied image to the node and perform a color
correction operation. It takes the RGBA image input, performs a divide operation to remove the
semi-transparency, and then performs a multiplication operation before outputting the color corrected
image. This way, the color correction is done using a non-premultiplied image, but the resulting output
is a Merge-friendly premultiplied image.
A node tree with explicit Alpha Divide and Alpha Multiply nodes
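Conceptually, the checkbox and the explicit Alpha Divide/Alpha Multiply pair perform the same round trip, sketched here for a single pixel. This is not Fusion's actual implementation; a simple gain stands in for the color correction, and all values are normalized to 0.0–1.0.

```python
# A sketch of the Pre-Divide/Post-Multiply round trip for one premultiplied
# RGBA pixel. A hypothetical gain stands in for the color correction.

def color_correct_premultiplied(r, g, b, a, gain):
    if a > 0.0:
        # Pre-divide: recover the straight (non-premultiplied) color.
        r, g, b = r / a, g / a, b / a
    # Color correct on straight values, where it is safe to do so.
    r, g, b = r * gain, g * gain, b * gain
    # Post-multiply: restore a Merge-friendly premultiplied result.
    return (r * a, g * a, b * a, a)
```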
To have the flexibility you need to make common changes to 3D images after-the-fact, the various
attributes that make up the 3D scene are separated and rendered as different image sequences, often
referred to as render passes. For example, render passes are often created for attributes like raw
color, shadows, and reflections, that can be recombined as a 2D composite to produce the final result.
Having different attributes rendered into different image sequences gives you a significant amount
of flexibility, since now each image attribute can be color corrected, blurred, or further processed
independently of the other attributes of the image, with fast-processing operations in Fusion.
The most common render passes that are typically generated come from the RGBA channels of the
3D scene. These are collectively called beauty passes and can consist of attributes like color, shadows,
lighting, reflections, environment, and others.
Render passes can also contain non-RGB data. Different effects applications have different names for
these passes, such as Data Channels, or AOVs (Arbitrary Output Variables). In Fusion, these channels
are called Auxiliary Channels, and they contain 3D data such as Depth, Normals, Motion Vectors, and
UV Coordinates (to name just a few).
When compositing a 3D render consisting of multiple render passes, the beauty passes are handled
using one technique, and the Auxiliary Channels are handled with another. Since Fusion nodes carry
RGBA channels by default, we’ll cover beauty passes first, and then explain how to work with Auxiliary
Channels later in this chapter.
A single MediaIn or Loader node only handles a single beauty pass since only one set of RGBA
channels gets output per node. Setting up your composite in Fusion requires you to use a separate
MediaIn or Loader node for each pass.
TIP: It is wise to rename each Loader or MediaIn to represent the beauty pass it contains.
The MediaIn’s Image tab includes a Layer menu. Any pass included in a multi-part EXR image
sequence can be selected from this menu and automatically assigned to the RGBA channels.
In most cases, the menu shows the combined channel passes, meaning the individual red, green,
blue, and alpha channels cannot be selected. Because the alpha channel is not included in many
beauty passes, you sometimes need to borrow the alpha channel from a different beauty pass.
For this reason, it’s often better to use the Channels tab for mapping the individual channels of a
beauty pass to the channels of the MediaIn node.
TIP: Different 3D applications will label beauty passes in different ways. For instance,
the name for an Ambient Occlusion beauty pass may be AO, AM_OCC, or some other
abbreviation.
The Ambient Occlusion beauty pass does not include an alpha channel. To composite it, you can
reuse the alpha channel pass from another beauty pass. In the image below, the alpha channel is
mapped using the combined render pass’ alpha channel.
TIP: When using the Format tab in the Loader node, the checkbox next to each channel
needs to be turned on for the corresponding channel to become available in the
node’s output.
Compositing multiple beauty passes into a single output image is relatively straightforward. 3D
rendering applications typically output linear gamma, so no Gamut or other color space conversion
nodes are required if you’re keeping the image in a linear color space for ease of compositing.
The basic compositing is accomplished with either a Merge node or a Channel Booleans node. Both
allow for additive combining of render passes. There’s no strict requirement for compositing each
pass in any particular way, although in most situations a simple additive composite should work
just fine.
One of the exceptions to the steps above is Shadow passes, such as Ambient Occlusion. In that
case, a Multiply Apply mode is usually employed.
As straightforward as this sounds, compositing using a recipe doesn’t always work for every shot.
When using different images, you may need to experiment with varying techniques of compositing for
the best results.
To add an alpha channel into your assembled beauty pass composite, do the following:
1 Connect the last Merge or Channel Booleans output into the background input of a Matte
Control node.
2 Connect the render pass that contains the alpha into the green Foreground input of the Matte
Control node.
3 In the Matte Control’s Inspector, choose Combine Alpha from the Combine menu.
4 Choose Copy from the Combine Op menu.
TIP: Alpha channels from 3D renderings are typically premultiplied. That being the case,
be sure to turn on the Pre Divide/Post Multiply checkbox on any node that performs color
correction. If using more than one node in a row to perform color correction, use the Alpha
Divide and Alpha Mult nodes instead.
Similar to the use of multiple beauty passes, one of the most common reasons to use auxiliary
data is to eliminate the need to re-render computationally expensive 3D imagery, by enabling even
more aspects of rendered images to be manipulated after-the-fact. 3D rendering is computationally
expensive and time-consuming, so outputting descriptive information about a 3D image allows
sophisticated alterations to occur in 2D compositing, which is faster to perform and adjust.
— First, auxiliary data may be embedded within a clip rendered from a 3D application, most often
using the EXR file format. In this case, it’s best to consult your 3D application’s documentation to
determine which auxiliary channels can be generated and output.
— You may also obtain auxiliary channel data by generating it within Fusion, via 3D operations
output by the Renderer 3D node, by the Optical Flow node, or by the Disparity node.
TIP: The Color Inspector SubView can be used to read numerical values from all of
the channels.
Z-Depth
Each pixel in a Z-Depth channel contains a value that represents the relative depth of that pixel in the
scene. In the case of overlapping objects in a model, most 3D applications take the depth value from
the object closest to the camera when two objects are present within the same pixel, since the
closest object typically obscures the farther object.
When present, Z-Depth can be used to perform depth merging using the Merge node or to control
simulated depth-of-field blurring using the Depth Blur node.
For this example, we’ll examine the case where the Z-Depth channel is provided as a separate
file. The Z channel can often be rendered as an RGB image. You'll need to combine the beauty
and Z pass using a Channel Booleans node. When the Z pass is rendered as an image in the RGB
channels, the Channel Booleans node is used to re-shuffle the Lightness of the foreground RGB
channel into the Z channel.
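That shuffle can be sketched per pixel as follows. The lightness formula here is a plain average of R, G, and B, which is an assumption; Channel Booleans offers other ways to derive the value.

```python
# A sketch of shuffling an RGB-encoded Z pass into a Z-Depth channel, one value
# per pixel. Assumption: lightness is the plain average of R, G, and B.

def shuffle_lightness_to_z(rgb_z_pass):
    return [(r + g + b) / 3.0 for (r, g, b) in rgb_z_pass]

# A uniform gray pixel in the Z pass maps directly to a single depth value:
depths = shuffle_lightness_to_z([(0.25, 0.25, 0.25), (1.0, 1.0, 1.0)])
```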
The Depth Blur node is one of the nodes that take advantage of a Z-channel in order to create blurry
depth-of-field simulations. To set this up, the output of the MediaIn node connects to the background
input on the Depth Blur.
The Depth Blur’s controls in the Inspector are very dependent on the type of image you’re using.
It can be easier to begin by adjusting the controls in the Inspector to some better defaults. Start
by increasing the Blur Size to 10. This will make it easier to see even the smallest of changes.
Next, instead of using the Focal Point, you should pick a focal point in the image by dragging the
Sample button into the viewer and selecting a pixel that determines the part of the picture to
keep in focus.
The final setup steps are to lower Z Scale to somewhere around 0.2 (if you’re using a floating-
point image), and leave the Depth of Field alone for now. This should show you some blurring in
the image.
Once you see these experimental results, you can return to each parameter and refine it as
needed to achieve the actual look you want.
Z-Coverage
The Z-Coverage channel is a somewhat extinct render pass in most 3D applications. It was a way of
restoring antialiasing to rendered color masks and Z-Depth passes. It indicated pixels in the Z-Depth
that contained two objects. The value was used to indicate, as a percentage, how transparent that
pixel was in the final depth composite. It can still be used today if you are rendering files from one of
the few applications that can produce them.
TIP: The wide adoption of an open-source matte creation technology called Cryptomatte
has somewhat superseded mattes created from Coverage, Background, Object ID, and
Material ID passes.
Background RGBA
This channel is a somewhat extinct render pass in most 3D applications. It contained the color values
from the objects behind the pixels described in the Z coverage.
Object ID
Most 3D applications are capable of assigning ID values to objects in a scene. Each pixel in the Object
ID channel will be identified by that ID number, allowing for the creation of masks.
If you want to use an Object ID in a comp, as with all aux channels, you must map the Object ID pass to the
Object ID channel in the MediaIn or Loader Node.
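As an illustration of how an Object ID channel turns into a mask, here is a conceptual sketch; the ID values and the target ID are hypothetical.

```python
# A sketch of deriving a matte from an Object ID auxiliary channel: pixels whose
# ID matches the target object become 1.0 (white), all others 0.0 (black).

def mask_from_object_id(id_channel, target_id):
    return [1.0 if pid == target_id else 0.0 for pid in id_channel]

# Isolate the hypothetical object with ID 3:
mask = mask_from_object_id([0, 3, 3, 1, 0, 3], target_id=3)
```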
You can set up Material IDs using the Settings tab, similarly to how ObjectIDs are set.
UV Texture
The UV Texture channels contain information about how pixels in an image map to texture
coordinates. This is used to retexture an object in a 2D image. For instance, if you want to apply a logo
onto a rendered object, you can use the UV aux channel with the Texture node.
Texture (left) applied to 2D image (right) using UV channels and texture node.
UV channels from a MediaIn node used in a Texture node and merged over the original image
TIP: If you are using a separate UV render pass with the UV data in the RGB channels,
map red to U and green to V in a Channel Booleans node.
The Normals X, Y, and Z channels are often used with a Shader node to perform relighting
adjustments on a 2D rendered image.
XY Vector pass (left) used with Vector Motion Blur to generate motion blur on spaceship (right)
Often the vector pass will be rendered in a separate pass as an RGB image. The X and Y vector data is
located in the R and G channels. In order to place them in the vector channels, you can use a Channel
Booleans node.
The Vector render pass is combined with the beauty image using the Channel
Booleans node, which then feeds the Vector Motion Blur node.
XYZ Position
The colors correspond to a pixel's position in 3D, so if a pixel sits at 0/0/0 in a 3D scene, the resulting
pixel will have an RGB value of 0/0/0, or black. If a pixel sits at 1/0/0 in the 3D scene, the resulting pixel
is fully red. Due to the huge extents 3D scenes can have, the WPP channel should always be rendered
in 32-bit floating-point to provide the accuracy needed.
XY Disparity
XY Disparity is the only channel listed here that is not generated in a 3D application. These channels
indicate where each pixel’s corresponding matte can be found in a stereo image. Each eye, left and
right, will use this vector to point to where that pixel would be in the other eye. This can be used for
adjusting stereo effects, or to mask pixels in stereo space.
Fusion does not natively support the Cryptomatte format. However, using a free plugin
from third-party developers, you can use Cryptomatte render passes in Fusion.
Cryptomatte for Fusion can be downloaded and installed for free: https://github.com/
Psyop/Cryptomatte
Or to use an easier installer, you can download Reactor, which comes bundled with
Cryptomatte and offers many other free, useful Fusion plugins. Reactor can be found at:
https://www.steakunderwater.com
However, when you composite two image layers using the Merge node, auxiliary channels only
propagate through the image that’s connected to the background input. The rationale for this is that
in most composites that include computer-generated imagery, the background is most often the CG
layer that contains auxiliary channels, while the foreground is a live-action green screen plate with
subjects or elements that are combined against the background, which lack auxiliary channels.
— Copy Aux: The Copy Aux tool can copy auxiliary channels to RGB and then copy them back.
It includes some useful options for remapping values and color depths, as well as removing
auxiliary channels.
— Channel Booleans: The Channel Booleans tool can be used to combine or copy the values from
one channel to another in a variety of ways.
— Custom Tool, Custom Vertex 3D, pCustom: The “Custom” tools can sample data from the
auxiliary channels per pixel, vertex, or particle and use that for whatever processing you
would like.
— Depth Blur: The Depth Blur tool is used to blur an image based on the information present in
the Z-Depth. A focal point is selected from the Z-Depth values of the image and the extent of the
focused region is selected using the Depth of Field control. The Scale value default is based on
an 8-bit image so it is important to lower the scale value when using the Depth Blur with 16- or
32‑bit float files.
— OpenEXR (*.exr): The OpenEXR file format is the primary format used to contain an arbitrary
number of additional image channels. Many renderers that will write to the OpenEXR format will
allow the creation of channels that contain entirely arbitrary data. For example, a channel with
specular highlights might exist in an OpenEXR. In most cases, the channel will have a custom
name that can be used to map the extra channel to one of the channels recognized by Fusion.
— SoftImage PIC (*.PIC, *.ZPIC and *.Z): The PIC image format (used by SoftImage) is an older
image format that can contain Z-Depth data in a separate file marked by the ZPIC file extension.
These files must be located in the same directory as the RGBA PIC files and must use the same
names. Fusion will automatically detect the presence of the additional information and load the
ZPIC images along with the PIC images.
— Wavefront RLA (*.RLA), 3ds Max RLA (*.RLA) and RPF (*.RPF): This is an older image format
capable of containing any of the image channels mentioned above. All channels are contained
within one file, including RGBA, as well as the auxiliary channels. These files are identified by the
RLA or RPF file extension. Not all RLA or RPF files contain auxiliary channel information, but most
do. RPF files have the additional capability of storing multiple samples per pixel, allowing different
layers of the image to be loaded for very complex depth composites.
— Fusion RAW (*.RAW): Fusion’s native RAW format is able to contain all of the auxiliary channels as
well as other metadata used within Fusion.
— Renderer 3D: Creates these channels in the same way as any other 3D application would,
and you have the option of outputting every one of the auxiliary data channels that the
Fusion page supports.
— Optical Flow: Generates Vector and Back Vector channels by analyzing pixels over consecutive
frames to determine likely movements of features in the image.
— Disparity: Generates Disparity channels by comparing stereoscopic image pairs.
Compositing
Layers in Fusion
This chapter is intended to give you a solid base for making the
transition from a layer-based compositing application to Fusion’s
node-based interface. It provides practical information about how
to start structuring a node tree for simple layered composites.
Contents
Applying Effects�������������������������������������������������������������������������������������������������������� 471
Clicking the Effect category reveals its contents. In this example we’ll use the TV effect.
In Fusion Studio, you must press the 1 or 2 key on the keyboard to load the selected node in
the viewer.
There are many other ways of adding nodes to your node tree, but it’s good to know how to browse
the Effects Library as you get started.
Clicking the last panel on any node opens the Settings panel. Every node has a Settings panel, and
this is where the parameters that every node shares, such as the Blend slider and RGBA buttons, are
found. These let you choose which image channels are affected, and let you blend between the effect
and the original image.
In the case of the TV effect, for example, the resulting image has a lot of transparency because
the scan lines being added are also being added to the alpha channel, creating alternating lines
of transparency. Turning off the Alpha checkbox results in a more solid image, while opening the
Controls tab (the first tab) and dragging the Scan Lines slider to the right to raise its value to 4 creates
a more visible television effect.
The original TV effect (left), and modifications to the TV effect to make the clip more solid (right)
Instead of clicking the Highlight node, which would add it after the currently selected node, dragging
and dropping a node from the Effects Library on top of a node in the Node Editor replaces the node in
the Node Editor.
In our example, the Highlight1 node takes the TV node’s place in the node tree, and the new effect can
be seen in the viewer, which in this example consists of star highlights over the lights in the image.
Each slider is limited to a different range of minimum and maximum values that is particular to the
parameter you’re adjusting. In this case, the Number of Points slider maxes out at 24. However, you
can remap the range of many (but not all) sliders by entering a larger value in the number field to
the right of that slider. Doing so immediately repositions the slider’s controls to the left as the slider’s
range increases to accommodate the value you just entered.
— In Fusion Studio, you do this by adding additional Loader nodes. If you add a new Loader node to
an empty area of the Node Editor, you’ll add an unconnected Loader2 node (incremented to keep
it unique) that you can then connect how you want.
— In the Fusion page, you can open the Media Pool and drag clips directly to the Node Editor to
add them to your node tree. If you drag a clip from the Media Pool to an empty area of the Node
Editor, you’ll add an unconnected MediaIn2 node (incremented to keep it unique) that you can
then connect in any way you want.
In both cases, the new MediaIn or Loader node automatically becomes the “foreground input”.
Dragging a node from the Media Pool onto a connection (left), and
dropping it to create a Merge node composite (right)
The Node Editor is filled with shortcuts like this to help you build your compositions more quickly.
Here’s one for when you have a disconnected node that you want to composite against another node
with a Merge node. Drag a connection from the output of the node you want to be the foreground
layer, and drop it on top of the output of the node you want to be the background layer. A Merge node
will be automatically created to build that composite. Remember: background inputs are orange, and
foreground inputs are green.
Click to select the Merge node for that particular composite, and look for the Subtractive/
Additive slider.
Drag the slider all the way to the left, to the Subtractive position, and the fringing disappears.
The Subtractive/Additive slider, which is only available when the Apply mode is set to Normal,
controls whether the Normal mode performs an Additive merge, a Subtractive merge, or a blend
of both. This slider defaults to Additive merging, which assumes that all input images with alpha
transparency are premultiplied (which is usually the case). If you don’t understand the difference
between Additive and Subtractive merging, here’s a quick explanation:
— An Additive merge, with the slider all the way to the right, is necessary when the foreground
image is premultiplied, meaning that the pixels in the color channels have been multiplied by
the pixels in the alpha channel. The result is that transparent pixels are always black, since any
number multiplied by 0 is always 0. This obscures the background (by multiplying with the inverse
of the foreground alpha), and then simply adds the pixels from the foreground.
— A Subtractive merge, with the slider all the way to the left, is necessary if the foreground image is
not premultiplied. The compositing method is similar to an Additive merge, but the foreground
image is first multiplied by its own alpha to eliminate any background pixels outside the
alpha area.
The Additive/Subtractive slider lets you blend between two versions of the merge operation, one
Additive and the other Subtractive, to find the best combination for the needs of your particular composite.
For example, using Subtractive merging on a premultiplied image may result in darker edges, whereas
using Additive merging with a non-premultiplied image will cause any non-black area outside the
foreground’s alpha to be added to the result, thereby lightening the edges. By blending between
Additive and Subtractive, you can tweak the edge brightness to be just right for your situation.
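The two merge modes, and the blend between them, come down to simple per-pixel math. The sketch below illustrates the idea for a single channel; the function name and the linear blend are illustrative assumptions, not Fusion's actual implementation:

```python
def merge_normal(fg, bg, fg_a, subtractive_additive=1.0):
    """Blend between a Subtractive (0.0) and Additive (1.0) Normal merge
    for one pixel channel. fg and bg are channel values in 0.0-1.0,
    fg_a is the foreground alpha at that pixel. Illustrative sketch only."""
    # Additive: assumes fg is premultiplied, so fg is simply added
    additive = fg + bg * (1.0 - fg_a)
    # Subtractive: fg is first multiplied by its own alpha
    subtractive = fg * fg_a + bg * (1.0 - fg_a)
    t = subtractive_additive
    return subtractive * (1.0 - t) + additive * t
```

With a premultiplied foreground, a fully transparent pixel (fg = 0.0, fg_a = 0.0) leaves the background untouched in Additive mode, which is why Additive is the correct default for premultiplied sources.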
The Merge node has a variety of controls built into it for creating just about every compositing effect
you need. Items you may be familiar with as Blend modes are located in the Apply Mode pop-up menu.
You can use these mathematical compositing modes to combine the foreground and background
layers together. A Blend slider allows you to fade the foreground input with the background.
NOTE: The Subtractive/Additive slider disappears when you choose any other Apply Mode
option besides Normal, because the math would be invalid. This isn’t unusual; there are
a variety of controls in the Inspector that hide themselves when not needed or when a
particular input isn’t connected.
The Screen node is perfect for simulating reflections, and lowering Blend a bit lets you balance the
foreground and background images. It’s subtle, but helps sell the shot.
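The math behind this can be sketched per channel. Screen inverts both inputs, multiplies them, and inverts the result, so the output is never darker than the background; the Blend control then fades the result back toward the unprocessed background. The code below is an illustrative sketch, not Fusion's internal implementation:

```python
def screen_with_blend(fg, bg, blend=1.0):
    """Screen mode for one pixel channel, faded by a Blend value.
    blend=1.0 is the full effect; blend=0.0 returns the background.
    Illustrative sketch only."""
    screened = 1.0 - (1.0 - fg) * (1.0 - bg)
    return bg * (1.0 - blend) + screened * blend
```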
TIP: You may have noticed that the Merge node also has a set of Flip, Center, Size,
and Angle controls that you can use to transform the foreground image without needing
to add a dedicated Transform node. It’s a nice shortcut for simplifying node trees large
and small.
The Text+ node is an incredibly deep tool for creating text effects, with six tabs of controls for
adjusting everything from text styling, to different methods of layout, to a variety of shading
controls including fills, outlines, shadows, and borders. As sophisticated a tool as this is, we’ll only be
scratching the surface in this example.
We’ll start with a MediaIn node, selected in the Node Editor, that will serve as our background. Clicking
the Text+ button automatically creates a new Text+ node connected as the foreground input of a
Merge node. The same behavior occurs if you are using Fusion Studio, with a Loader node.
Selecting a Text+ node opens the default Text panel parameters in the Inspector, and it also adds a
toolbar at the top of the viewer with tools specific to that node. Clicking on the first tool at the left lets
you type directly into the viewer, or you can type into the Styled Text field in the Inspector.
TIP: Holding down the Command key while dragging any control in the Inspector “gears
down” the adjustment so that you can make smaller and more gradual adjustments.
Selecting the Manual Kerning tool in the viewer toolbar (second tool from the left) reveals small red
dots underneath each letter of text.
Clicking a red dot under a particular letter puts a kerning highlight over that letter.
Connecting a MediaIn2 or Loader2 node onto the Merge1 node’s foreground input causes the entire
viewer to be filled with the MediaIn2 (assuming we’re still viewing the Merge node). At this point, we
need to insert the Text1 node’s image as an alpha channel into the MediaIn2 node’s connection, and
we can do that using a MatteControl node.
The MatteControl node has numerous uses. Among them is taking one or more alpha channels,
mattes, or images that are connected to the Garbage Matte, Solid Matte, and/or foreground inputs,
combining them, and using the result as an alpha channel for the image that’s connected to the
background input. It’s critical to make sure that the image you want to add an alpha channel to is
connected to the background input of the MatteControl node, or the MatteControl node won’t work.
With this done, we connect the Text+ node’s output, which carries the alpha channel, to the MatteControl
node’s Garbage Matte input. The Garbage Matte input is a shortcut for using any mask, matte, or alpha
channel to punch a region of transparency out of an image.
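The punch-out itself amounts to simple per-pixel alpha math. The function below is a hypothetical sketch of the idea, not the MatteControl node's actual code:

```python
def punch_out_alpha(bg_alpha, garbage_matte, invert=False):
    """Use a matte to punch a hole of transparency into the background's
    alpha channel, as a Garbage Matte input does. All values are 0.0-1.0.
    The invert flag mirrors the Garbage Matte > Invert checkbox.
    Illustrative sketch only."""
    m = 1.0 - garbage_matte if invert else garbage_matte
    return bg_alpha * (1.0 - m)
```

Where the matte is white (1.0), the background becomes fully transparent; inverting the matte flips which side of the shape is punched out.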
Keep in mind that it’s easy to accidentally connect to the wrong input. Because inputs rearrange
themselves depending on what’s connected and where the node is positioned (and, frankly, the colors
can be hard to keep track of when you’re first learning), it’s key to make sure that you always check the
tooltips associated with the input you’re dragging a connection over to make sure that you’re really
connecting to the correct one. If you don’t, the effect won’t work, and if your effect isn’t working, the
first thing you should always check is whether you’ve connected the proper inputs.
Once the Text1 node is properly connected to the MatteControl node’s Garbage Matte input, a text-
shaped area of transparency is displayed for the graphic if you load the MatteControl node into
the viewer.
NOTE: When connecting two images of different sizes to a Merge node, the resolution of
the background image defines the output resolution of that node. Keep that in mind when
you run into resolution issues.
Background on video track 1 (top left), green-screen clip on video track 2 (bottom),
and graphic file on video track 3 (top right)
In a timeline-based system, higher-numbered video tracks appear as the more forward, or frontmost,
elements in the viewer. Video track 1 is the background to all other video tracks, while video track 3 is
in the foreground of both video track 1 and video track 2.
TIP: If using DaVinci Resolve, you can bring all three layers from the Edit page into Fusion
by creating a Fusion clip. For more information on creating Fusion Clips, see Chapter 3,
“Getting Clips into Fusion,” in the Fusion Reference Manual.
In Fusion, each video clip is represented by a MediaIn in the Fusion page or a Loader in
Fusion Studio.
In our example below, the MediaIn2 is video track 2, and MediaIn1 is video track 1. These two
elements are composited using a Merge node (foreground over background, respectively). The
composite of those two elements becomes the output of the first Merge node, which becomes
the background to a second Merge. There is no loss of quality or precomposing when you chain
Merges together. MediaIn3 represents video track 3 and is the final foreground in the node tree
since it is the topmost layer.
The initial node tree of the three clips we turned into a Fusion clip
With this node tree assembled to mimic the video layers, we can focus the rest of this example on
adding the nodes we’ll need to each branch of this tree to create the green-screen composite.
The DeltaKeyer node is the main tool used for green-screen keying. It attaches to the output of
the node that represents the green screen—in our example, that is the MediaIn2 node. With the
MediaIn2 selected, pressing Shift-Space opens the Select Tool dialog where you can search for
and insert any node. Below we have added the DeltaKeyer after the MediaIn2 node but prior to
being merged with the background.
The DeltaKeyer node is a sophisticated keyer that is capable of impressive results by combining
different kinds of mattes and a clean-plate layer, but it can also be used very simply if the
background that needs to be keyed is well lit. And once the DeltaKeyer creates a key, it embeds
the resulting alpha channel in its output, so in this simple case, it’s the only node we need to add.
It’s also worth noting that, although we’re using the DeltaKeyer to key a green screen, it’s not
limited to keying green or blue only; the DeltaKeyer can create impressive keys on any color in
your image.
With the DeltaKeyer selected, we’ll use the Inspector controls to pull our key by quickly sampling
the shade of green from the background of the image. To sample the green-screen color, drag the
Eyedropper from the Inspector over the screen color in the viewer.
As you drag in the viewer, an analysis of the color picked up by the location of the Eyedropper
appears within a floating tooltip, giving some guidance as to which color you’re really picking.
Meanwhile, if viewing the Merge in a second viewer, we get an immediate preview of the
transparency and the image we’ve connected to the background.
The original image (left), and after sampling the green screen
using the Eyedropper from the Inspector (right)
When we’re happy with the preview, releasing the pointer button samples the color, and the
Inspector controls update to display the value we’ve chosen.
Loading the DeltaKeyer into the viewer and clicking the Color
button to view the alpha channel being produced
Black in a matte represents the transparent areas, while white represents the opaque areas. Gray
areas represent semi-transparency. Unless you are dealing with glass, smoke, or fog, most mattes
should be pure white and pure black with no gray areas. If a close examination of the alpha
channel reveals some fringing in the white foreground of the mask, the DeltaKeyer has integrated
controls for post-processing of the key and refining the matte. Following is a quick checklist of the
primary adjustments to make.
After making the screen selection with the Eyedropper, try the following adjustments to
improve the key.
— Adjust the Gain slider to boost the screen color, making it more transparent. This can adversely
affect the foreground transparency, so adjust with care.
— Adjust the Balance slider to tint the foreground between the two non-screen colors. For a green
screen, this pushes the foreground more toward red or blue, shifting the transparency in the
foreground.
Clicking the third of the seven tabs of controls in the DeltaKeyer Inspector opens up a variety of
controls for manipulating the matte.
Initial adjustments in the matte tab may include the following parameters:
— Adjust the lower and upper thresholds to increase the density in black and white areas.
— Very subtly adjust the Clean Foreground and Clean Background sliders to fill small holes in the
black and white matte. The more you increase these parameters, the more harsh the edges of
your matte become.
In this case, raising the Clean Foreground slider a bit eliminates the inner fringing we don’t want,
without noticeably compromising the edges of the key.
The original key (left), and the key after using the Clean Foreground slider (right)
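These matte refinements boil down to remapping alpha values. The sketch below illustrates the general approach; Fusion does not document its exact formulas, so treat the math here as an assumption:

```python
def postprocess_matte(a, low=0.0, high=1.0, clean_fg=0.0, clean_bg=0.0):
    """Sketch of typical matte post-processing for one alpha pixel:
    remap the [low, high] range to [0, 1], then push values toward
    solid white (Clean Foreground) or solid black (Clean Background).
    Illustrative only; not Fusion's actual implementation."""
    # Lower/upper thresholds: increase density in black and white areas
    a = min(max((a - low) / (high - low), 0.0), 1.0)
    # Clean Foreground: fill small holes by boosting near-white values
    a = min(a * (1.0 + clean_fg), 1.0)
    # Clean Background: suppress noise by pulling near-black values down
    a = max(a - clean_bg * (1.0 - a), 0.0)
    return a
```

As the prose warns, the harder these values are pushed, the more the soft edge detail of the matte is clipped away.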
With this accomplished, we’re happy with the key, so we load the Merge1 node back into the
viewer, and press C to set the Color control of the viewer back to RGB. We can see the graphic
in the background, but right now it’s too small to cover the whole frame, so we need to make
another adjustment.
The final key is good, but now we need to work on the background
Spill can now be handled using a color correction node placed directly after the DeltaKeyer or
branched from the original MediaIn or Loader node and combined with a MatteControl.
Branching the original image with one branch for the DeltaKeyer and a second for color correction
Masking a Graphic
Next, it’s time to work on the top video track: the news graphic that will appear to the left of the
newscaster. The graphic we will use is actually a sheet of different logos, so we need to cut one out
using a mask and position it into place.
The easiest way to crop a MediaIn or Loader node is to add one of the mask shapes from the
toolbar directly to it. Selecting the MediaIn or Loader node and clicking the Rectangle mask from
the toolbar will crop, or mask off, the graphic.
Now, all we need to do is to use the onscreen controls of the Rectangle mask to crop the area we
want to use, dragging the position of the mask using the center handle, and resizing it by dragging
the top/bottom and left/right handles of the outer border.
As an extra bonus, if the graphic has rounded corners, you can match them by using the Corner Radius
slider in the Inspector controls for the Rectangle mask.
With the Crop node selected, the viewer toolbar includes a Crop tool.
You can crop the image by dragging a bounding box around it. Unlike a mask, which creates
a small window you view the image through, a crop effectively changes the resolution of the
graphic to the crop bounding box size.
Dragging a bounding box using the Crop tool (left), and the
cropped logo now centered on the frame (right)
NOTE: The Resize, Letterbox, and Scale nodes also change the resolution of an image.
Placing the logo using the foreground input transform controls of the Merge2 node
Rotoscoping
with Masks
This chapter covers how to use masks to rotoscope,
one of the most common tasks in compositing.
Contents
Introduction to Masks and Polylines�������� 496
Deleting Selected Points���������������������������������� 510
Mask Nodes
Mask nodes create an image that is used to define transparency in another image. Unlike other image
creation nodes in Fusion, mask nodes create a single channel image rather than a full RGBA image.
The most used mask tool, the Polygon mask tool, is located in the toolbar.
For more information on these mask tools, see Chapter 46, “Mask Nodes,” in the Fusion
Reference Manual.
Polygon Mask
Polygon masks are user-created Bézier shapes. This is the most common type of polyline and the
basic workhorse of rotoscoping. Polygon mask tools are automatically set to animate as soon as you
add them to the Node Editor.
B-Spline Masks
B-Spline masks are user-created shapes made with polylines that are drawn using B-Splines. They
behave identically to polyline shapes when linear, but when smoothed the control points influence the
shape through tension and weight. This generally produces smoother shapes while requiring fewer control
points. B-Spline mask tools are automatically set to animate as soon as you add them to the Node Editor.
Bitmap Masks
The Bitmap mask allows images from the Node Editor to act as masks for nodes and effects. Bitmap
masks can be based on values from any of the color, alpha, hue, saturation, luminance, and the
auxiliary coverage channels of the image. The mask can also be created from the Object or Material ID
channels contained in certain 3D-rendered image formats.
Wand Mask
A Wand mask provides a crosshair that can be positioned in the image. The color of the pixel under
the crosshair is used to create a mask, where every contiguous pixel of a similar color is also included
in the mask. This type of mask is ideal for isolating color adjustments.
Ranges Mask
Similar to the Bitmap mask, the Ranges mask allows images from the Node Editor to act as masks for
nodes and effects. Instead of creating a simple luminance-based mask from a given channel, Ranges
allows spline-based selection of low, mid, and high ranges, similar to the Color Corrector node.
Polyline Types
You can draw polylines using either B-Spline or Bézier spline types. Which you choose depends on the
shape you want to make and your comfort with each spline style.
Bézier Polylines
Bézier polylines are shapes composed of control points and handles. Several points together are used
to form the overall shape of a polyline.
Each control point has a pair of handles used to define the exact shape of the polyline segments
passing through each control point. Adjusting the angle or length of the direction handles will
affect whether that segment of the polyline is smooth or linear.
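The geometry behind this is the standard cubic Bézier formula: each segment is evaluated from its two endpoint control points and their direction handles. The sketch below evaluates one coordinate of a segment (the same formula applies independently to X and Y):

```python
def cubic_bezier(p0, h0, h1, p1, t):
    """Evaluate one coordinate of a cubic Bezier segment at parameter t
    in [0, 1]. p0 and p1 are the control points; h0 and h1 are their
    direction handles. Moving a handle changes the curvature at its end
    of the segment; handles at one-third spacing yield a straight line."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * h0 + 3 * u * t**2 * h1 + t**3 * p1
```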
B-Spline Polylines
A B-Spline polyline is similar to a Bézier spline; however, these polylines excel at creating smooth
shapes. Instead of using a control point and direction handles for smoothness, the B-Spline polyline
uses points without direction handles to define a bounding box for the shape. The smoothness of the
polyline is determined by the tension of the point, which can be adjusted as needed.
When converting from one type to another, the original shape is preserved. The new polyline
generally has twice as many control points as the original shape to ensure the minimum change to the
shape. While animation is also preserved, this conversion process will not always yield perfect results.
It’s a good idea to review the animation after you convert spline types.
Masks are single-channel images that can be used to define which regions of an image you want to
affect. Masks can be created using primitive shapes (such as circles and rectangles), complex polyline
shapes that are useful for rotoscoping, or by extracting channels from another image.
Each mask node is capable of creating a single shape. However, Mask nodes are designed to be
added one after the other, so you can combine multiple masks of different kinds to create complex
shapes. For example, two masks can be subtracted from a third mask to cut holes into the resulting
mask channel.
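Combining masks this way is simple per-pixel math on single-channel images. The sketch below shows three plausible combine operations; the mode names loosely mirror Fusion's Paint Mode menu, but the exact formulas Fusion uses are an assumption here:

```python
def combine_masks(a, b, paint_mode="merge"):
    """Combine two mask values (0.0-1.0) per pixel, where mask b is
    applied on top of mask a. Illustrative sketch only."""
    if paint_mode == "merge":
        return max(a, b)        # union of the two shapes
    if paint_mode == "subtract":
        return max(a - b, 0.0)  # cut the second shape out of the first
    if paint_mode == "multiply":
        return a * b            # intersection of the two shapes
    raise ValueError("unknown paint mode: " + paint_mode)
```

Chaining a "subtract" combination after a union, for example, is how two masks can cut holes into a third, as described above.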
Fusion offers several different ways you can use masks to accomplish different tasks. You can attach
Mask nodes after other nodes in which you want to create transparency, or you can attach Mask
nodes directly to the specialized inputs of other nodes to limit or create different kinds of effects.
To use this setup, you’ll load the MatteControl node into the viewer and select the Polygon node to
expose its controls so you can draw and modify a spline while viewing the image you’re rotoscoping.
The MatteControl node’s Garbage Matte > Invert checkbox lets you choose which part of the image
becomes transparent.
When you’re finished rotoscoping, you simply connect the Polygon node’s output to the Loader
node’s input, and an alpha channel is automatically added to that node.
When a Mask node’s input is attached to another mask, a Paint Mode drop-down menu appears,
which allows you to choose how you want to combine the two masks.
TIP: If you select a node with an empty effect mask input, adding a Mask node
automatically connects to the open effect mask input.
While masks (or mattes) are connected via an input, they are actually applied “post effect,” which
means the node first applies its effect to the entire image, and then the mask is used to limit the result
by copying over unaffected image data from the input.
Although many nodes support effects masking, there are a few where this type of mask does not
apply—notably Savers, Time nodes, and Resize, Scale, and Crop nodes.
TIP: Effects masks define the domain of definition (DoD) for that effect,
making it more efficient.
Pre-Masking Inputs
Unlike effect masks, a pre-mask input (the name of which is usually specific to each node using them)
is used by the node before the effect is applied. This usually causes the node to render more quickly
and to produce a more realistic result. In the case of the Highlight and the Glow nodes, a pre-mask
restricts the effect to certain areas of the image but allows the result of that effect to extend beyond
the limits of the mask.
The advantage to pre-masking is that the behavior of glows and highlights in the real world can be
more closely mimicked. For example, if an actor is filmed in front of a bright light, the light will cause
a glow in the camera lens. Because the glow happens in the lens, the luminance of the actor will be
affected even though the source of the glow is only from the light.
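A minimal sketch of this behavior on a one-dimensional row of pixels follows. The box blur stands in for a real glow filter, and all names here are illustrative, not Fusion's implementation:

```python
def box_blur(row):
    """Simple 3-tap box blur with edge clamping; a stand-in for a
    real glow filter."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def glow_with_premask(image, premask, blur, gain=1.0):
    """Pre-masked glow: the mask limits which pixels *generate* glow,
    but the blurred result is added back over the whole image, so the
    glow can spread past the mask edge. Illustrative sketch only."""
    source = [p * m for p, m in zip(image, premask)]  # mask applied before the effect
    glow = blur(source)                               # blur spreads beyond the mask
    return [min(p + g * gain, 1.0) for p, g in zip(image, glow)]
```

Note how a pixel just outside the mask still brightens: the mask restricted the glow's source, not its spread, which is what makes the result look like a lens effect.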
In the case of the DVE node, a pre-mask is used to apply a transformation to a selected portion of
the image, without affecting portions of the image outside of the mask. This is useful for applying
transformations to only a region of the image.
You choose whether a garbage matte is applied to a keying node as opaque or transparent in the
Inspector for the node to which it’s connected.
Solid Matte
Solid Matte inputs (colored white) are intended to fill unwanted holes in a matte. This is often done
with a second, less carefully pulled key that produces a dense matte with eroded edges, although you
could also use a polygon or mask paint for this purpose. In the example below, a gentle key designed
to preserve the soft edges of the talent’s hair leaves holes in the mask of the woman’s face. Using a
second DeltaKeyer to create a solid matte for the interior of the key, eroded to be smaller than the
original matte, lets you fill the holes while leaving the soft edges alone. This is also sometimes known
as a hold-out matte.
Filling in holes in the mask pulled by the DeltaKeyer1 node (left) with another, harder but
eroded key in DeltaKeyer2 that’s connected to the SolidMatte input of DeltaKeyer1 (right)
If you hover the pointer over any of the Polyline toolbar buttons, a tooltip that describes the button’s
function appears. Clicking on a button will affect the currently active polyline or the selected polyline
points, depending on the button.
You can change the size of the toolbar icons, add labels to the buttons, or make other adjustments
to the toolbar’s appearance in order to make polylines easier to use. All the options can be found by
right-clicking on the toolbar and selecting from the options displayed in the contextual menu.
When a shape is closed, the polyline is automatically switched to Insert and Modify mode.
Although the Click Append mode is rarely used with paths, it can be helpful when you know the overall
shape of a motion path, but you don’t yet know the timing.
TIP: Holding Shift while you draw a mask constrains subsequent points to 45-degree
angles relative to the previous point. This can be very helpful when drawing
regular geometry.
Insert and Modify mode is also the default mode for creating motion paths. A new control point
is automatically added to the end of the polyline, extending or refining the path, any time a
parameter that is animated with a motion path is moved.
Protection Modes
In addition to the modes used to create a polyline, two other modes are used to protect the points
from further changes after they have been created.
Modify Only
Modify Only mode allows existing points on the polyline to be modified, but new points may not be
added to the shape.
TIP: Even with Modify Only selected, it is still possible to delete points from a polyline.
Done
The Done mode prohibits the creation of any new points, as well as further modification of any
existing points on the polyline.
Closing Polylines
There are several ways to close a polyline, which will connect the last point to the first.
All these options are toggles that can also be used to open a closed polygon.
To add or remove points from the current selection, do one of the following:
— Hold the Shift key to select a continuous range of points.
— Hold Command and click each control point you want to add or remove.
— Press Command-A to select all the points on the active polyline.
TIP: Once a control point is selected, you can press Page Down or Page Up on the keyboard
to select the next control point in a clockwise or counterclockwise rotation. This can be very
helpful when control points are very close to each other.
To move selected control points using the keyboard, do one of the following:
— Press the Up or Down Arrow keys on the keyboard to nudge a point up or down in the viewer.
— Hold Command-Up or Down Arrow keys to move in smaller increments.
— Hold Shift-Up or Down Arrow keys to move in larger increments.
The position of the pointer when the transformation begins becomes the center used for the
transformation.
TIP: Deleting all the points in a polyline does not delete the polyline itself. To delete a
polyline, you must delete the node or modifier that created the polyline.
Dragging a direction handle makes adjustments to the curve of the segment that emerges from the
control point. The direction handle on the opposing side of the control point will also move to maintain
the relationship between these two handles.
To break the relationship between direction handles and adjust one independently, hold Command
while dragging a handle. Subsequent changes will maintain the relationship, unless Command is held
during each adjustment.
If you want to adjust the length of a handle without changing the angle, hold Shift while moving a
direction handle.
The dialog box contains the X- and Y-axis values for that point. Entering new values in those boxes
repositions the control point. When multiple control points are selected, all the points move to the
same position. This is useful for aligning control points along the X- or Y-axis.
If more than one point is selected, a pair of radio buttons at the top of the dialog box determines
whether adjustments are made to all selected points or to just one. If the Individual option is selected,
the affected point is displayed in the viewer with a larger box. If the selected point is incorrect, you can
use the Next and Previous buttons that appear at the bottom of the dialog to change the selection.
In addition to absolute values for the X- and Y-axis, you can adjust points using relative values from
their current position. Clicking once on the label for the axis will change the value to an offset value.
The label will change from X to X-offset or from Y to Y-offset.
If you are not sure of the exact value, you can also perform mathematical equations in the dialog box.
For example, typing 1.0-0.5 will move the point to 0.5 along the given axis.
Reduce Points
When freehand drawing a polyline or an editable paint stroke, the spline is often created using more
control points than you need to efficiently make the shape. If you choose Reduce Points from the
polyline’s contextual menu or toolbar, a dialog box will open allowing you to decrease the number of
points used to create the polyline.
The overall shape will be maintained while eliminating redundant control points from the path.
When the value is 100, no points are removed from the spline. As you drag the slider to the left,
you reduce the number of points in the path.
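Fusion doesn't document which reduction algorithm it uses; the Ramer-Douglas-Peucker algorithm is a common choice for this kind of simplification and illustrates the idea of removing points that are within a tolerance of the simplified path:

```python
def reduce_points(points, tolerance):
    """Ramer-Douglas-Peucker point reduction on a list of (x, y) tuples.
    Keeps the overall shape while dropping points that lie within
    `tolerance` of the line between the kept endpoints. A common
    approach; not necessarily the algorithm Fusion uses."""
    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

    if len(points) < 3:
        return list(points)
    # Find the point farthest from the line between the endpoints
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]   # all middle points are redundant
    # Otherwise keep the farthest point and recurse on both halves
    left = reduce_points(points[: idx + 1], tolerance)
    right = reduce_points(points[idx:], tolerance)
    return left[:-1] + right
```

A larger tolerance corresponds to dragging the Reduce Points slider further left: more points are judged redundant and removed.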
Shape Box
If you have a polyline shape or a group of control points you want to scale, stretch, squish, skew, or
move, you can use the shape box to easily perform these operations.
If there are selected points on the polyline when the Shape Box mode is enabled, the shape box is
drawn around those points. Otherwise, you can drag the shape box around the area of control points
you want to include.
If you want to freely resize the shape box horizontally and vertically, you can drag a corner handle.
Dragging a handle on the side of the shape box resizes the polyline along a specific axis.
You use these options to simplify the screen display when adjusting control points placed closely
together and to avoid accidentally modifying controls and handles that are adjacent to the
intended target.
Stop Rendering
While points along the polyline are being moved, the results are rendered to the viewer to provide
constant interactive feedback. Although extremely useful, there are situations where this can be
distracting and can slow down performance on a complex effect. To disable this behavior so renders
happen only when the points stop moving, you can toggle the Stop Rendering button in the toolbar or
select this option from the polyline contextual menu.
Creating Softness
Using Double Polylines
The standard soft edge control available in all Mask nodes softens the entire mask equally. However,
there are times, particularly with a lot of motion blur, when softening part of the curve while keeping
other portions of the curve sharp is required.
This form of softness is called non-uniform softness, which is accomplished by converting the shape
from a single polyline to a double polyline. The double polyline is composed of two shapes: an inner
and an outer shape. The inner shape is the original shape from the single polyline, whereas the outer
shape is used to determine the spread of the softness. The further the outer shape gets from the
inner shape, the softer that segment of the shape becomes.
The shape will be converted into an inner and an outer polyline spline. Both polylines start with exactly
the same shape as the original single polyline. This keeps the mask sharp to start with and allows any
animation that may have already been applied to the shape to remain.
A dashed line drawn between the points indicates the relationship between the points on the inner
and outer shapes.
Once the outer polyline is selected, you can drag any of the points away from the inner polyline to add
some softness to the mask.
TIP: Press Shift-A to select all the points on a shape, and then hold O and drag to offset the
points from the inner shape. This gives you a starting point to edit the falloff.
The farther the outer shape segment is from the inner shape, the larger the falloff will be in that area.
Each polyline stores its animation separately; however, if a point is adjusted on the inner shape
that is parented to a point on the outer shape, a keyframe will be set for both splines. Adjusting a
parented point on the outer shape only sets a keyframe for the outer shape’s spline. If a point that
is not parented is adjusted, it will only set a keyframe on the relevant spline. You can disable this
behavior entirely for this polyline by selecting Polygon: Outer Polygon > Follow Inner Polyline from the
contextual menu.
Any animation already applied to either point is preserved when the points become parented.
To unlock a point so it is no longer parented, select the point, right-click in the viewer, and deselect
Lock Point Pairs from the contextual menu.
TIP: The center point and rotation of a shape are not auto-animated. Only the control
points are automatically animated. To animate the center position or rotation, enable
keyframes for that parameter in the Inspector.
To adjust the overall timing of the mask animation, you edit the Keyframe horizontal position spline
using the Spline Editor or Timeline Editor. Additional points can be added to the mask at any point to
refine the shape as areas of the image become more detailed.
This default keyframing behavior is convenient when quickly animating shapes from one form
to another, but it doesn’t allow for specific individual control points that need to be keyframed
independently of all other control points for a particular shape. If you’re working on a complex mask
that would benefit from more precise timing or interpolation of individual control points, you can
expose one or more specific control points on a polyline by publishing them.
Be aware that publishing a control point on a polyline removes that point from the standard animation
spline. From that point forward, that control point can only be animated via its own keyframes
on its own animation spline. Once removed, this point will not be connected to paths, modifiers,
expressions, or trackers that are connected to the main polyline spline.
A new coordinate control is added to the Polyline mask controls for each published point, named Point
0, Point 1, and so on.
The onscreen control indicates published points on the polyline by drawing that control point
much larger. Once a published point is created, it can be connected to a tracker, path, expression,
or modifier by right-clicking on this control and selecting the desired option from the point’s
contextual menu.
When a point of an effect mask is set to follow points, the point will be drawn as a diamond shape
rather than a small box.
When this mode is enabled, the new “following” control points will maintain their position relative
to the motion of any published points in the mask, while attempting to maintain the shape of that
segment of the mask. Unlike published points, the position of the following points can still be
animated to allow for morphing of that segment’s shape over time.
Paint
This chapter describes how to use Fusion’s non-destructive Paint
tool to repair images, remove objects, and add creative elements.
Contents
Paint Overview .... 520
Inverting the Steady Effect to Put the Motion Back In .... 540
— The Paint node is located in the Paint category of the Effects Library.
— The Mask Paint node is located in the Mask category of the Effects Library.
The main difference between these two Paint tools is that the Mask Paint tool only paints on the Alpha
channel, so there are no channel selector buttons. The Paint tool can paint on any or all channels.
The majority of this chapter covers the Paint node, since it shares identical parameters and settings
with the Mask Paint node.
The Paint node requires a background input to set the resolution of the “canvas” you’ll
be painting upon. To do this, you can set up a Paint node in the node tree in one of two ways: painting
directly on an image or using Paint as the foreground.
The duration of each of these stroke types is one frame by default, but this can be changed using the
Stroke Duration slider in the Inspector. Multistroke and Clone Multistroke are essentially the same tool,
except that Clone Multistroke automatically configures the tool for cloning, while Multistroke
requires you to set up cloning manually.
By default, the Stroke type does not expose control points for the shape of the path. You can move
and track the center and rotation of the Stroke, but the individual control points that create the
spline are hidden. To reveal the control points, you can open the Stroke Controls at the bottom of the
Inspector and click the Make Editable button.
Although the Stroke type is the most flexible, that flexibility can come at a performance penalty if
you’re painting hundreds of strokes on a frame. For larger numbers of strokes that do not need to be
animated, it’s better to use Multistroke or Clone Multistroke, as these are more processor efficient.
If a motion path is published, right-clicking on the Shape Animation label at the bottom of the Polyline
Stroke’s Stroke Controls allows you to use the Connect To menu to assume the shape of a motion path
or mask. You can also use this method if you import SVG graphics and want to “paint-on” the outlines.
All of the Copy [Shape Name] stroke types require that you connect the source node you are
cloning from into the Paint node, and set the Fill Type menu to Image.
When you paint, each stroke is unpremultiplied, so adjusting the Alpha slider in the Inspector does not
affect what you apply to the RGB channels. However, changing opacity affects all four channels.
You can use the Clone Apply Mode to clone from the same image connected to the Paint node’s
background input or a different source from the node tree.
5 Paint over the area you want to cover up using the source pixels.
When trying to erase objects or artifacts from a clip using the Clone Apply Mode, it can sometimes
be easier if you sample from a different frame on the same clip. This works well when the object you
are trying to clone out moves during the clip, revealing the area behind the object. Sampling from a
different frame can use the revealed background by offsetting the source frame.
8 Paint over the area you want to cover up using the source pixels.
The plane is half painted out using the Overlay with Time Offset
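The clone-with-time-offset technique described above can be sketched in a few lines of code. This is a conceptual illustration only, not Fusion's API or internals: frames are simple 2D lists of pixel values, and the function names and parameters are hypothetical.

```python
# Conceptual sketch (not Fusion's API): cloning pixels with a spatial
# and temporal offset. Frames are plain 2D lists of pixel values.

def clone_pixels(frames, t, region, dx, dy, time_offset=0):
    """Paint over `region` (a list of (x, y) coordinates) on frame t,
    sampling each pixel from (x + dx, y + dy) on frame t + time_offset.
    With a nonzero time_offset, the revealed background from another
    frame covers the object on the current frame."""
    src = frames[t + time_offset]
    dst = [row[:] for row in frames[t]]  # work on a copy of the target frame
    for x, y in region:
        dst[y][x] = src[y + dy][x + dx]
    return dst
```

With a zero spatial offset and a time offset of one frame, an object that has moved out of the way on the later frame is replaced by the background it revealed.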
TIP: To select multiple strokes, you can Shift-click or Command-click to select and deselect
multiple specific strokes, or you can drag a selection box around all strokes you want
to select.
The Stroke or Polyline Stroke type can be edited by selecting the stroke in the viewer
Although you can make changes in the Tools tab in the Inspector, the Paint node uses both the
Tools tab and the Modifiers tab. In the Tools tab, you can create new brush strokes and select a
stroke in the viewer to edit. The Modifiers tab presents a list of all the strokes for the selected
Paint node, which makes it easy to modify any previously created paint stroke.
The same controls you used in the Tools tab to create the strokes are located in the Modifier’s tab to
modify them. You can also animate each individual stroke.
Once you stop painting a stroke, it’s added to the Modifiers tab along with an additional Stroke
modifier that represents the next stroke. For instance, if you paint your first stroke, the Modifiers tab
shows your stroke as Stroke1, along with a Stroke2 that represents the next stroke you create.
You always have one more stroke in the Modifiers tab than strokes in the viewer.
To delete all paint strokes you’ve made on every frame, do one of the following:
— Click the reset button in the upper-right corner of the Inspector.
— Delete the Paint node in the Node Editor.
To auto-animate the stroke, you can choose one of the three Write options or the Trail option.
Choosing Write On automatically creates a write-on animation. The duration is set by two keyframes
that get added when you choose Write On from the menu. The Start keyframe is set on the frame
where you first created the stroke. The End keyframe is added on the current frame when you choose
Write On from the menu. The remaining options in the menu set their Start and End keyframes
similarly but change the direction of the animation based on the menu selection.
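The Write On behavior described above boils down to a fraction of the stroke being revealed between the Start and End keyframes. A minimal sketch, assuming linear interpolation between the two keyframes (the function name is hypothetical, not Fusion's API):

```python
def write_on_fraction(t, start, end):
    """Fraction of the stroke drawn at time t for a Write On animation:
    0.0 before the Start keyframe, 1.0 after the End keyframe, and a
    linear ramp in between (assuming linear keyframe interpolation)."""
    if end <= start:
        return 1.0  # degenerate range: stroke fully drawn
    return min(1.0, max(0.0, (t - start) / (end - start)))
```

The Write Off and other menu options would run the same ramp in the opposite direction or from both ends.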
Selecting all the strokes and then clicking the Paint Group
button collects all the strokes into a single group
The group’s onscreen controls replace the controls for each paint stroke, and the Modifiers tab in the
Inspector shows the group’s parameters. The individual strokes are still editable by selecting Show
Subgroup Controls in the Modifiers tab of the Inspector. The group then comes with a Center, Angle,
and Size control for connecting to a tracker.
Because this is a clip in motion, we can’t just paint out the scars on the man’s forehead; we need
to deal with the motion so that the paint work we do stays put on his face. In this case, a common
workflow is to analyze the motion in the image and use it to apply a “steady” operation, pinning
down the area we want to paint in place so we can paint on an unmoving surface.
With the PlanarTracker node selected and loaded in the viewer, a viewer toolbar appears with a
variety of tools for drawing shapes and manipulating tracking data. The Planar Tracker works by
tracking flat surfaces that you define by drawing a shape around the feature you want to track.
When you first create a PlanarTracker node, you can immediately begin drawing a shape, so in this
case, we draw a simple polygon over the man’s forehead since that’s the feature we want to steady
in preparation for painting.
We draw a simple box by clicking once on each corner of the man’s forehead to create
control points, and then clicking the first one we created to close the shape.
Drawing a shape over the man’s forehead to prepare for Planar Tracking
In the Inspector, the PlanarTracker node has tracking transport controls that are similar to those of
the Tracker. However, there are two buttons, Set and Go, underneath the Operation Mode menu,
which defaults to Track, since that’s the first thing we need to do. The Set button lets you choose
which frame to use as the “reference frame” for tracking, so you click the Set button first before
clicking the Track Forward button below.
TIP: You can supervise a Planar Track in progress and stop it if you see it
slipping, making adjustments as necessary before clicking Set at the new frame to set a
new reference before continuing to track forward towards the end of the clip.
The Pattern controls let you set up how you want to handle the analysis. Of these controls, the Motion
Type menu is perhaps the most important. In this particular case, Perspective tracking is the analysis
we want, but in other situations you may find you get better results with the Translation, Translation/
Rotation, or Translation/Rotation/Scale options.
Once you initiate the track, a series of dots appears within the track region shape you created to
indicate trackable pixels found. A green progress bar at the bottom of the Timeline ruler shows
how much of the shot remains to be tracked.
Clicking the Track from First Frame button to set the Planar Track in progress; green
dots on the image and a green progress bar let you know the track is happening
Once the track is complete, you can set the Operation Mode of the PlanarTracker node’s controls
in the Inspector to Steady.
You’ll immediately see the image warped as much as is necessary to pin the tracked region in
place for whatever operation you want to perform. If you scrub through the clip, you should see
that the image dynamically cornerpin-warps as much as is necessary to keep the forehead region
within the shape you drew pinned in place. In this case, this sets up the man’s head as a canvas
for paint.
Steadying the image results in warping as the forehead is pinned in place for painting
Choosing the Stroke tool from the Paint node’s tools in the viewer toolbar
With the Stroke tool selected in the Paint toolbar, the Clone mode selected in the Inspector controls,
and the Source for cloning added to the Source Tool field, we’re ready to start painting. If we move the
pointer over the viewer, a circle shows us the paint tool, ready to go.
Setting an offset to sample for cloning (left), and dragging to draw a clone stroke (right)
If you don’t like the stroke you’ve created, you can undo with Command-Z and try again. We repeat the
process with the other scar on the man’s forehead, possibly adding a few other small strokes to make
sure there are no noticeable edges, and in a few seconds, we’ve taken care of the issue.
Original image (left), and after painting out two scars on the
man’s forehead with the Stroke tool set to Clone
TIP: You can adjust the size of the brush right in the viewer, if necessary, by holding down
the Command key and dragging the pointer left and right. You’ll see the brush outline
change size as you do this.
We select and copy the PlanarTracker node coming before the Merge node, and paste a copy of it
after. This copy has all the analysis and tracking data of the original PlanarTracker node.
Pasting a second copy of the PlanarTracker node after the Paint node
This is just one example of how to set up a Planar Tracker and Paint node. In some instances, you
may need to do more work with masks and layering, but the above example gives you a good
starting point.
To create the clean plate, you connect the Paint node to the output of the Time Stretcher. Clone
over the areas you want to hide, and you now have a single clean frame. Now you need to
composite the clean area over the original.
Add a MatteControl node with a garbage mask to cut out the painted forehead
TIP: When it comes to using masks to create transparency, there are a variety of
ways to connect one. For example, (a) attach the image to the background input of a
Brightness/Contrast node and attach a Polygon mask node to the effect mask input;
on the Brightness/Contrast node, enable the Alpha channel and lower the Gain slider
to darken a hole. Or (b) use a Channel Booleans node to copy channel data to the alpha
from a Polygon node attached to the foreground input, with the image attached to the
background input.
Drawing shapes using the Polygon node is similar to shape drawing in other spline-based
environments, including the Color page:
Before fixing this, we drag the Soft Edge slider in the Inspector to the right to blur the edges
just a bit.
Inverting the Garbage Matte input (left), and the resulting inverted mask cutting out the forehead (right)
We create a Merge node connected to the output of the PlanarTracker node, and then we connect the
MatteControl’s output to the green foreground input of the Merge node. This puts the cropped and
fixed forehead on top of the original image.
Selecting the first PlanarTracker node that comes right after the MediaIn node, and choosing
Track from the Operation Mode menu, reveals a Create Planar Transform button at the bottom of
the listed controls. Clicking this button creates a new, disconnected Planar Transform node in the
Node Editor, which has the transforms from the Planar Tracker baked in.
We can insert this new node into the node tree by holding down the Shift key and
dragging it over the connection between the Polygon node and the MatteControl node,
dropping it when the connection highlights.
With the new Planar Transform node inserted, the Polygon automatically moves to match the
motion of the forehead that was tracked by the original PlanarTracker node, and it animates to
follow along with the movement of the shot. At this point, we’re finished!
The final painted image, along with the final node tree
Using the
Tracker Node
This chapter shows the many capabilities of the Tracker node in
Fusion, starting with how trackers can be connected in your node
trees, and finishing with the different tasks that can be performed.
Contents
Introduction to Tracking .... 548
Tracker Node Overview .... 548
Modes of the Tracker Node .... 548
Basic Tracker Node Operation .... 549
Connect to a Tracker’s Background Input .... 549
Analyze the Image to be Tracked .... 550
Apply the Tracking Data .... 550
Viewing Tracking Data in the Spline Editor .... 553
Tracker Inspector Controls .... 553
Motion Tracking Workflow In Depth .... 555
Connect the Image to Track .... 555
Add Trackers .... 555
Refine the Search Area .... 558
Perform the Track Analysis .... 558
Tips for Choosing a Good Pattern .... 559
Using the Pattern Flipbooks .... 561
Using Adaptive Pattern Tracking .... 561
Dealing with Obscured Patterns .... 562
Dealing with Patterns That Leave the Frame .... 562
Setting Up Tracker Offsets .... 563
Stabilizing with the Tracker Node .... 564
Stabilization Using the Tracker Match Move Mode .... 564
Smoothing Motion .... 565
Using the Tracker Node for Match Moving .... 566
Simple Match Moving .... 566
Corner Positioning Operations .... 567
Perspective Positioning Operations .... 567
Connecting to Trackers’ Operations .... 567
Steady Position .... 568
Steady Angle .... 568
Offset Position .... 568
Unsteady Position .... 568
Steady Size .... 568
Using the Outputs of a Tracker .... 569
Using the Tracker as a Modifier .... 571
Match Moving Text Example .... 573
Adding a Layer to Match Move .... 573
Setting Up Motion Tracking .... 574
A Simple Tracking Workflow .... 575
Connecting Motion Track Data to Match Move .... 578
Offsetting the Position of a Match Moved Image .... 580
Each tracker type has its own chapter in this manual. This chapter covers the tracking techniques with
the Tracker node.
Stabilizing
You can use one or more tracked patterns to remove all the motion from the sequence or to smooth
out vibration and shakiness. When you use a single tracker pattern to stabilize, you stabilize only
the X and Y position. Using multiple patterns together, you are able to stabilize position, rotation,
and scaling.
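The relationship between the number of tracked patterns and what can be stabilized can be sketched as follows. This is a conceptual illustration of the underlying geometry, not Fusion's internals: one tracked point yields a translation, while the vector between two tracked points additionally yields rotation and scale.

```python
import math

def stabilize_params(p1_ref, p2_ref, p1_cur, p2_cur):
    """Estimate the translation, rotation (degrees), and scale needed to
    map the current frame back onto the reference frame, from two tracked
    points given as (x, y) tuples. A conceptual sketch only."""
    # Rotation and scale come from the vector between the two patterns.
    ref_vec = (p2_ref[0] - p1_ref[0], p2_ref[1] - p1_ref[1])
    cur_vec = (p2_cur[0] - p1_cur[0], p2_cur[1] - p1_cur[1])
    angle = math.atan2(ref_vec[1], ref_vec[0]) - math.atan2(cur_vec[1], cur_vec[0])
    scale = math.hypot(*ref_vec) / math.hypot(*cur_vec)
    # Translation maps the first pattern back onto its reference position.
    translation = (p1_ref[0] - p1_cur[0], p1_ref[1] - p1_cur[1])
    return translation, math.degrees(angle), scale
```

With only one pattern, `ref_vec` and `cur_vec` are unavailable, which is why a single tracker can correct position but not rotation or scale.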
Match Moving
The reverse of stabilizing is match moving, which detects position, rotation, and scaling in a clip using
one or more patterns. Instead of removing that motion, it is applied to another image so that the two
images can be composited together.
Perspective Positioning
Perspective positioning again tracks four patterns to identify the four corners of a rectangle. Each corner
is then mapped to a corner of the image, rescaling and warping the image to remove all apparent
perspective. The Planar Tracker node is often a better first choice for removing perspective from a clip.
1 Attach an image you want to track to the yellow background input of the Tracker node.
2 Set the tracking pattern and analyze the clip to create a path.
3 Apply the tracking data to stabilize, match move, corner pin, or remove perspective.
You can insert the Tracker node serially with other nodes if you intend to use the Tracker node itself to
do a simple stabilization operation or if you want to use it to perform the function of a Merge node in
a match move or corner-pin operation.
However, if you’re just using a Tracker node to analyze data for use with multiple nodes elsewhere
in the comp, you could choose to branch it and leave its output disconnected to indicate that the
Tracker node is purely a data repository. Please note that this is not necessary; serially connected Tracker
nodes can be linked to multiple other nodes as well.
The Tracker set up as a branch and connected using the Connect To menu
The ellipse needs to follow the motion of the ray gun, so a Tracker node is used to analyze the
movement of the gun tip so that tracking data can be used to animate the ellipse. The ellipse is
not connected to the tracker directly via the foreground input but indirectly through the Connect
To contextual menu.
This is made easier by renaming the Tracker you created to something descriptive of what’s
being tracked.
Once the tip of the ray gun has been tracked, this tracking data is then connected to the Center
parameter of an Ellipse node that’s limiting a Glow effect by right-clicking the label of the Center
parameter in the Inspector, and choosing Tracker1 > Ray Gun Glow: Offset position from the Connect
to submenu of the contextual menu. All the data from every Tracker node in your node tree and
every tracking pattern appears within this submenu, and since we named the Tracker, it’s easy to find.
We choose Offset position because it places the center of the ellipse directly over the path. It
also gives us the flexibility to offset the ellipse if need be, using the offset controls in the Inspector.
You can connect the data from a Tracker node to any other node’s parameter; however, you’ll most
typically connect track data to center, pivot, or corner X/Y style parameters. When you use tracking
data this way, it’s not necessary to connect the output of the Tracker node itself to anything else in
your node tree; the data is passed from the Tracker to the Center parameter by linking it with the
Connect To submenu.
By default, the Tracker uses a displacement spline, which indicates how far the tracked point has
moved from its original location. A displacement spline is great for modifying velocity, but it doesn’t
tell you anything about direction. If you need to nudge a few points in a certain direction, you can
convert the displacement spline to an X and Y coordinate spline.
Right-click in the viewer to bring up a contextual menu. At the very bottom is a reference to the path
the Tracker created, called Tracker1Tracker1Path:Polyline. Choosing it calls up a longer submenu
where you can choose Convert to XY Path.
For more information on Displacement Splines, see Chapter 10, “Animating in Fusion’s Spline Editor,” in
the Fusion Reference Manual.
— The Tracker Control tab: This is where you create onscreen trackers with which to target
patterns, and where the controls appear that let you perform the required track analysis.
— The Operations tab: This is where you decide how the tracking data is used.
— The Display Options tab: This is where you can customize how the onscreen controls
look in the viewer.
Add Trackers
Although each Tracker node starts with a single tracker pattern, a single node is capable of analyzing
multiple tracking patterns that have been added to the Tracker List, enabling you to track multiple
features of an image all at once for later use and to enable different kinds of transforms. Additional
trackers can be added by clicking the Add button immediately above the Tracker List control.
Multiple patterns are useful when stabilizing, match moving, or removing perspective from a clip.
They also help to keep the Node Editor from becoming cluttered by collecting into a single node
what would otherwise require several nodes.
Clicking any part of a tracker’s onscreen controls will select it. Selected pattern boxes are red, while
deselected pattern boxes are green.
If you need to select a new pattern, you can move the pattern box by dragging the small (and
easily missed) handle at the top left of the inner pattern box.
While moving the pattern box, an overlay pop-up appears, showing a zoomed version of the pixels
contained within the rectangle to help you precisely position the pattern via the crosshairs within.
The pattern rectangle can also be resized by dragging on the edges of the rectangle. You want
to size the pattern box so that it fits the detail you want to track, and excludes area that doesn’t
matter. Ideally, you want to make sure that every pixel of the pattern you’re tracking is on the
same plane, and that no part of the pattern is actually an occluding edge that’s in front of what
you’re really tracking. When you resize the pattern box, it resizes from the center, so one drag lets
you create any rectangle you need.
For example, tracking a pattern that is moving quickly across the screen from left to right requires a
wide search area but does not require a very tall one, since all movement is horizontal. If the search
area is smaller than the movement of the pattern from one frame to the next, the Tracker will likely fail
and start tracking the wrong pixels, so it’s important to take the speed and direction of the motion
into consideration when setting the search area.
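The interplay between the pattern box and the search area can be sketched as a brute-force match: slide the pattern over every position inside the search area and keep the best score. This is a conceptual sum-of-squared-differences matcher, not Fusion's actual tracking algorithm, and the names are hypothetical.

```python
def track_pattern(frame, pattern, search_origin, search_size):
    """Find the best match for `pattern` (a 2D list of gray values)
    inside a search area of `frame`. Returns the (x, y) of the best
    top-left corner. If the pattern moved outside the search area,
    this returns the wrong position, which is why the search area must
    cover the pattern's frame-to-frame motion."""
    ph, pw = len(pattern), len(pattern[0])
    sx, sy = search_origin
    best, best_pos = None, None
    for y in range(sy, sy + search_size[1] - ph + 1):
        for x in range(sx, sx + search_size[0] - pw + 1):
            # Sum of squared differences: lower is a better match.
            score = sum(
                (frame[y + j][x + i] - pattern[j][i]) ** 2
                for j in range(ph) for i in range(pw)
            )
            if best is None or score < best:
                best, best_pos = score, (x, y)
    return best_pos
```

Note that the cost grows with the search area, which is why a search area shaped to the actual motion (wide but short for horizontal movement, say) tracks faster than an oversized one.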
Once your options are set, you can use any of the tracking transport buttons at the top of the
Inspector to start tracking. Once tracking has started, you cannot work in the Node Editor until
it has completed.
Pattern tracking will stop automatically when it reaches the end of the render range (or the start when
tracking backward), but you can also interrupt it and stop tracking at any time.
When tracking is complete, the path will be connected to the pattern. The path from that pattern
can now be connected to another node or used for more advanced operations like stabilization
and corner positioning.
Once the track is complete, assuming it’s good, you can use the various techniques in this chapter to
use the track in your composition.
The first step in pattern selection is to review the footage to be tracked several times. Watch for
candidate patterns that are visible through the entire range of frames, where the contrast is high and
the shape of the pattern does not change over time. The more unique the pattern, the more likely the
track is to be successful.
In addition to locating high contrast, defined patterns, watch for the frames where the pattern moves
the most. Identifying the maximum range of a pattern’s motion will help to determine the correct size
for the pattern search area.
You can override the automatic channel selection by clicking the buttons beneath the bars for each
channel to determine the channel used for tracking.
You can choose any one of the color channels, the luminance channels, or the alpha channel
to track a pattern.
When choosing a channel, the goal is to choose the cleanest, highest contrast channel for use in the
track. Channels that contain large amounts of grain or noise should be avoided. Bright objects against
dark backgrounds often track best using the luminance channel.
Try not to select just any potentially valid pattern in the sequence, as some patterns will make the
solution worse rather than better. To help with your selection, use the following guidelines when
selecting patterns for stabilization.
— Locate patterns at the same relative depth in the image. Objects further in the background will
move in greater amounts compared to objects in the foreground due to perspective distortion.
This can confuse the stabilization calculations, which do not compensate for depth.
— Locate patterns that are fixed in position relative to each other. Patterns should not be capable
of moving with reference to each other. The four corners of a sign would be excellent candidates,
while the faces of two different people in the scene would be extremely poor choices for patterns.
Each pattern that’s stored is added to a Flipbook. Once the render is complete, you can play this
Pattern Flipbook to help you evaluate the accuracy of the tracked path. If you notice any jumps in the
frames, then you know something probably went wrong.
None
When the Adaptive mode is set to None, the pattern within the rectangle is acquired when the pattern
is selected, and that becomes the only pattern used during the track.
Every Frame
When Every Frame is chosen, the pattern within the rectangle is acquired when the pattern is
selected, and then reacquired at each frame. The pattern found at frame 1 is used in the search on
frame 2, the pattern found on frame 2 is used to search frame 3, and so on. This method helps the
Tracker adapt to changing conditions in the pattern.
Every Frame tracking is slower and can be prone to drifting from sub-pixel shifts in the pattern from
frame to frame. Its use is therefore not recommended unless other methods fail.
As a comparison between the two Adaptive modes, if a shadow passes over the tracker point, the
Every Frame tracking mode may start tracking the shadow instead of the desired pattern. The Best
Match mode would detect that the change from the previous frame’s pattern was too extreme and
would not grab a new pattern from that frame.
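The difference between the Adaptive modes comes down to how the tracking template is updated from frame to frame. A minimal sketch of the three strategies; the function, the similarity measure, and the threshold value are illustrative assumptions, not Fusion's implementation.

```python
def update_template(mode, template, matched, similarity, threshold=0.9):
    """How the tracking template evolves each frame under the three
    Adaptive modes. `matched` is the pattern found on the current frame,
    and `similarity` (0.0 to 1.0) is how closely it matched the previous
    template. Conceptual sketch only; the threshold is illustrative."""
    if mode == "None":
        return template            # original pattern used on every frame
    if mode == "Every Frame":
        return matched             # always reacquire from the new frame
    if mode == "Best Match":
        # Only adopt the new pattern if the change is not too extreme,
        # e.g. when a shadow passes over the tracked feature.
        return matched if similarity >= threshold else template
    raise ValueError(f"unknown mode: {mode}")
```

This also shows why Every Frame can drift: each reacquired template carries any sub-pixel error forward, whereas Best Match falls back to the previous template when the match degrades.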
The Adaptive mode is applied to all active patterns while tracking. If you only want some patterns to
use the Adaptive mode, disable all other patterns in the list before tracking.
In these situations, you divide the render range up into two ranges, the range before the pattern is
obscured and the range after the pattern becomes visible again. After tracking the two ranges individually,
the Tracker will automatically interpolate between the end of the first range and the start of the second.
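The interpolation across the obscured range can be sketched as a straight line between the last tracked position before the gap and the first tracked position after it. A conceptual sketch, assuming linear interpolation (the function name and data layout are hypothetical):

```python
def fill_gap(path, gap_start, gap_end):
    """Linearly interpolate tracked positions across an untracked gap.
    `path` maps frame -> (x, y); frames gap_start..gap_end are missing,
    and the tracked frames just outside the gap serve as endpoints."""
    (x0, y0), (x1, y1) = path[gap_start - 1], path[gap_end + 1]
    span = (gap_end + 1) - (gap_start - 1)
    for f in range(gap_start, gap_end + 1):
        t = (f - (gap_start - 1)) / span
        path[f] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return path
```

Because the interpolation is linear, any curved or accelerating motion during the gap is lost, which is exactly why the manual suggests inserting and adjusting points on the path by hand for nonlinear motion.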
If you need to edit the resulting motion path to account for any non-linear motion that takes place
between the two tracked ranges, you can select the track path to expose a Node toolbar with controls
for adjusting the control points on this path. For example, you can choose Insert and Modify mode to
insert points in the non-tracked range to compensate for any nonlinear motion in the tracked pattern.
Tools for modifying tracker paths in the Node toolbar of the viewer
When selecting a pattern to use in appending to an existing path, a pattern that is close to the old
pattern and at the same apparent depth in the frame generates the best results. The further away
the new pattern is, the more likely it is that the difference in perspective and axial rotation will reduce
accuracy of the tracked result.
The X and Y Offset controls allow for constant or animated positional offsets to be created relative
to the actual Tracker’s pattern center. The position of the offset in the viewer will be shown by a
dashed line running from the pattern center to the offset position. You can also adjust the offset
in the viewer using the Tracker Offset button. Clicking the button enables you to reposition the
path while keeping the Tracker pattern in place.
The Tracker Offset tool in the Node toolbar of the viewer; a track of
the orange dot is being offset to the center of the ray gun
Here are some common scenarios for stabilization that are handled when the Tracker is set to Match Move.
Stabilization can correct for position with as little as one pattern. Two or more patterns are required to
correct for rotation or scaling within the image.
When the Operation menu is set to Match Move, choosing BG only from the Merge operation menu
stabilizes the background (yellow input) clip. Only the controls that are applicable for stabilization
operations will appear in the Operation tab.
Several of the stabilization controls are always available, collected under the Match Move Settings
disclosure button. These controls are available at all times because the Steady and Unsteady positions
of a tracker are always published. This makes them available for connection by other controls, even
when the Tracker’s operation is not set to match moving.
Edges
The Edges menu determines whether the edges of an image that leave the visible frame are cropped,
duplicated, or wrapped when the stabilization is applied. Wrapping edges is often desirable for
some methods of match moving, although rarely when stabilizing the image for any other purpose.
For more information on the controls, see Chapter 57, “Tracker Nodes,” in the Fusion Reference Manual.
Position/Rotation/Scaling
Use the Position, Rotation, and Scaling checkboxes to select what aspects of the motion are corrected.
Pivot Type
The Pivot Type setting determines the axis used for the stabilization’s rotation and scaling
calculations. This is usually the average of the combined pattern centers but may be changed to
the position of a single tracker or a manually selected position.
Reference
The Reference controls establish whether the image is stabilized to the first frame in the
sequence, the last frame, or to a manually selected frame. Any deviation from this reference by the
tracked patterns is transformed back to this ideal frame.
As a general rule, when tracking to remove all motion from a clip, set the Merge mode to BG Only,
the Pivot Type to Tracker Average or Selected Tracker, and the Reference control to Start, End, or
Select Time.
Smoothing Motion
When confronted with an image sequence with erratic or jerky camera motion, instead of trying to
remove all movement from the shot, you often need to preserve the original camera movement while
losing the erratic motion.
The Start & End reference option is designed for this technique. Instead of stabilizing to a reference
frame, the tracked path is simplified. The position of each pattern is evaluated from the start of the
path and the end of the path along with intervening points. The result is smooth motion that replaces
the existing unsteady move.
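Conceptually, simplifying the path this way amounts to smoothing the tracked positions rather than locking them to one frame. A minimal sketch using a moving average; this illustrates the idea of preserving the overall move while removing jitter, not Fusion's actual filter.

```python
def smooth_path(points, radius=2):
    """Replace each tracked (x, y) position with the average of its
    neighbors within `radius` frames. The broad camera move survives,
    while frame-to-frame jitter is averaged away. Conceptual sketch."""
    smoothed = []
    for i in range(len(points)):
        lo, hi = max(0, i - radius), min(len(points), i + radius + 1)
        window = points[lo:hi]
        n = len(window)
        smoothed.append((sum(p[0] for p in window) / n,
                         sum(p[1] for p in window) / n))
    return smoothed
```

The stabilization then applies only the difference between the raw path and the smoothed path, so the shot keeps its intended motion but loses the shake.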
When tracking to create smooth camera motion, ensure that the Start & End reference mode is
enabled and set the Merge mode to BG Only. It is recommended to leave the Pivot Type control set to
Tracker Average.
Some clips may need to be stabilized so that an element from another source can be added to the
shot. After the element or effect has been composited, the stabilization should be removed to make
the shot look natural again.
When using this Merge menu, you connect a foreground image to the Tracker node’s input
connection in the Node Editor.
The Corner Positioning operation of the Tracker requires the presence of a minimum of four patterns.
If this operation mode is selected and there are not four patterns set up in the Tracker already,
additional patterns will automatically be added to bring the total up to four.
When this mode is enabled, a set of drop-down boxes will appear to select which tracker relates to
each corner of the rectangle. It has no effect when the Merge control option is set to BG Only.
The Perspective Positioning operation of the Tracker requires the presence of a minimum of four
patterns. If this operation mode is selected and there are not four patterns set up in the Tracker
already, additional patterns will automatically be added to bring the total up to four.
When this mode is enabled, a set of drop-down boxes will appear to select which tracker relates to
each corner of the rectangle. It has no effect when the Merge control option is set to BG Only.
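Mathematically, a four-corner pin corresponds to solving a projective transform (homography) that maps the four source corners onto the four tracked corners. The sketch below shows that underlying math for illustration only; Fusion performs this internally, and the function names here are hypothetical.

```python
import numpy as np

def corner_pin_homography(src, dst):
    """Solve the 3x3 projective transform mapping four src corners to dst.

    Builds the standard 8x8 linear system (two equations per point pair)
    with the homography's bottom-right entry fixed to 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, x, y):
    """Apply the homography to a point, with perspective divide."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Map the unit square onto an arbitrary tracked quadrilateral
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0.1, 0.1), (0.9, 0.2), (0.8, 0.85), (0.15, 0.8)]
H = corner_pin_homography(src, dst)
```

Corner Positioning applies such a transform per frame, using the four trackers' paths as the destination corners.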
In addition to the path (called Offset Position), each pattern in a tracker publishes four other values for
use as connections that are available to other nodes in the Node Editor.
You connect a node’s position parameters to a tracker by selecting the connection type from
the controls contextual menu (for example, Transform 1: Center > Connect To > Tracker 1 >
Offset Position).
There are five connection types automatically published by the tracker to connect to a position
parameter in another node.
Steady Angle
The Steady Angle mode can be used to stabilize the rotation of footage to remove rotational camera shake and other unwanted movement. When you connect a control, for example the Angle of a Transform, to the Steady Angle of the Tracker, it will be placed at 0 degrees by default at frame 1. This can be changed by means of the Reference mode in the Tracker’s Operation tab. From there on, the resulting
motion of the Steady Angle mode will rotate in the opposite direction of the original motion.
So if the angle at frame 10 is 15 degrees, the result of the Steady Angle will be -15 degrees.
To use Steady Angle, you need at least two tracked patterns in your tracker. With just one point, you
can only apply (Un)Steady Position.
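The inverse-angle behavior described above can be written as a one-line formula. This is an illustrative sketch of the math, not Fusion's implementation; `steady_angle` is a hypothetical name.

```python
# Illustrative sketch: Steady Angle outputs the inverse of the tracked
# rotation relative to the reference frame (frame 1 by default).

def steady_angle(tracked_angle, reference_angle=0.0):
    """Angle (degrees) that counteracts the tracked rotation."""
    return -(tracked_angle - reference_angle)

# A tracked rotation of 15 degrees at frame 10 yields -15 degrees
result = steady_angle(15.0)
```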
Offset Position
An Offset Position is essentially the path generated by the tracker. It is the one you select when you
want an object to follow the path. It is available for each single tracker in the Tracker node and refers
to that single tracker only. When you connect the Center X and Y parameters to the offset position of
the Tracker, the node’s center will follow exactly the path of that tracker. Connecting to single trackers
is always useful when you want to match elements with object motion in your footage. For example,
you could track a hand of your actor and attach a ball to the Tracker‘s offset position, so that the ball
follows the exact motion of the hand. Or you could track an element that needs rotoscoping and
attach the mask’s center to the Tracker’s offset position.
Unsteady Position
After using the Steady Position, the Unsteady Position is used to reintroduce the original movement
on an image after an effect or new layer has been added. The resulting motion from Unsteady
Position is basically an offset in the same direction as the original motion.
Steady Size
The Steady Size connection outputs the inverse of the tracked pattern’s scale. When you connect a
parameter, for example the Size of a Transform, to the Steady Size of the Tracker, it will be placed with
a Size of 1 (i.e., the original size) by default at frame 1. This can be changed by means of the Reference
mode in the Tracker’s Operation tab. The resulting size of the Steady Size mode will then counteract
the size changes of the original motion. So if the actual size at frame 10 is 1.15, the result of the Steady
Size will be 1 - (1.15 - 1) = 0.85.
To use Steady Size, you need at least two tracked patterns in your tracker. With just one point, you can
only apply (Un)Steady Position.
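The Steady Size formula given above can likewise be sketched directly. Again, this is an illustration of the stated math rather than Fusion's internal code; `steady_size` is a hypothetical name.

```python
# Illustrative sketch: Steady Size inverts the tracked scale change
# around the reference scale, counteracting the original size change.

def steady_size(tracked_scale, reference_scale=1.0):
    """Scale factor that counteracts the tracked scale change."""
    return reference_scale - (tracked_scale - reference_scale)

# A tracked scale of 1.15 at frame 10 yields 1 - (1.15 - 1) = 0.85
result = steady_size(1.15)
```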
Rather than using the Tracker node to perform the Merge operation, an alternative and common
way to use these published outputs is to create a match move by connecting the outputs to
multiple nodes. A tracker is used to track a pattern, and then that data can be connected to
multiple other nodes using the Connect To submenu.
As an example, to use the Connect To menu to perform a match move, do the following:
1 Track the background clip using at least two tracking patterns in the tracker.
2 In a different branch, add a Transform node to the background clip.
3 Right-click over the Transform’s Center and choose Connect to > Tracker1 > Steady Position.
4 Connect the foreground to a corner-positioned node, so you can position the corners of the
foreground appropriately over the background.
5 Add another Transform node to the Node Editor after the Merge.
A second Transform after the Merge is used to add back in the original motion with Unsteady Position.
6 Connect the new Transform’s Center to the Tracker’s Unsteady Position. The image will be
restored to its original state with the additional effect included.
To better understand how this works, imagine a pattern that is selected at frame 1, at position 0.5,
0.5. The pattern does not move on frame 2, so its position is still 0.5, 0.5. On the third frame, it moves
10 percent of the image’s width to the right. Now its position is 0.6, 0.5.
If a transform center is connected to the Steady Position output of the Tracker, the Transform node’s
center is 0.5, 0.5 on the first and second frames because there has been no change. On frame 3, the
center moves to 0.4, 0.5. This is the inverse of the horizontal motion that was tracked in the pattern,
moving the image slightly to the left by 10 percent of the image width to counteract the movement
and return the pattern of pixels back to where they were found.
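The worked example above reduces to a simple pair of formulas: Steady Position applies the inverse of the tracked motion, and Unsteady Position reapplies the original offset. The sketch below illustrates that math only; the function names are hypothetical, not Fusion API.

```python
# Illustrative sketch of the Steady/Unsteady Position math described above.

def steady_position(tracked, reference):
    """Position offset by the inverse of the tracked motion."""
    return tuple(r - (t - r) for t, r in zip(tracked, reference))

def unsteady_offset(tracked, reference):
    """The original motion, reapplied after compositing."""
    return tuple(t - r for t, r in zip(tracked, reference))

ref = (0.5, 0.5)
# Frames 1 and 2: no motion, so the steadied center stays at (0.5, 0.5)
frame2 = steady_position((0.5, 0.5), ref)
# Frame 3: the pattern moved to (0.6, 0.5), so the center becomes (0.4, 0.5)
frame3 = steady_position((0.6, 0.5), ref)
```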
The differences between a Tracker modifier and a Tracker node are as follows:
— The Tracker modifier can only track a single pattern.
— A source image must be set for the Tracker modifier.
The Tracker modifier can only output a single value and cannot be used for complex stabilization
procedures, but it is a nice quick way to apply a tracker to a point that you need to follow.
3 Click the Modifiers tab in the Inspector and drag the MediaIn1 node that you want to track into the
Tracker Source field.
4 Click the Track Forward button to begin tracking the person’s eye.
5 Insert a Soft Glow node directly after the MediaIn and connect the Ellipse Mask to the white
Glow Mask input.
You can set a different source image for the Tracker modifier by typing in the name of the node
or dragging and dropping the node from the Node Editor into the Tracker Source field control.
If you have a node (let’s call it node #1) connected to the node that contains the modifier (let’s call
it node #2), the source image for the Tracker modifier will automatically be set to node #1.
For more information on the Tracking parameters, see Chapter 57, “Tracker Nodes,” in the Fusion
Reference Manual.
Our goal for this composition is to motion track the background image so that the text moves along
with the scene as the camera flies along.
If you position your pointer over this box, the entire onscreen control for that tracker appears, and if you
click the onscreen control to select that tracker, it turns red. As with so many other tracker interfaces
you’ve likely used, this consists of two boxes with various handles for moving and resizing them:
— The inner box is the “pattern box,” which identifies the “pattern” in the image you’re tracking
and want to follow the motion of. The pattern box has a tiny handle at its upper-left corner that
you use to drag the box to overlap whatever you want to track. You can also resize this box by
dragging any corner, or you can squish or stretch the box by dragging any edge to make the
box better fit the size of the pattern you’re trying to track. The center position of the tracker is
indicated via X and Y coordinates.
— The outer box is the “search box,” which identifies how much of the image the tracker needs to
analyze to follow the motion of the pattern. If you have a slow-moving image, then the default search
box size is probably fine. However, if you have a fast-moving image, you may need to resize the search
box (using the same kind of corner and side handles) to search a larger area, at the expense of a
longer analysis. The name of that tracker is shown at the bottom right of the search box.
It’s worth saying a second time that the handle for moving a tracker’s onscreen control is a tiny
dot at the upper-left corner of the inner pattern box. It’s really easy to miss if you’re new to Fusion.
You must click on this dot to drag the tracker around.
In this example, we’ll drag the onscreen control so the pattern box overlaps a section of the
bridge right over the leftmost support. As we drag the onscreen control, we see a zoomed-in
representation of the part of the image we’re dragging over to help us position the tracker with
greater precision. For this example, the default sizes of the pattern and search box are fine as is.
Additional controls over each tracker and the image channels being analyzed appear at the
bottom, along with offset controls for each tracker, but we don’t need those now (at least not yet).
Again, this track is so simple that we don’t need to change the default behaviors that much, but
because the drone is flying in a circular pattern, the shape of the pattern area is changing as the
clip plays. Fortunately, we can choose Every Frame from the Adaptive Mode menu to instruct the
tracker to update the pattern being matched at every frame of the analysis, to account for this.
Now, we just need to use the tracker analysis buttons at the top to begin the analysis. These
buttons work like transport controls, letting you start and stop analysis as necessary to deal with
problem tracks in various ways. Keep in mind that the first and last buttons, Track from Last Frame
and Track from First Frame, always begin a track at the last or first frame of the composition,
regardless of the playhead’s current position, so make sure you’ve placed your tracker onscreen
controls appropriately at the last or first frame.
The analysis buttons, left to right: Track from Last Frame, Track
Backward, Stop Tracking, Track Forward, Track from First Frame
The Center X and Y parameters, while individually adjustable, also function as a single target
for connecting to tracking data, so you can quickly set up match-moving animation. You set this
up via the contextual menu that appears when you right-click any parameter in the Inspector,
which contains a variety of commands for adding keyframing, modifiers, expressions, and other
automated methods of animation including connecting to motion tracking.
If we right-click anywhere on the line of controls for Center X and Y, we can choose Connect To >
Tracker1 > Bridge Track: Offset position from the contextual menu, which connects this parameter
to the tracking data we analyzed earlier.
Immediately, the text moves so that the center position coincides with the center of the tracked
motion path at that frame. This lets us know the center of the text is being match moved to the
motion track path.
The offset we create is shown as a dotted red line that lets us see the actual offset being created
by the X and Y Offset controls. In fact, this is why we connected to the Bridge Track: Offset
position option earlier.
The text offset from the tracked motion path; the offset can be seen as a dotted red line in the viewer
Two frames of the text being match moved to follow the bridge in the shot
Planar Tracking
This chapter provides an overview of how to use the Planar Tracker
node, and how to use it to make match moves simple.
For more information about the Planar Tracker node, see Chapter 57, “Tracker Nodes,” in the Fusion
Reference Manual.
Contents
Introduction to Tracking ............................................... 583
The Planar Tracker automates this process by analyzing the perspective distortions of a planar surface
on a background plate over time, and then re-applying those same perspective distortions to a
different foreground.
TIP: Part of using the Planar Tracker is also knowing when to give up and fall back to using
Fusion’s Tracker node or to manual keyframing. Some shots are simply not trackable, or
the resulting track suffers from too much jitter or drift. The Planar Tracker is a time-saving
node in the artist’s toolbox and, while it can track most shots, no tracker is a 100% solution.
— Track: Used to isolate a planar surface and track its movement over time. Then, you can create a
Planar Transform node that uses this data to match move another clip in various ways.
— Steady: After analyzing a planar surface, this mode removes all motion and distortions from the
planar surface, usually in preparation for some kind of paint or roto task, prior to “unsteadying”
the clip to add the motion back.
— Corner Pin: After analyzing a planar surface, this mode computes and applies a matching
perspective distortion to a foreground image you connect to the foreground input of the Planar
Tracker node, and merges it on top of the tracked footage.
— Stabilize: After analyzing a planar surface, allows smoothing of a clip’s translation, rotation, and
scale over time. Good for getting unwanted vibrations out of a clip while retaining the overall
camera motion that was intended.
Fusion’s Lens Distort node can be used to remove or add lens distortion in an image. Connecting
the MediaIn or Loader node to the Lens Distort node displays controls for manually correcting lens
distortion. If you use Synth Eyes, PFTrack or 3D Equalizer software, you can also import lens data from
those applications to make the adjustments more automatic.
For more information about using the Lens Distort node, see Chapter 61, “Warp Nodes,” in the Fusion
Reference Manual.
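To illustrate what such a correction is undoing, the sketch below applies a common polynomial radial-distortion model. This is a generic model shown for illustration only; the coefficients `k1` and `k2` are hypothetical and do not correspond to the Lens Distort node's actual controls or parameterization.

```python
# Illustrative sketch of polynomial radial lens distortion (a generic
# model, not the Lens Distort node's own parameterization).

def distort(x, y, k1, k2):
    """Apply radial distortion around the image center at (0, 0)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (negative k1) pulls points toward the center
xd, yd = distort(0.5, 0.0, -0.2, 0.0)
```

Lens removal inverts such a mapping so that straight lines in the scene become straight in the image before tracking or compositing.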
If you are using DaVinci Resolve, you can use the Lens Corrections control in the Cut page or
Edit page. This adjustment carries over into the Fusion page. Lens correction in DaVinci Resolve
automatically analyzes the frame in the Timeline viewer for edges that are being distorted by a wide
angle lens. Clicking the Analyze button moves the Distortion slider to provide an automatic correction.
From there, the MediaIn node in the Fusion page will have the correction applied, and you can begin
planar tracking.
3 Next, you’ll need to identify the specific pattern within the image that you want to track. In most
cases, this will probably be a rectangle, but any arbitrary closed polygon can be used. The pixels
enclosed by this region will serve as the pattern that will be searched for on other frames. Please
note that it is important that the pattern is drawn on the reference frame. In this example, we
want to track the wall behind the man, so we draw a polygon around part of the wall that the man
won’t pass over as he moves during the shot.
TIP: Do not confuse the pattern you’re identifying with the region you’re planning
to corner pin (which always has four corners and is separately specified in
Corner Pin mode).
5 If necessary, move the playhead back to the reference frame, which in this case was the first
frame. Then, click the Track To End button and wait for the track to complete.
As the clip tracks, you can see track markers and trails (if they’re enabled in the Options tab of
the Inspector) that let you see how much detail is contributing to the track, and the direction of
motion that’s being analyzed.
6 Once the track is complete, play through the clip to visually inspect the track so you can evaluate
how accurate it is. Does it stick to the surface? Switching to Steady mode can help here, as
scrubbing through the clip in Steady mode will help you immediately see unwanted motion in
the track.
7 Since we’re doing a match move, click the Create Planar Transform button to export a Planar
Transform node that will automatically transform either images or masks to follow the analyzed
motion of the plane you tracked.
In this case, the Planar Transform node will be inserted after a pair of Background and Paint nodes
that are being used to put some irritatingly trendy tech jargon graffiti on the wall.
The result is a seamless match move of the fake graffiti married to the wall in the original clip.
As a rule of thumb, the more pixels in the pattern, the better the quality of the track. In particular, this
means on the reference frame, the pattern to be tracked should:
— Be as large as possible.
— Be as much in frame as possible.
— Be as unoccluded as possible by any moving foreground objects.
— Be at its maximum size (e.g., when tracking an approaching road sign, it is good to pick a later
frame where it is 400 x 200 pixels rather than 80 x 40 pixels).
— Be relatively undistorted (e.g., when the camera orbits around a flat stop sign, it is better to pick
a frame where the sign is face on parallel to the camera rather than a frame where it is at a highly
oblique angle).
If the pattern contains too few pixels or not enough trackable features, this can cause problems
with the resulting track, such as jitter, wobble, and slippage. Sometimes dropping down to a simpler
motion type can help in this situation.
Additionally, the Fusion page of DaVinci Resolve provides access to all of the Resolve FX that come with
DaVinci Resolve.
Lastly, you can develop your own plugins without using a computer development environment by
scripting Fusion’s native Fuse plugins.
Contents
What Are Open FX? ............................................... 590
Fusion Fundamentals | Chapter 24 Using Open FX, Resolve FX, and Fuse Plugins 589
What Are Open FX?
Fusion is able to use compatible Open FX (OFX) plugins that are installed on your computer. Open FX
is an open standard for visual effects plugins. It allows plugins written to the standard to work on both
DaVinci Resolve and Fusion Studio as well as other applications that support the standard.
OFX plugins can be purchased and downloaded from third-party suppliers such as BorisFX, Red Giant,
and RE:Vision Effects. All OFX appear in the Open FX category of the Effects Library, alongside all other
effects that are available in Fusion.
After installing a set of OFX plugins, you can access them or Resolve FX plugins in Fusion by opening
the Open FX category in the Effects Library.
To add a plugin to the Node Editor, either click the Open FX or Resolve FX plugin name in the Effects
Library or drag and drop the plugin onto a connection line to insert it into the node tree. If the plugin
has editable settings, you can adjust these in the Inspector.
Introduction to Fuse Plugins
Fuses are plugins developed for Fusion using the built-in Lua scripting language. Being script-based,
Fuses are compiled on-the-fly in Fusion without the need of a computer programming environment.
While a Fuse may be slower than an identical Open FX plugin created using Fusion’s C++ SDK, a Fuse
will still take advantage of Fusion’s existing nodes and GPU acceleration.
To install a Fuse:
1 Use the .fuse extension at the end of the file name.
2 For DaVinci Resolve, save it in one of the following locations:
— On macOS: Macintosh HD/Users/username/Library/Application Support/Blackmagic Design/
DaVinci Resolve/Fusion/Fuses
— On Windows: C:\Users\username\AppData\Roaming\Blackmagic Design\DaVinci Resolve\
Support\Fusion\Fuses
— On Linux: /home/username/.local/share/DaVinciResolve/Fusion/Fuses
You can open and edit Fuses by selecting the Fuse node in the Node Editor and clicking the
Edit button at the top of the Inspector. The Fuse opens in the text editor specified in the Global
Preferences/Scripting panel.
TIP: Changes made to a Fuse in a text editor do not immediately propagate to other
instances of that Fuse in the composition. Reopening a composition updates all Fuses in
the composition based on the current saved version. Alternatively, you can click the Reload
button in the Inspector to update the selected node without closing and reopening the
composition.
3D
Compositing
PART 3 — CONTENTS
3D Compositing Basics
This chapter covers many of the nodes used for creating
3D composites, the tasks they perform, and how they can
be combined to produce effective 3D scenes.
Contents
An Overview of 3D Compositing ............................................... 594
Plane of Focus and Depth of Field ............................................... 615
Within the Fusion Node Editor, you have a GPU-accelerated 3D compositing environment that includes
support for imported geometry, point clouds, and particle systems for taking care of such things as:
Conveniently, at no point are you required to specify whether your overall composition is 2D or 3D,
because you can seamlessly combine any number of 2D and 3D “scenes” together to create a single
output. However, the nodes that create these scenes must be combined in specific ways for this to
work properly.
— One of the available geometry nodes (such as Text3D or Image Plane 3D)
— A light node (such as DirectionalLight or SpotLight)
— A camera node
— A Merge3D node
— A Renderer3D node
All of these nodes should be connected together as seen below, producing the more complex
3D scene shown in the viewer.
The same text, this time lit and framed using Text3D,
Camera, and SpotLight nodes to a Merge3D node
To briefly explain how this node tree works, the geometry node (in this case Text3D) creates an object
for the scene, and then the Merge3D node provides a virtual stage that combines the attached
geometry with the light and camera nodes to produce a lit and framed result with highlights and
shadows, while the aptly named Renderer3D node renders the resulting 3D scene to produce 2D
image output that can then be merged with other 2D images in your composition.
In fact, these nodes are so important that they appear at the right of the toolbar, enabling you to
quickly produce 3D scenes whenever you require. You might notice that the order of the 3D buttons
on the toolbar, from left to right, corresponds to the order in which these nodes are ordinarily used.
So, if you simply click on each one of these buttons from left to right, you cannot fail to create a
properly assembled 3D scene, ready to work on, as seen in the previous screenshot.
Geometry Nodes
You can add 3D geometry to a composition using the ImagePlane3D node, the Shape3D node,
the Cube3D node, the Text3D node, or optionally by importing a model via the FBX Mesh 3D node.
Furthermore, you can add particle geometry to scenes from pEmitter nodes. You can connect
these to a Merge3D node either singularly or in multiples to create sophisticated results combining
multiple elements.
Texturing Geometry
By themselves, geometry nodes consist only of a simple flat color. However, you can alter the look of 3D
geometry by texturing it using clips (either still images or movies), using material nodes such as the
Blinn and Phong nodes to create more sophisticated textures with combinations of 2D images and
environment maps, or you can use a preset shader from the Templates > Shader bin of the Effects
Library, which contains materials and texture presets that are ready to use.
If you’re working with simple geometric primitives, you can texture them by connecting either an
image (a still image or movie) or a shader from the Templates bin of the Effects Library directly to the
material input of a Shape3D, Cube3D, or other compatible node, as shown below.
If you’re shading or texturing Text3D nodes, you need to add a texture in a specific way since each
node is actually a scene with individual 3D objects (the characters) working together. In the following
example, the RustyMetal shader preset is applied to a Text3D node using the ReplaceMaterial3D
node. The interesting thing about the ReplaceMaterial3D node is that it textures every geometric
object within a scene at once, meaning that if you put a ReplaceMaterial3D node after a Text3D node,
you texture every character within that node. However, if you place a ReplaceMaterial3D node after
a Merge3D node, then you’ll end up changing the texture of every single geometric object being
combined within that Merge3D node, which is quite powerful.
You can build elaborate scenes using multiple Merge3D nodes connected together
This checkbox is disabled by default, which lets you light elements in one Merge3D scene without
worrying about how the lighting will affect geometry attached to other Merge3D nodes further
downstream. For example, you may want to apply a spotlight to brighten the wall of a building in one
Merge3D node without having that spotlight spill over onto the grass or pavement at the foot of the
wall modeled in another Merge3D node. In the example shown below, the left image shows how the
cone and torus connected to a downstream node remain unlit by the light in an upstream node with
Pass Through Lights disabled, while the right image shows how everything becomes lit when turning
Pass Through Lights on.
If you transform a Merge3D node that’s connected to other Merge3D nodes, what happens
depends on which node you’re transforming, an upstream node or the downstream node:
— If you transform a downstream Merge3D node, you also transform all upstream nodes connected
to it as if they were all a single scene.
— If you transform an upstream Merge3D node, this has no effect on downstream Merge3D nodes,
allowing you to make transforms specific to that particular node’s scene.
To be more specific, 3D nodes that output 3D scenes cannot be connected directly to inputs that
require 2D images. For example, the output of an ImagePlane3D node cannot be connected directly
to the input of a Blur node, nor can the output of a Merge3D node be directly connected to a regular
Merge node. First, a Renderer3D node must be placed at the end of your 3D scene to render it into 2D
images, which may then be composited and adjusted like any other 2D image in your composition.
The image produced by the Renderer3D can be any resolution with options for fields processing,
color depth, and pixel aspect.
Software Renderer
The software renderer is generally used to produce the final output. While the software renderer
is not the fastest method of rendering, it has twin advantages. First, the software renderer can
easily handle textures much larger than one half of your GPU’s maximum texture size, so if you’re
working with texture images larger than 8K you should choose the software renderer to obtain
maximum quality.
Second, the software renderer is required to enable the rendering of “constant” and “variable” soft
shadows with adjustable Spread, which is not supported by the OpenGL renderer. Soft shadows are
more natural, and they’re enabled in the Shadows parameters of the Controls tab of light nodes;
you can choose Sampling Quality and Softness type, and adjust Spread, Min Softness, and Filter
Size sliders. Additionally, the software renderer supports alpha channels in shadow maps, allowing
transparency to alter shadow density.
When the Renderer3D node “Renderer Type” drop-down is set to OpenGL Renderer, you cannot render
soft shadows or excessively large textures (left). When the Renderer3D node “Renderer Type” drop-
down is set to Software Renderer, you can render higher-quality textures and soft shadows (right).
On the other hand, because of its speed, the OpenGL renderer exposes additional controls for
Accumulation Effects that let you enable depth of field rendering for creating shallow-focus effects.
Unfortunately, you can’t have both soft shadow rendering and depth of field rendering, so you’ll need
to choose which is more important for any given 3D scene you render.
OpenGL UV Renderer
When you choose the OpenGL UV Renderer option, a Renderer3D node outputs an “unwrapped”
version of the textures applied to upstream objects, at the resolution specified within the Image tab of
that Renderer3D node.
This specially output image is used for baking out texture projections or materials to a texture map for
one of two reasons:
The UV renderer can also be used for retouching textures. You can combine multiple DSLR still shots
of a location, project all those onto the mesh, UV render it out, and then retouch the seams and apply
it back to the mesh.
You could project tracked footage of a road with cars on it, UV render out the projection from the
geometry, do a temporal median filter on the frames, and then map a “clean” roadway back down.
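The temporal median idea mentioned above can be sketched in a few lines: once the frames are aligned (for example, via a UV render), a per-pixel median over time rejects transient objects such as passing cars. This is an illustration of the concept, not a Fusion node.

```python
import numpy as np

# Illustrative sketch: per-pixel temporal median over aligned frames.
# A transient bright pixel (a "car") in one frame is rejected.

frames = np.stack([
    np.full((4, 4), 0.2),   # clean road
    np.full((4, 4), 0.2),
    np.full((4, 4), 0.2),
])
frames[1, 1, 1] = 1.0        # transient object in the middle frame
clean = np.median(frames, axis=0)  # median rejects the outlier
```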
The 3D Viewer
The interactive 3D Viewer is highly dependent on the computer’s graphics hardware, relying on
support from OpenGL. The amount of onboard memory, as well as the speed and features of your
workstation’s GPU, make a huge difference in the speed and capabilities of the 3D Viewer.
Displaying a node with a 3D output in any viewer will switch the display type to a 3D Viewer. Initially,
the contents of the scene will be displayed through a default perspective view.
To change the viewpoint, right-click in the viewer and choose the desired viewpoint from the ones
listed in the Camera submenu. A shortcut to the Camera submenu is to right-click on the axis label
displayed in the bottom corner of the viewer.
In addition to the usual Perspective, Front, Top, Left, and Right viewpoints, if there are cameras
and lights present in the scene as potential viewpoints, those are shown as well. It’s even possible
to display the scene from the viewpoint of a Merge3D or Transform3D by selecting it from the
contextual menu’s Camera > Other submenu. Being able to move around the scene and see it
from different viewpoints can help with the positioning, alignment, and lighting, as well as other
aspects of your composite.
Furthermore, selecting a 3D node in the Node Editor also selects the associated object in the
3D Viewer.
When a viewer is set to display the view of a camera or light, panning, zooming, or rotating the
viewer (seen at right) actually transforms the camera or light you’re viewing through (seen at left)
It is even possible to view the scene from the perspective of a Merge3D or Transform3D node by
selecting the object from the Camera > Others menu. The same transform techniques will then
move the position of the object. This can be helpful when you are trying to orient an object in a
certain direction.
Transparency Sorting
While generally the order of geometry in a 3D scene is determined by the Z-position of each object,
sorting every face of every object in a large scene can take an enormous amount of time. To provide
the best possible performance, a Fast Sorting mode is used in the OpenGL renderer and viewers.
This is set by right-clicking in the viewer and choosing Transparency > Z-buffer. While this approach
is much faster than a full sort, when objects in the scene are partially transparent it can also produce
incorrect results.
The Sorted (Accurate) mode can be used to perform a more accurate sort at the expense of
performance. This mode is selected from the Transparency menu of the viewer’s contextual menu.
The Renderer3D also presents a Transparency menu when the Renderer Type is set to OpenGL.
Sorted mode does not support shadows in OpenGL. The software renderer always uses the Sorted
(Accurate) method.
The basic rule is when a scene contains overlapping transparency, use the Full/Quick Sort modes,
and otherwise use the Z-buffer (Fast). If the Full Sort method is too slow, try switching back to
Z-buffer (Fast).
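The difference between the two approaches comes down to draw order: accurate transparency requires drawing objects (or faces) back-to-front so that alpha blending composites correctly, whereas a Z-buffer resolves only per-pixel depth without sorting. A minimal sketch of the back-to-front sort (illustrative only; the object list and field names here are hypothetical):

```python
# Illustrative sketch: Sorted (Accurate) transparency draws geometry
# back-to-front by depth so alpha blending composites correctly.

objects = [
    {"name": "glass_front", "z": 1.0, "alpha": 0.5},
    {"name": "glass_back",  "z": 3.0, "alpha": 0.5},
    {"name": "wall",        "z": 5.0, "alpha": 1.0},
]

# Farthest object first, nearest last
draw_order = sorted(objects, key=lambda o: o["z"], reverse=True)
```

Sorting every face this way is what makes the accurate mode slower than the Z-buffer approach on large scenes.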
Material Viewer
When you view a node that comes from the 3D > Material category of nodes in the Effects Library, the
viewer automatically switches to display a Material Viewer. This Material Viewer allows you to preview
the material applied to a lit 3D sphere rendered with OpenGL by default.
The type of geometry, the renderer, and the state of the lighting can all be set by right-clicking the
viewer and choosing options from the contextual menu. Each viewer supports A and B buffers to
assist with comparing multiple materials.
Transformations
Merge3D, 3D Objects, and Transform3D all have Transform parameters that are collected together
into a Transform tab in the Inspector. The parameters found in this tab affect how the object is
positioned, rotated, and scaled within the scene.
The Translation parameters are used to position the object in local space, the Rotation parameters
affect the object’s rotation around its own center, and the Scale slider(s) affect its size (depending
on whether or not they’re locked together). The same adjustments can be made in the viewer
using onscreen controls.
From left to right, the Position, Rotation, and Scale onscreen Transform controls
If the Scale’s Lock XYZ checkbox is enabled in the Inspector, only the overall scale of the object is
adjusted by dragging the red or center onscreen control, while the green and blue portions of the
onscreen control have no effect. If you unlock the parameters, you are able to scale an object along
individual axes separately to squish or stretch the object.
Selecting Objects
With the onscreen controls visible in the viewer, you can select any object by clicking on its center
control. Alternatively, you can also select any 3D object by clicking its node in the Node Editor.
Pivot
In 3D scenes, objects rotate and scale around an axis called a pivot. By default, this pivot goes through
the object’s center. If you want to move the pivot so it is offset from the center of the object, you can
use the X, Y, and Z Pivot parameters in the Inspector.
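The effect of an offset pivot can be sketched as "translate to the pivot, rotate, translate back." The following Python example shows the idea in 2D for clarity; it is an illustration of the concept, not Fusion's transform code:

```python
import math

def rotate_about_pivot(point, pivot, angle_deg):
    """Rotate a 2D point about an offset pivot: translate so the pivot is
    at the origin, rotate, then translate back. The X/Y/Z Pivot
    parameters express the same idea in 3D. Illustrative only."""
    a = math.radians(angle_deg)
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return rx + pivot[0], ry + pivot[1]

# Rotating (2, 0) by 90 degrees about pivot (1, 0) lands at
# approximately (1, 1), not at (0, 2) as an origin-centered
# rotation would.
print(rotate_about_pivot((2.0, 0.0), (1.0, 0.0), 90.0))
```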
4 Use the X/Y/Z Target Position controls in the Inspector or the Target onscreen control in the
viewer to position the target and in turn position the object it’s attached to.
In the viewer, a line is drawn between the target and the center of the 3D object it’s attached to, to
show the relationship between these two sets of controls. Whenever you move the target, the object
is automatically transformed to face its new position.
A light made to face the wall using its enabled target control
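Under the hood, a target amounts to deriving an orientation from the direction between two points. The hypothetical helper below sketches that idea with an assumed pan/tilt angle convention (object looking down -Z); the function name and convention are illustrative and not part of Fusion:

```python
import math

def look_at_angles(position, target):
    """Hypothetical helper: pan/tilt angles (degrees) that make an object
    at `position` face `target`, assuming the object's rest pose looks
    down -Z. A sketch of what a target control computes, not Fusion code."""
    dx, dy, dz = (t - p for p, t in zip(position, target))
    pan = math.degrees(math.atan2(dx, -dz))                   # about Y
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # about X
    return pan, tilt

# A light at the origin aimed at a wall one unit down -Z needs no rotation:
print(look_at_angles((0, 0, 0), (0, 0, -1)))   # (0.0, 0.0)

# Aimed at a point along +X, it must pan 90 degrees:
print(look_at_angles((0, 0, 0), (1, 0, 0)))
```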
Parenting
One of the many advantages of the node-based approach to 3D compositing is that parenting
between objects becomes implicit in the structure of a 3D node tree. The basis for all parenting is the
Merge3D node. If you’re careful about how you connect the different 3D objects you create for your
scene, you can use multiple Merge3D nodes to control which combinations of objects are transformed
and animated together, and which are transformed and animated separately.
For example, picture a scene with two spheres that are both connected to a Merge3D. The
Merge3D can be used to rotate one sphere around the other, like the moon around the earth. Then
the Merge3D can be connected to another Merge3D to create the earth and the moon orbiting
around the sun.
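The nested-orbit behavior of chained Merge3D nodes can be sketched numerically: each Merge3D applies its transform to everything connected upstream, so the inner orbit rides on the outer one. The Python below is a conceptual illustration, not Fusion API code:

```python
import math

def orbit(center, radius, angle_deg):
    """Position on a circular orbit around `center`. Each call stands in
    for one Merge3D's rotation applied to its upstream objects
    (conceptual sketch, not Fusion code)."""
    a = math.radians(angle_deg)
    return (center[0] + radius * math.cos(a),
            center[1] + radius * math.sin(a))

sun = (0.0, 0.0)
earth = orbit(sun, 10.0, 90.0)   # outer Merge3D: earth orbits the sun
moon = orbit(earth, 1.0, 0.0)    # inner Merge3D: moon orbits the earth

# The moon's position is defined relative to the earth, so animating the
# outer rotation carries the moon along automatically.
print(earth, moon)
```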
Cameras
When setting up and animating a 3D scene, the metaphor of a camera is one of the most
comprehensible ways of framing how you want that scene to be rendered out, as well as animating
your way through the scene. Additionally, compositing artists are frequently tasked with matching
cameras from live-action clips, or matching cameras from 3D applications.
To accommodate all these tasks, Fusion provides a flexible Camera3D node with common camera
controls such as Angle of View, Focal Length, Aperture, and Clipping planes, to either set up your own
camera or to import camera data from other applications. The Camera3D node is a virtual camera
through which the 3D environment can be viewed.
Cameras are typically connected and viewed via a Merge3D node; however, you can also connect
cameras upstream of other 3D objects if you want that camera to transform along with that object
when it moves.
The viewer’s frame may be different from the camera frame, so it may not match the true boundaries
of the image that will be rendered by the Renderer3D node. If there is no Renderer3D node added
to your scene yet, you can use Guides that represent the camera’s framing. For more information
about Guides, see Chapter 7, “Using Viewers,” in the Fusion Reference Manual.
Turning on “Enable Accumulation Effects” exposes a Depth of Field checkbox along with Quality and
Amount of DoF Blur sliders that let you adjust the depth of field effect. These controls affect only
the perceived quality of the depth of field that is rendered. The actual depth of field that’s generated
depends solely on the setup of the camera and its position relative to the other 3D objects in
your scene.
When you select your scene’s Camera3D node to view its controls in the Inspector, a new Focal Plane
checkbox appears in the Control Visibility group. Turning this on lets you see the green focal plane
indicator in the 3D Viewer that lets you visualize the effect of the Focal Plane slider, which is located in
the top group of parameters in the Camera3D node’s Controls tab.
For more information about these specific camera controls, see Chapter 29, “3D Nodes,” in the Fusion
Reference Manual.
Importing Cameras
If you want to match cameras between applications, you can import camera paths and positions
from a variety of popular 3D applications. Fusion is able to import animation splines from Maya and
XSI directly with their own native spline formats. Animation applied to cameras from 3ds Max and
LightWave is sampled and keyframed on each frame.
A dialog box with several options will appear. When the Force Sampling checkbox is enabled, Fusion
will sample each frame of the motion, regardless of the format.
TIP: When importing parented or rigged cameras, baking the camera animation in the 3D
application before importing it into Fusion often produces more reliable results.
NOTE: When lighting is disabled in either the viewer or final renders, the image will appear
to be lit by a 100% ambient light.
— Merge3D nodes have a Pass Through Lights checkbox that determines whether lights attached to
an upstream Merge3D node also illuminate objects attached to downstream Merge3D nodes.
— ImagePlane3D, Cube3D, Shape3D, Text3D, and FBXMesh3D nodes have a set of Lighting controls
that let you turn three controls on and off: Affected by Lights, Shadow Caster, and Shadow Receiver.
Ambient Light
You use ambient light to set a base light level, since it produces a general, uniform illumination
of the scene. Ambient light exists everywhere without appearing to come from any particular
source; it cannot cast shadows and will tend to fill in shadowed areas of a scene.
Directional Light
A directional light is composed of parallel rays that light up the entire scene from one direction,
creating a wall of light. The sun is an excellent example of a directional light source.
Point Light
A point light is a well defined light that has a small clear source, like a light bulb, and shines from that
point in all directions.
Spotlight
A spotlight is an advanced point light that produces a well defined cone of light with falloff. This is the
only light that produces shadows.
All of the Light nodes display onscreen controls in the viewer, although not all controls affect every
light type. In the case of the ambient light, the position has no effect on the results. The directional
light can be rotated, but position and scale will be ignored. The point light ignores rotation. Both
position and rotation apply to the spotlight.
Lighting Hierarchies
Lights normally do not pass through a Merge, since the Pass Through Lights checkbox is off by
default. This provides a mechanism for controlling which objects are lit by which lights. For example,
in the following two node trees, two shapes and an ambient light are combined with a Merge3D node,
which is then connected to another Merge3D node that’s also connected to a plane and a spotlight.
At the left, the first Merge3D node of this tree has Pass Through Lights disabled, so only the two
shapes are lit. At the right, Pass Through Lights has been enabled, so both the foreground shapes
and the background image plane receive lighting.
Lighting Options
Most nodes that generate geometry have additional options for lighting. These options are used to
determine how each individual object reacts to lights and shadows in the scene.
— Affected By Lights: If the Affected By Lights checkbox is enabled, lights in the scene
will affect the geometry.
— Shadow Caster: When enabled, the object will cast shadows on other objects in the scene.
— Shadow Receiver: If this checkbox is enabled, the object will receive shadows.
Shadows
The only light that can cast shadows is the spotlight. Spotlight nodes cast shadows by default,
although these shadows will not be visible in the viewer until shadows are enabled using the viewer
toolbar button. Shadows will not appear in the output of the Renderer3D unless the Shadows option is
enabled for that renderer. If you want to prevent a spotlight from casting shadows, you can disable the
Enable Shadows checkbox in the node’s Inspector.
For more information on shadow controls, see the “Spotlight” section of Chapter 90, “3D Light Nodes,”
in the DaVinci Resolve Reference Manual or Chapter 30 in the Fusion Reference Manual.
Shadow Maps
A shadow map is an internal depth map that specifies each pixel’s depth in the scene. This information
is used to assemble the shadow layer created from a spotlight. All the controls for the shadow map are
found in the Spotlight Inspector.
The quality of the shadow produced depends greatly on the size of the shadow map. Larger maps
generate better-looking shadows but will take longer to render. The wider the cone of the spotlight,
or the more falloff in the cone, the larger the shadow map will need to be to produce useful quality
results. Setting the value of the Shadow Map Size control sets the size of the depth map in pixels.
Generally, through trial and error, you’ll find a point of diminishing returns where increasing the size of
the shadow map no longer improves the quality of the shadow. It is not recommended to set the size
of the shadow maps any larger than they need to be.
The Shadow Map Proxy control is used to set a percentage by which the shadow map is scaled for
fast interactive previews, such as Autoproxy and LoQ renders. A value of .4, for example, represents a
40% proxy.
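The proxy percentage is simply a scale factor on the depth map resolution. A minimal sketch of that arithmetic (the function name is illustrative, not a Fusion control):

```python
def proxy_map_size(map_size, proxy):
    """Sketch: the Shadow Map Proxy scales the depth map for fast
    interactive previews; e.g., a proxy of 0.4 means the map is rendered
    at 40% of its full size. Illustrative only."""
    return max(1, int(round(map_size * proxy)))

print(proxy_map_size(1024, 0.4))   # a 1024-pixel map previews at 410 pixels
```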
Shadow Softness
By default, the spotlight generates shadows without soft edges, but there are options for constant
and variable soft shadows. Hard-edged shadows will render significantly faster than either of the Soft
Shadow options. Shadows without softness will generally appear aliased, unless the shadow map size
is large enough. In many cases, softness is used to hide the aliasing rather than increasing the shadow
map to preserve memory and avoid exceeding the graphics hardware capabilities.
Setting the spotlight’s shadow softness to None will render crisp and well-defined shadows.
The Constant option will generate shadows where the softness is uniform across the shadow,
regardless of the shadow’s distance from the casting geometry. The Variable option generates
shadows whose softness increases the farther the shadow falls from the casting geometry.
Selecting the Variable option reveals the Spread, Min Softness, and Filter Size sliders. A side
effect of the method used to produce variable softness shadows is that the size of the blur applied
to the shadow map can become effectively infinite as the shadow’s distance from the geometry
increases. These controls are used to limit the shadow map by clipping the softness calculation to a
reasonable limit.
The filter size determines where this limit is applied. Increasing the filter size increases the maximum
possible softness of the shadow. Making this smaller can reduce render times but may also limit the
softness of the shadow or potentially even clip it. The value is a percentage of the shadow map size.
For more information, see “Spotlight” in Chapter 90, “3D Light Nodes,” in the DaVinci Resolve
Reference Manual or Chapter 30 in the Fusion Reference Manual.
The goal is to adjust the Multiplicative Bias slider until the majority of the Z-fighting is resolved, and
then adjust the Additive Bias slider to eliminate the rest. The softer the shadow, the higher the bias
will probably have to be. You may even need to animate the bias to get a proper result for some
particularly troublesome frames.
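Conceptually, the two bias controls modify the depth comparison at the heart of shadow mapping: the stored depth is scaled by the multiplicative bias and offset by the additive bias before being compared against the surface depth. The Python sketch below illustrates the idea only; it is not Fusion's internal math:

```python
def in_shadow(surface_depth, map_depth, mult_bias=1.0, add_bias=0.0):
    """Sketch of a shadow-map depth test with bias: the stored map depth
    is scaled by the multiplicative bias and offset by the additive bias
    before comparison, pushing self-shadowing artifacts out of range.
    Illustrative only."""
    return surface_depth > map_depth * mult_bias + add_bias

# A surface sampling its own shadow map: depth noise makes it appear
# fractionally behind the stored depth, producing a false self-shadow.
print(in_shadow(5.0005, 5.0))                   # True (artifact)

# A small additive bias pushes the comparison past the noise.
print(in_shadow(5.0005, 5.0, add_bias=0.001))   # False (clean)
```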
Nodes that describe the geometry’s response to light are called illumination models. Blinn, Cook-
Torrance, Ward, and Phong are the included illumination models. These nodes are found in the 3D >
Material category of nodes in the Effects Library.
Most materials also accept textures, which are typically 2D images. Textures are used to refine the
look of an object further, by adding photorealistic details, transparency, or special effects. More
complex textures like bump maps, 3D textures, and reflection maps are also available in the 3D >
Texture category.
Materials can also be combined to produce elaborate and highly detailed composite materials.
Each node that creates or loads geometry into a 3D scene also assigns a default material. The default
material is the Blinn illumination model, but you can override this material using one of several nodes
that output a 3D material. Some of these materials provide a greater degree of control over how the
geometry reacts to light, providing inputs for diffuse and specular texture maps, bump mapping, and
environmental maps, which mimic reflection and refraction.
Material Components
All the standard illumination models share certain characteristics that must be understood.
Diffuse
The Diffuse parameters of a material control the appearance of an object where light is absorbed
or scattered. This diffuse color and texture are the base appearance of an object, before taking into
account reflections. The opacity of an object is generally set in the diffuse component of the material.
Alpha
The Alpha parameter defines how transparent the object is to diffuse light. It does not affect
specular levels or color. However, if the alpha value, whether set by the slider or supplied by a
Material input via the diffuse color, is at or very close to zero, those pixels, including the specular
highlights, are skipped and disappear.
Opacity
The Opacity parameter fades out the entire material, including the specular highlights. This value
cannot be mapped; it is applied to the entire material.
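The distinction between Alpha and Opacity can be sketched in a single-channel toy shading model. This is purely illustrative (not Fusion's shader); the point is which terms each control scales:

```python
def shade(diffuse, specular, alpha, opacity, eps=1e-3):
    """Sketch of how Alpha and Opacity differ: alpha scales only the
    diffuse contribution, while opacity fades the entire material,
    specular highlight included. At near-zero alpha the pixel is skipped
    entirely, highlight and all. Illustrative single-channel model,
    not Fusion's shader."""
    if alpha <= eps:
        return 0.0
    return (diffuse * alpha + specular) * opacity

print(shade(0.8, 0.4, 1.0, 1.0))   # full alpha, full opacity
print(shade(0.8, 0.4, 0.5, 1.0))   # alpha dims diffuse; specular survives
print(shade(0.8, 0.4, 1.0, 0.5))   # opacity fades everything, specular too
print(shade(0.8, 0.4, 0.0, 1.0))   # near-zero alpha: pixel skipped -> 0.0
```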
Specular
Specularity is made up of color, intensity, and exponent. The specular color determines the color of
light that reflects from a shiny surface. Specular intensity is how bright the highlight will be.
The specular exponent controls the falloff of the specular highlight. The larger the value, the
sharper the falloff and the smaller the specular component will be.
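The exponent's effect on falloff follows from the familiar Blinn-style specular term, intensity × (N·H)^exponent. The sketch below illustrates that relationship; it is not Fusion's shader code:

```python
def blinn_specular(n_dot_h, exponent, intensity=1.0):
    """Specular falloff sketch: intensity * (N.H)^exponent. A larger
    exponent narrows the highlight and sharpens its falloff.
    Illustrative only, not Fusion's shader."""
    return intensity * max(0.0, n_dot_h) ** exponent

# Slightly off the highlight peak, a high exponent falls off much faster:
print(blinn_specular(0.9, 8))    # broad highlight, still fairly bright
print(blinn_specular(0.9, 64))   # sharp highlight, nearly gone here
```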
Transmittance
When using the software renderer, the Transmittance parameters control how light passes through a
semi-transparent material. For example, a solid blue pitcher will cast a black shadow, but one made of
translucent blue plastic would cast a much lower density blue shadow. The transmittance parameters
are essential to creating the appearance of stained glass.
Transmissive surfaces can be further limited using the Alpha Detail and Color Detail controls.
Attenuation
The transmittance color determines how much color is passed through the object. For an object to
have fully transmissive shadows, the transmittance color must be set to RGB = (1, 1, 1), which means
100% of the red, green, and blue light passes through the object. Setting this color to RGB = (1, 0, 0)
means that the material will transmit 100% of the red light arriving at the surface but none of the
green or blue light.
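This per-channel attenuation is just a component-wise multiply of the light color by the transmittance color, as the sketch below shows (illustrative only):

```python
def transmitted_shadow(light_rgb, transmittance_rgb):
    """Per-channel attenuation sketch: the transmittance color scales how
    much of each light channel passes through the object into its shadow.
    Illustrative only."""
    return tuple(l * t for l, t in zip(light_rgb, transmittance_rgb))

white = (1.0, 1.0, 1.0)
print(transmitted_shadow(white, (1.0, 1.0, 1.0)))  # fully transmissive
print(transmitted_shadow(white, (1.0, 0.0, 0.0)))  # only red passes
```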
Alpha Detail
When the Alpha Detail slider is set to 0, the non-zero portions of the alpha channel of the diffuse color
are ignored, and the opaque portions of the object cast a shadow. If it is set to 1, the alpha channel
determines the density of the shadow the object casts.
NOTE: The OpenGL renderer ignores alpha channels for shadow rendering, resulting in
a shadow always being cast from the entire object. Only the software renderer supports
alpha in the shadow maps.
The following examples for Alpha Detail and Color Detail cast a shadow using this image. It is a
green-red gradient from left to right. The outside edges are transparent, and inside is a small semi-
transparent circle.
Alpha Detail set to 1; the alpha channel determines the density of the shadow
Color Detail
Color Detail is used to color the shadow with the object’s diffuse color. Increasing the Color Detail
slider from 0 to 1 brings in more diffuse color and texture into the shadow.
TIP: The OpenGL renderer will always cast a black shadow from the entire object, ignoring
the color. Only the software renderer supports color in the shadow maps.
Saturation
Saturation will allow the diffuse color texture to be used to define the density of the shadow without
affecting the color. This slider lets you blend between the full color and luminance only.
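The Saturation blend can be sketched as a linear interpolation between the diffuse color and its luminance. The example below assumes Rec. 709 luma weights for illustration; Fusion's exact weighting may differ:

```python
def shadow_color(diffuse_rgb, saturation):
    """Sketch of the Saturation slider: blend the diffuse color used for
    the shadow toward its luminance. Rec. 709 luma weights are an
    assumption for illustration, not necessarily Fusion's."""
    r, g, b = diffuse_rgb
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(lum + (c - lum) * saturation for c in diffuse_rgb)

red = (1.0, 0.0, 0.0)
print(shadow_color(red, 1.0))  # full color shadow
print(shadow_color(red, 0.0))  # luminance only: gray shadow density
```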
Illumination models left to right: Standard, Blinn, Phong, Cook-Torrance, and Ward
Standard
The Standard material provides a default Blinn material with basic control over the diffuse, specular,
and transmittance components. It only accepts a single texture map for the diffuse component with
the alpha used for opacity. The Standard Material controls are found in the Material tab of all nodes
that load or create geometry. Connecting any node that outputs a material to that node’s Material
Input will override the Standard material, and the controls in the Material tab will be hidden.
Blinn
The Blinn material is a general purpose material that is flexible enough to represent both metallic
and dielectric surfaces. It uses the same illumination model as the Standard material, but the Blinn
material allows for a greater degree of control by providing additional texture inputs for the specular
color, intensity, and exponent (falloff), as well as bump map textures.
Phong
The Phong material produces the same diffuse result as Blinn, but with wider specular highlights at
grazing incidence. Phong is also able to make sharper specular highlights at high exponent levels.
Cook-Torrance
The Cook-Torrance material combines the diffuse illumination model of the Blinn material with a
combined microfacet and Fresnel specular model. The microfacets need not be present in the mesh
or bump map; they are represented by a statistical function, Roughness, which can be mapped.
The Fresnel factor attenuates the specular highlight according to the Refractive Index, which can
be mapped.
Ward
The Ward material shares the same diffuse model as the others but adds anisotropic highlights,
ideal for simulating brushed metal or woven surfaces, as the highlight can be elongated in the U or V
directions of the mapping coordinates. Both the U and V spread functions are mappable.
This material does require properly structured UV coordinates on the meshes it is applied to.
TIP: UV Mapping is the method used to wrap a 2D image texture onto 3D geometry. Similar
to X and Y coordinates in a frame, U and V are the coordinates for textures on 3D objects.
Texture maps are used to modify various material inputs, such as diffuse color, specular color,
specular exponent, specular intensity, bump map, and others. The most common use of texture
maps is for the diffuse color/opacity component.
A node that outputs a material is frequently used, instead of an image, to provide other shading
options. Materials passed between nodes are RGBA samples; they contain no other information
about the shading or textures that produced them.
For instance, if you want to combine an anisotropic highlight with a Blinn material, you can take the
output of the Blinn, including its specular, and use it as the diffuse color of the Ward material. Or,
if you do not want the output of the Blinn to be relit by the Ward material, you can use the Channel
Boolean material to add the Ward material’s anisotropic specular component to the Blinn material with
a greater degree of control.
The reflections and refractions use an environment mapping technique to produce an approximation
that balances realistic results with greater rendering performance. Environment maps assume an
object’s environment is infinitely distant from the object and rendered into a cubic or spherical texture
surrounding the object.
The Nodes > 3D > Texture > Cube Map and Sphere Map nodes can be used to help create
environment maps, applying special processing and transforms to create the cubic or spherical
coordinates needed.
The Reflect node outputs a material that can be applied to an object directly, but the material does
not contain an illumination model. As a result, objects textured directly by the Reflect node will not
respond to lights in the scene. For this reason, the Reflect node is usually combined with the Blinn,
Cook-Torrance, Phong, or Ward nodes.
Reflection
The Reflect node outputs a material, making it possible to apply the reflection or refraction to other
materials either before or after the lighting model, with different effects.
Refraction
Refraction occurs only where there is transparency in the background material, which is generally
controlled through the Opacity slider and/or the alpha channel of any material or texture used for the
Background Material Texture input. The Reflect node provides the following material inputs:
— Background Material: Defines both the opacity for refraction and the base color for reflection.
— Reflection Color Material: The environment reflection.
— Reflection Intensity Material: A multiplier for the reflection.
— Refraction Tint Material: The environment refraction.
— Bump Map Texture: Normal perturbing map for environment reflection/refraction vectors.
Working with reflection and refraction can be tricky. Here are some techniques to make it easier:
— Typically, use a small amount of reflection, between 0.1 and 0.3 strength. Higher values are used
for surfaces like chrome.
— Bump maps can add detail to the reflections/refractions. Use the same bump map in the
Illumination model shader that you combine with Reflect.
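The "small amount of reflection" guideline amounts to blending a little of the environment sample over the base material. The sketch below illustrates the mix; the function and its parameters are illustrative, and Fusion's Reflect node exposes richer per-input control:

```python
def reflect_mix(base_rgb, env_rgb, strength):
    """Sketch: blend the environment reflection over the base material by
    a reflection strength (typical values 0.1-0.3; near 1.0 for chrome).
    Illustrative only."""
    return tuple(b * (1.0 - strength) + e * strength
                 for b, e in zip(base_rgb, env_rgb))

base = (0.2, 0.2, 0.8)   # bluish surface
env = (1.0, 1.0, 1.0)    # bright environment sample
print(reflect_mix(base, env, 0.2))   # subtle, believable reflection
print(reflect_mix(base, env, 1.0))   # pure environment: chrome-like
```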
Bump Maps
Bump mapping helps add details and small irregularities to the surface appearance of an object.
Bump mapping does not modify the geometry of the object or change its silhouette.
To apply a bump map, you typically connect an image containing the bump information to the
BumpMap node. The bump map is then connected to the Bump input of a Material node. There
are two ways to create a bump map for a 3D material: a height map and a bump map.
TIP: Normals are generated by 3D modeling and animation software as a way to trick the
eye into seeing smooth surfaces, even though the geometry used to create the models uses
only triangles to build the objects.
If you were to connect a height or bump image directly to the bump map input of a material, it would
result in incorrect lighting; Fusion prevents you from doing this, however, because it uses a different
coordinate system for the lighting calculation. You must first use a BumpMap node, which expects a
packed bump map or height map and performs the conversion needed for the bump map to work correctly.
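What a height-map conversion does can be sketched as deriving a tangent-space normal from the height gradients and packing it from [-1, 1] into [0, 1] RGB. The Python below is a conceptual illustration, not Fusion's exact math:

```python
import math

def height_to_normal(h, x, y, scale=1.0):
    """Sketch of a BumpMap-style conversion: derive a tangent-space
    normal from height-map gradients via finite differences, then pack
    it from [-1, 1] into [0, 1] RGB. Not Fusion's exact math."""
    dx = (h[y][x + 1] - h[y][x - 1]) * 0.5 * scale
    dy = (h[y + 1][x] - h[y - 1][x]) * 0.5 * scale
    nx, ny, nz = -dx, -dy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length
    return ((nx + 1) / 2, (ny + 1) / 2, (nz + 1) / 2)

# A perfectly flat height map yields the "straight up" packed normal:
flat = [[0.5] * 3 for _ in range(3)]
print(height_to_normal(flat, 1, 1))   # (0.5, 0.5, 1.0)
```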
If your bump mapping doesn’t appear correct, here are a few things to look for:
— Make sure you have the nodes connected correctly. The height/bump map should connect into a
BumpMap and then, in turn, should connect into the bump map input on a material.
— Change the precision of the height map to get less banding in the normals. For low frequency
images, float32 may be needed.
— Adjust the Height scale on the BumpMap. This scales the overall effect of the bump map.
— Make sure you set the type to HeightMap or BumpMap to match the image input. Fusion cannot
detect which type of image you have.
— Check to ensure High Quality is on (right-click in the transport controls bar and choose High
Quality from the contextual menu). Some nodes like Text+ produce an anti-aliased version in High
Quality mode that will substantially improve bump map quality.
— If you are using an imported normal map image, make sure it is packed [0–1] in RGB and that it is
in tangent space. The packing can be done in Fusion, but the conversion to tangent space cannot.
Projection Mapping
Projection is a technique for texturing objects using a camera or projector node. This can be useful for
texturing objects with multiple layers, applying a texture across multiple separate objects, projecting
background shots from the camera’s viewpoint, image-based rendering techniques, and much more.
Textures are assigned to the object like any other texturing technique. The UVs can be locked to the
vertices at a chosen frame using the Ref Time slider. This locking only works as long as vertices are not
created, destroyed, or reordered (e.g., projection locking will not work on particles because they get
created and destroyed, nor will it work on a Cube3D when its subdivision level slider is animated).
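Projection mapping ultimately assigns UVs by perspective-projecting each vertex into the projector's image plane. The sketch below illustrates that idea with an assumed camera convention (projector at the origin looking down -Z); the names and conventions are assumptions, not Fusion internals:

```python
import math

def project_uv(point, fov_deg=45.0, aspect=16 / 9):
    """Sketch of the projection behind projection mapping: a world-space
    point is perspective-projected into the image plane of a projector
    at the origin looking down -Z, giving UV texture coordinates.
    Conventions are assumptions, not Fusion's internals."""
    x, y, z = point
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    u = (f / aspect) * x / -z * 0.5 + 0.5
    v = f * y / -z * 0.5 + 0.5
    return u, v

# A point on the projector's optical axis maps to the texture center:
print(project_uv((0.0, 0.0, -5.0)))   # (0.5, 0.5)
```

Locking the UVs at a reference time amounts to evaluating this projection once for each vertex and storing the result, which is why it breaks when vertices are created, destroyed, or reordered afterward.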
TIP: Projected textures can be allowed to slide across an object if the object moves relative
to the Projector 3D. Alternatively, by grouping the two together with a Merge3D, they can
be moved as one and the texture will remain locked to the object.
In the following section of a much larger composition, an image (the Loader1 node) is projected
into 3D space by mapping it onto five planes (Shape3D nodes renamed ground, LeftWall, RightWall,
Building, and Background), which are positioned as necessary within a Merge3D node to apply
reflections onto a 3D car to be composited into that scene.
Five planes positioning a street scene in 3D space in preparation for UV Projection (left), and the UV
Map node being used to project these planes so they appear as if seen through a camera in the scene (right)
However, this is now a 3D scene, ready for a digital car to be placed within it, receiving reflections and
lighting and casting shadows into the scene as if it were there.
Geometry
There are five nodes used for creating geometry in Fusion. These nodes can be used for a variety of
purposes. For instance, the Image Plane 3D is primarily used to place image clips into a 3D scene,
while the Shape 3D node can add additional building elements to a 3D set, and Text 3D can add
three-dimensional motion graphics for title sequences and commercials.
more detail in the “3D Nodes” chapter, a summary of the 3D creation nodes is provided below.
Cube 3D
The Cube 3D creates a cube with six inputs that allow mapping of different textures to each of the
cube’s faces.
Image Plane 3D
The Image Plane 3D is the basic node used to place a 2D image into a 3D scene with an
automatically scaled plane.
Text 3D
The Text 3D is a 3D version of the Text+ node. This version supports beveling and extrusion but
does not have support for the multi-layered shading model available from Text+.
Particles
When a pRender node is connected to a 3D view, it will export its particles into the 3D
environment. The particles are then rendered using the Renderer3D instead of the Particle
renderer. For more information, see Chapter 52, “Particle Nodes,” in the Fusion Reference Manual.
Visible
If the Visibility checkbox is not selected, the object will not be visible in a viewer, nor will it be
rendered into the output image by a Renderer3D. A non-visible object does not cast shadows.
This is usually enabled by default, so objects that you create are visible in both the viewers and
final renders.
Unseen by Cameras
If the Unseen by Cameras checkbox is selected, the object will be visible in the viewers but
invisible when viewing the scene through a camera, so the object will not be rendered into the
output image by a Renderer3D. Shadows cast by an Unseen object will still be visible.
You can use the Size slider in the FBX Mesh Inspector parameters to reduce the scale of such files to
something that matches Fusion’s 3D scene.
FBX Exporter
You can export a 3D scene from Fusion to other 3D packages using the FBX Exporter node. On render,
it saves geometry, cameras, lights, and animation into different file formats such as .dae or .fbx. The
animation data can be included in one file, or it can be baked into sequential frames. Textures and
materials cannot be exported.
Using Text3D
The Text3D node is probably the most ubiquitous node employed by motion graphics artists looking
to create titles and graphics in Fusion. It’s a powerful node filled with enough controls to create
nearly any text effect you might need, all in three dimensions. This section seeks to get you started
quickly with what the Text3D node is capable of. For more information, see Chapter 29, “3D Nodes,”
in the Fusion Reference Manual.
TIP: If you click the Text icon in the toolbar to create a Text3D node, and then you click it
again while the Text3D node you just created is selected, a Merge3D node is automatically
created and selected to connect the two. If you keep clicking the Text icon, more Text3D
nodes will be added to the same selected Merge3D node.
Near the bottom of the Text tab are the Extrusion parameters, available within a
disclosure control.
By default, all text created with the Text3D node is flat, but you can use the Extrusion Style,
Extrusion Depth, and various Bevel parameters to give your text objects thickness.
Additionally, selecting a Text3D node exposes all the onscreen transform controls discussed
elsewhere in this chapter. Using these controls, you can position and animate each text object
independently.
Combining Text3D nodes using Merge3D nodes doesn’t just create a scene; it also enables you to
transform your text objects either singly or in groups:
— Selecting an individual Text3D node or piece of text in the viewer lets you move that one text
object around by itself, independently of other objects in the scene.
— Selecting a Merge3D node exposes a transform control that affects all objects connected to that
Merge3D node at once, letting you transform the entire scene.
“Sub” Transforms
Another Transform tab (which the documentation has dubbed the “Sub” Transform tab) lets you apply
a separate level of transform to either characters, words, or lines of text, which lets you create even
more layout variations. For example, choosing to Transform by Words lets you change the spacing
between words, rotate each word, and so on. You can apply simultaneous transforms to characters,
words, and lines, so you can use all these capabilities at once if you really need to go for it. And, of
course, all these parameters are animatable.
Shading
The Shading tab lets you shade or texture a text object using standard Material controls.
The Fog3D node works well with depth of field and antialiasing supported by the OpenGL renderer.
Since it is not a post-processing node (like the VolumeFog node found in the Nodes > Position menu
or Fog node in Nodes > Deep Pixel), it does not need additional channels like Position or Z-channel
color. Furthermore, it supports transparent objects.
The SoftClip node uses the distance of a pixel from the viewpoint to affect opacity, allowing objects
to gradually fade away when too close to the camera. This prevents objects from “popping off”
should the camera pass through them. This is especially useful with particles that the camera may be
passing through.
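The distance-based fade can be sketched as a clamped ramp: opacity is 0 inside a near distance and ramps up to 1 over a fade range. The parameter names below are illustrative, not Fusion's actual SoftClip controls:

```python
def soft_clip_opacity(distance, near=0.5, fade=1.0):
    """Sketch of SoftClip-style fading: opacity ramps from 0 at `near`
    up to 1 once the pixel is `fade` units past it, so geometry fades
    out instead of popping as the camera passes through. Parameter
    names are illustrative, not Fusion's actual controls."""
    t = (distance - near) / fade
    return max(0.0, min(1.0, t))

print(soft_clip_opacity(0.4))   # inside the near range: fully faded (0.0)
print(soft_clip_opacity(1.0))   # halfway through the fade: 0.5
print(soft_clip_opacity(3.0))   # far enough away: fully opaque (1.0)
```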
Geometry nodes such as the Shape3D node use a Matte Objects checkbox to enable masking out
parts of the 3D scene. Effectively, everything that falls behind a matte object doesn’t get rendered.
However, matte objects can contribute information into the Z-channel and the Object ID channel,
leaving all other channels at their default values. They do not remove or change any geometry; they
can be thought of as a 3D garbage matte for the renderer.
Is Matte
Located in the Controls tab for the geometry, this is the main checkbox for matte objects. When
enabled, objects whose pixels fall behind the matte object’s pixels in Z do not get rendered.
Opaque Alpha
When the Is Matte checkbox is enabled, the Opaque Alpha checkbox is displayed. Enabling this
checkbox sets the alpha value of the matte object to 1. Otherwise the alpha, like the RGB, will be 0.
Infinite Z
When the Is Matte checkbox is enabled, the Infinite Z checkbox is displayed. Enabling this
checkbox sets the value in the Z-channel to infinite. Otherwise, the mesh will contribute normally
to the Z-channel.
Matte objects cannot be selected in the viewer unless you right-click in the viewer and choose
3D Options > Show Matte Objects in the contextual menu. However, it’s always possible to select
the matte object by selecting its node in the node tree.
The Material ID is a value assigned to identify what material is used on an object. The Object ID is
roughly comparable to the Material ID, except it identifies objects and not materials.
Both the Object ID and Material ID are assigned automatically in numerical order, beginning with 1. It
is possible to set the IDs to the same value for multiple objects or materials even if they are different.
Override 3D offers an easy way to change the IDs for several objects. The Renderer will write the
assigned values into the frame buffers during rendering, when the output channel options for these
buffers are enabled. It is possible to use a value range from 0 to 65534. Empty pixels have an ID of 0,
so although it is possible to assign a value of 0 manually to an object or material, it is not advisable
because a value of 0 tells Fusion to set an unused ID when it renders.
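The automatic numbering described above can be pictured as a simple counter. This Python sketch is illustrative only (the function and object names are hypothetical), but it captures the rules: IDs start at 1, 0 is reserved for empty pixels, and the usable range ends at 65534:

```python
MAX_ID = 65534  # valid ID range is 0-65534; 0 marks empty (unused) pixels

def assign_ids(objects):
    """Assign Object IDs automatically in numerical order, starting at 1,
    mirroring the behavior described above (conceptual sketch only)."""
    ids = {}
    next_id = 1
    for name in objects:
        if next_id > MAX_ID:
            raise ValueError("ID range exhausted")
        ids[name] = next_id
        next_id += 1
    return ids

print(assign_ids(["Shape3D", "Text3D", "ImagePlane3D"]))
# {'Shape3D': 1, 'Text3D': 2, 'ImagePlane3D': 3}
```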
3D Scene Input
Nodes that utilize the World Position channel are located under the Position category. VolumeFog
and Z to WorldPos require a camera input matching the camera that rendered the Position channels;
this Scene input can be connected to either a Camera3D or a 3D scene containing a camera. Just as
in the Renderer3D, you can choose which camera to use if more than one is in the scene. The
VolumeFog can render without a camera input from the Node Editor if the world space Camera
Position inputs are set to the correct value. VolumeMask does not use a camera input.
There are three Position nodes that can take advantage of World Position Pass data.
Empty regions of the render will have the Position channel incorrectly initialized to (0,0,0). To get the
correct Position data, add a bounding sphere or box to your scene to create distant values and allow
the Position nodes to render correctly.
Without a bounding mesh to generate Position values, the fog fills in the background incorrectly
The Point Cloud node can import point clouds written into scene files from match moving or 3D
scanning software.
The entire point cloud is imported as one object, which is significantly faster than importing each point individually.
If a point that matches the name you entered is found, it will be selected in the point cloud and
highlighted yellow.
TIP: The Point Cloud Find function is a case-sensitive search. A point named “tracker15” will
not be found if the search is for “Tracker15”.
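In other words, the Find function requires an exact-case match, which the following snippet mimics (a conceptual sketch, not Fusion's API):

```python
def find_point(points, query):
    """Case-sensitive Find, as in the Point Cloud node's search:
    only an exact-case name match is returned (illustrative sketch)."""
    return [p for p in points if p == query]

points = ["tracker15", "Tracker15", "tracker16"]
print(find_point(points, "Tracker15"))  # ['Tracker15'] -- exact case only
```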
3D Camera Tracking
This chapter presents an overview of using the Camera Tracker
node and the workflow it involves. Camera tracking is used to
create a virtual camera in Fusion’s 3D environment based on the
movement of a live-action camera in a clip. You can then use the
virtual camera to composite 3D models, text, or 2D images into a
live‑action clip that has a moving camera.
For more information on other types of tracking in Fusion, see Chapter 22, “Using the Tracker Node,”
in the Fusion Reference Manual.
Contents
Introduction to Tracking ................................................ 650
How Camera Tracking Works ............................................... 651
The Camera Tracking Workflow ............................................ 651
Clips That Don't Work Well for Camera Tracking .......................... 652
Outputting from the Camera Tracker ...................................... 653
2D View ................................................................. 653
3D View ................................................................. 653
Auto-Tracking in the Camera Tracker ..................................... 655
Increasing Auto-Generated Tracking Points ............................... 655
Matching the Live-Action Camera ......................................... 658
How Do You Know When to Stop? ........................................... 659
Using Seed Frames ....................................................... 660
Cleaning Up Camera Solves ............................................... 661
Exporting a 3D Scene for Efficiency ..................................... 664
Unalign the 3D Scene Transforms ......................................... 664
Setting the Ground Plane ................................................ 664
Setting the Origin ...................................................... 665
Setting the Scale ....................................................... 666
Realign the Scene ....................................................... 666
Each tracker type has its own chapter in this manual. This chapter covers the tracking techniques with
the Camera Tracker node.
The Camera Tracker’s purpose is to create a 3D animated camera and point cloud of the scene. A point
cloud is a large group of points generated by the solver that roughly recreates the 3D positions of the
tracked features in a scene. The point cloud can then be used as a guide when integrating other 2D or
3D elements alongside live-action features.
Once you complete these steps, an animated camera and point cloud are exported from the Inspector
into a 3D composite. The Camera Tracker encompasses this complete workflow within one tool. Five
tabs at the top of the Inspector are roughly laid out in the order in which you’ll use them. These five
tabs are:
— Lack of depth: Camera tracking requires parallax in a clip in order to work. You must be able to
identify objects further away and objects that are nearer as the camera moves. If everything is at
the same distance from the camera, there is no way to calculate depth. In this case, it’s better to
skip the Camera Tracker node and find another solution.
— Locked-off shots: If the camera does not move, there is no way to calculate which objects are
closer and which are farther away. Again, don't spend too much time in this situation; it is better to skip
the Camera Tracker node and find another solution.
— Tripod pans: Similar to a locked-off shot, there is no way to calculate which objects are closer
and which are farther away from a pan that remains centered on a locked-off tripod. Skip the Camera
Tracker node and find another solution.
— No detail: Clips like green screens without tracking markers lack enough detail to track. If you
are lucky enough to be involved in the shooting of these types of shots, including tracker markers
makes it much easier to get a good track. Without detail, camera tracking will fail and you will need
to find a more manual solution.
— Motion blur: Fast camera motion or slow shutter speeds can introduce motion blur, which will
make it difficult to find patterns to track. It’s worth trying shots like these to see if there are
enough details to get a good solve, but know when to give up and turn to another solution.
— Rolling shutter: CMOS-based cameras sometimes introduce distortion due to the shutter
capturing different lines at slightly different times. This distortion can create significant problems
for camera tracking. Sometimes it is possible to create motion vectors with the Optical Flow node
to create new in-between frames without the wobble distortion of the rolling shutter. Then you
can use the corrected image to connect to the Camera Tracker.
— Parallax issues: When objects at different distances in a shot overlap in the frame, the
overlapping area can be misinterpreted as a corner. Having a tracker assigned to an overlapping
angle like this will cause errors as the parallax starts to shift and the overlapping area slides. This
can be solved in Fusion by removing that tracker before running the solver.
— Moving objects: It’s difficult to capture a shot where objects in the clip do not move. People, cars,
animals, or other objects may move in and out of a shot. These objects move independently of the
camera movement and must be eliminated or they will cause solving errors. You can fix these
issues by masking out objects that are not “nailed to the set.” The masks are then connected to
the Track Mask input on the Camera Tracker node.
TIP: Some shots that cannot be tracked using Fusion’s Camera Tracker can be performed
in dedicated 3D camera-tracking software like 3D Equalizer and PF Track. Camera tracking
data from these applications can then be imported into the Camera3D node in Fusion.
— The primary output is a 2D view used when you are setting up the Track, refining the camera, and
performing your initial solve.
— There is also a 3D output used after your initial solve for viewing the camera path and point cloud
in 3D space. This view can be helpful when you are refining tracks to increase the accuracy of
the solve and aligning your ground plane. It can be used simultaneously with the 2D output in
side-by-side views.
Note that the selection of tracks in the 2D view and their corresponding locators (in the point cloud) in
the 3D view are synchronized. There are also viewer menus available in both the 2D and 3D views to
give quick control of the functionality of this tool.
2D View
The 2D view is the primary display for the node. Viewing the node displays the image being tracked
as well as overlay tracker markers and their motion paths. A dedicated toolbar gives you access to the
common features used to track and solve a clip.
3D View
The second output of the Camera Tracker node displays a 3D scene. To view this, connect this 3D
output to a 3D Transform or Merge 3D node and view that tool.
After an initial solve, the 3D output displays the point cloud and the camera, along with the image
connected to it. Selecting points displays the Camera Tracker toolbar above the viewer, which gives
control of various functions, such as renaming, deleting, and changing the colors of points in the
point cloud.
— Optical Flow: Usually your best choice, unless you have a great deal of criss-crossing
objects in a clip.
— Tracker: A good second choice when Optical Flow can’t be used due to motion estimation errors
like criss-crossing objects.
— Planar: Mostly used in simpler clips, where the majority of the image consists of planar surfaces
such as the facades of buildings.
The primary way of avoiding these problem areas is by masking. You connect a mask to the Camera
Tracker node’s Track Mask input to identify areas of a scene that the Camera Tracker can analyze. For
example, if you have a clip of an airport runway along a shoreline, the waves of the water and moving
clouds in the sky must be masked since they move independently of the camera.
When creating a mask, the fixed areas of the image to be analyzed for tracking should be
encompassed in the white portion of the mask. All moving objects that need to be ignored should be
encompassed in the black portion. The mask should then be attached to the Camera Tracker Track
Mask input.
By doing this, the tracker ignores the waves of the water and moving clouds. Unlike drawing a mask
for an effect, the mask in this case does not have to be perfect. You are just trying to identify the
rough area to occlude from the tracking analysis.
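Conceptually, the tracker consults the mask value at each candidate point: white (1) means analyze, black (0) means ignore. Here is a minimal Python sketch of that rule (the names and the 0/1 mask representation are assumptions for illustration, not how Fusion stores masks):

```python
def keep_track_point(x, y, mask):
    """Accept a candidate tracking point only where the occlusion mask
    is white (1); black (0) areas -- such as waves or moving clouds --
    are excluded from analysis. 'mask' is a 2D list of 0/1 values
    (conceptual sketch only)."""
    return mask[y][x] >= 0.5

mask = [
    [0, 0, 1, 1],   # moving sky (black) on the left, runway (white) on the right
    [0, 1, 1, 1],
]
print(keep_track_point(3, 0, mask), keep_track_point(0, 1, mask))  # True False
```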
The original image to be tracked (left), and the occlusion mask of the clouds and water (right)
TIP: If there’s a lot of motion in a shot, you can use the Tracker or Planar Tracker nodes
to make your occlusion mask follow the area you want to track. Just remember that, after
using the PlanarTracker or PlanarTransform node to transform your mask, you need to use
a Bitmap node to turn it back into a mask that can be connected to the Camera Tracker
node’s Track Mask input.
If the actual values are not known, try a best guess. The solver attempts to find a camera near these
parameters, and it helps the solver by giving parameters as close to the live action as possible. The
more accurate the information you provide, the more accurate the solver calculation. At a minimum,
try to at least choose the correct camera model from the Film Gate menu. If the film gate is incorrect,
the chances that the Camera Tracker correctly calculates the lens focal length become very low.
Unlike the Track and Solve tabs, the Camera tab does not include a button at the top of the Inspector
that executes the process. There is no process to perform on the Camera tab once you configure the
camera settings. After you set the camera settings to match the live-action camera, you move to the
Solve tab.
The trackers found in the Track phase of this workflow have a great deal to do with the success or
failure of the solver, making it critical to deliver the best set of tracking points from the very start.
Although the masking you create to occlude objects from being tracked helps to omit problematic
tracking points, you almost always need to further filter and delete poor quality tracks in the Solver
tab. That’s why, from a user’s point of view, solving should be thought of as an iterative process.
You can interpret a value of 1.0 as a pixel offset; at any given time, the track could be offset by 1 pixel.
The higher the resolution, the lower the solve error should be. If you are working with 4K material,
your goal should be to achieve a solve error below 0.5.
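The solve error can be thought of as an average reprojection distance measured in pixels: for each track, how far the solved 3D point, reprojected through the solved camera, lands from the observed 2D position. The sketch below illustrates that arithmetic only; the data layout and function are hypothetical, not Fusion's implementation:

```python
import math

def average_solve_error(tracks):
    """Average 2D reprojection error in pixels. Each track is a pair of
    (observed_xy, reprojected_xy) positions (conceptual sketch only)."""
    errors = []
    for observed, reprojected in tracks:
        dx = observed[0] - reprojected[0]
        dy = observed[1] - reprojected[1]
        errors.append(math.hypot(dx, dy))
    return sum(errors) / len(errors)

tracks = [((100.0, 200.0), (100.6, 200.8)),   # 1.0 px off
          ((640.0, 360.0), (640.0, 360.4))]   # 0.4 px off
print(round(average_solve_error(tracks), 2))  # 0.7 -- above the 0.5 goal for 4K
```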
Additionally, for the solver to accurately triangulate and reconstruct the camera and point
cloud, it is important to have:
— A good balance of tracks across objects at different depths, with not too many tracks
in the distant background or sky (these do not provide any additional perspective
information to the solver).
— Tracks distributed evenly over the image and not highly clustered on a few objects or
one side of the image.
— The track starts and ends staggered over time, with not too many tracks ending on
the same frame.
Manually selecting seed frames is not recommended unless you have some experience
with camera tracking. Keeping the default Auto Select Seed Frames checkbox enabled in the Solve
Options section of the Solver tab selects the best frames in most cases. However, you can disable the
checkbox and use the Seed Frame 1 and Seed Frame 2 sliders to select frames you believe achieve
better results.
Be aware that deleting too many tracks can cause the Average Solve Error to increase, as the solver
has too little information to work with. In particular, if there are fewer than eight tracks on any frame,
mathematically there is not enough information to solve the clip. However, it is strongly recommended
to use a lot more than eight tracks to get a robust and accurate solve.
IMPORTANT: If you are not familiar with camera tracking, it may be tempting to try to
directly edit the resulting 3D splines in the Spline Editor in order to improve a solved
camera’s motion path. This option should be used as an absolute last resort. It’s preferable,
instead, to modify the 2D tracks being fed into the solver.
Hovering the pointer over any tracking point displays a large metadata tooltip that includes the solve
error for the point. For a more visual representation of the accuracy, you can enable the display of
3D locators in the viewer by clicking the Reprojection Locators button in the viewer toolbar.
After a solve, the Camera Tracker toolbar can display Reprojection locators
When the tracking points are converted into a point cloud by the solver, it creates 3D reprojection
locators for each tracking point. These Reprojection locators appear as small X marks near the
corresponding tracking point. The more the two objects overlap, the lower the solve error.
The goal when filtering the trackers is to remove all red tracker marks and keep all the green marks.
Whether you decide to keep both the yellow and orange or just the yellow is more a question of
how many marks you have in the clip. You produce a better solve if you retain only the yellow marks;
however, if you do not have enough marks to calculate the 3D scene, you will have to keep some of the
better orange marks as well.
— Keep all tracks with motion that’s completely determined by the motion of the
live‑action camera.
— Delete tracks on moving objects or people and tracks that have parallax issues.
— Delete tracks that are reflected in windows or water.
— Delete tracks of highlights that move over a surface.
— Delete tracks that do not do a good job of following a feature.
— Delete tracks that follow false corners created by the superposition of
foreground and background layers.
— Consider deleting tracks that correspond to locators that the solver has reconstructed
at an incorrect Z-depth.
Deleting Tracks
You can manually delete tracks in the viewer or use filters to select groups of tracks. When deleting
tracks in the viewer, it is best to modify the viewer just a bit to see the tracks more clearly. From the
Camera Tracker toolbar above the viewer, clicking the Track Trails button hides the trails of the tracking
points. This cleans up the viewer to show points only, making it easier to make selections. At the right
end of the toolbar, clicking the Darken Image button slightly darkens the image, again making the
points stand out a bit more in the viewer.
To begin deleting poor-quality tracks, you can drag a selection box around a group of tracks you
want to remove and then either click the Delete Tracks button in the Camera Tracker toolbar or
press Command-Delete.
You can hold down the Command key to select multiple tracking marks that are not near
each other. If you accidentally select tracks you want to keep, continue holding the Command key
and drag over the selected tracks to deselect them.
When deleting tracks, take note of the current Average Solve Error at the top of the Inspector and
then rerun the solver. It is better to delete small groups of tracks and then rerun the solver than
to delete one or two large sections. As mentioned previously, deleting too many tracks can have
adverse effects and increase the Average Solve Error.
For instance, it is generally best to run the solver using tracks with longer durations. Since shorter
tracks tend to be less accurate when calculating the camera, you can remove them using the Filter
section in the Inspector.
Increasing the Minimum Track Length parameter sets a threshold that each tracker must meet.
Tracks falling below the threshold appear red. You can then click the Select Tracks Satisfying Filters
button to select the shorter tracks and click Delete from the Options section in the Inspector.
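The threshold logic is straightforward, as this illustrative Python sketch shows (the track representation and names are assumptions, not Fusion's internals):

```python
def filter_short_tracks(tracks, min_length):
    """Mimic the Minimum Track Length filter: tracks below the threshold
    are flagged (shown red in the viewer) for deletion; the rest are kept.
    Each track is (name, number_of_frames). Conceptual sketch only."""
    keep = [t for t in tracks if t[1] >= min_length]
    flagged = [t for t in tracks if t[1] < min_length]
    return keep, flagged

tracks = [("t1", 42), ("t2", 5), ("t3", 110), ("t4", 9)]
keep, flagged = filter_short_tracks(tracks, min_length=10)
print([t[0] for t in flagged])  # ['t2', 't4'] would be selected and deleted
```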
Before you can export the 3D scene, you must provide a bit more information about it. You’ll do this
using controls found in the Export tab. Cameras do not include tiltmeters, so clips do not contain
metadata that indicates how the camera is tilted or oriented. This is critical information when
recreating the virtual camera. It is also useful to determine the location for the center of this 3D scene.
The Export tab provides various translation, rotation, and scale controls to set these options.
TIP: In some cases, the clip you are tracking may not have the ground in the frame.
If necessary, you can set the Selection menu to XY, which indicates you are selecting points
on a wall.
— Camera 3D
— Point Cloud
— Ground Plane
— Merge 3D
— Camera Tracker Renderer (3D Renderer)
To work with the 3D scene, you can select the Merge 3D and load it into one of the viewers, and
then select the Camera Tracker Renderer and load that into a second viewer.
When the Merge 3D is selected, a toolbar above the viewer can add 3D test geometry like an
image plane or cube to verify the precision of the 3D scene and camera. You can then connect
actual 3D elements into the Merge 3D as you would any manually created 3D scene. The point
cloud can help align and guide the placement of objects, and the CameraTracker Renderer is a
Renderer 3D node with all the same controls.
Use the point cloud to accurately place different elements into a 3D scene
At this point, there is no need for the Camera Tracker node unless you find that you need to rerun
the solver. Otherwise, you can save some memory by deleting the Camera Tracker node.
Particle Systems
This chapter is designed to give you a brief introduction
to the creation of fully 3D particle systems, one of Fusion’s
most powerful features.
Once you understand these basics, for more information on each particle system node that's
available, see Chapter 52, “Particle Nodes,” in the Fusion Reference Manual.
Contents
Introduction to Particle Systems ........................................ 669
Emitters ................................................................ 675
Forces .................................................................. 676
Compositing ............................................................. 676
Rendering ............................................................... 676
The three most fundamental nodes required for creating particle systems are found on the toolbar.
As with the 3D nodes to the right, these are arranged, from left to right, in the order in which they
must be connected to work, so even if you can’t remember how to hook up a simple particle system,
all you need to do is click the three particle system nodes from left to right to create a functional
particle system.
However, these three nodes are only the tip of the iceberg. Opening the Particle category in the
Effects Library reveals many, many particle nodes designed to work together to create increasingly
complex particle interactions.
All particle nodes begin with the letter “p,” and they’re designed to work together to produce
sophisticated effects from relatively simple operations and settings. The next section shows
different ways particle nodes can be connected to produce different effects.
If you’re trying to create particle systems with more natural effects, you can add “forces” to each
emitter. These forces are essentially physics or behavioral simulations that automatically cause
the particles affected by them to be animated with different kinds of motion, or to be otherwise
affected by different objects within scenes.
You can also attach the following types of nodes to a pEmitter node to deeply customize a
particle system:
— Attach a 2D image to a pEmitter node to create highly customized particle shapes. Make sure your
image has an appropriate alpha channel.
— Attach a Shape3D or other 3D geometry node to a pEmitter node to create a more specific region
of emission (by setting Region to Mesh in the Region tab).
The Output Mode of the pRender node, at the very top of the controls exposed in the Inspector,
can be set to either 2D or 3D, depending on whether you want to combine the result of the
particle system with 2D layers or with objects in a 3D scene.
If you connect a pRender node to a Merge3D node, the Output Mode is locked to 3D, meaning
that 3D geometry is output by the pRender node for use within the Merge3D node’s scene. This
means that the particles can be lit, they can cast shadows, and they can interact with 3D objects
within that scene.
NOTE: Once you set the pRender node to either 2D or 3D and make any change to the
nodes in the Inspector, you cannot change the output mode.
A pEmitter node loaded into the viewer with the rotation onscreen controls enabled
Alternatively, you can use the controls of the pEmitter’s Region tab in the Inspector to adjust
Translation, Rotation, and Pivot. All these controls can be animated.
Emitters
pEmitter nodes are the source of all particles. Each pEmitter node can be set up to generate a single
type of particle with enough customization so that you’ll never create the same type of particle
twice. Along with the pRender node, this is the only other node that’s absolutely required to create a
particle system.
— Controls: The primary controls governing how many particles are generated (Number), how long
they live (Lifespan), how fast they move (Velocity) and how widely distributed they are (Angle and
Angle Variance), their rotation (Rotation Mode with X, Y, and Z controls), and whether there’s spin
(Spin X, Y, and Z controls). For each parameter of particle generation, there’s an accompanying
Variance control that lets you make that parameter less uniform and more natural by introducing
random variation.
— Sets: This tab contains settings that affect the physics of the particles emitted by the node. These
settings do not directly affect the appearance of the particles. Instead, they modify behaviors
such as velocity, spin, quantity, and lifespan.
— Style: While the Controls tab has a simple control for choosing a color for particles, the Style
tab has more comprehensive controls including color variance and Color Over Life controls.
Additionally, size controls including Size Over Life, fade controls, and blur controls let you create
sophisticated particle animations with a minimum of adjustments, while Merge controls give you
an additional level of control over how overlapping particles combine visually. A set of controls at
the bottom lets you choose how animated effects are timed.
— Region: The Region tab lets you choose what kind of geometric region is used to disperse
particles into space and whether you’re emitting particles from the region’s volume or surface.
The Winding Rule and Winding Ray Direction controls determine how the mesh region will handle
particle creation with geometric meshes that are not completely closed, as is common in many
meshes imported from external applications. Tweaking these last parameters is common when
using imported mesh geometry as a region for emitting particles, since even geometry that
appears closed will frequently appear to “leak” particles thanks to improperly welded vertices.
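The Variance controls mentioned in the Controls tab description above all follow the same idea: the base value is randomized per particle within a range. Here is a conceptual Python sketch (the uniform distribution and names are assumptions for illustration, not Fusion's exact behavior):

```python
import random

def with_variance(value, variance, rng=random.Random(1)):
    """Return the parameter value randomized per particle within
    +/- variance, as the pEmitter's Variance controls do conceptually
    (illustrative sketch only)."""
    return value + rng.uniform(-variance, variance)

# Lifespan 100 frames with a Variance of 20: each particle gets
# a lifespan somewhere between 80 and 120 frames.
lifespans = [with_variance(100.0, 20.0) for _ in range(5)]
print(all(80.0 <= life <= 120.0 for life in lifespans))  # True
```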
Some forces, including pDirectionalForce, pFlock, pFriction, pTurbulence, and pVortex, are rules that
act upon particles without the need for any other input. These are simply “acts of nature” that cause
particles to behave in different ways.
Other forces, such as pAvoid, pBounce, pFollow, and pKill, work in conjunction with 3D geometry in
a scene such as shapes or planes to cause things to happen when a particle interacts or comes near
that geometry. Note that some of the forces described previously can also use geometry to direct
their actions, so these two categories of forces are not always that clear-cut.
Compositing
The pMerge node is a simple way to combine multiple emitters so that different types of particles
work together to create a sophisticated result. The pMerge node has no parameters; you simply
connect emitters to it, and they’re automatically combined.
Rendering
The pRender node is required whether you’re connecting a particle system’s output to a 2D Merge
node or to a Merge3D node for integration into a 3D scene. Along with the pEmitter node, this is the
only other node that’s absolutely required to create a particle system.
— Controls: The main controls that let you choose whether to output 2D or 3D image data, and
whether to add blur or glow effects to the particle systems, along with a host of other details
controlling how particles will be rendered.
— Scene: These controls let you transform the overall particle scene all at once.
— Grid: The grid is a helpful, non-rendering guide used to orient 2D particles in 3D space. The grid
is never output in renders. The width, depth, number of lines, and grid color can be set using the
controls found in this tab.
— Image: Controls the output of the pRender node, with controls over the process mode, resolution,
and color space settings of the output.
Different particle system presets in the Templates category of the Bins window in Fusion Studio
Simply drag and drop any of the particle presets into the Node Editor, load the last node into the
viewer, and you’ll see how things are put together.
Contents
Overview ................................................................ 1788
DisparityToZ, ZToDisparity .............................................. 1794
Stereoscopic Overview
All stereoscopic features are fully integrated into Fusion’s 3D environment. Stereoscopic images can
be created using a single camera, which supports eye separation and convergence distance, and a
Renderer 3D for the virtual left and right eye. It is also possible to combine two different cameras for a
stereo camera rig.
Stereoscopic nodes can be used to solve 3D stereoscopic shooting issues, like 3D rig misalignment,
image mirror polarization differences, camera timing sync issues, color alignment, convergence, and
eye separation issues. The stereo nodes can also be used for creating depth maps.
NOTE: The stereoscopic nodes in the Fusion page work independently of the stereoscopic
tools in the other DaVinci Resolve pages.
Stereoscopic Nodes
— Stereo > Anaglyph: Combines stereo images to create a single anaglyph image for viewing.
— Stereo > Combiner: Stacks two separate stereo images into a single stacked pair,
so they can be processed together.
— Stereo > Disparity: Generates disparity between left/right images.
— Stereo > DisparityToZ: Converts disparity to Z-depth.
— Stereo > Global Align: Shifts each stereo eye manually to do basic alignment of stereo images.
— Stereo > NewEye: Replaces left and/or right eye with interpolated eyes.
— Stereo > Splitter: Separates a stacked stereo image into left and right images.
— Stereo > StereoAlign: Adjusts vertical alignment, convergence, and eye separation.
— Stereo > ZToDisparity: Converts Z-depth to disparity.
There are a couple of ways to retrieve or generate those extra channels within Fusion.
For example:
— The Renderer3D node is capable of generating most of these channels.
— The OpticalFlow node generates the Vector and BackVector channels, and then TimeStretcher and
TimeSpeed can make use of these channels.
— The Disparity node generates the Disparity channels, and then DisparityToZ, NewEye, and
StereoAlign nodes can make use of the Disparity channels.
— The OpenEXR format can be used to import or export aux channels into Fusion by specifying a
mapping from EXR attributes to Fusion Aux channels using CopyAux.
TimeSpeed, TimeStretcher
You can create smooth constant or variable slow-motion effects using the TimeSpeed or
TimeStretcher nodes. When Optical Flow motion vectors are available in the aux channel of an image,
enabling Flow mode in the TimeSpeed or TimeStretcher Interpolation settings will take advantage of
the Vector and BackVector channels. For the Flow mode to work, there must be either an upstream
OpticalFlow node generating the hidden channels or an OpenEXR Loader bringing these channels
in. These nodes use the Vector/BackVector data to do interpolation on the motion channel and then
destroy the data on output since the input Vector/BackVector channels are invalid. For more detail on
TimeSpeed or TimeStretcher, see Chapter 49, “Miscellaneous Nodes,” in the Fusion Reference Manual.
SmoothMotion
SmoothMotion can be used to smooth the Vector and BackVector channels or smooth the disparity
in a stereo 3D clip. This node passes through, modifies, or generates new aux channels, but does not
destroy them.
By choosing Classic from the Method drop-down menu in the Inspector, you can use the older CPU-
based algorithm to maintain compatibility with comps created in previous versions. This method may
also be better suited for some Stereo3D processing.
The Disparity node analyzes a stereo pair of images and generates an X&Y disparity map.
The workflow is to load a left and right stereo image pair and process those in the Disparity node.
Once the Disparity map is generated, other nodes can process the images.
TIP: When connecting stereo pairs in the node tree, make sure that the left and right
images are connected to the left and right inputs of the Disparity node.
Disparity generation, like Optical Flow, is computationally expensive, so the general idea is that
you can pre-generate these channels, either overnight or on a render farm, and save them into an
EXR sequence.
Stereo Camera
There are two ways to set up a stereoscopic camera. The common way is to simply add a Camera 3D
and adjust the eye separation and convergence distance parameters.
The other way is to connect another camera to the RightStereoCamera input port of the Camera 3D.
When viewing the scene through the original camera or rendering, the connected camera is used for
creating the right-eye content.
Stereo Materials
Using the Stereo Mix material node, it is possible to assign different textures per eye.
Disparity
The Disparity node does the heavy lifting of generating disparity maps. This generates the Disparity
channel and stores it in the hidden aux channels of their output image.
NewEye, StereoAlign
NewEye and StereoAlign use and destroy the Disparity channel to do interpolation on the
color channel.
The hidden channels are destroyed in the process because, after the nodes have been applied, the
original Disparity channels would be invalid.
For these nodes to work, there must be either an upstream Disparity node generating the hidden
channels or an OpenEXR Loader bringing these channels in.
TIP: If the colors between shots are different, use Color Corrector or Color Curves to do a
global alignment first before calculating the Disparity map. Feed the image you will change
into the orange input and the reference into the green input. In the Histogram section of
the Color Corrector, select Match, and also select Snapshot Match Time. In the Color Curves’
Reference section, select Match Reference.
The advantage to using Stack mode is that you do not have to have duplicate branches of the Node
Editor for the left and right eyes. As a consequence, you will see Stereo nodes with two inputs and two
outputs labeled as “Left” and “Right.”
When in Stack mode, the stack should be connected to the left eye input and the Left output should
be used for connecting further nodes. In Stack mode, the respective Right eye inputs and outputs
are hidden.
In the above example, the workflow on the right takes the left and right eye, generates the disparity,
and then NewEye is used to generate a new eye for the image right away.
The example on the left renders the frames with disparity to intermediate EXR images. These images
are then loaded back into Stereo nodes and used to create the NewEye images.
Although not shown in the above diagram, it is usually a good idea to color correct the right eye to be
similar to the left eye before disparity generation, as this helps with the disparity-tracking algorithm.
The color matching does not need to be perfect—for example, it can be accomplished using the
“Match” option in a Color Corrector’s histogram options.
For non-occluded pixels, you would expect Dleft = -Dright, although, due to the disparity
generation algorithm, this is only an approximate equality.
NOTE: Disparity stores both X and Y values because rarely are left/right images perfectly
registered in Y, even when taken through a carefully set up camera rig.
Both Disparity and Optical Flow values are stored as un-normalized pixel shifts. In particular, note
that this breaks from Fusion’s resolution-independent convention. After much consideration, this
convention was chosen so the user wouldn’t have to worry about rescaling the Disparity/Flow values
when cropping an image or working out scale factors when importing/exporting these channels to
other applications. Because the Flow and Disparity channels store things in pixel shifts, this can cause
problems with Proxy and AutoProxy. Fusion follows the convention that, for proxied images, these
channels store unscaled pixel shifts valid for the full-sized image. So if you wish to access the Disparity
values in a script or via a probe, you need to remember to always scale them by
(image.Width/image.OriginalWidth, image.Height/image.OriginalHeight).
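To illustrate that scaling, here is a minimal Python sketch. This is plain arithmetic, not Fusion's actual scripting API; the helper name and arguments are made up, but the scale factors correspond to the (image.Width/image.OriginalWidth, image.Height/image.OriginalHeight) convention described above.

```python
# Hypothetical helper: convert a Disparity/Flow sample, which is stored
# as a pixel shift valid for the full-sized image, into a shift valid
# for the current (possibly proxied) resolution.
def scale_disparity(dx, dy, width, original_width, height, original_height):
    return (dx * width / original_width,
            dy * height / original_height)

# A 12-pixel horizontal shift on a half-resolution proxy becomes 6 pixels.
print(scale_disparity(12.0, 0.0, 960, 1920, 540, 1080))  # -> (6.0, 0.0)
```

At full resolution the scale factors are 1.0 and the values pass through unchanged.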
The CopyAux node is used to copy those channels directly into the RGB channels for viewing or
further processing. The advantage of using the CopyAux node is that it does static normalization,
which reduces a lot of flicker that the viewer’s time-variant normalization causes. When viewing long
sequences of aux channels, the CopyAux node has the option to kill off aux channels and keep only
the current RGB channels, freeing up valuable memory so you can cache more frames.
One thing to be aware of is that aux channels tend to consume a lot of memory. A float-32 1080p
image containing just RGBA uses about 32 MB of memory, but with all the aux channels enabled it
consumes around 200 MB of memory.
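The arithmetic behind those figures is easy to verify. A rough Python check (the exact 200 MB total depends on which aux channels are enabled):

```python
# A float-32 1080p RGBA image: 4 channels at 4 bytes per channel.
width, height = 1920, 1080
bytes_per_channel = 4  # 32-bit float

rgba_mb = width * height * 4 * bytes_per_channel / 2**20
print(round(rgba_mb, 1))  # -> 31.6, matching the ~32 MB figure above

# Enabling aux channels (Z, Coverage, IDs, Normals, TexCoord, Vector,
# BackVector, Disparity, ...) multiplies the per-pixel channel count
# several times over, which is how the total approaches 200 MB.
```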
Semi-Transparent Objects
The Optical Flow and Disparity generation algorithms Fusion uses assume there is only one layer per
pixel when tracking pixels from frame to frame. In particular, transparent objects and motion blur will
cause problems. For example, a shot flying through the clouds with the semi-transparent clouds in the
foreground and a distant landscape background will confuse the Optical Flow/Stereo algorithms, as
they do not recognize overlapping objects with different motions. Usually the optical flow will end up
tracking regions of one object or the other. If the transparent object and the background are near the
same depth and consequently have the same disparity, then it is not a problem.
Motion Blur
Motion blur is also a serious problem for the reason explained in the previous point. The Disparity and
Optical Flow algorithms are unsure whether to assign a pixel in the motion blur to the moving object
or the background pixel. Because the algorithms used are global in nature, not only will the vectors on
the motion blur be wrong, but the blur will also confuse the algorithm in regions close to it.
Depth of Field
Depth of field presents a related problem. The problem occurs when
you have a defocused foreground object over a background object that is moving (Optical Flow case)
or shifts between L/R (Stereo Disparity case). The blurred edges confuse the tracking algorithms,
which cannot determine that the edges actually belong to two separate objects.
For example, if you have composited a lens flare in, it is better to compute OpticalFlow/Disparity
before that, since the semi-transparent lens flare will confuse the tracking algorithms.
If you are color correcting the left/right eyes to match or for deflickering, it is better to apply the
OpticalFlow/Disparity afterward, since it will be easier for the tracking algorithm to find matches if the
color matches between frames.
If you are removing lens distortion, think carefully about whether you want to do it before or after
Disparity computation. If you do it after, your Disparity map will also act as a lens distortion map,
combining the two effects as one.
As a general rule of thumb, it is best to use OpticalFlow/Disparity before any compositing operations
except an initial color matching correction and a lens distortion removal.
The reason is that flow/disparity matching works well when there is common pixel data to match in
both frames, but when there are pixels that show up in just one frame (or one eye), then the Disparity/
OpticalFlow nodes must make a guess and fill in the data. The biggest occlusions going from L <–> R
are usually pixels along the L/R edges of the images that get moved outside. This is similar for optical
flow when you have a moving camera.
Another thing to be aware of is black borders around the edges of your frames, which you should
crop away.
Although this picking functionality does not operate any differently from normal picking of color
channels, this issue may cause some confusion. If it helps, the analogous workflow mistake with color
nodes would be a user trying to pick a gradient color for a Background node from a view showing the
Background node itself (you are trying to pick a color for a node from its own output).
Another issue that you need to be aware of is which eye you are picking. To avoid problems, it’s a good
idea to always pick from the left eye. The reason is that the Disparity channels for the left and right
eyes are different, and when you pick from a horizontal/vertical stereo stack, Fusion has no way of
knowing whether you picked the Disparity value from the left or right eye.
The above are not hard-and-fast rules; rather, they are guidelines to help you avoid common mistakes.
If you fully understand the reasoning above, you'll realize there are exceptions, such as picking disparity
from the left output of DisparityToZ and Z from the left/right output of ZToDisparity, where everything is okay.
The Vector channel might be better named “forward vector” or “forward flow,” since the name
“Vector” is not strictly accurate: as the more mathematically inclined user might point out, all the
channels except the scalar channels Z/ID are technically “vector” channels.
A frame’s Vector aux channel will store the flow forward from the current frame to the next frame in
the sequence, and the BackVector aux channel will store the flow backward from the current frame
to the previous frame. If either the previous or next frames do not exist (either not on disk or the
global range of a Loader does not allow OpticalFlow to access them), Fusion will fill the corresponding
channels with zeros (transparent black).
The Disparity channel stores the displacement vectors that match pixels in one eye to the other eye.
The left image’s Disparity channel will contain vectors that map left > right and the right image’s
Disparity channel will contain vectors that map right > left.
For example:
(xleft, yleft) + (Dleft.x, Dleft.y) -> (xright, yright)
(xright, yright) + (Dright.x, Dright.y) -> (xleft, yleft)
You would expect for non-occluded pixels that Dleft = -Dright, although due to the disparity
generation algorithm, this is only an approximate equality. Note that Disparity stores both X and
Y values because rarely are left/right images perfectly registered in Y, even when taken through a
carefully set up camera rig.
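A small Python sketch of that mapping (the pixel coordinates and disparity values here are made up purely for illustration):

```python
# Left-eye pixel and the Disparity vector stored at that pixel (in pixels).
xl, yl = 812.0, 440.0
d_left = (-14.5, 0.25)

# Following (xleft, yleft) + (Dleft.x, Dleft.y) -> (xright, yright):
xr, yr = xl + d_left[0], yl + d_left[1]
print((xr, yr))  # -> (797.5, 440.25), the matching right-eye pixel

# For a non-occluded pixel, the Disparity stored at (xr, yr) in the
# right eye should be approximately -d_left, mapping back to (xl, yl).
```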
Disparity and Optical Flow values are stored as un-normalized pixel shifts. In particular, note that this
breaks from Fusion’s resolution-independent convention. After much consideration, this convention
was chosen so the user wouldn’t have to worry about rescaling the Disparity/Flow values when
cropping an image or working out scale factors when importing/exporting these channels to other
applications. Because the Flow and Disparity channels store things in pixel shifts, this can cause
problems with Proxy and AutoProxy. The convention that Fusion follows is that, for proxied images,
these channels store unscaled pixel shifts valid for the full-sized image.
When using Vector and BackVector aux channels, remember that all nodes expect these aux channels
to be filled with the flow between sequential frames.
When working with these channels, it is the user’s responsibility to follow these rules (or, for advanced
users, to knowingly break them). Nodes like TimeStretcher will not function correctly otherwise, since
they expect the channels to contain the flow forward/backward by one frame.
3D Nodes
This chapter covers, in great detail, the nodes used for
creating 3D composites.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
You can import Alembic files (.abc) into Fusion in two ways:
— Choose File > Import > Alembic Scene in Fusion or Fusion > Import > Alembic Scene in
DaVinci Resolve’s Fusion page.
— Add an AlembicMesh3D node to the Node Editor.
The first method is preferred; the Alembic and FBX nodes by themselves import the
entire model as one object, whereas the Import menu breaks down the model, lights, camera, and
animation into a string of individual nodes. This makes it easy to edit, modify, and use subsections
of the imported Alembic mesh. Also, transforms in the file are read into Fusion splines and into the
Transform 3D nodes, which get saved with the comp. Later, when reloading the comp, the transforms
are loaded from the comp and not the Alembic file. Fusion handles the meshes differently, always
reloading them from the Alembic file.
Arbitrary user data varies depending on the software creating the Alembic file, and therefore this type
of metadata is mostly ignored.
The top half of the Import dialog displays information about the selected file including the name of
the plugin/application that created the Alembic file, the version of the Alembic software developer kit
used during the export, the duration of the animation in seconds, if available, and the frame rate(s)
in the file.
Various objects and attributes can be imported by selecting the checkboxes in the Import section.
— Hierarchy: When enabled, the full parenting hierarchy is recreated in Fusion using multiple
Transform 3D nodes. When disabled, the transforms in the Alembic file are flattened down into
the cameras and meshes. The flattening results in several meshes/cameras connected to a single
Merge node in Fusion. It is best to have this disabled when the file includes animation. If enabled,
the many rigs used to move objects in a scene will result in an equally large number of nodes in
Fusion, so flattening will reduce the number of nodes in your node tree.
— Orphaned Transforms: When the hierarchy option is enabled, an Orphaned Transforms setting
is displayed. Activating this Orphan Transforms setting imports transforms that parent a mesh or
camera. For example, if you have a skeleton and associated mesh model, the model is imported as
an Alembic mesh, and the skeleton as a node tree of Merge3Ds. If this is disabled, the Merge3Ds
are not created.
— Cameras: When enabled, importing a file includes cameras along with Aperture, Angles of View,
Plane of Focus, as well as Near and Far clipping plane settings. The resolution Gate Fit may be
imported depending on whether the application used to export the file correctly tagged the
resolution Gate Fit metadata. If your camera does not import successfully, check the setting for
the Camera3D Resolution Gate Fit. Note that 3D Stereoscopic information is not imported.
— InverseTransform: Imports the Inverse Transform (World to Model) for cameras.
— Points: Alembic files support a Points type. This is a collection of 3D points with position
information. Some 3D software exports particles as points. However, keep in mind that while
position is included, the direction and orientation of the particles are lost.
— Meshes: This setting determines whether importing includes 3D models from the Alembic file.
If it is enabled, options to include UVs and normals are displayed.
Not all objects and properties in a 3D scene have an agreed upon universal convention in the Alembic
file format. That being the case, Lights, Materials, Curves, Multiple UVs, and Velocities are not currently
supported when you import Alembic files.
Since the FBX file format does support materials and lights, we recommend the use of FBX for lights,
cameras, and materials. Use Alembic for meshes only.
Inputs
The AlembicMesh3D node has two inputs in the Node Editor. Both are optional since the node is
designed to use the imported mesh.
— SceneInput: The orange input can be used to connect an additional 3D scene or model. The
imported Alembic objects combine with the other 3D geometry.
— MaterialInput: The optional green input is used to apply a material to the geometry by
connecting a 2D bitmap image. It applies the connected image to the surface of the geometry in
the scene.
Controls Tab
The first tab in the Inspector is the Controls tab. It includes a series of unique controls specific to
the Alembic Mesh 3D node as well as six groupings of controls that are common to most 3D nodes.
“The Common Controls” section at the end of this chapter includes detailed descriptions of the
common controls.
Filename
The complete file path of the imported Alembic file is displayed here. This field allows you to change or
update the file linked to this node.
Object Name
This text field shows the name of the imported Alembic mesh, which is also used to rename the
Alembic Mesh 3D node in the Node Editor.
When importing with the Alembic Mesh 3D node, if this text field is blank, the entire contents of the
Alembic geometry are imported as a single mesh. When importing geometry using File > Import >
Alembic Scene, this field is set by Fusion.
Wireframe
Enabling this option causes the mesh to display only the wireframe for the object in the viewer. When
enabled, there is a second option for wireframe anti-aliasing. You can also render these wireframes
out to a file if the Renderer 3D node has the OpenGL render type selected.
Common Controls
Controls, Materials, Transform, and Settings Tabs
The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the
Controls tab are common in many 3D nodes. The Materials tab, Transforms tab and Settings tab in the
Inspector are also duplicated in other 3D nodes. These common controls are described in detail at the
end of this chapter in “The Common Controls” section.
Bender 3D Introduction
The Bender 3D node is used to bend, taper, twist, or shear 3D geometry based on the geometry’s
bounding box. It works by connecting any 3D scene or object to the orange input on the Bender 3D
node, and then adjusting the controls in the Inspector. Only the geometry in the scene is modified.
Any lights, cameras, or materials are passed through unaffected.
The Bender node does not produce new vertices in the geometry; it only alters existing vertices in the
geometry. So, when applying the Bender 3D node to primitives, like the Shape 3D, or Text 3D nodes,
increase the Subdivision setting in the primitive’s node to get a higher-quality result.
Inputs
The following inputs appear on the Bender 3D node in the Node Editor.
— SceneInput: The orange scene input is the required input for the Bender 3D node. You use this
input to connect another node that creates or contains a 3D scene or object.
Bender 3D controls
Controls Tab
The first tab in the Inspector is the Controls tab. It includes all the controls for the Bender 3D node.
Bender Type
The Bender Type menu is used to select the type of deformation to apply to the geometry. There are
four modes available: Bend, Taper, Twist, and Shear.
Amount
Adjusting the Amount slider changes the strength of the deformation.
Axis
The Axis control determines the axis along which the deformation is applied. It has a different
meaning depending on the type of deformation. For example, when bending, this selects the elbow in
conjunction with the Angle control. In other cases, the deform is applied around the specified axis.
Angle
The Angle thumbwheel control determines what direction about the axis a bend or shear is applied.
It is not visible for taper or twist deformations.
Group Objects
If the input of the Bender 3D node contains multiple 3D objects, either through a Merge 3D or
strung together, the Group Objects checkbox treats all the objects in the input scene as a single
object, and the common center is used to deform the objects, instead of deforming each component
object individually.
Common Controls
Settings
The Settings tab in the Inspector is common to all 3D nodes. This common tab is described in detail at
the end of this chapter in “The Common Controls” section.
Camera 3D [3Cm]
Camera Projection
The Camera 3D node can also be used to perform Camera Projection by projecting a 2D image
through the camera into 3D space. Projecting a 2D image can be done as a simple Image Plane
aligned with the camera, or as an actual projection, similar to the behavior of the Projector 3D node,
with the added advantage of being aligned precisely with the camera. The Image Plane, Projection,
and Materials tabs do not appear until you connect a 2D image to the magenta image input on the
Camera 3D node in the Node Editor.
Stereoscopic
The Camera node has built-in stereoscopic features. They offer control over eye separation and
convergence distance. The camera for the right eye can be replaced using a separate camera node
connected to the green left/right stereo camera input. Additionally, the plane of focus control for
depth of field rendering is also available here.
Alternatively, it is possible to copy the current viewer to a camera (or spotlight or any other object) by
selecting the Copy PoV To option in the viewer’s contextual menu, under the Camera submenu.
Inputs
There are three optional inputs on the Camera 3D node in the Node Editor.
— SceneInput: The orange input is used to connect a 3D scene or object. When connected, the
geometry links to the camera’s field of view. It acts similarly to an image attached to the Image
Plane input. If the camera’s Projection tab has projection enabled, the image attached to the
orange image input projects on to the geometry.
— ImageInput: The optional magenta input is used to connect a 2D image. When camera projection
is enabled, the image can be used as a texture. Alternatively, when the camera’s image plane
controls are used, the parented planar geometry is linked to the camera’s field of view.
— RightStereoCamera: The green input should be connected to another Camera 3D node when
creating 3D stereoscopic effects. It is used to override the internal camera used for the right eye
in stereoscopic renders and viewers.
Displaying a camera node directly in the viewer shows only an empty scene; there is nothing for the
camera to see. To view the scene through the camera, view the Merge 3D node where the camera
is connected, or any node downstream of that Merge 3D. Then right-click on the viewer and select
Camera > [Camera name] from the contextual menu. Right-clicking on the axis label found in the lower
corner of each 3D viewer also displays the Camera submenu.
Inspector
Camera 3D controls
Controls Tab
The Camera3D Inspector includes six tabs along the top. The first tab, called the Controls tab, contains
some of the most fundamental camera settings, including the camera’s clipping planes, field of view,
focal length, and stereoscopic properties. Some tabs are not displayed until a required connection is
made to the Camera 3D node.
Orthographic cameras present controls only for the near and far clipping planes, and a control to set
the viewing scale.
Near/Far Clip
The clipping planes are used to limit what geometry in a scene is rendered based on an object’s
distance from the camera’s focal point. Clipping planes ensure objects that are extremely close to the
camera, as well as objects that are too far away to be useful, are excluded from the final rendering.
The default perspective camera ignores this setting unless the Adaptive Near/Far Clip checkbox
located under the Near/Far Clip control is disabled.
The clip values use units, so a far clipping plane of 20 means that any object more than 20 units from
the camera is invisible to the camera. A near clipping plane of 0.1 means that any object closer than 0.1
units is also invisible.
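A sketch of how those unit values behave (an assumed helper for illustration, not an actual Fusion call):

```python
# An object is rendered only if its distance from the camera falls
# between the near and far clipping planes (values are scene units).
def within_clip(distance, near=0.1, far=20.0):
    return near <= distance <= far

print(within_clip(5.0))   # -> True: inside the clip range
print(within_clip(25.0))  # -> False: beyond the far plane of 20 units
print(within_clip(0.05))  # -> False: closer than the near plane of 0.1
```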
NOTE: A smaller range between the near and far clipping planes allows greater accuracy
in all depth calculations. If a scene begins to render strange artifacts on distant objects, try
increasing the distance for the Near Clip plane.
The Z-distance of an orthographic camera from the objects it sees does not affect the scale of those
objects; only the viewing scale setting does.
Angle of View
Angle of View defines the area of the scene that can be viewed through the camera. Generally, the
human eye can see more of a scene than a camera, and various lenses record different degrees of the
total image. A large value produces a wider angle of view, and a smaller value produces a narrower, or
more tightly focused, angle of view.
Focal Length
In the real world, a lens’ Focal Length is the distance from the center of the lens to the film plane.
The shorter the focal length, the closer the focal plane is to the back of the lens. The focal length
is measured in millimeters. The angle of view and focal length controls are directly related. Smaller
focal lengths produce a wider angle of view, so changing one control automatically changes the
other to match.
The relationship between focal length and angle of view is angle = 2 * arctan[aperture / 2 /
focal_length].
Use the vertical aperture size to get the vertical angle of view and the horizontal aperture size to get
the horizontal angle of view.
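A worked Python example of that relationship. Fusion's aperture sizes are in inches and focal lengths in millimeters, so convert first; the 0.980 in width used here is just an illustrative film-gate value.

```python
import math

aperture_in = 0.980  # assumed horizontal aperture width, inches
focal_mm = 50.0      # focal length, millimeters

# angle = 2 * arctan(aperture / 2 / focal_length), with both in mm.
aperture_mm = aperture_in * 25.4
angle = 2 * math.atan(aperture_mm / 2 / focal_mm)
print(round(math.degrees(angle), 1))  # horizontal angle of view, ~28.0
```

Substituting the vertical aperture height instead yields the vertical angle of view.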
Stereo
The Stereo section includes options for setting up 3D stereoscopic cameras. 3D stereoscopic
composites work by capturing two slightly different views, displayed separately to the left and right
eyes. The mode menu determines if the current camera is a stereoscopic setup or a mono camera.
When set to the default mono setting, the camera views the scene as a traditional 2D film camera.
Three other options in the mode menu determine the method used for 3D stereoscopic cameras.
Toe In
In a toe-in setup, both cameras are rotated inward toward a single focal point. Though the result is
stereoscopic, the vertical parallax introduced by this method can cause discomfort for the audience.
Toe-in stereoscopic works for convergence around the center of the images but exhibits keystoning,
or image separation, toward the left and right edges. This setup can be used when the focus point and
the convergence point need to be the same. It is also used in cases where it is the only way to match a
live-action camera rig.
Parallel
The cameras are shifted parallel to each other. Since this is a purely parallel shift, there is no
Convergence Distance control that limits your control over placing objects in front of or behind the
screen. However, Parallel introduces no vertical parallax, thus creating less strain on the eyes.
Rig Attached To
This drop-down menu allows you to control which camera is used to transform the stereoscopic setup.
Based on this menu, transform controls appear in the viewer either on the right camera, left camera,
or between the two cameras. The ability to switch the transform controls through rigging can assist in
matching the animation path to a camera crane or other live-action camera motion. The Center option
places the transform controls between the two cameras and moves each evenly as the separation and
convergence are adjusted. Left puts the transform controls on the left camera, and Right puts them on
the right camera.
Eye Separation
Eye Separation defines the distance between both stereo cameras. Setting Eye Separation to a value
larger than 0 shows controls for each camera in the viewer when this node is selected. Note that there
is no Convergence Distance control in Parallel mode.
Convergence Distance
This control sets the stereoscopic convergence distance, defined as a point located along the Z-axis of
the camera that determines where both left- and right-eye cameras converge.
The Convergence Distance controls are only available when setting the Mode menu to Toe-In
or Off Axis.
Film Back
Film Gate
The size of the film gate represents the dimensions of the aperture. Instead of setting the aperture’s
width and height, you can choose it using the list of preset camera types in the Film Gate menu.
Selecting one of the options automatically sets the aperture width and aperture height to match.
Aperture Width/Height
The Aperture Width and Height sliders control the dimensions of the camera’s aperture or the portion
of the camera that lets light in on a real-world camera. In video and film cameras, the aperture is the
mask opening that defines the area of each frame exposed. The Aperture control uses inches as its
unit of measurement.
NOTE: This setting corresponds to Maya’s Resolution Gate. The modes Overscan,
Horizontal, Vertical, and Fill correspond to Inside, Width, Height, and Outside.
— Inside: The image source defined by the film gate is scaled uniformly until one of its dimensions
(X or Y) fits the inside dimensions of the resolution gate mask. Depending on the relative
dimensions of image source and mask background, either the image source’s width or height may
be cropped to fit the dimension of the mask.
— Width: The image source defined by the film gate is scaled uniformly until its width (X) fits the
width of the resolution gate mask. Depending on the relative dimensions of image source and
mask, the image source’s Y-dimension might not fit the mask’s Y-dimension, resulting in either
cropping of the image source in Y or the image source not covering the mask’s height entirely.
— Height: The image source defined by the film gate is scaled uniformly until its height (Y) fits the
height of the resolution gate mask. Depending on the relative dimensions of image source and
mask, the image source’s X-dimension might not fit the mask’s X-dimension, resulting in either
cropping of the image source in X or the image source not covering the mask’s width entirely.
Control Visibility
This section allows you to selectively activate the onscreen controls that are displayed along with
the camera.
— Show View Controls: Displays or hides all camera onscreen controls in the viewers.
— Frustum: Displays the actual viewing cone of the camera.
— View Vector: Displays a white line inside the viewing cone, which can be used to determine the
shift when in Parallel mode.
— Near Clip: The Near clipping plane. This plane can be subdivided for better visibility.
— Far Clip: The Far clipping plane. This plane can be subdivided for better visibility.
— Focal Plane: The plane based on the Plane of Focus slider explained in the Controls tab above.
This plane can be subdivided for better visibility.
— Convergence Distance: The point of convergence when using Stereo mode. This plane can be
subdivided for better visibility.
Import Camera
The Import Camera button displays a dialog to import a camera from another application.
NOTE: FBX cameras can be imported using DaVinci Resolve’s Fusion > Import > FBX Scene
menu or File > Import > FBX Scene in Fusion Studio.
Image Tab
When a 2D image is connected to the magenta image input on the Camera3D node, an Image tab
is created at the top of the inspector. The connected image is always oriented so it fills the camera’s
field of view.
Except for the controls listed below, the options in this tab are identical to those commonly found
in other 3D nodes. For more detail on visibility, lighting, matte, blend mode, normals/tangents, and
Object ID, see “The Common Controls” section at the end of this chapter.
Fill Method
This menu configures how to scale the image plane if the image and the camera have different aspect ratios.
— Inside: The image plane is scaled uniformly until one of its dimensions (X or Y) fits the inside
dimensions of the resolution gate mask. Depending on the relative dimensions of image source
and mask background, either the image source’s width or height may be cropped to fit the
dimensions of the mask.
— Width: The image plane is scaled uniformly until its width (X) fits the width of the mask.
Depending on the relative dimensions of image source and the resolution gate mask, the image
source’s Y-dimension might not fit the mask’s Y-dimension, resulting in either cropping of the
image source in Y or the image source not covering the mask’s height entirely.
— Height: The image plane is scaled uniformly until its height (Y) fits the height of the mask.
Depending on the relative dimensions of image source and the resolution gate mask, the image
source’s X-dimension might not fit the mask’s X-dimension, resulting in either cropping of the
image source in X or the image source not covering the mask’s width entirely.
— Outside: The image plane is scaled uniformly until one of its dimensions (X or Y) fits the outside
dimensions of the resolution gate mask. Depending on the relative dimensions of image source
and mask, either the image source’s width or height may be cropped or not fit the respective
dimension of the mask.
— Depth: The Depth slider controls the image plane’s distance from the camera.
NOTE: The Camera Z position has no effect on the image plane’s distance from the camera.
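The four Fill Method modes above reduce to choosing a uniform scale factor from the image and mask dimensions. A minimal sketch of that logic, assuming a rectangular resolution gate (the function and parameter names are illustrative, not part of Fusion's API):

```python
def fill_scale(img_w, img_h, mask_w, mask_h, method):
    """Uniform scale factor that fits an image plane to a resolution
    gate mask, mirroring the Fill Method options described above.
    Illustrative only -- not an actual Fusion function."""
    sx = mask_w / img_w   # scale needed to match widths
    sy = mask_h / img_h   # scale needed to match heights
    if method == "Width":
        return sx
    if method == "Height":
        return sy
    if method == "Inside":    # fit entirely within the mask
        return min(sx, sy)
    if method == "Outside":   # cover the mask entirely
        return max(sx, sy)
    raise ValueError(method)

# A 16:9 image inside a square mask scales to match its width,
# leaving empty space above and below:
inside = fill_scale(1920, 1080, 1000, 1000, "Inside")
```

With Outside, the same image would instead scale to match its height, cropping the sides.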
Projection Tab
When a 2D image is connected to the camera node, a fourth projection tab is displayed at the top of
the Inspector. Using this Projection tab, it is possible to project the image into the scene. A projection
is different from an image plane in that the projection falls onto the geometry in the scene exactly as if
there were a physical projector present in the scene. The image is projected as light, which means the
Renderer 3D node must be set to enable lighting for the projection to be visible.
Projection Mode
— Light: Defines the projection as a spotlight.
— Ambient Light: Defines the projection as an ambient light.
— Texture: Allows a projection that can be relit using other lights. Using this setting requires a Catcher node connected to the applicable inputs of the specific material.
Image Plane: The camera's image plane isn't just a virtual guide for you in the viewers. It's actual geometry that you can also project onto. To use a different image on the image plane, you need to insert a Replace Material node after your Camera node.
Parallel Stereo: There are three ways you can achieve real Parallel Stereo mode:
— Connect an additional external (right) camera to the green Right Stereo Camera
input of your camera.
— Create separate left and right cameras.
— When using Toe-In or Off Axis, set the Convergence Distance slider to a very large
value of 999999999.
Rendering Overscan: If you want to render an image with overscan, you must also modify your scene's Camera3D. Since overscan settings aren't exported along with camera data from 3D applications, this is also necessary for cameras you've imported via .fbx or .ma files. The solution is to increase the film back's width and height by the factor necessary to account for the extra pixels on each side.
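The scale factor mentioned in the tip follows directly from the pixel counts. A hedged sketch of the arithmetic (names are illustrative; the result would be applied to the camera's aperture width or height):

```python
def overscan_film_back(aperture, render_px, overscan_px):
    """Scale one film back dimension so a render that adds overscan_px
    extra pixels on each side keeps the same field of view per pixel.
    Illustrative arithmetic based on the tip above, not a Fusion API."""
    total_px = render_px + 2 * overscan_px
    return aperture * total_px / render_px

# A 0.980 in aperture width, a 1920 px render, and 96 px of overscan
# per side give a film back 1.1x wider:
new_width = overscan_film_back(0.980, 1920, 96)
```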
Cube 3D [3Cb]
The Cube 3D node is a basic primitive geometry type capable of generating a simple cube. The node also provides six additional image inputs that can be used to map a texture onto the six faces of the cube. Cubes are often used as shadow casting objects and for environment maps. For other basic primitives, see the Shape 3D node in this chapter.
Inputs
The following are optional inputs that appear on the Cube3D node in the Node Editor:
— SceneInput: The orange scene input is used to connect another node that creates or contains a
3D scene or object. The additional geometry gets added to the Cube3D.
— NameMaterialInput: These six inputs are used to define the materials applied to the six faces
of the cube. You can connect either a 2D image or a 3D material to these inputs. Textures
or materials added to the Cube3D do not get added to any 3D objects connected to the
Cube’s SceneInput.
Cube 3D controls
Controls Tab
The first tab in the Inspector is the Controls tab. It includes the primary controls for determining the
overall size and shape of the Cube 3D node.
Lock Width/Height/Depth
This checkbox locks the Width, Height, and Depth dimensions of the cube together. When selected,
only a Size control is displayed; otherwise, separate Width, Height, and Depth sliders are shown.
Size or Width/Height/Depth
If the Lock checkbox is selected, then only the Size slider is shown; otherwise, separate sliders are
displayed for Width, Height, and Depth. The Size and Width sliders are the same control renamed, so
any animation applied to Size is also applied to Width when the controls are unlocked.
Subdivision Level
Use the Subdivision Level slider to set the number of subdivisions used when creating each face of the cube.
Cube Mapping
Enabling the Cube Mapping checkbox causes the cube to wrap its first texture across all six faces
using a standard cubic mapping technique. This approach expects a texture laid out in the shape
of a cross.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object when rendering
with the OpenGL renderer in the Renderer 3D node.
Common Controls
Controls, Materials, Transform, and Settings Tabs
The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID
are common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs. Their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Custom Vertex 3D [3CV]
Using scripting math functions and lookup tables from images, you can move vertex positions on 3D
geometry. Vertices can be more than just positions in 3D space. You can manipulate normals, texture
coordinates, vectors, and velocity.
For example, Custom Vertex 3D can be used to make a flat plane wave like a flag, or create
spiral models.
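The flag example amounts to offsetting each vertex by a sine of its position over time. Sketched in Python for clarity (n1–n3 stand in for the node's Number controls; in the node itself you would type an equivalent expression, such as pz + n1*sin(px*n2 + time*n3), into a Position field):

```python
import math

def wave_vertex(px, py, pz, time, n1=0.1, n2=6.0, n3=1.0):
    """Classic flag wave: offset a vertex in Z by a sine of its X
    position, animated over time. n1 = amplitude, n2 = frequency,
    n3 = speed -- illustrative stand-ins for the Number controls."""
    return px, py, pz + n1 * math.sin(px * n2 + time * n3)

# The plane's center (px = 0) stays put at time 0:
center = wave_vertex(0.0, 0.0, 0.0, 0.0)
```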
Besides the node's 3D scene input and three image inputs, the Inspector includes up to eight Number fields and up to eight XYZ Position controls that can be connected to other controls and parameters in the node tree.
TIP: Not all geometry has every attribute. For example, most Fusion geometry does not
have vertex colors, with the exception of particles and some imported FBX/Alembic meshes.
No geometry currently has environment coordinates, and only particles have velocities. If
an attribute is not present on the input geometry, it is assumed to have a default value.
Inputs
The Custom Vertex 3D node includes four inputs. The orange scene input is the only one of the four
that is required.
— SceneInput: The orange scene input takes 3D geometry or a 3D scene from a 3D node
output. This is the 3D scene or geometry that is manipulated by the calculations in the
Custom Vertex 3D node.
— ImageInput1, ImageInput2, ImageInput3: The three image inputs using green, magenta, and
teal colors are optional inputs that can be used for compositing.
NOTE: Missing attributes on the input geometry are created if the expression for an attribute is nontrivial. The created attributes are given their default values, as described in the tip above. For example, if the input geometry does not have normals, then the value of (nx, ny, nz) is always (0, 0, 1). To change this, you could use a ReplaceNormals node beforehand to generate them.
Vertex Tab
Using the fields in the Vertex tab, vertex calculations can be performed on the Position, Normals,
Vertex Color, Texture Coordinates, Environment Coordinates, UV Tangents, and Velocity attributes.
Each vertex position is defined by three XYZ values in world space: px, py, pz. Normals, vectors that define the direction each vertex is pointing, are nx, ny, nz.
Vertex color is the Red, Green, Blue, and Alpha color of the point as vcr, vcg, vcb, vca.
Numbers Tab
Numbers 1-8
Numbers are variables with a dial control that can be animated or connected to modifiers exactly as
any other control might. The numbers can be used in equations on vertices at current time: n1, n2,
n3, n4,… or at any time: n1_at(float t), n2_at(float t), n3_at(float t), n4_at(float t), where t is the time you
want. The values of these controls are available to expressions in the Setup and Intermediate tabs.
They can be renamed and hidden from the viewer using the Config tab.
Points 1-8
The point controls represent points in the Custom Vertex 3D tool, not the vertices. These eight point
controls include 3D X,Y,Z position controls for positioning points at the current time: (p1x, p1y, p1z,
p2x, p2y, p2z) or at any time: p1x_at(float t), p1y_at(float t), p1z_at(float t), p2x_at(float t), p2y_at(float t),
p2z_at(float t), where t is the time you want. For example, you can use a point to define a position in
3D space to rotate the vertices around. They can be renamed and hidden from the viewer using the
Config tab. They are normal positional controls and can be animated or connected to modifiers as any other control might.
LUT Tab
LUTs 1-4
The Custom Vertex 3D node provides four LUT splines. A LUT is a lookup table that returns a value based on the height of the LUT spline at a given position. The functions getlut1(float x), getlut2(float x), and so on, where x = 0…1, access the LUT values.
The values of these controls are available to expressions in the Setup and Intermediate tabs using the getlut# function. For example, setting the R, G, B, and A expressions to getlut1(r1), getlut2(g1), getlut3(b1), and getlut4(a1), respectively, would apply the four LUT splines to the respective color channels. These controls can be renamed using the options in the Config tab to make their meanings more apparent, but expressions still see the values as lut1, lut2, lut3, and lut4.
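Conceptually, each getlut# call samples the spline's height at a normalized position, like a 1D lookup table with interpolation. A rough Python analogy using linear interpolation (Fusion's splines interpolate smoothly; everything here is illustrative):

```python
def getlut(points, x):
    """Sample a LUT defined by (position, value) control points at
    x in 0..1, with linear interpolation -- a rough stand-in for
    getlut1(x)..getlut4(x)."""
    points = sorted(points)
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

# An identity LUT returns its input unchanged:
identity = [(0.0, 0.0), (1.0, 1.0)]
```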
Setup Tab
Setups 1-8
Up to eight separate expressions can be calculated in the Setup tab of the Custom Vertex 3D node.
The Setup expressions are evaluated once per frame, before any other calculations are performed.
The results are then made available to the other expressions in the node as variables s1 through s8.
Think of them as global setup scripts that can be referenced by the intermediate and channel scripts
for each vertex.
For example, Setup scripts can be used to transform vertices from model space to world space.
NOTE: Because these expressions are evaluated once per frame only and not for each
pixel, it makes no sense to use per-pixel variables like X and Y or channel variables like r1,
g1, b1, and so on. Allowable values include constants, variables like n1…n8, time, W and H,
and so on, and functions like sin() or getr1d().
Intermediates 1-8
An additional eight expressions can be calculated in the Intermediate tab. The Intermediate
expressions are evaluated once per vertex, after the Setup expressions are evaluated. Results are
available as variables i1, i2, i3, i4, i5, i6, i7, i8, which can be referenced by channel scripts. Think of them
as “per vertex setup” scripts.
For example, you can run the script to produce the new vertex (i.e., new position, normal, tangent,
UVs, etc.) or transform from world space back to model space.
Config Tab
Random Seed
Use this to set the seed for the rand() and rands() functions. Click the Reseed button to set the seed
to a random value. This control may be needed if multiple Custom Vertex 3D nodes are required with
different random results for each.
Point Controls
There are eight sets of Point controls, corresponding to the eight controls in the Points tab. Disable
the Show Point checkbox to hide the corresponding Point control and its crosshair in the viewer.
Similarly, edit the Name for Point text field to change the control’s name.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Displace 3D [3Di]
When using Displace 3D, keep in mind that it only displaces existing vertices and does not subdivide
surfaces to increase detail. To obtain a more detailed displacement, increase the subdivision amount
for the geometry that is being displaced. Note that the pixels in the displacement image may contain
negative values.
TIP: Passing a particle system through a Displace 3D node disables the Always Face Camera option set in the pEmitter. Particles are not treated as point-like objects; each of the four particle vertices is displaced individually, which may or may not be the preferred outcome.
Inputs
The following two inputs appear on the Displace 3D node in the Node Editor:
— SceneInput: The orange scene input accepts the 3D scene or geometry whose vertices are displaced.
— Input: The green input accepts the 2D image used as the displacement map.
Inspector
Displace 3D controls
Controls Tab
The Displace 3D Inspector includes two tabs along the top. The primary tab, called the Controls tab,
contains the dedicated Displace 3D controls.
Camera Displacement
— Point to Camera: When the Point to Camera checkbox is enabled, each vertex is displaced toward
the camera instead of along its normal. One possible use of this option is for displacing a camera’s
image plane. The displaced camera image plane would appear unchanged when viewed through
the camera but is deformed in 3D space, allowing one to comp-in other 3D layers that correctly
interact in Z.
— Camera: This menu is used to select which camera in the scene is used to determine the camera
displacement when the Point to Camera option is selected.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Duplicate 3D [3Dp]
Inputs
The Duplicate 3D node has a single input by default where you connect a 3D scene. An optional Mesh
input appears based on the settings of the node.
— SceneInput: The orange Scene Input is a required input. The scene or object you connect to this
input is duplicated based on the settings in the Control tab of the Inspector.
— MeshInput: A green optional mesh input appears when the Region tab's Region menu is set to Mesh. The mesh can be any 3D model, either generated in Fusion or imported.
A Cube 3D is duplicated
Inspector
Duplicate 3D controls
Copies
Use this range control to set the number of copies made. Each copy is a copy of the last copy, so if this
control is set to [0,3], the parent is copied, then the copy is copied, then the copy of the copy is copied,
and so on. This allows some interesting effects when transformations are applied to each copy using
the controls below.
Setting the First Copy to a value greater than 0 excludes the original object and shows only the copies.
Time Offset
Use the Time Offset slider to offset any animations that are applied to the source geometry by a set
amount per copy. For example, set the value to -1.0 and use a cube set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier; the second copy shows animation
from a frame before that, etc. This can be used with great effect on textured planes—for example,
where successive frames of a clip can be shown.
Transform Method
— Linear: When set to Linear, transforms are multiplied by the number of the copy, and the total
scale, rotation, and translation are applied in turn, independent of the other copies.
— Accumulated: When set to Accumulated, each object copy starts at the position of the previous object and is transformed from there. The result is transformed again for the next copy.
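The difference between the two methods is easiest to see with a rotation plus a translation, sketched here in 2D with complex numbers standing in for transform matrices (illustrative only, not Fusion code):

```python
import cmath
import math

def duplicate_positions(n_copies, translate, angle_deg, mode):
    """Where successive copies land in 2D.

    Linear: copy k applies k * translate (and k * angle about its own
    pivot), so the copies march along a straight line.
    Accumulated: each copy is the previous one rotated and translated
    again, so the copies curl into an arc or spiral."""
    rot = cmath.exp(1j * math.radians(angle_deg))
    positions = []
    p = 0j
    for k in range(n_copies + 1):
        if mode == "Linear":
            positions.append(complex(k * translate, 0.0))
        else:  # Accumulated
            positions.append(p)
            p = p * rot + translate
    return positions
```

With a 90-degree rotation and a unit translation, Linear yields copies at 0, 1, and 2 along one axis, while Accumulated places the third copy at (1, 1), beginning a quarter circle.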
Transform Order
With this menu, the order in which the transforms are calculated can be set. It defaults to Scale-Rotation-Translation (SRT).
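The order matters whenever rotation or scale is nonzero. A 2D sketch of the default SRT order (illustrative only, with rotation about a single axis):

```python
import math

def apply_srt(point, scale, angle_deg, translate):
    """Apply Scale, then Rotation, then Translation to a 2D point --
    the default Scale-Rotation-Translation order. Reordering these
    steps generally produces a different result."""
    x, y = point[0] * scale, point[1] * scale          # Scale
    a = math.radians(angle_deg)
    x, y = (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))         # Rotate
    return x + translate[0], y + translate[1]          # Translate
```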
Rotation
The buttons along the top of this group of rotation controls set the order in which rotations are
applied to the geometry. Setting the rotation order to XYZ would apply the rotation on the X-axis first,
followed by the Y-axis rotation, then the Z-axis rotation.
The three Rotation sliders set the amount of rotation applied to each copy.
Pivot
The pivot controls determine the position of the pivot point used when rotating each copy.
Scale
— Lock: When the Lock XYZ checkbox is selected, any adjustment to the duplicate scale is applied
to all three axes simultaneously. If this checkbox is disabled, the Scale slider is replaced with
individual sliders for the X, Y, and Z scales.
— Scale: The Scale controls tell Duplicate how much scaling to apply to each copy.
Jitter Tab
The options in the Jitter tab allow you to randomize the position, rotation, and size of all the copies
created in the Controls tab.
Randomize
Click the Randomize button to auto generate a random seed value.
Jitter Probability
Adjusting this slider determines the percentage of copies that are affected by the jitter. A value of 1.0
means 100% of the copies are affected, while a value of 0.5 means 50% are affected.
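The probability acts as a per-copy gate on the jitter. A sketch of the idea in one dimension (illustrative; the real jitter applies to translation, rotation, pivot, and scale):

```python
import random

def jittered(values, jitter, probability, seed):
    """Offset each value by a random amount in [-jitter, jitter], but
    only for the fraction of copies selected by probability. The seed
    makes results repeatable, as with the node's Random Seed control."""
    rng = random.Random(seed)
    out = []
    for v in values:
        if rng.random() < probability:
            v += rng.uniform(-jitter, jitter)
        out.append(v)
    return out
```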
Time Offset
Use the Time Offset slider to offset any animations that are applied to the source geometry by a set
amount per copy. For example, set the value to –1.0 and use a cube set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier; the second copy shows animation
from a frame before that, etc. This can be used with great effect on textured planes—for example,
where successive frames of a clip can be shown.
Translation Jitter
Use these three controls to adjust the amount of variation in the X, Y, and Z translation of the
duplicated objects.
Rotation Jitter
Use these three controls to adjust the amount of variation in the X, Y, and Z rotation of the
duplicated objects.
Pivot Jitter
Use these three controls to adjust the amount of variation in the rotational pivot center of the
duplicated objects. This affects only the additional jitter rotation, not the rotation produced by the
Rotation settings in the Controls tab.
Scale Jitter
Use this control to adjust the amount of variation in the scale of the duplicated objects. Disable the
Lock XYZ checkbox to adjust the scale variation independently on all three axes.
Region Tab
The options in the Region tab allow you to define an area in the viewer where the copies can appear or
are prevented from appearing. Like most parameters in Fusion, this area can be animated to cause the
copied object to pop on and off the screen based on the region’s shape and setting.
Region
The Region section includes two settings for controlling the shape of the region and the effect the region has on the duplicated objects.
— Region Mode: There are three options in the Region Mode menu. The default, labeled
“Ignore region” bypasses the node entirely and causes no change to the copies of objects from
how they are set in the Controls and Jitter tabs. The menu option labeled “When inside region”
causes the copied objects to appear only when their position falls inside the region defined in
this tab. The last menu option, “When not Inside region” causes the copied objects to appear only
when their position falls outside the region defined in this tab.
— Region: The Region menu determines the shape of the region. The five options include cube,
sphere, and rectangle primitive shapes. The mesh option allows you to connect a 3D model into
the green mesh input on the node. The green input appears only after the Region menu is set
to Mesh. The All setting refers to the entire scene. This allows the copies to pop on and off if the
Region mode is animated. When the Region menu is set to Mesh, four other options are displayed.
These are described below.
— Winding Rule: The Winding Rule menu offers four common techniques for determining how the mesh's polygons define an enclosed volume, and consequently how copies locate the vertices in the mesh. Complex overlapping regions of a mesh can cause an irregular fit. Trying a different technique from this menu can sometimes create a better match between the mesh and how the copies interpret the mesh shape.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Extrude 3D [3Ex]
Inputs
There are three inputs on this node: one for the shape itself, and two for materials.
— ShapeInput: This yellow input expects a Shape node. This is the 2D shape that you want to bevel
and extrude into 3D space.
— MaterialInput: The green-colored material input accepts either a 2D image or a 3D material.
It provides the texture for the shape based on the connected source, such as a Loader node in
Fusion Studio or a MediaIn node in DaVinci Resolve. The 2D image is used as a diffuse texture map
for the Basic Material tab in the Inspector. If a 3D material is connected, then the Basic Material
tab is disabled.
— BevelMaterialInput: The pink-colored material input accepts either a 2D image or a 3D material.
It provides the texture for the bevel based on the connected source, such as a Loader node in
Fusion Studio or a MediaIn node in DaVinci Resolve. The 2D image is used as a diffuse texture map
for the Basic Material tab in the Inspector. If a 3D material is connected, then the Basic Material
tab is disabled.
Inspector
Extrusion Style
There are two choices: Classic and Custom. Classic gives a standard uniform extrusion, while Custom
exposes an Extrusion Profile graph that lets you add points and manipulate them to create unique
custom extrusions, like picture frames and knurled buttons.
Extrusion Depth
This slider determines how far the polygon is extruded.
Extrusion Subdivisions
This slider determines the number of subdivisions within the smoothed portions of the
extrusion profile.
Bevel Depth
Increase the value of the Bevel Depth slider above zero to add a bevel to the polygon.
Bevel Width
This slider determines the width of the polygon’s bevel.
Smoothing Angle
Use this control to adjust the smoothing angle applied to the edges of the bevel.
Common Controls
Materials, Transform, and Settings Tabs
The Materials tab, Transforms tab, and Settings tab in the Inspector are also duplicated in other 3D
nodes. These common controls are described in detail at the end of this chapter in “The Common
Controls” section.
FBX Exporter 3D [FBX]
Setting the Preferences > Global > General > Auto Clip Browse option in the Fusion Studio application, or the Fusion > Fusion Settings > General > Auto Clip Browse option in DaVinci Resolve, to Enabled (default), and then adding this node to a composition automatically displays a file browser allowing you to choose where to save the file.
Once you have set up the node, the FBX Exporter is used similarly to a Saver node: clicking the Render
button in the toolbar renders out the file.
Besides the FBX format, this node can also export to 3D Studio's .3ds, Collada's .dae, AutoCAD's .dxf, and Alias' .obj formats.
Inputs
The FBX Exporter node has a single orange input.
— Input: The output of the 3D scene that you want to export connects to the orange input on the
FBX Exporter node.
Controls Tab
The Controls tab includes all the parameters you use to decide how the FBX file is created and what elements in the scene are exported.
Filename
This Filename field is used to display the location and file that is output by the node. You can click the
Browse button to open a file browser dialog and change the location where the file is saved.
Format
This menu is used to set the format of the output file.
Not all features of this node are supported in all file formats. For example, the .obj format does not
handle animation.
Version
The Version menu is used to select the available versions for the chosen format. The menu’s contents
change dynamically to reflect the available versions for that format. If the selected format provides
only a single option, this menu is hidden.
Frame Rate
This menu sets the frame rate written into the FBX scene.
Scale Units By
This slider changes the working units in the exported FBX file. Changing this can simplify workflows where the destination 3D software uses a different unit scale.
Geometry/Lights/Cameras
These three checkboxes determine whether the node attempts to export the named scene element.
For example, deselecting Geometry and Lights but leaving Cameras selected would output only the
cameras currently in the scene.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in
“The Common Controls” section at the end of this chapter.
FBX Mesh 3D [FBX]
When importing geometry with this node, all the geometry in the FBX file is combined into one mesh with a single pivot and transformation. The FBX Mesh node ignores any animation applied to the geometry.
Alternatively, in Fusion Studio, the File > Import > FBX Scene or in DaVinci Resolve, the Fusion > Import
> FBX Scene menu can be used to import an FBX scene. This option creates individual nodes for each
camera, light, and mesh in the file. This menu option can also be used to preserve the animation of
the objects.
Inputs
— SceneInput: The orange scene input is an optional connection if you wish to combine other 3D
geometry nodes with the imported FBX file.
— Material Input: The green input is the material input that accepts either a 2D image or a 3D
material. If a 2D image is provided, it is used as a diffuse texture map for the basic material tab in
the node. If a 3D material is connected, then the basic material tab is disabled.
Inspector
Size
The Size slider controls the size of the FBX geometry that is imported. FBX meshes have a tendency
to be much larger than Fusion’s default unit scale, so this control is useful for scaling the imported
geometry to match the Fusion environment.
FBX File
This field displays the filename and file path of the currently loaded FBX mesh. Click the Browse button
to open a file browser that can be used to locate a new FBX file. Despite the node’s name, this node is
also able to load a variety of other formats.
Object Name
This input shows the name of the mesh from the FBX file that is being imported. If this field is blank,
then the contents of the FBX geometry are imported as a single mesh. You cannot edit this field; it is
set by Fusion when using the File > Import > FBX Scene menu.
Take Name
FBX files can contain multiple instances of an animation, called Takes. This field shows the name of the
animation take to use from the FBX file. If this field is blank, then no animation is imported. You cannot
edit this field; it is set by Fusion when using the File > Import > FBX Scene menu.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object. Only the OpenGL
renderer in the Renderer 3D node supports wireframe rendering.
Common Controls
Controls, Materials, Transform, and Settings Tabs
The remaining controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object
ID are common to many 3D nodes. The same is true of the Materials, Transform, and Settings tabs.
Their descriptions can be found in “The Common Controls” section at the end of this chapter.
Fog 3D [3Fo]
The Fog 3D node essentially retextures the geometry in the scene by applying a color correction based on each object's distance from the camera. An optional density texture image can be used to apply variation to the correction.
Inputs
The Fog 3D node has two inputs in the Node Editor, only one of which is required for the Fog 3D to
project onto a 3D scene.
— SceneInput: The required orange-colored input accepts the output of a 3D scene on which the
fog is “projected.”
— DensityTexture: This optional green-colored input accepts a 2D image. The color of the
fog created by this node is multiplied by the pixels in this image. When creating the image
for the density texture, keep in mind that the texture is effectively projected onto the scene
from the camera.
Controls Tab
The Controls tab includes all the parameters you use to decide how the Fog looks and projects onto
the geometry in the scene.
Enable
Use this checkbox to enable or disable the node's effect. This is not the same as the red switch in the upper-left corner of the Inspector: the red switch disables the tool altogether and passes the image on without any modification, while the Enable checkbox is limited to the effect part of the tool. Other parts, like scripts in the Settings tab, still process as normal.
Color
This control can be used to set the color of the fog. The color is also multiplied by the density texture
image, if one is connected to the green input on the node.
Radial
By default, the fog is created based on the perpendicular distance to a plane (parallel with the near
plane) passing through the eye point. When the Radial option is checked, the radial distance to the
eye point is used instead of the perpendicular distance. The problem with perpendicular distance fog
is that when you move the camera about, as objects on the left or right side of the frustum move into
the center, they become less fogged although they remain the same distance from the eye. Radial fog
fixes this. Radial fog is not always desirable, however.
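In code terms, the two options differ only in which distance is fed to the falloff, with the eye at the origin and the camera looking down the -Z axis (a sketch of the geometry described above, not Fusion's implementation):

```python
import math

def fog_distance(px, py, pz, radial):
    """Distance used for fog falloff. Perpendicular mode counts only
    the depth along the view axis, so points at the frustum's edges
    measure differently from their true distance; Radial mode uses
    the true distance to the eye point."""
    if radial:
        return math.sqrt(px * px + py * py + pz * pz)
    return abs(pz)
```

A point 3 units to the side and 4 units deep measures 4 in perpendicular mode but 5 in radial mode.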
Type
This control is used to determine the type of falloff applied to the fog.
Common Controls
Settings Tab
The Settings tab controls are common to many 3D nodes, and their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Image Plane 3D [3Im]
Inputs
Of the two inputs on this node, the material input is the primary connection you use to add an image
to the planar geometry created in this node.
— SceneInput: This orange input expects a 3D scene. As this node creates flat, planar geometry,
this input is not required.
— MaterialInput: The green-colored material input accepts either a 2D image or a 3D material. It
provides the texture and aspect ratio for the rectangle based on the connected source such as
a Loader node in Fusion Studio or a MediaIn node in DaVinci Resolve. The 2D image is used as a
diffuse texture map for the basic material tab in the Inspector. If a 3D material is connected, then
the basic material tab is disabled.
Controls Tab
Most of the Controls tab is taken up by common controls. The Image Plane specific controls at the top
of the Inspector allow minor adjustments.
Lock Width/Height
When checked, the subdivision of the plane is applied evenly in X and Y. When unchecked, there are
two sliders for individual control of the subdivisions in X and Y. This defaults to on.
Subdivision Level
Use the Subdivision Level slider to set the number of subdivisions used when creating the image
plane. If the Open GL viewer and renderer are set to Vertex lighting, the more subdivisions in the
mesh, the more vertices are available to represent the lighting. So, high subdivisions can be useful
when working interactively with lights.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object when using the
OpenGL renderer.
Locator 3D [3Lo]
When the Locator is provided with a camera and the dimensions of the output image, it transforms
the coordinates of a 3D control into 2D screen space. The 2D position is exposed as a numeric output
that can be connected to/from other nodes. For example, to connect the center of an ellipse to the
2D position of the Locator, right-click on the Mask center control and select Connect To > Locator 3D
> Position.
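The underlying mapping is an ordinary camera projection: divide by depth, then scale into the output image. A minimal pinhole sketch under assumed conventions (camera at the origin looking down -Z, focal length in image-plane units; this is illustrative math, not Fusion's camera model):

```python
def project_to_screen(px, py, pz, focal, width, height):
    """Project a camera-space 3D point to pixel coordinates -- the
    kind of 3D-to-2D transform the Locator performs using the
    selected camera and output resolution. Illustrative only."""
    if pz >= 0:
        raise ValueError("point is behind the camera")
    x = focal * px / -pz    # perspective divide
    y = focal * py / -pz
    return (x + 0.5) * width, (y + 0.5) * height

# A point on the optical axis lands at the image center:
center = project_to_screen(0.0, 0.0, -10.0, 1.0, 1920, 1080)
```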
Inputs
Two inputs accept 3D scenes as sources. The orange scene input is required, while the green Target
input is optional.
— SceneInput: The required orange scene input accepts the output of a 3D scene. This scene should contain the object or point in 3D space that you want to convert to 2D coordinates.
— Target: The optional green target input accepts the output of a 3D scene. When provided, the
transform center of the scene is used to set the position of the Locator. The transformation
controls for the Locator become offsets from this position.
If an object is connected to the Locator node’s target input, the Locator is positioned at the object’s
center, and the Transformation tab’s translation XYZ sliders function in the object’s local coordinate
space instead of global scene space. This is useful for tracking an object’s position despite any
additional transformations applied further downstream.
Inspector
Locator 3D controls
Controls Tab
Most of the controls for the Locator 3D are cosmetic, dealing with how the locator appears and
whether it is rendered in the final output. However, the Camera Settings are critical to getting the
results you’re looking for.
Size
The Size slider is used to set the size of the Locator’s onscreen crosshair.
Color
A basic Color control is used to set the color of the Locator’s onscreen crosshair.
Matte
— Is Matte: When activated, objects whose pixels fall behind the matte object’s
pixels in Z do not get rendered.
— Opaque Alpha: Sets the Alpha value of the matte object to 1. This checkbox is visible only when
the Is Matte option is enabled.
— Infinite Z: Sets the value in the Z-channel to infinity. This checkbox is visible only when the Is
Matte option is enabled.
Sub ID
The Sub ID slider can be used to select an individual subelement of certain geometry, such as an
individual character produced by a Text 3D node or a specific copy created by a Duplicate 3D node.
Make Renderable
Defines whether the Locator is rendered as a visible object by the OpenGL renderer. The software
renderer is not currently capable of rendering lines and hence ignores this option.
Unseen by Camera
This checkbox control appears when the Make Renderable option is selected. If the Unseen by Camera
checkbox is selected, the Locator is visible in the viewers but not rendered into the output image by
the Renderer 3D node.
Camera
This drop-down control is used to select the Camera in the scene that defines the screen space used
for 3D to 2D coordinate transformation.
Common Controls
Transform and Settings tabs
The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be
found in “The Common Controls” section at the end of this chapter.
Merge 3D Introduction
The Merge 3D node is the primary node in Fusion that you use to combine separate 3D elements into
the same 3D environment.
For example, in a scene created with an image plane, a camera, and a light, the camera would not be
able to see the image plane and the light would not affect the image plane until all three objects are
introduced into the same environment using the Merge 3D node.
The Merge provides the standard transformation controls found on most nodes in Fusion’s 3D suite.
Unlike those nodes, changes made to the translation, rotation, or scale of the Merge affect all the objects
connected to the Merge. This behavior forms the basis for all parenting in Fusion’s 3D environment.
Inputs
The Merge node displays only two inputs initially, but as each input is connected a new input appears
on the node, ensuring there is always one free to add a new element into the scene.
— SceneInput[#]: These multicolored inputs are used to connect image planes, 3D cameras, lights,
entire 3D scenes, as well as other Merge 3D nodes. There is no limit to the number of inputs this
node can accept. The node dynamically adds more inputs as needed, ensuring that there is always
at least one input available for connection.
Multiple Merge 3D nodes can be strung together to control lighting or for neater organization. The
last Merge 3D in a string must connect to a Renderer 3D to be output as a 2D image.
Merge 3D with a connected Image Plane, FBX Mesh object, SpotLight, and camera
Merge 3D controls
Controls Tab
The Controls tab is used only to pass through any lights connected to the Merge 3D node.
Common Controls
Transform and Settings Tabs
The remaining controls for the Transform and Settings tabs are common to most 3D nodes. Their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Override 3D [3Ov]
Inputs
— SceneInput: The orange Scene input accepts the output of a Merge 3D node or any node
creating a 3D scene.
Inspector
Override 3D controls
Do [Option]
Enables the override for this option.
[Option]
If the Do [Option] checkbox is enabled, then the control for the property itself becomes visible.
The control values of the properties for all upstream objects are overridden by the new value.
Common Controls
Settings Tabs
The Settings tab includes controls common to most 3D nodes. Their descriptions can be found in
“The Common Controls” section at the end of this chapter.
When produced by 3D tracking software, the points typically represent each of the patterns tracked
to create the 3D camera path. These point clouds can be used to identify a ground plane and to
orient other 3D elements with the tracked image. The Point Cloud 3D node creates a point cloud
either by importing a file from a 3D tracking application or generating it when you use the Camera
Tracker node.
NOTE: A null object is an invisible 3D object that has all the same transform properties of a
visible 3D object.
Inputs
The Point Cloud has only a single input for a 3D scene.
Inspector
Controls Tab
The Controls tab is where you can import the point cloud from a file and control its appearance in
the viewer.
Style
The Style menu allows you to display the point cloud as cross hairs or points in the viewer.
Size X/Y/Z
These sliders can be used to increase the size of the onscreen crosshairs used to represent
each point.
Density
This slider defines the probability that any given point is displayed. If the value is 1, then all points are
displayed. A value of 0.2 shows roughly one point in five.
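The probability-based behavior described above can be sketched as follows. This is a hypothetical model for illustration only; the function name and the use of Python's random module are assumptions, not Fusion's implementation:

```python
import random

def display_points(points, density, seed=0):
    # Each point is shown with probability equal to the Density value.
    # A density of 1.0 shows every point; 0.2 shows roughly one in five.
    rng = random.Random(seed)
    return [p for p in points if rng.random() < density]

points = list(range(1000))
all_shown = display_points(points, 1.0)    # all 1000 points displayed
none_shown = display_points(points, 0.0)   # no points displayed
```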
Color
Use the standard Color control to set the color of onscreen crosshair controls.
Make Renderable
Determines whether the point cloud is visible in the OpenGL viewer and in final renderings made
by the OpenGL renderer. The software renderer does not currently support rendering of visible
crosshairs for this node.
Unseen by Camera
This checkbox control appears when the Make Renderable option is selected. If the Unseen by
Camera checkbox is selected, the point cloud is visible in the viewers but not rendered into the
output image by the Renderer 3D node.
Common Controls
Transform and Settings Tabs
The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be
found in “The Common Controls” section at the end of this chapter.
Frequently, one or more of the points in an imported point cloud is manually assigned to track the
position of a specific feature. These points usually have names that distinguish them from the rest of
the points in the cloud. To see the current name for a point, hover the mouse pointer directly over a
point, and after a moment a small tooltip appears with the name of the point.
When the Point Cloud 3D node is selected, a submenu is added to the viewer’s contextual menu with
several options that make it simple to locate, rename, and separate these points from the rest of the
point cloud.
— Find: Selecting this option from the viewer contextual menu opens a dialog to search for and
select a point by name. Each point that matches the pattern is selected.
— Rename: Rename any point by selecting Rename from the contextual menu. Type the new name
into the dialog that appears and press Return. The point is given that name with a four-digit
number appended. For example, entering the name "window" produces window0000, and multiple
points would be named window0000, window0001, and so on. Names must be valid Fusion identifiers
(i.e., no spaces allowed, and the name cannot start with a number).
— Delete: Selecting this option deletes the currently selected points.
— Publish: Normally, the exact position of a point in the cloud is not exposed. To expose the
position, select the points, and then select the Publish option from this contextual menu.
This adds a coordinate control to the control panel for each published point that displays the
point’s current location.
Projector 3D [3Pj]
Projected textures can be allowed to "slide" across the object if the object moves relative to the
Projector 3D. Alternatively, the two can be grouped with a Merge 3D so they move as one and the
texture remains locked to the object.
The Projector 3D node’s capabilities and restrictions are best understood if the Projector is
considered to be a variant on the SpotLight node. The fact that the Projector 3D node is actually a light
has several important consequences when used in Light or Ambient Light projection mode:
To project re-lightable textures or textures for non-diffuse color channels (like Specular Intensity or
Bump), use the Texture projection mode instead:
— Projections in Texture mode only strike objects that use the output of the Catcher node for all or
part of the material applied to that object.
— Texture mode projections clip the geometry according to the Alpha channel of the
projected image.
See the section for the Catcher node for additional details.
— SceneInput: The orange scene input accepts a 3D scene. If a scene is connected to this input,
then transformations applied to the spotlight also affect the rest of the scene.
— ProjectiveImage: The white input expects a 2D image to be used for the projection. This
connection is required.
Inspector
Projector 3D controls
Color
The input image is multiplied by this color before being projected into the scene.
Intensity
Use this slider to set the Intensity of the projection when the Light and Ambient Light projection
modes are used. In Texture mode, this option scales the Color values of the texture after multiplication
by the color.
Decay Type
A projector defaults to No Falloff, meaning that its light has equal intensity on geometry regardless of
the distance from the projector to the geometry. To cause the intensity to fall off with distance, set the
Decay Type to either Linear or Quadratic.
Angle
The Cone Angle of the node refers to the width of the cone where the projector emits its full intensity.
The larger the value, the wider the cone, up to a limit of 90 degrees.
Fit Method
The Fit Method determines how the projection is fitted within the projection cone.
The first thing to know is that although this documentation may call it a “cone,” the Projector 3D and
Camera 3D nodes do not project an actual cone; it’s more of a pyramid of light with its apex at the
camera/projector. The Projector 3D node always projects a square pyramid of light—i.e., its X and
Y angles of view are the same. The pyramid of light projected by the Camera 3D node can be non-
square depending on what the Film Back is set to in the camera. The aspect of the image connected
into the Projector 3D/Camera 3D does not affect the X/Y angles of the pyramid, but rather the image
is scaled to fit into the pyramid based upon the fit options.
When the aspect of the pyramid (AovY/AovX) and the aspect of the image (height * pixelAspectY)/
(width * pixelAspectX) are the same, the fit options all do the same thing, and the choice does not
matter. However, when the aspects of the image and the pyramid (as determined by the Film Back
settings in Camera 3D) are different, the fit options become important.
For example, Fit by Width fits the width of the image across the width of the Camera 3D pyramid.
In this case, if the image has a greater aspect ratio than the aspect of the pyramid, some of the
projection extends vertically outside of the pyramid.
— Inside: The image is uniformly scaled so that its largest dimension fits inside the cone. Another
way to think about this is that it scales the image as big as possible subject to the restriction that
the image is fully contained within the pyramid of the light. This means, for example, that nothing
outside the pyramid of light ever receives any projected light.
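The fit arithmetic can be sketched numerically with normalized pyramid and image dimensions. This is a generic reconstruction for illustration; the function name and the Outside mode are assumptions, not Fusion's API:

```python
def fit_scale(pyr_w, pyr_h, img_w, img_h, method):
    # Scale factors that would match the image's width or height
    # to the pyramid of light's width or height.
    by_width = pyr_w / img_w
    by_height = pyr_h / img_h
    if method == "Width":
        return by_width
    if method == "Height":
        return by_height
    if method == "Inside":   # largest scale keeping the image fully inside
        return min(by_width, by_height)
    if method == "Outside":  # smallest scale fully covering the pyramid
        return max(by_width, by_height)
    raise ValueError(method)

# A square projector pyramid (1 x 1) and a 2:1 image: Inside shrinks the
# image by half so its 2-unit width fits the 1-unit pyramid.
scale = fit_scale(1.0, 1.0, 2.0, 1.0, "Inside")   # 0.5
```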
Projection Mode
— Light: Projects the texture as a diffuse/specular light.
— Ambient Light: Uses an ambient light for the projection.
— Texture: When used in conjunction with the Catcher node, this mode allows re-lightable texture
projections. The projection strikes only objects that use the catcher material as part of their
material shaders.
One useful trick is to connect a Catcher node to the Specular Texture input on a 3D Material node
(such as a Blinn). This causes any object using the Blinn material to receive the projection as part
of the specular highlight. This technique can be used in any material input that uses texture maps,
such as the Specular and Reflection maps.
Shadows
Since the projector is based on a spotlight, it is also capable of casting shadows using shadow maps.
The controls under this reveal are used to define the size and behavior of the shadow map.
— Enable Shadows: The Enable Shadows checkbox should be selected if the light is to produce
shadows. This defaults to selected.
— Shadow Color: Use this standard Color control to set the color of the shadow.
This defaults to black (0, 0, 0).
— Density: The Shadow Density determines the transparency of the shadow. A density of 1.0
produces a completely opaque shadow, whereas lower values make the shadow more transparent.
— Shadow Map Size: The Shadow Map Size control determines the size of the bitmap used to create
the shadow map. Larger values produce more detailed shadow maps at the expense of memory
and performance.
— Shadow Map Proxy: The Shadow Map Proxy determines the size of the shadow map used for
proxy and auto proxy calculations. A value of 0.5 would use a 50% shadow map.
— Multiplicative/Additive Bias: Shadows are essentially textures applied to objects in the scene,
so there is occasionally Z-fighting, where portions of the object that should be receiving the
shadows render over the top of the shadow instead. Bias works by adding a small depth offset
that moves the shadow away from the surface it is shadowing, eliminating the Z-fighting. Too little
bias and objects can self-shadow; too much bias and the shadow can become separated from the
surface. Adjust the multiplicative bias first, then fine tune the result using the additive bias control.
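In classic shadow mapping, which the description above matches, a point is shadowed when its depth from the light exceeds the depth stored in the shadow map; the bias terms offset that comparison. The following is a generic sketch of that idea, not Fusion's internal code:

```python
def is_shadowed(depth_from_light, shadow_map_depth,
                multiplicative_bias=1.0, additive_bias=0.0):
    # Bias pushes the stored depth away from the surface so that a surface
    # does not incorrectly shadow itself (Z-fighting, or "shadow acne").
    return depth_from_light > shadow_map_depth * multiplicative_bias + additive_bias

# With no bias, a tiny precision error makes a surface shadow itself:
acne = is_shadowed(5.0001, 5.0)                       # True: false self-shadowing
fixed = is_shadowed(5.0001, 5.0, additive_bias=0.01)  # False: bias removes the acne
```

A genuinely occluded point (much deeper than the stored depth) is still shadowed after the bias is applied, which is why a small bias is safe to add.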
— Softness Falloff: The Softness Falloff slider appears when the Softness is set to Variable.
This slider controls how fast the softness of shadow edges grows with distance. More precisely,
it controls how fast the shadow map filter size grows based on the distance between shadow
caster and receiver. Its effect is mediated by the values of the Min and Max Softness sliders.
— Min Softness: The Min Softness slider appears when the Softness is set to Variable.
This slider controls the minimum softness of the shadow. The closer the shadow is to the object
casting it, the sharper it is, up to the limit set by this slider.
— Max Softness: The Max Softness slider appears when the Softness is set to Variable.
This slider controls the maximum softness of the shadow. The farther the shadow is from the
object casting it, the softer it is, up to the limit set by this slider.
Common Controls
Transform and Settings Tabs
The remaining Transform and Settings tabs are common to many 3D nodes. Their descriptions can be
found in “The Common Controls” section at the end of this chapter.
The software render engine uses the system’s CPU only to produce the rendered images. It is usually
much slower than the OpenGL render engine, but produces consistent results on all machines,
making it essential for renders that involve network rendering. The Software mode is required to
produce soft shadows, and generally supports all available illumination, texture, and material features.
The OpenGL render engine employs the GPU processor on the graphics card to accelerate the
rendering of the 2D images. The output may vary slightly from system to system, depending on the
exact graphics card installed. The graphics card driver can also affect the results from the OpenGL
renderer. The OpenGL render engine's speed makes it possible to provide customized supersampling
and realistic 3D depth of field options. The OpenGL renderer cannot generate soft shadows; for soft
shadows, the software renderer is recommended.
Like most nodes, the Renderer’s motion blur settings can be found under the Common Controls
tab. Be aware that scenes containing particle systems require that the Motion Blur settings on the
pRender nodes exactly match the settings on the Renderer 3D node.
Otherwise, the subframe renders conflict, producing unexpected (and incorrect) results.
NOTE: The OpenGL renderer respects the Color Depth option in the Image tab of the
Renderer 3D node. This can cause slowdowns on certain graphics cards when rendering to
int16 or float32.
Inputs
The Renderer 3D node has two inputs on the node. The main scene input takes in the Merge 3D or
other 3D nodes that need to be converted to 2D. The effect mask limits the Renderer 3D output.
— SceneInput: The orange scene input is a required input that accepts a 3D scene that you want to
convert to 2D.
— EffectMask: The blue effects mask input uses a 2D image to mask the output of the node.
Renderer 3D connected directly after a Merge 3D, rendering the 3D scene to a 2D image
Inspector
Render 3D controls
Eye
The Eye menu is used to configure rendering of stereoscopic projects. The Mono option ignores
the stereoscopic settings in the camera. The Left and Right options translate the camera using the
stereo Separation and Convergence options defined in the camera to produce either left- or right-eye
outputs. The Stacked option places the two images one on top of the other instead of side by side.
Reporting
The first two checkboxes in this section can be used to determine whether the node prints warnings
and errors produced while rendering to the console. The second set of checkboxes tells the node
whether it should abort rendering when a warning or error is encountered. The default for this node
enables all four checkboxes.
Renderer Type
This menu lists the available render engines. Fusion provides three: the software renderer,
OpenGL renderer, and the OpenGL UV render engine. Additional renderers can be added via third-
party plugins.
All the controls found below this drop-down menu are added by the render engine. They may change
depending on the options available to each renderer. So, each renderer is described in its own
section below.
Software Controls
Output Channels
Besides the usual Red, Green, Blue, and Alpha channels, the software renderer can also embed the
following channels into the image. Enabling additional channels consumes additional memory and
processing time, so these should be used only when required.
— RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of
the image. These channels are required, and they cannot be disabled.
— Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that
represents the distance of each pixel from the camera. Note that the Z-channel values cannot
include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used
for this pixel.
— Coverage: This option enables rendering of the Coverage channel. The Coverage channel
contains information about which pixels in the Z-buffer provide coverage (are overlapping with
other objects). This helps nodes that use the Z-buffer to provide a small degree of anti-aliasing.
The value of the pixels in this channel indicates, as a percentage, how much of the pixel is
composed of the foreground object.
— BgColor: This option enables rendering of the BgColor channel. This channel contains the color
values from objects behind the pixels described in the Coverage channel.
— Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels
contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. A color
channel containing values in a range from [–1,1] represents each axis.
Lighting
— Enable Lighting: When the Enable Lighting checkbox is selected, objects are lit by any lights in
the scene. If no lights are present, all objects are black.
— Enable Shadows: When the Enable Shadows checkbox is selected, the renderer produces
shadows, at the cost of some speed.
OpenGL Controls
— RGBA: This option tells the renderer to produce the Red, Green, Blue, and Alpha color channels of
the image. These channels are required, and they cannot be disabled.
— Z: This option enables rendering of the Z-channel. The pixels in the Z-channel contain a value that
represents the distance of each pixel from the camera. Note that the Z-channel values cannot
include anti-aliasing. In pixels where multiple depths overlap, the frontmost depth value is used
for this pixel.
— Normal: This option enables rendering of the X, Y, and Z Normals channels. These three channels
contain pixel values that indicate the orientation (direction) of each pixel in the 3D space. A color
channel containing values in a range from [–1,1] represents each axis.
— TexCoord: This option enables rendering of the U and V mapping coordinate channels. The pixels
in these channels contain the texture coordinates of the pixel. Although texture coordinates are
processed internally within the 3D system as three-component UVW, Fusion images store only UV
components. These components are mapped into the Red and Green color channels.
— ObjectID: This option enables rendering of the ObjectID channel. Each object in the 3D
environment can be assigned a numeric identifier when it is created. The pixels in this floating-
point image channel contain the values assigned to the objects that produced the pixel. Empty
pixels have an ID of 0, and the channel supports values as high as 65534. Multiple objects can
share a single Object ID. This buffer is useful for extracting mattes based on the shapes of objects
in the scene.
— MaterialID: This option enables rendering of the Material ID channel. Each material in the 3D
environment can be assigned a numeric identifier when it is created. The pixels in this floating-
point image channel contain the values assigned to the materials that produced the pixel. Empty
pixels have an ID of 0, and the channel supports values as high as 65534. Multiple materials can
share a single Material ID. This buffer is useful for extracting mattes based on a texture—for
example, a mask containing all the pixels that comprise a brick texture.
Anti-Aliasing
Anti-aliasing can be enabled for each channel through the Channel menu. It produces an output
image with higher-quality anti-aliasing by brute force: rendering a much larger image and then
rescaling it down to the target resolution. The exact same results could be achieved by rendering a
larger image in the first place and then using a Resize node to bring it to the desired resolution;
however, the supersampling built into the renderer offers two distinct advantages over this method.
The rendering is not restricted by memory or image size limitations. For example, consider the steps
to create a float-16 1920 x 1080 image with 16x supersampling. Using the traditional Resize node
would require first rendering the image with a resolution of 30720 x 17280, and then using a Resize to
scale this image back down to 1920 x 1080. Simply producing the image would require nearly 4 GB of
memory. When anti-aliasing is performed on the GPU, the OpenGL renderer can use tile rendering to
significantly reduce memory usage.
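The memory figure quoted above can be verified with a little arithmetic, assuming a four-channel RGBA image at 2 bytes per float-16 channel:

```python
width, height, factor = 1920, 1080, 16
channels, bytes_per_channel = 4, 2             # RGBA at float-16

ss_width = width * factor                      # 30720
ss_height = height * factor                    # 17280
total_bytes = ss_width * ss_height * channels * bytes_per_channel
gib = total_bytes / 2**30                      # just under 4 GiB
```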
The GL renderer can perform the rescaling of the image directly on the GPU more quickly than
the CPU can manage it. Generally, the more GPU memory the graphics card has, the faster the
operation is performed.
Because of hardware limitations, point geometry (particles) and lines (locators) are always rendered
at their original size, independent of supersampling. This means that these elements are scaled down
from their original sizes, and likely appear much thinner than expected.
TIP: A supersampled Z-buffer sometimes improves quality, but in other cases, such as when
using the Merge node's PerformDepthMerge option, it can make things worse.
Do not mistake anti-aliasing for improved quality. Anti-aliasing an aux channel does not make it
better quality; in fact, in many cases it can make the results much worse. The only aux channels we
recommend enabling anti-aliasing on are WorldCoord and Z.
Enable (LowQ/HiQ)
These two checkboxes are used to enable anti-aliasing of the rendered image.
The rate doesn’t exactly define the number of samples done per destination pixel; the width of the
reconstruction filter used may also have an impact.
The functions of these filters are shown in the image above. From left to right these are:
— Bi-Spline (cubic): Produces better results with continuous-tone images but is slower than
Quadratic. If the images have fine detail in them, the results may be blurrier than desired.
— Catmull-Rom: Produces good results with continuous-tone images that are scaled down,
producing sharp results with finely detailed images.
— Bessel: Similar to the Sinc filter but may be slightly faster.
Window Method
The Window Method menu appears only when the reconstruction filter is set to Sinc or Bessel.
Accumulation Effects
Accumulation effects are used for creating depth of field effects. Enable both the Enable Accumulation
Effects and Depth of Field checkboxes, and then adjust the quality and Amount sliders.
The blurrier you want the out-of-focus areas to be, the higher the quality setting you need.
A low amount setting causes more of the scene to be in focus.
The accumulation effects work in conjunction with the Focal plane setting located in the Camera
3D node. Set the Focal Plane to the same distance from the camera as the subject you want to be in
focus. Animating the Focal Plane setting creates rack focus effects.
Texturing
— Texture Depth: Lets you specify the bit depth of texture maps.
— Warn about unsupported texture depths: Enables a warning if texture maps are in an
unsupported bit depth that Fusion can’t process.
Lighting Mode
The Per-vertex lighting model calculates lighting at each vertex of the scene’s geometry. This produces
a fast approximation of the scene’s lighting but tends to produce blocky lighting on poorly tessellated
objects. The Per-pixel method uses a different approach that does not rely on the detail in the scene’s
geometry for lighting, so it generally produces superior results.
Although the per-pixel lighting with the OpenGL renderer produces results closer to that produced
by the more accurate software renderer, it still has some disadvantages. The OpenGL renderer is less
capable of dealing correctly with semi-transparency, soft shadows, and colored shadows, even with
per-pixel lighting. The color depth of the rendering is limited by the capabilities of the graphics card in
the system.
Transparency
The OpenGL renderer reveals this control for selecting which ordering method to use when
calculating transparency.
— Z Buffer (fast): This mode is extremely fast and is adequate for scenes containing only opaque
objects. The speed of this mode comes at the cost of accurate sorting; only the objects closest to
the camera are certain to be in the correct sort order. So, semi-transparent objects may not be
shown correctly, depending on their ordering within the scene.
— Sorted (accurate): This mode sorts all objects in the scene (at the expense of speed)
before rendering, giving correct transparency.
— Quick Mode: This experimental mode is best suited to scenes that almost exclusively
contain particles.
Shading Model
Use this menu to select a shading model to use for materials in the scene. Smooth is the shading
model employed in the viewers, and Flat produces a simpler and faster shading model.
Wireframe
Renders the whole scene as wireframe. This shows the edges and polygons of the objects. The edges
are still shaded by the material of the objects.
Wireframe Anti-Aliasing
Enables anti-aliasing for the Wireframe render.
Single textures/multiple destinations: Beware of cases where a single area of the texture
map is used on multiple areas of the model. This is often done to save texture memory and
decrease modeling time. An example is the texture for a person where the artist mirrored
the left side mesh/uvs/texture to produce the right side. Trying to bake in lighting in this
case won’t work.
Unwrapping more than one mesh: Unwrapping more than one mesh at once can cause
problems. The reason is that most models are authored to make maximum use of
(u,v) in [0,1] x [0,1], so in general models overlap each other in UV space.
Seams: When the UV gutter size is left at 0, this produces seams when the model is
retextured with the unwrapped texture.
The scope of the replacement can be limited using Object and Material identifiers in the Inspector.
The scope can also be limited to individual channels, making it possible to use a completely different
material on the Red channel, for example.
Since the Text 3D node does not include a material input, you can use the Replace Material to add
material shaders to the text.
Inputs
The Replace Material node has two inputs: one for the 3D scene, object, or 3D text that contains the
original material, and a material input for the new replacement material.
— SceneInput: The orange scene input accepts the 3D scene or 3D text whose material you
want to replace.
— MaterialInput: The green material input accepts either a 2D image or a 3D material. If a 2D image
is provided, it is used as a diffuse texture map for the basic material built into the node. If a 3D
material is connected, then the basic material is disabled.
Inspector
Controls Tab
Enable
This checkbox enables the material replacement. This is not the same as the red switch in the upper-
left corner of the Inspector. The red switch disables the tool altogether and passes the image on
without any modification. The enable checkbox is limited to the effect part of the tool. Other parts, like
scripts in the Settings tab, still process as normal.
Replace Mode
The Replace Mode section offers four methods of replacing each RGBA channel:
— Keep: Prevents the channel from being replaced by the input material.
— Replace: Replaces the material for the corresponding color channel.
— Blend: Blends the materials together.
— Multiply: Multiplies the channels of both inputs.
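Per channel, the four modes can be modeled as below. This is an illustrative sketch, not Fusion's shader code, and the 50/50 average used for Blend is an assumption:

```python
def combine_channel(original, replacement, mode):
    # Hypothetical per-channel model of the Replace Mode options.
    if mode == "Keep":
        return original                      # channel is not replaced
    if mode == "Replace":
        return replacement                   # new material wins outright
    if mode == "Blend":
        return 0.5 * (original + replacement)  # assumed 50/50 mix
    if mode == "Multiply":
        return original * replacement
    raise ValueError(mode)

result = combine_channel(0.8, 0.4, "Multiply")   # approximately 0.32
```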
Inputs
The Replace Normals node has a single input for the 3D scene or incoming geometry.
— SceneInput: The orange scene input accepts a 3D scene or 3D geometry that contains the
normal coordinates you want to modify.
Control Tab
The options in the Control tab deal with repairing 3D geometry and then recomputing
normals/tangents.
Recompute
Controls when normals/tangents are recomputed.
Smoothing Angle
Adjacent faces whose normals differ by an angle (in degrees) smaller than this value have their
adjoining edges smoothed across. A typical value for the Smoothing Angle is between 20 and 60
degrees. There is special case code for 0.0f and 360.0f (f stands for floating-point value). When set
to 0.0f, faceted normals are produced; this is useful for artistic effect.
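The rule can be sketched as an angle test between adjacent face normals. This is a generic reconstruction of the technique, with assumed names, not Fusion's source:

```python
import math

def edge_is_smoothed(n1, n2, smoothing_angle_deg):
    # Special cases noted in the manual: 0.0 forces faceted normals,
    # 360.0 smooths every edge.
    if smoothing_angle_deg == 0.0:
        return False
    if smoothing_angle_deg == 360.0:
        return True
    dot = sum(a * b for a, b in zip(n1, n2))
    dot = max(-1.0, min(1.0, dot))          # clamp against rounding error
    angle = math.degrees(math.acos(dot))    # angle between the unit normals
    return angle < smoothing_angle_deg

# Faces meeting at 90 degrees stay faceted with a 60-degree threshold:
edge_is_smoothed((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 60.0)   # False
```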
There are five items you should be aware of when dealing with normals.
#1 The FBX importer recomputes the normals if they don’t exist, but you can get a higher-
quality result from the Replace Normals node.
#2 Bump maps can sometimes depend on the model’s normals. Specifically, when you
simplify a complex high polygon model to a low polygon model + bump map, the normals
and bump map can become “linked.” Recomputing the normals in this case can make the
model look funny. The bump map was intended to be used with the original normals.
#3 Most primitives in Fusion are not generated with tangents; when needed, they are
generated on the fly by a Renderer 3D and cached.
#4 Tangents currently are only needed for bump mapping. If a material needs bump
mapping, then tangents are created. These tangents are created with some default
settings (e.g., Smoothing Angle, and so on). If you don’t want Fusion automatically creating
tangents, you can use the Replace Normals node to create them manually.
#5 All computations are done in the local coordinates of the geometries instead of in the
coordinate system of the Replace Normals 3D node. This can cause problems when there is
a non-uniform scale applied to the geometry before Replace Normals 3D is applied.
Common Controls
Settings Tab
The Settings tab is common to many 3D nodes. The description of these controls can be found in
“The Common Controls” section at the end of this chapter.
Inputs
There are two inputs on the Replicate 3D node: one for the destination geometry that contains the
vertices, and one for the 3D geometry you want to replicate.
— Destination: The orange destination input accepts a 3D scene or geometry with vertex positions,
either from the mesh or 3D particle animations.
— Input[#]: The input accepts the 3D scene or geometry for replicating. Once this input is
connected, a new input for alternating 3D geometry is created.
Controls Tab
Step
Defines how many positions are skipped. For example, a step of 3 means that only every third vertex
of the destination mesh is used, while a step of 1 means that all positions are used.
The Step setting helps to keep reasonable performance for big destination meshes. On parametric
geometry like a torus, it can be used to isolate certain parts of the mesh.
Point clouds are internally represented by six points once the Make Renderable option has been set.
To get a single point, use a step of 6 and set an X offset of –0.5 to get to the center of the point cloud.
Use –0.125 for Locator 3Ds. Once these have been scaled, the offset may differ.
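The skipping behavior can be sketched in Python (illustration only, not Fusion scripting):

```python
# Sketch: which destination vertex positions a given Step value selects.
def used_positions(num_vertices, step):
    # A step of N keeps every Nth position, starting at the first.
    return list(range(0, num_vertices, step))

print(used_positions(12, 3))  # [0, 3, 6, 9]
print(used_positions(12, 1))  # all 12 positions
print(used_positions(12, 6))  # [0, 6]: one per renderable point cloud point
```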
Input Mode
This menu defines the order in which multiple input scenes are replicated at the destination. If only one input scene is supplied, this setting has no effect.
— When set to Loop, the inputs are used successively. The first input is at the first position, the
second input at the second position, and so on. If there are more positions in the destination
present than inputs, the sequence is looped.
— When set to Random, a definite but random input for each position is used based on the seed in
the Jitter tab. This input mode can be used to simulate variety with few input scenes.
— The Death of Particles setting causes the input geometries’ IDs to change; therefore, their copy
order may change.
Time Offset
Use the Time Offset slider to offset any animations that are applied to the input geometry by a set
amount per copy. For example, set the value to –1.0 and use a cube set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier; the second copy shows animation
from a frame before that, etc.
This can be used with great effect on textured planes—for example, where successive frames of a
video clip can be shown.
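The per-copy frame shift can be sketched as simple arithmetic (Python, illustration only):

```python
def sampled_frame(current_frame, copy_index, time_offset):
    # Copy 0 shows the current frame; each later copy is shifted by
    # time_offset frames relative to the previous one.
    return current_frame + copy_index * time_offset

# Time Offset of -1.0 at frame 10:
print([sampled_frame(10, i, -1.0) for i in range(4)])   # [10.0, 9.0, 8.0, 7.0]
```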
Alignment
Alignment specifies how to align the copies with respect to the destination mesh normal or particle rotation.
— Not Aligned: Does not align the copy. It stays rotated in the same direction as its input mesh.
— Aligned: This mode uses the point’s normal and tries to reconstruct an upvector. It works best
with organic meshes that have unwelded vertices, like imported FBX meshes, since it has the
same rotations for vertices at the same positions. On plane geometric meshes, a gradual shift in
rotation is noticeable. For best results, it is recommended to use this method at the origin before
any transformations.
Color
Affects the diffuse color or shader of each copy based on the input’s particle color.
— Use Object Color: Does not use the color of the destination particle.
— Combine Particle Color: Uses the shader of any input mesh and modifies the diffuse color to
match the color from the destination particle.
— Use Particle Color: Replaces the complete shader of any input mesh with a default shader. Its
diffuse color is taken from the destination particle.
Translation
These three sliders tell the node how much offset to apply to each copy. An X Offset of 1 would offset each copy one unit along the X-axis from the last copy.
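As a simplified Python sketch (illustration only, ignoring the rotation, pivot, and scale controls):

```python
def copy_position(index, offset):
    # Each copy is offset from the previous one, so copy i sits at i * offset.
    return tuple(index * o for o in offset)

x_offset = (1.0, 0.0, 0.0)
print([copy_position(i, x_offset) for i in range(3)])
# copy 0 at the origin, copy 1 at x=1, copy 2 at x=2
```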
XYZ Rotation
These three rotation sliders tell the node how much rotation to apply to each copy.
XYZ Pivot
The pivot controls determine the position of the pivot point used when rotating each copy.
Lock XYZ
When the Lock XYZ checkbox is selected, any adjustment to the scale is applied to all three axes
simultaneously.
If this checkbox is disabled, the Scale slider is replaced with individual sliders for the X, Y, and Z scales.
Scale
The Scale control sets how much scaling to apply to each copy.
Jitter Tab
The Jitter tab can be used to introduce randomness to various parameters.
Random Seed/Randomize
The Random Seed is used to generate the jitter applied to the replicated objects. Two Replicate nodes
with identical settings but different random seeds will produce two completely different results. Click
the Randomize button to assign a Random Seed value.
Common Controls
Settings Tab
The Settings tab is common to many 3D nodes. The description of these controls can be found in
“The Common Controls” section at the end of this chapter.
Ribbon 3D [3Ri]
Furthermore, the way lines are drawn is completely up to the graphics card capabilities, so the ribbon
appearance may vary based on your computer’s graphics card.
Inspector
Ribbon 3D controls
Number of Lines
The number of parallel lines drawn between the start point and end point.
Line Thickness
The user interface allows line thickness to take on a floating-point value, but some graphics cards allow only integer values. Some cards may only allow lines equal to or thicker than one, or max out at a certain value.
Subdivision Level
The number of vertices on each line between the start and end points. The higher the number, the more precise and smoother any 3D displacement appears.
Ribbon Width
Determines how far apart the lines are from each other.
Start
XYZ control to set the start point of the ribbon.
End
XYZ control to set the end point of the ribbon.
Ribbon Rotation
Allows rotation of the ribbon around the virtual axis defined by the start and end points.
Anti-Aliasing
Allows you to apply anti-aliasing to the rendered lines. Using anti-aliasing isn’t necessarily
recommended. When activated, there may be gaps between the line segments. This is especially
noticeable with high values of line thickness. Again, the way lines are drawn is completely up to the
graphics card, which means that these artifacts can vary from card to card.
Common Controls
Controls, Materials, and Settings Tabs
The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the
Controls tab are common in many 3D nodes. The Materials tab and Settings tab in the Inspector are
also duplicated in other 3D nodes. These common controls are described in detail at the end of this
chapter in “The Common Controls” section.
Inputs
There are two optional inputs on the Shape 3D. The scene input can be used to combine additional
geometry with the Shape 3D, while the material input can be used to texture map the Shape
3D object.
— SceneInput: Although the Shape 3D creates its own 3D geometry, you can use the orange scene
input to combine an additional 3D scene or geometry.
— MaterialInput: The green input accepts either a 2D image or a 3D material. If a 2D image is
provided, it is used as a diffuse texture map for the basic material built into the node. If a 3D
material is connected, then the basic material is disabled.
Shape 3D controls
Controls Tab
The Controls tab allows you to select a shape and modify its geometry. Different controls appear
based on the specific shape that you choose to create.
Shape
This menu allows you to select the primitive geometry produced by the Shape 3D node. The remaining
controls in the Inspector change to match the selected shape.
— Lock Width/Height/Depth: [plane, cube] If this checkbox is selected, the width, height, and
depth controls are locked together as a single size slider. Otherwise, individual controls over the
size of the shape along each axis are provided.
— Size Width/Height/Depth: [plane, cube] Used to control the size of the shape.
Cube Mapping
When Cube is selected in the shape menu, the Cube uses cube mapping to apply the Shape node’s
texture (a 2D image connected to the material input on the node).
Radius
When a Sphere, Cylinder, Cone, or Torus is selected in the shape menu, this control sets the radius of
the selected shape.
Top Radius
When a cone is selected in the Shape menu, this control is used to define a radius for the top of a
cone, making it possible to create truncated cones.
Start/End Angle
When the Sphere, Cylinder, Cone, or Torus shape is selected in the Shape menu, this range control
determines how much of the shape is drawn. A start angle of 180° and end angle of 360° would only
draw half of the shape.
Bottom/Top Cap
When Cylinder or Cone is selected in the Shape menu, the Bottom Cap and Top Cap checkboxes are
used to determine if the end caps of these shapes are created or if the shape is left open.
Section
When the Torus is selected in the Shape menu, Section controls the thickness of the tube making up
the torus.
Subdivision Level/Base/Height
The Subdivision controls are used to determine the tessellation of the mesh on all shapes. The higher
the subdivision, the more vertices each shape has.
Wireframe
Enabling this checkbox causes the mesh to render only the wireframe for the object.
Common Controls
Controls, Materials, Transform and Settings Tabs
The controls for Visibility, Lighting, Matte, Blend Mode, Normals/Tangents, and Object ID in the
Controls tab are common in many 3D nodes. The Materials tab, Transforms tab, and Settings tab in the
Inspector are also duplicated in other 3D nodes. These common controls are described in detail at the
end of this chapter in “The Common Controls” section.
This node is very similar to the Fog 3D node, in that it is dependent on the geometry’s distance from
the camera.
Inputs
The Soft Clip includes only a single input for a 3D scene that includes a camera connected to it.
— SceneInput: The orange scene input is a required connection. It accepts a 3D scene input that
includes a Camera 3D node.
Controls Tab
The Controls tab determines how an object transitions between opaque and transparent as it moves
closer to the camera.
Enable
This checkbox can be used to enable or disable the node. This is not the same as the red switch in the
upper-left corner of the Inspector. The red switch disables the tool altogether and passes the image
on without any modification. The Enable checkbox disables only the effect of the tool; other parts, like scripts in the Settings tab, still process as normal.
Smooth Transition
By default, an object coming closer and closer to the camera slowly fades out with a linear
progression. With the Smooth Transition checkbox enabled, the transition changes to a nonlinear
curve, arguably a more natural-looking transition.
Radial
By default, the soft clipping is done based on the perpendicular distance to a plane (parallel with the
near plane) passing through the eye point. When the Radial option is checked, the Radial distance to
the eye point is used instead of the Perpendicular distance. The problem with Perpendicular distance
soft clipping is that when you move the camera about, as objects on the left or right side of the
frustum move into the center, they become less clipped, although they remain the same distance from
the eye. Radial soft clip fixes this. Sometimes Radial soft clipping is not desirable.
For example, if you apply soft clip to an object that is close to the camera, like an image plane, the
center of the image plane could be unclipped while the edges could be fully clipped because they are
farther from the eye point.
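The difference between the two distance measures can be sketched in Python (illustration only):

```python
import math

def perpendicular_distance(point, eye, forward):
    # Distance to a plane through the eye point, measured along the
    # (unit) view direction.
    d = [p - e for p, e in zip(point, eye)]
    return sum(a * b for a, b in zip(d, forward))

def radial_distance(point, eye):
    # Straight-line distance from the eye point.
    return math.dist(point, eye)

eye, forward = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)   # looking down +Z (assumed)
center = (0.0, 0.0, 5.0)
side = (3.0, 0.0, 4.0)   # off to the side, same straight-line distance

print(perpendicular_distance(center, eye, forward), radial_distance(center, eye))  # 5.0 5.0
print(perpendicular_distance(side, eye, forward), radial_distance(side, eye))      # 4.0 5.0
```

The side point is the same radial distance from the eye as the center point, yet its perpendicular distance is smaller, which is exactly why it clips differently as the camera pans.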
Transparent/Opaque Distance
Defines the range of the soft clip. The objects begin to fade in from an opacity of 0 at the Transparent
distance and are fully visible at the Opaque distance. All units are expressed as distance from the
camera along the Z-axis.
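A Python sketch of the resulting opacity ramp; the exact shape of Fusion's Smooth Transition curve is not documented, so smoothstep is used here as an assumption:

```python
def soft_clip_opacity(distance, transparent, opaque, smooth=False):
    # Opacity is 0 at the Transparent distance, 1 at the Opaque distance.
    if distance <= transparent:
        return 0.0
    if distance >= opaque:
        return 1.0
    t = (distance - transparent) / (opaque - transparent)
    if smooth:
        t = t * t * (3 - 2 * t)   # smoothstep: an assumed nonlinear curve
    return t

print(soft_clip_opacity(1.5, 1.0, 2.0))               # 0.5 (linear)
print(soft_clip_opacity(1.25, 1.0, 2.0, smooth=True)) # 0.15625
```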
Inputs
The Spherical camera node has two inputs.
— Image: This orange image input requires an image in a spherical layout, which can be any of
LatLong (2:1 equirectangular), Horizontal/Vertical Cross, or Horizontal/Vertical Strip.
— Stereo Input: The green input for a right stereo camera if you are working in stereo VR.
Controls Tab
Layout
— VCross and HCross: VCross and HCross are the six square faces of a cube laid out in a cross,
vertical or horizontal, with the forward view in the center of the cross, in a 3:4 or 4:3 image.
— VStrip and HStrip: VStrip and HStrip are the six square faces of a cube laid vertically or
horizontally in a line, ordered as Left, Right, Up, Down, Back, Front (+X, -X, +Y, -Y, +Z, -Z), in
a 1:6 or 6:1 image.
— LatLong: LatLong is a single 2:1 image in equirectangular mapping.
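As an illustration, a direction in space maps into a LatLong image like this (Python sketch; the axis conventions chosen here are assumptions, not Fusion's documented internals):

```python
import math

def latlong_uv(direction):
    # Unit direction vector -> (u, v) in a 2:1 equirectangular image.
    # Assumed conventions: +Z is forward, +Y is up.
    x, y, z = direction
    u = 0.5 + math.atan2(x, z) / (2 * math.pi)   # longitude
    v = 0.5 - math.asin(y) / math.pi             # latitude
    return u, v

print(latlong_uv((0.0, 0.0, 1.0)))   # forward maps to the image center: (0.5, 0.5)
```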
Near/Far Clip
The clipping plane is used to limit what geometry in a scene is rendered based on the object’s
distance from the camera’s focal point. This is useful for ensuring that objects that are extremely close
to the camera are not rendered and for optimizing a render to exclude objects that are too far away to
be useful in the final rendering.
The default perspective camera ignores this setting unless the Adaptively Adjust Near/Far Clip
checkbox control below is disabled.
The values are expressed in units, so a far clipping plane of 20 means that any objects more than
20 units from the camera are invisible to the camera. A near clipping plane of 0.1 means that any
objects closer than 0.1 units are also invisible.
NOTE: A smaller range between the near and far clipping plane allows greater accuracy
in all depth calculations. If a scene begins to render strange artifacts on distant objects,
try increasing the distance for the near clip plane. Use the vertical aperture size to get the
vertical angle of view and the horizontal aperture size to get the horizontal angle of view.
Stereo Method
This control allows you to adjust your stereoscopic method to your preferred working model.
Toe In
Both cameras point at a single focal point. Though the result is stereoscopic, the vertical parallax introduced by this method can cause discomfort for the audience.
Off Axis
Often regarded as the correct way to create stereo pairs, this is the default method in Fusion. Off Axis
introduces no vertical parallax, thus creating less stressful stereo images.
Parallel
The cameras are shifted parallel to each other. Since this is a purely parallel shift, there is no
Convergence Distance control. Parallel introduces no vertical parallax, thus creating less stressful
stereo images.
Eye Separation
Defines the distance between both stereo cameras. If the Eye Separation is set to a value larger
than 0, controls for each camera are shown in the viewer when this node is selected. There is no
Convergence Distance control in Parallel mode.
Convergence Distance
This control sets the stereoscopic convergence distance, defined as a point located along the Z-axis of
the camera that determines where both left and right eye cameras converge.
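For the Toe In method described above, the inward rotation of each camera follows from simple trigonometry (Python sketch, illustration only):

```python
import math

def toe_in_angle(eye_separation, convergence_distance):
    # Each camera sits +/- separation/2 off the center line and rotates
    # inward to aim at a point convergence_distance ahead.
    return math.degrees(math.atan((eye_separation / 2) / convergence_distance))

# Example values (assumed, not defaults): 0.065 units apart, converging 2 units ahead.
print(round(toe_in_angle(0.065, 2.0), 2))  # 0.93 degrees per camera
```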
Control Visibility
Allows you to selectively activate the onscreen controls that are displayed along with the camera.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Text 3D [3Txt]
The Text 3D node was based on a tool that predates the Fusion 3D environment. So, some of the
controls found in the basic primitive shapes and geometry loaders, such as many of the material,
lighting, and matte options, are not found in this node’s controls. The Text 3D node has a built-in
material, but unlike the other 3D nodes it does not have a material input. The Shading tab contains
controls to adjust the diffuse and specular components. To replace this default material with a more
advanced material, follow the Text 3D node with a Replace Material 3D node. The Override 3D node
can be used to control the lighting, visibility, and matte options for this node.
When network rendering a comp that contains Text 3D nodes, each render machine is required to
have the necessary fonts installed or the network rendering fails. Fusion does not share or copy fonts
to render slaves.
Inputs
— SceneInput: The orange scene input accepts a 3D scene that can be combined with the 3D text
created in the node.
Inspector
Text 3D controls
Styled Text
The Edit box in this tab is where the text to be created is entered. Any common character can be
typed into this box. The common OS clipboard shortcuts (Command-C or Ctrl-C to copy, Command-X
or Ctrl-X to cut, Command-V or Ctrl-V to paste) also work; however, right-clicking on the Edit box
displays a custom contextual menu with several modifiers you can add for more animation and
formatting options.
Font
Two Font menus are used to select the font family and typeface such as Regular, Bold, and Italic.
Color
This control sets the basic tint color of the text. This is the same Color control displayed in the Material
type section of the Shader tab.
Size
This control is used to increase or decrease the size of the text. This is not like selecting a point size in
a word processor. The size is relative to the width of the image.
Tracking
The Tracking parameter adjusts the uniform spacing between each character of text.
Line Spacing
Line Spacing adjusts the distance between each line of text. This is sometimes called leading in word-
processing applications.
V Anchor
The Vertical Anchor controls consist of three buttons and a slider. The three buttons are used to
align the text vertically to the top, middle, or bottom baseline of the text. The slider can be used to
customize the alignment. Setting the Vertical Anchor affects not only how the text is rotated but also the location for line spacing adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
V Justify
The Vertical Justify slider allows you to customize the vertical alignment of the text from the V Anchor
setting to full justification so it is aligned evenly along the top and bottom edges. This control is most
often used when the Layout type is set to Frame in the Layout tab.
H Anchor
The Horizontal Anchor controls consist of three buttons and a slider. The three buttons justify the text
alignment to the left edge, middle, or right edge of the text. The slider can be used to customize the
justification. Setting the Horizontal Anchor affects not only how the text is rotated but also the location for tracking adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
Direction
This menu provides options for determining the direction in which the text is to be written.
Line Direction
These menu options are used to determine the text flow from top to bottom, bottom to top, left to
right, or right to left.
Write On
This range control is used to quickly apply simple Write On and Write Off animation to the text.
To create a Write On effect, animate the End portion of the control from 0 to 1 over the length of time required. To create a Write Off effect, animate the Start portion of the range control from 0 to 1.
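The reveal can be sketched in Python (illustration only; the character rounding details are assumptions):

```python
def write_on(text, start, end):
    # The Write On range exposes the characters between Start and End,
    # each expressed as a 0-1 fraction of the text length.
    n = len(text)
    return text[int(start * n):int(round(end * n))]

# Write On: End increases over time, revealing more characters.
print([write_on("FUSION", 0.0, e) for e in (0.0, 0.5, 1.0)])
# ['', 'FUS', 'FUSION']
```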
Extrusion Depth
An extrusion of 0 produces completely 2D text. Any value greater than 0 extrudes the text to generate
text with depth.
Bevel Depth
Increase the value of the Bevel Depth slider to bevel the text. The text must have extrusion before this
control has any effect.
Bevel Width
Use the Bevel Width control to increase the width of the bevel.
Smoothing Angle
Use this control to adjust the smoothing angle applied to the edges of the bevel.
Front/Back Bevel
Use these checkboxes to enable beveling for the front and back faces of the text separately.
Custom Extrusion
In Custom mode, the Smoothing Angle controls the smoothing of normals around the edges of a text
character. The spline itself controls the smoothing along the extrusion profile. If a spline segment is
smoothed, for example by using the shortcut Shift-S, the normals are smoothed as well.
TIP: Splines can also be edited from within the Spline Editor panel. It provides a larger
working space for working with any spline including the Custom Extrusion.
Extrusion profile spline control: Do not try to go to zero size at the Front/Back face.
This results in Z-fighting caused by self-intersecting faces. To avoid this problem, make
sure the first and last point have their profiles set to 0.
Force Monospaced
This slider control can be used to override the kerning (spacing between characters) that is defined
in the font. Setting this slider to zero (the default value) causes Fusion to rely entirely on the kerning
defined with each character. A value of one causes the spacing between characters to be completely
even, or monospaced.
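The blend can be sketched as a linear interpolation (Python, illustration only; the widths are hypothetical example values):

```python
def advance(kerned_width, mono_width, force_monospaced):
    # Blend between the font's kerned advance (slider at 0) and a
    # uniform advance (slider at 1).
    t = force_monospaced
    return (1 - t) * kerned_width + t * mono_width

print(advance(0.8, 1.0, 0.0))  # 0.8  (font kerning only)
print(advance(0.8, 1.0, 1.0))  # 1.0  (fully monospaced)
print(advance(0.8, 1.0, 0.5))  # 0.9  (halfway blend)
```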
Layout Tab
The Layout Tab is used to position the text in one of four different layout types.
Layout Type
This menu selects the layout type for the text.
— Point: Point layout is the simplest of the layout modes. Text is arranged around an
adjustable center point.
— Frame: Frame layout allows you to define a rectangular frame used to align the text. The
alignment controls are used to justify the text vertically and horizontally within the boundaries of
the frame.
— Circle: Circle layout places the text around the curve of a circle or oval. Control is offered over
the diameter and width of the circular shape. When the layout is set to this mode, the Alignment
controls determine whether the text is positioned along the inside or outside of the circle’s edge,
and how multiple lines of text are justified.
— Path: Path layout allows you to shape your text along the edges of a path. The path can be used
simply to add style to the text, or it can be animated using the Position on Path control that
appears when this mode is selected.
Center X, Y, and Z
These controls are used to position the center of the layout. For instance, moving the center X, Y,
and Z parameters when the layout is set to Frame moves the position of the frame the text is within.
Size
This slider is used to control the scale of the layout element. For instance, increasing size when the
layout is set to Frame increases the frame size the text is within.
Rotation Order
These buttons allow you to select the order in which 3D rotations are applied to the text.
X, Y, and Z
These angle controls can be used to adjust the angle of the Layout element along any axis.
Position on Path
The Position on Path control is used to control the position of the text along the path. Values less than
zero or greater than one cause the text to move beyond, continuing in the same direction set by the
last two points on the path.
Transform Tab
There are actually two Transform tabs in the Text 3D Inspector. The first Transform tab is unique to
the Text 3D tool, while the second is the common Transform tab found on many 3D nodes. The Text
3D-specific Transform tab is described below since it contains some unique controls for this node.
Transform
This menu determines the portion of the text affected by the transformations applied in this tab.
Transformations can be applied to line, word, and character levels simultaneously. This menu is only
used to keep the number of visible controls to a reasonable number.
— Characters: Each character of text is transformed along its own center axis.
— Words: Each word is transformed separately on the word’s center axis.
— Lines: Each line of the text is transformed separately on that line’s center axis.
Spacing
The Spacing slider is used to adjust the amount of space between each line, word, or character. Values
less than one usually cause the characters to begin overlapping.
Pivot X, Y, and Z
This provides control over the exact position of the axis. By default, the axis is positioned at the
calculated center of the line, word, or character. The pivot control works as an offset, such that a value
of 0.1, 0.1 in this control would cause the axis to be shifted downward and to the right for each of the
text elements. Positive values in the Z-axis slider move the axis further along the axis (away from the
viewer). Negative values bring the axis of rotation closer.
X, Y, and Z
These controls can be used to adjust the angle of the text elements in any of the three dimensions.
Shear X and Y
Adjust these sliders to modify the slanting of the text elements along the X- and Y-axis.
Size X and Y
Adjust these sliders to modify the size of the text elements along the X- and Y-axis.
Shading
The Shading tab for the Text 3D node controls the overall appearance of the text and how lights affect
its surface.
Type
To use a solid color texture, select the Solid mode. Selecting the Image mode reveals a new external
input on the node that can be connected to another 2D image.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular
a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that tend to inherit their color
from the material color. The basic shader material does not provide an input for textures to control
the specularity of the object. Use nodes from the 3D Material category when more precise control is
required over the specular appearance.
Specular Intensity
Specular Intensity controls the strength of the specular highlight. If the specular intensity texture port
has a valid input, then this value is multiplied by the Alpha value of the input.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper
the falloff, and the smoother and glossier the material appears. The basic shader material does not
provide an input for textures to control the specular exponent of the object. Use nodes from the 3D
Material category when more precise control is required over the specular exponent.
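The classic Phong falloff illustrates this behavior (Python sketch; Fusion's exact shading math is not documented here):

```python
def phong_specular(cos_angle, exponent, intensity=1.0):
    # Classic Phong falloff: the cosine of the angle between the
    # reflection and view directions, raised to the exponent.
    # Higher exponents give a tighter, glossier highlight.
    return intensity * max(cos_angle, 0.0) ** exponent

# Same viewing angle, increasing exponent: the off-peak response
# falls away much faster, so the highlight shrinks.
print([round(phong_specular(0.95, e), 3) for e in (5, 50, 500)])
```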
Image Source
This control determines the source of the texture applied to the material. If the option is set to Tool,
then an input appears on the node that can be used to apply the output of a 2D node as the texture.
Selecting Clip opens a file browser that can be used to select an image or image sequence from disk.
The Brush option provides a list of clips found in the Fusion\brushes folder.
Bevel Material
This option appears only when the Use One Material checkbox control is selected. The controls under
this option are an exact copy of the Material controls above but are applied only to the beveled edge
of the text.
Common Controls
Transform and Settings Tabs
The Transform and Settings tabs in the Inspector are duplicated in other 3D nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Text 3D Modifiers
Right-clicking within the Styled Text box displays a menu with the following text modifiers. Only one
modifier can be applied to a Text 3D Styled Text box. Below is a brief list of the text specific modifiers,
but for more information see Chapter 62, “Modifiers,” in the Fusion Reference Manual.
Animate
Use the Animate command to set a keyframe on the entered text and animate the content over time.
Comp Name
Comp Name puts the name of the composition in the Styled Text box and is generally used as a quick
way to create slates.
Follower
Follower is a text modifier that can be used to ripple animation applied to the text across each
character in the text. See “Text Modifiers” at the end of this chapter.
Publish
Publish the text for connection to other text nodes.
Text Scramble
A text modifier used to randomize the characters in the text. See “Text Modifiers” at the end of
this chapter.
Text Timer
A text modifier used to count down from a specified time or to output the current date and time.
See “Text Modifiers” at the end of this chapter.
Connect To
Use this option to connect the text generated by this Text node to the published output of
another node.
Transform 3D [3Xf]
Inputs
The Transform node has a single required input for a 3D scene or 3D object.
— Scene Input: The orange scene input is connected to a 3D scene or 3D object to apply a second
set of transformation controls.
Transform 3D controls
Controls Tab
The Controls tab is the primary tab for the Transform 3D node. It includes controls to translate, rotate,
or scale all elements within a scene without requiring a Merge 3D node.
Translation
— X, Y, Z Offset: Controls are used to position the 3D element in 3D space.
Rotation
— Rotation Order: Use these buttons to select the order used to apply the rotation along each axis
of the object. For example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis,
and then the Z-axis.
— X, Y, Z Rotation: Use these controls to rotate the object around its pivot point. If the Use Target
checkbox is selected, then the rotation is relative to the position of the target; otherwise, the
global axis is used.
Pivot Controls
— X, Y, Z Pivot: A pivot point is the point around which an object rotates. Normally, an object rotates
around its own center, which is considered to be a pivot of 0,0,0. These controls can be used to
offset the pivot from the center.
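Rotation about an offset pivot amounts to translating to the pivot, rotating, and translating back (Python sketch, illustration only, shown for the Z-axis):

```python
import math

def rotate_z_about_pivot(point, pivot, degrees):
    # Translate so the pivot is at the origin, rotate about Z,
    # then translate back.
    a = math.radians(degrees)
    x, y, z = (p - q for p, q in zip(point, pivot))
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + pivot[0], ry + pivot[1], z + pivot[2])

# Rotating (2, 1, 0) by 90 degrees about pivot (1, 1, 0):
p = rotate_z_about_pivot((2.0, 1.0, 0.0), (1.0, 1.0, 0.0), 90)
print(tuple(round(c, 6) for c in p))   # (1.0, 2.0, 0.0)
```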
Scale
— X, Y, Z Scale: If the Lock X/Y/Z checkbox is checked, a single scale slider is shown. This adjusts
the overall size of the object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are
displayed to allow scaling in any dimension.
Use Target
Selecting the Use Target checkbox enables a set of controls for positioning an XYZ target. When Use
Target is enabled, the object always rotates to face the target. The rotation of the object becomes
relative to the target.
Import Transform
Opens a file browser where you can select a scene file saved or exported by your 3D application.
It supports the following file types:
dotXSI .xsi
The Import Transform button imports only transformation data. For 3D geometry, lights and cameras,
consider using the File > FBX Import option from the menus.
Transform 3D onscreen
transformation controls
Triangulate 3D [3Tri]
Inputs
The Triangulate 3D node has a single required input for a 3D scene or 3D object.
— Scene Input: The orange scene input is connected to the 3D scene or 3D object you
want to triangulate.
Triangulate 3D controls
Controls Tab
There are no controls for this node.
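As an illustration of what triangulation does, here is a simple fan triangulation of a convex polygon in Python (Fusion's internal algorithm is not documented, so this is only a sketch):

```python
def fan_triangulate(polygon_indices):
    # Split a convex polygon into triangles that all share the
    # polygon's first vertex.
    v = polygon_indices
    return [(v[0], v[i], v[i + 1]) for i in range(1, len(v) - 1)]

print(fan_triangulate([0, 1, 2, 3]))      # quad -> 2 triangles
print(fan_triangulate([0, 1, 2, 3, 4]))   # pentagon -> 3 triangles
```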
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
UV Map 3D [3UV]
NOTE: This does not directly project an image through the camera. The image to be
projected should be connected to the diffuse texture input of whatever material is assigned
to the objects. When the texture is applied, it uses the UV coordinates created by the
camera. Because this is a texture projection and not light, the Alpha channel of the texture
correctly sets the opacity of the geometry.
The projection can optionally be locked to the vertices as it appears on a selected frame.
This fails if the number of vertices in the mesh changes over time, as Fusion must be able to match up
the mesh at the reference time and the current time. To be more specific, vertices may not be created
or destroyed or reordered. So, projection locking does not work for many particle systems, or for
primitives with animated subdivisions, or with duplicate nodes using non-zero time offsets.
NOTE: The UV Map 3D node does not put a texture or material on the mesh; it only
modifies the texture coordinates that the materials use. This may be confusing because the
material usually sits upstream, as seen in the Basic Node Setup example below.
Inputs
The UV Map 3D node has two inputs: one for a 3D scene or 3D object and another optional input for a
Camera 3D node.
— Scene Input: The orange scene input is connected to the 3D scene or 3D object whose texture coordinates you want to modify.
— CameraInput: This input expects the output of the Camera 3D node. It is only visible when the
Camera Map mode menu is set to Camera.
UV Map 3D is placed after the Merge 3D, with a camera connected to line up the texture
UV Map 3D controls
Controls Tab
The UV Map 3D Controls tab allows you to select Planar, Cylindrical, Spherical, XYZ, and Cubic mapping
modes, which can be applied to basic Fusion primitives as well as imported geometry. The position,
rotation, and scale of the texture coordinates can be adjusted to allow fine control over the texture’s
appearance. An option is also provided to lock the UV produced by this node to animated geometry
according to a reference frame. This can be used to ensure that textures applied to animated
geometry do not slide.
Map Mode
The Map mode menu is used to define how the texture coordinates are created. You can think of this
menu as a way to select the virtual geometry that projects the UV space on the object.
Orientation X/Y/Z
Defines the reference axis for aligning the Map mode.
Fit
Clicking this button fits the Map mode to the bounding box of the input scene.
Size X/Y/Z
Defines the size of the projection object.
Center X/Y/Z
Defines the position of the projection object.
Rotation/Rotation Order
Use these buttons to select which order is used to apply the rotation along each axis of the object. For
example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis, and then the Z-axis.
Rotation X/Y/Z
Sets the orientation of the projection object for each axis, independent from the rotation order.
Tile U/V/W
Defines how often a texture fits into the projected UV space on the applicable axis. Note that it is the
UVW coordinates that are transformed, not the texture itself. This works best when used in
conjunction with the Create Texture node.
Flip U/V/W
Mirrors the texture coordinates around the applicable axis.
NOTE: To utilize the full capabilities of the UV Map 3D node, it helps to have a basic
understanding of how 2D images are mapped onto 3D geometry. When a 2D image
is applied to a 3D surface, it is converted into a texture map that uses UV coordinates
to determine how the image translates to the object. Each vertex on a mesh has a (U,
V) texture coordinate pair that describes the appearance the object takes when it is
unwrapped and flattened. Different mapping modes use different methods for working
out how the vertices transform into a flat 2D texture. When using the UV Map 3D node to
modify the texture coordinates on a mesh, it’s best to do so using the default coordinate
system of the mesh or primitive. So the typical workflow would look like Shape 3D > UV Map
3D > Transform 3D. The Transformation tab on the Shape node would be left to its default
values, and the Transform 3D node following the UV Map 3D does any adjustments needed
to place the node in the scene. Modifying/animating the transform of the Shape node
causes the texture to slide across the shape, which is generally undesirable. The UV Map 3D
node modifies texture coordinates per vertex and not per pixel. If the geometry the UV map
is applied to is poorly tessellated, then undesirable artifacts may appear.
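To make the mapping modes more concrete, the following Python sketch illustrates how a Planar and a Spherical mode might derive a (U, V) pair from a vertex position. This is an illustration of the general technique, not Fusion's actual implementation; the size and center parameters loosely mirror the Size and Center controls above.

```python
import math

def planar_uv(x, y, z, size=(1.0, 1.0), center=(0.0, 0.0)):
    """Planar mode: project the vertex onto the XY plane, so U and V
    come directly from the X and Y positions."""
    u = (x - center[0]) / size[0] + 0.5
    v = (y - center[1]) / size[1] + 0.5
    return (u, v)

def spherical_uv(x, y, z):
    """Spherical mode: U follows the angle around the vertical axis,
    V follows the polar angle from pole to pole."""
    r = math.sqrt(x * x + y * y + z * z) or 1.0
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(y / r) / math.pi
    return (u, v)

# A vertex on the sphere's "equator" lands in the middle of the texture.
print(spherical_uv(1.0, 0.0, 0.0))  # (0.5, 0.5)
```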
Weld 3D [3We]
Duplicated, unwelded vertices in 3D geometry can cause several problems:
— The different normals produce hard shading/lighting edges where none were intended.
— If you use Displace 3D to displace the vertices along their normals, cracks appear.
— Missing pixels or doubled-up pixels in the rendered image.
— Particles pass through the tiny invisible cracks.
Instead of round tripping back to your 3D modeling application to fix the “duplicated” vertices, the
Weld 3D node allows you to do this in Fusion. Weld 3D welds together vertices with the same or nearly
the same positions. This can be used to fix cracking issues when vertices are displaced by welding the
geometry before the Displace. There are no user controls to pick vertices. Currently, this node welds
together just position vertices; it does not weld normals, texcoords, or any other vertex stream. So,
although the positions of two vertices have been made the same, their normals still have their old
values. This can lead to hard edges in certain situations.
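The core welding idea can be sketched in a few lines of Python. This is a simplified illustration, not Fusion's algorithm: positions are snapped to a grid whose cell size is the tolerance, and vertices landing in the same cell are merged. A production implementation would also check neighboring cells so that points straddling a cell boundary still weld.

```python
def weld(vertices, tolerance=1e-3):
    """Merge vertices whose positions agree to within the tolerance.
    Returns the welded vertex list plus a remap table so that faces
    can redirect their old vertex indices to the merged ones."""
    cells = {}   # grid cell -> index into the welded list
    welded = []
    remap = []
    for v in vertices:
        key = tuple(round(c / tolerance) for c in v)
        if key not in cells:
            cells[key] = len(welded)
            welded.append(v)
        remap.append(cells[key])
    return welded, remap

# Two corners 0.0004 apart collapse into one shared vertex.
verts = [(0.0, 0.0, 0.0), (0.0004, 0.0, 0.0), (1.0, 0.0, 0.0)]
welded, remap = weld(verts)
print(len(welded), remap)  # 2 [0, 0, 1]
```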
Inputs
The Weld 3D node has a single input for a 3D scene or 3D object you want to repair.
— Scene Input: The orange scene input is connected to the 3D scene or 3D object you want to fix.
Inspector
Weld 3D controls
Controls Tab
The Controls tab for the Weld 3D node includes a simple Weld Mode menu. You can choose between
welding vertices or fracturing them.
Fracture
Fracturing is the opposite of welding, so all vertices are unwelded. This means that all polygon
adjacency information is lost. For example, an Image Plane 3D normally consists of connected quads
that share vertices. Fracturing the image plane causes it to become a bunch of unconnected quads.
Tolerance
In auto mode, the Tolerance value is automatically detected. This should work in most cases.
It can also be adjusted manually if needed.
Weld 3D is intended to be used as a mesh robustness tool and not as a mesh editing tool
to merge vertices. If you can see the gap between the vertices you want to weld in the 3D
view, you are probably misusing Weld 3D. Unexpected things may happen when you do
this; do so at your own peril.
LIMITATIONS Setting the tolerance too large can cause edges/faces to collapse to points.
If your model has detail distributed over several orders of scale, picking a tolerance value
can be hard or impossible.
For example, suppose you have a model of the International Space Station and there are
lots of big polygons and lots of really tiny polygons. If you set the tolerance too large, small
polygons that shouldn’t merge do; if you set the tolerance too small, some large polygons
won’t be merged.
Vertices that are far from the origin can fail to merge correctly. This is because bignumber
+ epsilon can exactly equal bignumber in float math. This is one reason it may be best to
merge in local coordinates and not in world coordinates.
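The precision problem is easy to demonstrate directly in Python. Doubles are used here, so the threshold is far smaller than with the 32-bit floats typical of mesh data, but the failure mode is identical: far from the origin, a small offset falls below the spacing between representable numbers and simply vanishes.

```python
big = 1.0e8    # a coordinate far from the origin
eps = 1.0e-9   # an offset a weld tolerance might care about

# Far from the origin, the offset is below the float spacing and is lost.
print(big + eps == big)   # True

# Near the origin, the very same offset survives.
print(1.0 + eps == 1.0)   # False
```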
Sometimes Weld 3D-ing a mesh can make things worse. Take Fusion’s cone, for instance.
The top vertex of the cone is currently duplicated for each adjoining face, and they all have
different normals. If you weld the cone, the top vertices merge and only have one normal,
making the lighting look weird.
WARNING Do not misuse Weld 3D to simplify (reduce the polygon count of) meshes.
It is designed to efficiently weld vertices that differ by only very small values, like a
0.001 distance.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
This can complicate connecting an object further downstream in the composition directly to the
position of an upstream object. The Coordinate Transform modifier can be added to any set of
XYZ coordinate controls and calculate the current position of a given object at any point in the
scene hierarchy.
To add a Coordinate Transform modifier, simply right-click a number field on any node and select
Modify With/CoordTransform Position from the Controls’ contextual menu.
Inspector
Target Object
This control should be connected to the 3D node that produces the original coordinates to be
transformed. To connect a node, drag and drop a node from the node tree into the Text Edit control,
or right-click the control and select the node from the contextual menu. It is also possible to type the
node’s name directly into the control.
Scene Input
This control should be connected to the 3D node that outputs the scene containing the object at the
new location. To connect a node, drag and drop a node from the node tree into the Text Edit control,
or right-click the control and select an object from the Connect To submenu.
These controls are often displayed in the lower half of the Controls tab. They appear in nodes that
create or contain 3D geometry.
Visibility
— Visible: If this option is enabled, the object is visible in the viewers and in final renders. When
disabled, the object is not visible in the viewers nor is it rendered into the output image by the
Renderer 3D node. Also, a non-visible object does not cast shadows.
Lighting
— Affected by Lights: Disabling this checkbox causes lights in the scene to not affect the object.
The object does not receive nor cast shadows, and it is shown at the full brightness of its color,
texture, or material.
— Shadow Caster: Disabling this checkbox causes the object not to cast shadows on other objects
in the scene.
— Shadow Receiver: Disabling this checkbox causes the object not to receive shadows cast by other
objects in the scene.
Matte
Enabling the Is Matte option applies a special texture, causing the object not only to become invisible
to the camera, but also to make everything that appears directly behind it (from the camera's point of
view) invisible as well.
This option overrides all textures. For more information on Fog 3D and Soft Clipping, see Chapter 25,
“3D Compositing Basics,” in the Fusion Reference Manual.
— Is Matte: When activated, objects whose pixels fall behind the matte object’s pixels in Z do not
get rendered. Two additional options are displayed when the Is Matte checkbox is activated.
— Opaque Alpha: When the Is Matte checkbox is enabled, the Opaque Alpha checkbox sets the
Alpha value of the matte object to 1.
— Infinite Z: This option sets the value in the Z-channel to infinite. This checkbox is visible only when
the Is Matte option is enabled.
Blend Mode
A Blend mode specifies which method is used by the renderer when combining this object with the
rest of the scene. The blend modes are essentially identical to those listed in the section for the 2D
Merge node. For a detailed explanation of each mode, see the section for that node.
— OpenGL Blend Mode: Use this menu to select the blending mode that is used when the
geometry is processed by the OpenGL renderer in the Renderer 3D node. This is also the mode
used when viewing the object in the viewers. Currently the OpenGL renderer supports a limited
number of blending modes.
— Software Blend Mode: Use this menu to select the blending mode that is used when the
geometry is processed by the software renderer. Currently, the software renderer supports all the
modes described in the Merge node documentation, except for the Dissolve mode.
Normal/Tangents
Normals are imaginary lines perpendicular to each point on the surface of an object. They are used
to illustrate the exact direction and orientation of every polygon on 3D geometry, which determines
how the object gets shaded. Tangents are lines that exist along the surface's plane, tangent to a point
on the surface. The tangent lines are used to describe the direction of textures you apply to the
surface of 3D geometry.
— Scale: This slider increases or decreases the length of the vectors for both normals and tangents.
— Show Normals: Displays blue vectors typically extending outside the surface of the geometry.
These normal vectors help indicate how different areas of the surface are illuminated based on
the angle at which the light hits it.
— Show Tangents: Displays green vectors for Y and red vectors for X. The X and Y vectors represent
the direction of the image or texture you are applying to the geometry.
Object ID
Use this slider to select which ID is used to create a mask from the object in an image. Use the
Sample button in the same way as the Color Picker to grab IDs from the image displayed in the
viewer. The image or sequence must have been rendered from a 3D software package with those
channels included.
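The masking operation itself is a simple per-pixel comparison. The sketch below (illustrative Python, not Fusion code) shows how a mask could be built from an ObjectID auxiliary channel:

```python
def id_mask(id_channel, target_id):
    """Return a mask image that is 1.0 wherever the pixel's object ID
    matches target_id, and 0.0 everywhere else."""
    return [[1.0 if px == target_id else 0.0 for px in row]
            for row in id_channel]

# A tiny 2x3 ObjectID channel containing IDs 0, 1, and 2.
ids = [[0, 2, 2],
       [0, 2, 1]]
print(id_mask(ids, 2))  # [[0.0, 1.0, 1.0], [0.0, 1.0, 0.0]]
```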
The controls in the Materials tab are used to determine the appearance of the 3D object when lit.
Most of these controls directly affect how the object interacts with light using a basic shader. For more
advanced control over the object's appearance, you can use tools from the 3D Materials category of
the Effects Library. These tools can be used to assemble a more finely detailed and precise shader.
When a shader is constructed using the 3D Material tools and connected to the 3D Object’s material
input, the controls in this tab are replaced by a label that indicates that an external material is
currently in use.
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights.
Diffuse Color
The Diffuse Color determines the basic color of an object when the surface of that object is either
lit indirectly or lit by an ambient light. If a valid image is provided to the tool's diffuse texture input,
then the RGB values provided here are also multiplied by the color values of the pixels in the diffuse
texture. The Alpha channel of the diffuse material can be used to control the transparency of
the surface.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent and allowing hidden objects to be seen through the
material.
Specular
The Specular section provides controls for determining the characteristics of light that reflects toward
the viewer. These controls affect the appearance of the specular highlight that appears on the surface
of the object.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular
a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that tend to inherit their color
from the material color. The basic shader material does not provide an input for textures to control
the specularity of the object. Use tools from the 3D Material category when more precise control is
required over the specular appearance.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture input
has a valid connection, then this value is multiplied by the Alpha value of the input.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper
the falloff, and the smoother and glossier the material appears. The basic shader material does not
provide an input for textures to control the specular exponent of the object. Use tools from the 3D
Material category when more precise control is required over the specular exponent.
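The effect of the exponent can be seen in a Blinn-style highlight formula, sketched below in Python. This is a textbook illustration, not Fusion's shader code; n_dot_h stands for the dot product of the surface normal and the half-angle vector.

```python
def specular(n_dot_h, intensity=1.0, exponent=10.0):
    """Blinn-style highlight: raising N.H to the exponent narrows the
    highlight, so greater exponents look glossier."""
    return intensity * max(0.0, n_dot_h) ** exponent

# Slightly off the highlight's center, a higher exponent falls off faster.
print(specular(0.9, exponent=10.0) > specular(0.9, exponent=100.0))  # True
```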
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere
casts a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate opacity option. Opacity determines how transparent the actual surface is when
it is rendered. Fusion allows adjusting both opacity and transmittance separately. This might be a bit
counter-intuitive to artists who are unfamiliar with 3D software at first. It is possible to have a surface
that is fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is transmitted through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of the red, green, and blue
light passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits
100% of the red light arriving at the surface but none of the green or blue light. This allows
“stained glass” shadows.
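Conceptually, attenuation is a per-channel multiply of the light passing through the surface, as this illustrative Python sketch shows (a schematic of the standard math, not Fusion's renderer code):

```python
def transmit(light_rgb, attenuation_rgb):
    """Scale each channel of the incoming light by the matching
    attenuation channel to get the color reaching the shadow."""
    return tuple(l * a for l, a in zip(light_rgb, attenuation_rgb))

# Attenuation (1, 0, 0): white light throws a red "stained glass" shadow.
print(transmit((1.0, 1.0, 1.0), (1.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)
```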
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings more of the diffuse color + texture color into the shadow. Note
that the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a
solid Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow.
Setting this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene.
If turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This makes the surface effectively two-sided by adding a second set of normals facing the opposite
direction on the back side of the surface. This is normally off, to increase rendering speed, but can
be turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane
that has its normals facing the opposite way.
Fusion does exactly the same thing as 3D applications when you make a surface two sided. The
confusion about what two-sided lighting does arises because Fusion does not cull backfacing
polygons by default. If you revolve around a one-sided plane in Fusion, you still see it from the
backside (but you are seeing the frontside bits duplicated through to the backside as if it were
transparent). Making the plane two sided effectively adds a second set of normals to the backside of
the plane.
Note that this can become rather confusing once you make the surface transparent, as the same rules
still apply and produce a result that is counterintuitive. If you view from the frontside a transparent
two-sided surface illuminated from the backside, it looks unlit.
Material ID
This control is used to set the numeric identifier assigned to this material. The Material ID is an integer
number that is rendered into the MatID auxiliary channel of the rendered image when the Material
ID option is enabled in the Renderer 3D tool. For more information, see Chapter 25, “3D Compositing
Basics,” in the Fusion Reference Manual.
Many tools in the 3D category include a Transform tab used to position, rotate, and scale the object
in 3D space.
Translation
X, Y, Z Offset
These controls can be used to position the 3D element.
Rotation
Rotation Order
Use these buttons to select which order is used to apply rotation along each axis of the object.
For example, XYZ would apply the rotation to the X axis first, followed by the Y axis and then finally
the Z axis.
X, Y, Z Rotation
Use these controls to rotate the object around its pivot point. If the Use Target checkbox is selected,
then the rotation is relative to the position of the target; otherwise, the global axis is used.
Pivot
X, Y, Z Pivot
A Pivot point is the point around which an object rotates. Normally, an object rotates around its own
center, which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from
the center.
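The pivot's role is easiest to see in the order of operations: the pivot is moved to the origin, the rotation is applied, and the pivot offset is restored. The Python sketch below illustrates this for a single-axis (Z) rotation; it is a schematic of the math, not Fusion's transform code.

```python
import math

def rotate_z_about_pivot(point, pivot, degrees):
    """Rotate a point around the Z axis through an arbitrary pivot:
    translate so the pivot sits at the origin, rotate, translate back."""
    a = math.radians(degrees)
    x, y, z = (c - p for c, p in zip(point, pivot))
    return (pivot[0] + x * math.cos(a) - y * math.sin(a),
            pivot[1] + x * math.sin(a) + y * math.cos(a),
            pivot[2] + z)

# Rotating (2, 0, 0) by 90 degrees about pivot (1, 0, 0) lands near (1, 1, 0).
print(rotate_z_about_pivot((2.0, 0.0, 0.0), (1.0, 0.0, 0.0), 90.0))
```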
Scale
X, Y, Z Scale
If the Lock X/Y/Z checkbox is checked, a single Scale slider is shown. This adjusts the overall size of the
object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are displayed to allow individual
scaling along each axis.
Use Target
Selecting the Use Target checkbox enables a set of controls for positioning an XYZ target. When target
is enabled, the object always rotates to face the target. The rotation of the object becomes relative to
the target.
Import Transform
Opens a file browser where you can select a scene file saved or exported by your 3D application.
It supports the following file types:
dotXSI .xsi
The Import Transform button imports only transformation data. For 3D geometry, lights, and
cameras, consider using the File > FBX Import option.
Most of the controls in the Transform tab are represented in the viewer with onscreen controls for
transformation, rotation, and scaling. To change the mode of the onscreen controls, select one of the
three buttons in the toolbar in the upper left of the viewer. The modes can also be toggled using the
keyboard shortcut Q for translation, W for rotation, and E for scaling. In all three modes, individual
axes of the control may be dragged to affect just that axis, or the center of the control may be
dragged to affect all three axes.
The scale sliders for most 3D tools default to locked, which causes uniform scaling of all three axes.
Unlock the Lock X/Y/Z Scale checkbox to scale an object on a single axis only.
The Common Settings tab can be found on most tools in Fusion. The following controls are specific
settings for 3D nodes.
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the setting’s tab icon and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comments tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult
the scripting documentation.
3D Light Nodes
This chapter details the 3D Light nodes available when creating
3D composites in Fusion. The abbreviations next to each node
name can be used in the Select Tool dialog when searching for tools
and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Ambient Light [3AL]������������������������������������������������������������������������������������������������� 818
Similar to a Camera 3D, you connect lights into a Merge 3D and view them in the scene by viewing the
Merge 3D node. Selecting a light node and loading it into the viewer does not show anything.
Inputs
The Ambient Light node includes a single optional orange input for a 3D scene or 3D geometry.
— SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is provided,
the Transform controls in this node apply to the entire scene provided.
Controls Tab
The Controls tab is used to set the color and brightness of the ambient light.
Enabled
When the Enabled checkbox is turned on, the ambient light affects the scene. When the checkbox is
turned off, the light is turned off. This checkbox performs the same function as the red switch to the
left of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the ambient light. A value of 0.2 indicates 20% light.
A perfectly white texture lit only with a 0.2 ambient light would render at 20% gray (.2, .2, .2).
Common Controls
Transform and Settings Tabs
The options presented in the Transform and Settings tabs are commonly found in other lighting
nodes. For more detail on the controls found in these tabs, see “The Common Controls” section at the
end of this chapter.
Similar to a Camera 3D, you connect lights into a Merge 3D and view them in the scene by viewing the
Merge 3D node. Selecting a light node and loading it into the viewer does not show anything.
Inputs
The Directional Light node includes a single optional orange input for a 3D scene or 3D geometry.
— SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is provided,
the Transform controls in this node apply to the entire scene provided.
Controls Tab
The Controls tab is used to set the color and brightness of the directional light. The direction of the
light source is controlled by the rotation controls in the Transform tab.
Enabled
When the Enabled checkbox is turned on, the directional light affects the scene. When the checkbox
is turned off, the light is turned off. This checkbox performs the same function as the red switch to the
left of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the directional light. A value of 0.2 indicates 20% light.
Common Controls
Transform and Settings Tabs
The options presented in the Transform and Settings tabs are commonly found in other lighting
nodes. For more detail on the controls found in these tabs, see “The Common Controls” section at the
end of this chapter.
This light shows an onscreen control, although only the position and distance of the control affect the
light. Since the light is a 360-degree source, rotation has no meaning. Additionally, a point light may
fall off with distance, unlike an ambient or directional light.
Similar to a Camera 3D, you connect lights into a Merge 3D and view them in the scene by viewing the
Merge 3D node. Selecting a light node and loading it into the viewer does not show anything.
Inputs
The Point Light node includes a single optional orange input for a 3D scene or 3D geometry.
— SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is provided,
the Transform controls in this node apply to the entire scene provided.
Controls Tab
The Controls tab is used to set the color and brightness of the point light. The position and distance of
the light source are controlled in the Transform tab.
Enabled
When the Enabled checkbox is turned on, the point light affects the scene. When the checkbox is
turned off, the light is turned off. This checkbox performs the same function as the red switch to the
left of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the point light. A value of 0.2 indicates 20% light.
Decay Type
A point light defaults to No Decay, meaning that its light has equal intensity at all points in the scene.
To cause the intensity to fall off with distance, set the Decay Type to either Linear or Quadratic modes.
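The conventional falloff curves behind these options can be sketched as follows. This is an assumption based on standard lighting math for illustration; Fusion's exact formula is not documented here.

```python
def light_intensity(base, distance, decay="No Decay"):
    """Scale a light's intensity by distance: constant for No Decay,
    1/d for Linear, and 1/d^2 for Quadratic."""
    d = max(distance, 1e-6)   # guard against division by zero at the light
    if decay == "Linear":
        return base / d
    if decay == "Quadratic":
        return base / (d * d)
    return base

# At twice the distance, quadratic decay leaves a quarter of the light.
print(light_intensity(1.0, 2.0, "Quadratic"))  # 0.25
```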
Common Controls
Transform and Settings Tabs
The options presented in the Transform and Settings tabs are commonly found in other lighting
nodes. For more detail on the controls found in these tabs, see “The Common Controls” section at the
end of this chapter.
Similar to a Camera 3D, you connect lights into a Merge 3D and view them in the scene by viewing the
Merge 3D node. Selecting a light node and loading it into the viewer does not show anything.
Inputs
The Spot Light node includes a single optional orange input for a 3D scene or 3D geometry.
— SceneInput: The orange input is an optional input that accepts a 3D scene. If a scene is provided,
the Transform controls in this node apply to the entire scene provided.
Controls Tab
The Controls tab is used to set the color and brightness of the spotlight. The position, rotation, and
distance of the light source are controlled in the Transform tab.
Enabled
When the Enabled checkbox is turned on, the spotlight affects the scene. When the checkbox is
turned off, the light is turned off. This checkbox performs the same function as the red switch to the
left of the node’s name in the Inspector.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the Intensity of the spotlight. A value of 0.2 indicates 20% light.
Decay Type
A spotlight defaults to No Falloff, meaning that its light has equal intensity on geometry despite the
distance from the light to the geometry. To cause the intensity to fall off with distance, set the Decay
type to either Linear or Quadratic modes.
Cone Angle
The Cone Angle of the light refers to the width of the cone where the light emits its full intensity.
The larger the value, the wider the cone, up to a limit of 90 degrees.
Dropoff
The Dropoff controls how quickly the light falls off across the penumbra, from full intensity at the
cone's edge down to 0.
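The relationship between the cone angle, the penumbra, and the dropoff can be sketched as follows. This is a schematic of typical spotlight attenuation; the parameter names are illustrative assumptions, not Fusion's API.

```python
def spot_attenuation(angle, cone_angle, penumbra, dropoff=1.0):
    """Full intensity inside the cone; across the penumbra band the
    intensity ramps from 1 down to 0, shaped by the dropoff exponent."""
    if angle <= cone_angle:
        return 1.0
    outer = cone_angle + penumbra
    if angle >= outer:
        return 0.0
    t = (outer - angle) / penumbra   # 1 at the cone edge, 0 at the outer edge
    return t ** dropoff

# Halfway through the penumbra, a higher dropoff darkens faster.
print(spot_attenuation(22.5, 20.0, 5.0, dropoff=1.0))  # 0.5
print(spot_attenuation(22.5, 20.0, 5.0, dropoff=2.0))  # 0.25
```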
Shadows
This section provides several controls used to define the shadow map used when this spotlight
creates shadows. For more information, see Chapter 25, “3D Compositing Basics,” in the Fusion
Reference Manual.
Enable Shadows
The Enable Shadows checkbox should be selected if the light is to produce shadows. This defaults
to selected.
Shadow Color
Use this standard Color control to set the color of the shadow. This defaults to black (0, 0, 0).
Density
The shadow density determines the transparency of the shadow. A density of 1.0 produces a
completely opaque shadow, whereas lower values make the shadow more transparent.
Multiplicative/Additive Bias
Shadows are essentially textures applied to objects in the scene, so there is occasionally Z-fighting,
where the portions of the object that should be receiving the shadows render over the top of the
shadow. Biasing works by adding a small depth offset to move the shadow away from the surface it is
shadowing, eliminating the Z-fighting. Too little bias and the objects can shadow themselves. Too
much bias and the shadow can become separated from the surface. Adjust the Multiplicative Bias first,
and then fine tune the result using the Additive Bias control.
For more information, see the Multiplicative and Additive Bias section of Chapter 85, “3D Compositing
Basics,” in the DaVinci Resolve Reference Manual, or Chapter 25 in the Fusion Reference Manual.
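The idea behind biasing can be shown as a schematic depth comparison in Python. This is a simplified model for illustration; the renderer's actual shadow test is more involved.

```python
def in_shadow(surface_depth, map_depth, mult_bias=1.0, add_bias=0.0):
    """Compare the surface's depth against the biased shadow-map depth.
    Scaling (multiplicative) and offsetting (additive) the stored depth
    pushes the test away from the surface, avoiding self-shadowing."""
    return surface_depth > map_depth * mult_bias + add_bias

# The same surface, with tiny numeric noise, shadows itself without bias...
print(in_shadow(1.0001, 1.0))                    # True (self-shadowing)
# ...but a small additive bias removes the false shadow.
print(in_shadow(1.0001, 1.0, add_bias=0.001))    # False
```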
Softness
Soft edges in shadows are produced by filtering the shadow map when it is sampled. Fusion provides
two separate filtering methods for rendering shadows, which produce different effects.
— None: Shadows have a hard edge. No filtering of the shadow map is done at all. The advantage of
this method is that you only have to sample one pixel in the shadow map, so it is fast.
— Constant: Shadows edges have a constant softness. A filter with a constant width is used when
sampling the shadow map. Adjusting the Constant Softness slider controls the size of the filter.
Note that the larger you make the filter, the longer it takes to render the shadows.
— Variable: The shadow edge softness grows the further the shadow receiver is positioned from
the shadow caster. The variable softness is achieved by changing the size of the filter based on the
distance between the receiver and caster. When this option is selected, the Softness Falloff, Min
Softness, and Max Softness sliders appear.
Constant Softness
If the Softness is set to Constant, then this slider appears. It can be used to set the overall softness of
the shadow.
Softness Falloff
The Softness Falloff slider appears when the Softness is set to Variable. This slider controls how fast
the softness of shadow edges grows with distance. More precisely, it controls how fast the shadow
map filter size grows based upon the distance between the shadow caster and receiver. Its effect is
mediated by the values of the Min and Max Softness sliders.
Min Softness
The Min Softness slider appears when the Softness is set to Variable. This slider controls the Minimum
Softness of the shadow. The closer the shadow is to the object casting the shadow, the sharper it is,
up to the limit set by this slider.
Max Softness
The Max Softness slider appears when the Softness is set to Variable. This slider controls the
Maximum Softness of the shadow. The further the shadow is from the object casting the shadow, the
softer it is, up to the limit set by this slider.
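Put together, the three sliders control a clamped, distance-driven filter size, which can be sketched as follows (illustrative math, not Fusion's internals):

```python
def shadow_filter_size(distance, falloff, min_softness, max_softness):
    """Grow the shadow-map filter with the caster-to-receiver distance,
    clamped between the Min Softness and Max Softness limits."""
    return min(max_softness, max(min_softness, distance * falloff))

# Close shadows stay sharp, distant ones soften, both within the clamps.
print(shadow_filter_size(0.1, 1.0, 0.5, 4.0))   # 0.5 (clamped to Min)
print(shadow_filter_size(10.0, 1.0, 0.5, 4.0))  # 4.0 (clamped to Max)
```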
Common Controls
Transform and Settings Tabs
The options presented in the Transform and Settings tabs are commonly found in other lighting
nodes. For more detailed information on the controls found in these tabs, see “The Common Controls”
section at the end of this chapter.
Many tools in the 3D category include a Transform tab used to position, rotate, and scale the object
in 3D space.
Translation
X, Y, Z Offset
These controls can be used to position the 3D element.
Rotation
Rotation Order
Use these buttons to select which order is used to apply Rotation along each axis of the object. For
example, XYZ would apply the rotation to the X axis first, followed by the Y axis, and finally the Z axis.
X, Y, Z Rotation
Use these controls to rotate the object around its pivot point. If the Use Target checkbox is selected,
then the rotation is relative to the position of the target; otherwise, the global axis is used.
Pivot
X, Y, Z Pivot
A pivot point is the point around which an object rotates. Normally, an object rotates around its own
center, which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from
the center.
Use Target
Selecting the Use Target checkbox enables a set of controls for positioning an XYZ target. When
Target is enabled, the object always rotates to face the target. The rotation of the object becomes
relative to the target.
Import Transform
Opens a file browser where you can select a scene file saved or exported by your 3D application. It
supports the following file types:
— dotXSI (.xsi)
The Import Transform button imports only transformation data. For 3D geometry, lights, and
cameras, consider using the File > FBX Import option.
The Scale sliders for most 3D tools default to locked, which causes uniform scaling of all three axes.
Deselect the Lock X/Y/Z Scale checkbox to scale an object on a single axis only.
The Common Settings tab can be found on almost every tool found in Fusion. The following controls
are specific settings for 3D nodes.
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the setting’s tab icon, and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comment tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult
the scripting documentation.
3D Material Nodes
This chapter details the 3D Material nodes available when creating
3D composites in Fusion. The abbreviations next to each node
name can be used in the Select Tool dialog when searching for tools
and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Blinn [3Bl]��������������������������������������������������������������������������������������������������������������������� 832
The standard basic material provided in the Materials tab of most geometry nodes is a simplified
version of the Blinn node. The primary difference is that the Blinn node provides additional texture
map inputs beyond just diffuse.
The Blinn node outputs a 3D Material that can be connected to the material inputs on any
3D geometry node.
The Blinn model in Fusion calculates the highlight as the dot product of the surface normal and the
half angle vector between the light source and viewer (dot(N, H)). This may not always match the Blinn
illumination model used by other 3D applications.
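For illustration, the half-angle highlight calculation can be sketched in self-contained Python. This models the dot(N, H) formula above and is not Fusion's actual shader code:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_specular(normal, to_light, to_viewer, exponent):
    """Blinn specular term: dot(N, H) raised to the specular exponent,
    where H is the half-angle vector between light and viewer."""
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half-angle vector
    return max(0.0, dot(n, h)) ** exponent

# Light and viewer at mirrored 45-degree angles: H aligns with N,
# so the highlight is at full strength regardless of the exponent.
print(blinn_specular((0, 0, 1), (1, 0, 1), (-1, 0, 1), 10))  # → 1.0
```

Raising the exponent narrows the off-peak response, which is why higher Specular Exponent values read as glossier surfaces.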
Inputs
There are five inputs on the Blinn node that accept 2D images or 3D materials. These inputs control
the overall color and image used for the 3D object as well as the color and texture used in the specular
highlight. Each of these inputs multiplies the pixels in the texture map by the equivalently named
parameters in the node itself. This provides an effective method for scaling parts of the material.
— Diffuse Texture: The orange Diffuse Texture input accepts a 2D image or a 3D material to be
used as a main object texture map.
— Specular Color Material: The green Specular Color material input accepts a 2D image or a 3D
material to be used as the color texture map for specular highlight areas.
— Specular Intensity Material: The magenta Specular Intensity material input accepts a 2D image
or a 3D material to be used to alter the intensity of specular highlights. When the input is a 2D
image, the Alpha channel is used to create the map, while the color channels are discarded.
— Specular Exponent Material: The teal Specular Exponent material input accepts a 2D image or a
3D material that is used as a falloff map for the material’s specular highlights. When the input is a
2D image, the Alpha channel is used to create the map, while the color channels are discarded.
— Bump Map Material: The white Bump Map material input accepts only a 3D material. Typically,
you connect the texture into a Bump Map node, and then connect the Bump Map node to this
input. This input uses the RGB information as texture-space normals.
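The scaling behavior described above, where each input multiplies the texture map by the matching parameter, amounts to a per-channel product. A minimal Python sketch of the idea:

```python
def apply_texture(parameter_rgb, texture_rgb):
    # Per-channel multiply: the node parameter scales the texture map values.
    return tuple(p * t for p, t in zip(parameter_rgb, texture_rgb))

# A half-strength red diffuse color scales a full-white texture to half red.
print(apply_texture((0.5, 0.0, 0.0), (1.0, 1.0, 1.0)))  # → (0.5, 0.0, 0.0)
```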
Inspector
Blinn controls
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights. Besides defining the base color of an object, the diffuse color also defines the
transparency of the object. The Alpha in a diffuse texture map can be used to make portions of the
surface transparent.
Diffuse Color
A material’s Diffuse Color describes the base color presented by the material when it is lit indirectly or
by ambient light. If a diffuse texture map is provided, then the color value provided here is multiplied
by the color values in the texture.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided,
then the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular
The parameters in the Specular section describe the look of the specular highlight of the surface.
These values are evaluated in a different way for each illumination model.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular
a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that inherit their color from the
material color. If a specular texture map is provided, then the value provided here is multiplied by the
color values from the texture.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper
the falloff, and the smoother and glossier the material appears. If the specular exponent texture is
provided, then this value is multiplied by the Alpha value of the texture map.
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere
casts a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when
it is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might
be a bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface
that is fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is passed through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of the red, green, and blue
light passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of
the red arriving at the surface but none of the green or blue light. This can be used for “stained glass”-
styled shadows.
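The attenuation behavior is a per-channel multiply of the incoming light by the attenuation color, sketched here as a simple illustration:

```python
def transmitted_light(light_rgb, attenuation_rgb):
    # Each channel of the attenuation color scales how much of that
    # channel of the incoming light passes through the surface.
    return tuple(l * a for l, a in zip(light_rgb, attenuation_rgb))

white = (1.0, 1.0, 1.0)
print(transmitted_light(white, (1.0, 1.0, 1.0)))  # fully transmissive: (1.0, 1.0, 1.0)
print(transmitted_light(white, (1.0, 0.0, 0.0)))  # "stained glass" red: (1.0, 0.0, 0.0)
```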
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object
cast a shadow.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings in more diffuse color + texture color into the shadow. Note
that the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a
solid Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow.
Setting this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This effectively makes the surface two sided by adding a second set of normals facing the opposite
direction on the backside of the surface. This is normally off to increase rendering speed, but it can
be turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane,
which has its normals facing the opposite way.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There are two inputs on the Channel Boolean Node: one for the foreground material, and one for the
background material. Both inputs accept either a 2D image or a 3D material such as a Blinn, Cook-Torrance,
or Phong node.
In the below example, the Channel Boolean node combines the Cook Torrance and Blinn materials. It
uses the math operands in the Channel Boolean to switch, invert, and mix the two inputs, creating a
neon flickering effect.
A Channel Boolean used to combine and operate on Cook Torrance and Blinn nodes
Inspector
Operand A/B
The Operand menus, one for each output RGBA channel, allow you to set the desired input
information for the corresponding channel.
— Red/Green/Blue/Alpha FG
Reads the color information of the foreground material.
— Red/Green/Blue/Alpha BG
Reads the color information of the background material.
— Black/White/Mid Gray
Sets the value of the channel to 0, 0.5, or 1.
— Hue/Lightness/Saturation FG
Reads the color information of the foreground material, converts it into the HLS color space, and
puts the selected information into the corresponding channel.
— Hue/Lightness/Saturation BG
Reads the color information of the background material, converts it into the HLS color space, and
puts the selected information into the corresponding channel.
— Luminance FG
Reads the color information of the foreground material and calculates the luminance value for
the channel.
— Luminance BG
Reads the color information of the background material and calculates the luminance value for
the channel.
— X/Y/Z Position FG
Sets the value of the channel to the position of the pixel in 3D space. The vector information is
returned in eye space.
— U/V/W Texture FG
Applies the texture space coordinates of the foreground material to the channels.
— U/V/W EnvCoords FG
Applies the environment texture space coordinates to the channels. Use it upstream of nodes
modifying the environment texture coordinates like the Reflect 3D node.
— X/Y/Z Normal
Sets the value of the channel to the selected axis of the normal vector. The vector is returned in
eye space.
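To make the per-channel operand behavior concrete, here is a hypothetical Python sketch covering a few of the operands above. The operand strings and the Rec. 709 luma weights are illustrative assumptions, not Fusion's internals:

```python
def resolve_operand(operand, fg, bg):
    """Return the value a single output channel would take for one pixel,
    given foreground/background RGBA tuples (r, g, b, a). Partial sketch:
    only the color, constant, and luminance operands are modeled."""
    channels = {"Red": 0, "Green": 1, "Blue": 2, "Alpha": 3}
    constants = {"Black": 0.0, "Mid Gray": 0.5, "White": 1.0}
    if operand in constants:
        return constants[operand]
    name, source = operand.rsplit(" ", 1)  # e.g. "Red FG" -> ("Red", "FG")
    pixel = fg if source == "FG" else bg
    if name == "Luminance":
        r, g, b = pixel[:3]
        # Rec. 709 luma weights, chosen here for illustration.
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    return pixel[channels[name]]

fg = (1.0, 0.5, 0.0, 1.0)
bg = (0.0, 0.0, 1.0, 0.5)
print(resolve_operand("Red FG", fg, bg))    # → 1.0
print(resolve_operand("Blue BG", fg, bg))   # → 1.0
print(resolve_operand("Mid Gray", fg, bg))  # → 0.5
```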
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
The Cook Torrance node outputs a 3D Material that can be connected to the material inputs on any 3D
geometry node.
— Diffuse Color Material: The orange Diffuse Color material input accepts a 2D image or a
3D material to be used as overall color and texture of the object.
— Specular Color Material: The green Specular Color material input accepts a 2D image or a
3D material to be used as the color and texture of the specular highlight.
— Specular Intensity Material: The magenta Specular Intensity material input accepts a 2D image
or a 3D material to alter the intensity of the specular highlight. When the input is a 2D image, the
Alpha channel is used to create the map, while the color channels are discarded.
— Specular Roughness Material: The white Specular Roughness material input accepts a 2D image
or a 3D material to be used as a map for modifying the roughness of the specular highlight.
The Alpha of the texture map is multiplied by the value of the roughness control.
— Specular Refractive Index Material: The white Specular Refractive Index material input accepts
a 2D image or a 3D material, using the RGB channels as the refraction texture.
— Bump Map Material: The white Bump Map material input accepts only a 3D material. Typically,
you connect the texture into a Bump Map node, and then connect the Bump Map node to
this input. This input uses the RGB information as texture-space normals.
Each of these inputs multiplies the pixels in the texture map by the equivalently named parameters in
the node itself. This provides an effective method for scaling parts of the material.
When nodes have as many inputs as this one does, it is often difficult to make connections with
any precision. Hold down the Option (macOS) or Alt (Windows) key while dragging the output from
another node over the node tile, and keep holding Option or Alt when releasing the left mouse button.
A small drop-down menu listing all the inputs provided by the node appears. Click on the desired
input to complete the connection.
A Cook Torrance shader with diffuse and specular color materials connected
Controls Tab
The Controls tab contains parameters for adjusting the main color, highlight, and lighting properties
of the Cook Torrance shader node.
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights. Besides defining the base color of an object, the diffuse color also defines the
transparency of the object. The Alpha in a diffuse texture map can be used to make portions of the
surface transparent.
Diffuse Color
A material’s Diffuse Color describes the base color presented by the material when it is lit indirectly or
by ambient light. If a diffuse texture map is provided, then the color value provided here is multiplied
by the color values in the texture.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally, and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided,
then the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular
a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that inherit their color from the
material color. If a specular texture map is provided, then the value provided here is multiplied by the
color values from the texture.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Roughness
The Roughness of the specular highlight describes diffusion of the specular highlight over the surface.
The greater the value, the wider the falloff, and the more brushed and metallic the surface appears.
If the roughness texture map is provided, then this value is multiplied by the Alpha value from
the texture.
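The manual does not specify Fusion's exact shader math, but the Cook-Torrance model classically shapes its highlight with the Beckmann microfacet distribution, in which a larger roughness spreads energy away from the mirror direction. An illustrative sketch under that assumption:

```python
import math

def beckmann_distribution(roughness, cos_alpha):
    """Beckmann microfacet distribution, the term classically used by the
    Cook-Torrance model; illustrative only, not Fusion's documented math."""
    m2 = roughness * roughness
    c2 = cos_alpha * cos_alpha
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / m2) / (math.pi * m2 * c2 * c2)

# A rougher surface spreads energy away from the exact mirror direction,
# so its peak (cos_alpha = 1) is lower than a smooth surface's peak.
smooth = beckmann_distribution(0.1, 1.0)
rough = beckmann_distribution(0.5, 1.0)
print(smooth > rough)  # → True
```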
Do Fresnel
Selecting this checkbox adds Fresnel calculations to the material's illumination model. This provides
more realistic-looking metal surfaces by taking into account the refractiveness of the material.
Refractive Index
This slider appears when the Do Fresnel checkbox is selected. The Refractive Index applies only to
the calculations for the highlight; it does not perform actual refraction of light through transparent
surfaces. If the refractive index texture map is provided, then this value is multiplied by the Alpha
value of the input.
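As a rough illustration of how a refractive index can drive a highlight-only Fresnel term, Schlick's approximation is a common stand-in. This is an assumption for the sketch; the manual does not specify Fusion's formulation:

```python
def schlick_fresnel(refractive_index, cos_theta):
    """Schlick's approximation of Fresnel reflectance for a surface in air:
    weak reflection face on, rising to full strength at grazing angles."""
    f0 = ((refractive_index - 1.0) / (refractive_index + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Glass-like index of 1.5.
print(round(schlick_fresnel(1.5, 1.0), 3))  # face on → 0.04
print(round(schlick_fresnel(1.5, 0.0), 3))  # grazing → 1.0
```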
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere
casts a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when
it is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might
be a bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface
that is fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is passed through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of the red, green, and blue
light passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of
the red arriving at the surface but none of the green or blue light. This can be used to create “stained
glass”-styled shadows.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings in more diffuse color + texture color into the shadow. Note
that the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a
solid Alpha to still transmit its color to the shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow.
Setting this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This effectively makes the surface two sided by adding a second set of normals facing the opposite
direction on the backside of the surface. This is normally off to increase rendering speed, but it can
be turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane,
which has its normals facing the opposite way.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
The node also provides a mechanism for assigning a new material identifier to the combined material.
Inputs
The Material Merge node includes two inputs for the two materials you want to combine.
Controls Tab
The Controls tab includes a single slider for blending the two materials together.
Blend
The Blend behavior of the Material Merge is similar to the Dissolve (DX) node for images. The two
materials/textures are mixed using the value of the slider to determine the percentage each input
contributes. While the background and foreground inputs can be a 2D image instead of a material, the
output of this node is always a material.
Unlike the 2D Dissolve node, both foreground and background inputs are required.
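The blend described above is a straightforward linear mix per channel, sketched here for illustration:

```python
def material_merge(bg, fg, blend):
    """Linear mix of two per-pixel material colors, as the Blend slider
    describes: 0.0 -> background only, 1.0 -> foreground only."""
    return tuple((1.0 - blend) * b + blend * f for b, f in zip(bg, fg))

bg = (0.0, 0.0, 1.0)
fg = (1.0, 0.0, 0.0)
print(material_merge(bg, fg, 0.5))  # → (0.5, 0.0, 0.5)
```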
Material ID
This slider sets the numeric identifier assigned to the resulting material. This value is rendered into
the MatID auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Phong [3Ph]
Inputs
There are five inputs on the Phong node that accept 2D images or 3D materials. These inputs
control the overall color and image used for the 3D object as well as controlling the color and texture
used in the specular highlight. Each of these inputs multiplies the pixels in the texture map by the
equivalently named parameters in the node itself. This provides an effective method for scaling parts
of the material.
— Diffuse Material: The orange Diffuse material input accepts a 2D image or a 3D material to be
used as a main color and texture of the object.
— Specular Color Material: The green Specular Color material input accepts a 2D image or a 3D
material to be used as a highlight color and texture of the object.
— Specular Intensity Material: The magenta Specular Intensity material input accepts a 2D image
or a 3D material to be used as an intensity map for the material’s highlights. When the input is a
2D image, the Alpha channel is used to create the map, while the color channels are discarded.
— Specular Exponent Material: The teal Specular Exponent material input accepts a 2D image or a
3D material to be used as a falloff map for the material’s specular highlights. When the input is a
2D image, the Alpha channel is used to create the map, while the color channels are discarded.
— Bump Map Material: The white Bump Map texture input accepts only a 3D material. Typically, you
connect the texture into a Bump Map node, and then connect the Bump Map node to this input.
This input uses the RGB information as texture-space normals.
When nodes have as many inputs as this one does, it is often difficult to make connections with
any precision. Hold down the Option or Alt key while dragging the output from another node
over the node tile, and keep holding Option or Alt when releasing the left mouse button. A small
drop-down menu listing all the inputs provided by the node appears. Click on the desired input to
complete the connection.
Phong controls
Controls Tab
The Controls tab contains parameters for adjusting the main color, highlight, and lighting properties
of the Phong shader node.
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights. Besides defining the base color of an object, the diffuse color also defines the
transparency of the object.
The Alpha in a diffuse texture map can be used to make portions of the surface transparent.
Diffuse Color
A material’s Diffuse Color describes the base color presented by the material when it is lit indirectly or
by ambient light. If a diffuse texture map is provided, then the color value provided here is multiplied
by the color values in the texture.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided,
then the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular
a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that inherit their color from the
material color. If a specular texture map is provided, then the value provided here is multiplied by the
color values from the texture.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Specular Exponent
Specular Exponent controls the falloff of the specular highlight. The greater the value, the sharper
the falloff, and the smoother and glossier the material appears. If the specular exponent texture is
provided, then this value is multiplied by the Alpha value of the texture map.
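The classic Phong highlight compares the view direction against the light direction reflected about the surface normal, with the exponent sharpening the falloff. An illustrative sketch, not Fusion's actual shader code:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(normal, to_light, to_viewer, exponent):
    """Classic Phong specular term: reflect the light direction about the
    normal, then compare it against the view direction."""
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))  # reflect L about N
    return max(0.0, dot(r, v)) ** exponent

# Mirror-reflection geometry: the viewer sits exactly on the reflected ray.
print(round(phong_specular((0, 0, 1), (1, 0, 1), (-1, 0, 1), 20), 6))  # → 1.0
```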
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere
casts a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when
it is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might
be a bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface
that is fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is passed through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of the red, green, and blue
light passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of
the red arriving at the surface but none of the green or blue light. This can be used to create “stained
glass”-styled shadows.
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object
cast a shadow.
Color Detail
The Color Detail slider modulates light passing through the surface by the diffuse color + texture
colors. Use this to throw a shadow that contains color details of the texture applied to the object.
Increasing the slider from 0 to 1 brings in more diffuse color + texture color into the shadow. Note
that the Alpha and opacity of the object are ignored when transmitting color, allowing an object with a
solid Alpha to still transmit its color to the shadow.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This effectively makes the surface two sided by adding a second set of normals facing the opposite
direction on the backside of the surface. This is normally off to increase rendering speed, but it can
be turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane,
which has its normals facing the opposite way.
Fusion does exactly the same thing as 3D applications when you make a surface two sided. The
confusion about what two-sided lighting does arises because Fusion does not cull back-facing
polygons by default. If you revolve around a one-sided plane in Fusion, you still see it from
the backside (but you are seeing the frontside duplicated through to the backside as if it were
transparent). Making the plane two sided effectively adds a second set of normals to the backside of
the plane.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Control is offered over the face on and glancing strength, falloff, per channel refraction indexes, and
tinting. Several texture map inputs can modify the behavior of each parameter.
For more information, see Chapter 25, “3D Compositing Basics,” in the Fusion Reference Manual.
Inputs
There are five inputs on the Reflect node that accept 2D images or 3D materials. These inputs control
the overall color and image used for the 3D object as well as controlling the color and texture used in
the reflective highlights.
When nodes have as many inputs as this one does, and some share the same color, it is often difficult
to make connections with any precision. Hold down the Option (macOS) or Alt (Windows) key while
dragging the output from another node over the node tile, and keep holding Option or Alt when
releasing the left mouse button. A small drop-down menu listing all the inputs provided by the node
appears. Click on the desired input to complete the connection.
Inspector
Reflect controls
Controls Tab
The Controls tab contains parameters for adjusting the reflective strength based on the orientation of
the object, as well as the tint color of the Reflect shader node.
Glancing Strength
[By Angle] Glancing Strength controls the intensity of the reflection for those areas of the geometry
where the reflection faces away from the camera.
Face On Strength
[By Angle] Face On Strength controls the intensity of the reflection for those parts of the geometry
that reflect directly back to the camera.
Falloff
[By Angle] Falloff controls the sharpness of the transition between the Glancing and Face On Strength
regions. It can be considered similar to applying gamma correction to a gradient between the Face On
and Glancing values.
Constant Strength
[Constant] This control is visible only when the reflection strength variability is set to Constant.
In this case, the intensity of the reflection is constant despite the incidence angle of the reflection.
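The By Angle behavior described above can be modeled as a gamma-shaped blend between the Face On and Glancing strengths, driven by the viewing angle. This is an illustrative model only, not Fusion's exact shader math:

```python
def reflection_strength(face_on, glancing, falloff, cos_view_angle):
    """Blend between Face On and Glancing strengths by viewing angle.
    Falloff acts like a gamma on the transition, per the description above."""
    t = (1.0 - cos_view_angle) ** falloff  # 0 facing the camera, 1 at a grazing angle
    return face_on + (glancing - face_on) * t

print(reflection_strength(0.2, 1.0, 2.0, 1.0))  # facing the camera → 0.2
print(reflection_strength(0.2, 1.0, 2.0, 0.0))  # glancing edge → 1.0
```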
Refraction
If the incoming background material has an opacity lower than 1, an environment map can be used as
a refraction texture, making it possible to simulate refraction effects in transparent objects.
Refraction Index
This slider controls how strongly the environment map is deformed when viewed through a surface.
The overall deformation is based on the incidence angle. Since this is an approximation and not a
simulation, the results are not intended to model real refractions accurately.
Refraction Tint
The refraction texture is multiplied by the tint color for simulating color-filtered refractions. It can be
used to simulate the type of coloring found in tinted glass, as seen in many brands of beer bottles,
for example.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
This node has two inputs that are both required for this node to work. Both inputs accept either a 2D
image or a 3D material.
— LeftMaterial: The orange left material input accepts a 2D image or a 3D material to be used as
the material for the left eye rendering. If a 2D image is used, it is converted to a diffuse texture
map using the basic material type.
— RightMaterial: The green right material input accepts a 2D image or a 3D material to be used as
the material for the right eye rendering. If a 2D image is used, it is converted to a diffuse texture
map using the basic material type.
While the inputs can be either 2D images or 3D materials, the output is always a material.
A Stereo Mix node used to combine left and right images into a single stereo material
Controls Tab
The Controls tab contains a single switch that swaps the left and right material inputs.
Swap
This option swaps both inputs of the node.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Ward [3Wd]
The Ward node is ideal for simulating brushed metal surfaces, as the highlight can be elongated
along the U or V directions of the mapping coordinates. This is known as an anisotropic highlight.
The Ward node outputs a 3D Material that can be connected to the material inputs on any 3D
geometry node.
— Diffuse Material: The orange Diffuse material input accepts a 2D image or a 3D material to be
used as a main color and texture of the object.
— Specular Color Material: The green Specular Color material input accepts a 2D image or a
3D material to be used as a highlight color and texture of the object.
— Specular Intensity Material: The magenta Specular Intensity material input accepts a 2D image
or a 3D material to be used as an intensity map for the material’s highlights. When the input is a
2D image, the Alpha channel is used to create the map, while the color channels are discarded.
— Spread U Material: The white Spread U material input accepts a 2D image or a 3D material. The
value of the Spread U option in the node’s controls is multiplied against the pixel values in the
material’s Alpha channel.
— Spread V Material: The white Spread V material input accepts a 2D image or a 3D material.
The value of the Spread V option in the node’s controls is multiplied against the pixel values in the
material’s Alpha channel.
— Bump Map Material: The white Bump Map material input accepts only a 3D material. Typically,
you connect the texture into a Bump Map node, and then connect the Bump Map node to this
input. This input uses the RGB information as texture-space normals.
When a node has this many inputs, some sharing the same color, it can be difficult to make
connections with precision. Hold down the Option or Alt key while dragging the output
from another node over the node tile, and keep holding Option or Alt when releasing the left mouse
button. A small drop-down menu listing all the inputs provided by the node appears. Click on the
desired input to complete the connection.
A Ward node used with a diffuse connection and specular color connection
Ward controls
Controls Tab
The Controls tab contains parameters for adjusting the main color, highlight, and lighting properties
of the Ward shader node.
Diffuse
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights. Besides defining the base color of an object, the diffuse color also defines the
transparency of the object. The Alpha in a diffuse texture map can be used to make portions of the
surface transparent.
Diffuse Color
A material’s Diffuse Color describes the base color presented by the material when it is lit indirectly or
by ambient light. If a diffuse texture map is provided, then the color value provided here is multiplied
by the color values in the texture.
Alpha
This slider sets the material’s Alpha channel value. This affects diffuse and specular colors equally and
affects the Alpha value of the material in the rendered output. If a diffuse texture map is provided,
then the Alpha value set here is multiplied by the Alpha values in the texture map.
Opacity
Reducing the material’s Opacity decreases the color and Alpha values of the specular and diffuse
colors equally, making the material transparent.
Specular Color
Specular Color determines the color of light that reflects from a shiny surface. The more specular
a material is, the glossier it appears. Surfaces like plastics and glass tend to have white specular
highlights, whereas metallic surfaces like gold have specular highlights that inherit their color from the
material color. If a specular texture map is provided, then the value provided here is multiplied by the
color values from the texture.
Specular Intensity
Specular Intensity controls how strong the specular highlight is. If the specular intensity texture is
provided, then this value is multiplied by the Alpha value of the texture.
Spread U
Spread U controls the falloff of the specular highlight along the U-axis in the UV map of the object.
The smaller the value, the sharper the falloff, and the smoother and glossier the material appears in
this direction. If the Spread U texture is provided, then this value is multiplied by the Alpha value of
the texture.
Spread V
Spread V controls the falloff of the specular highlight along the V-axis in the UV map of the object.
The smaller the value, the sharper the falloff, and the smoother and glossier the material appears in
this direction. If the Spread V texture is provided, then this value is multiplied by the Alpha value of
the texture.
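The effect of the two Spread controls can be illustrated with a textbook-style anisotropic (Ward-type) highlight term. This is a simplified sketch under standard conventions, not Fusion's actual shading code; all names are illustrative:

```python
import math

# Illustrative sketch of an anisotropic (Ward-style) highlight.
# du, dv: offset from the highlight center along the surface's
# U and V texture directions; spread_u, spread_v: the falloff of
# the highlight along each axis (smaller = sharper, glossier).
def highlight(du, dv, spread_u, spread_v):
    return math.exp(-((du / spread_u) ** 2 + (dv / spread_v) ** 2))

# With spread_u > spread_v, the highlight is elongated along U,
# which is what gives brushed metal its anisotropic look.
along_u = highlight(0.3, 0.0, 0.6, 0.1)  # along U: still bright
along_v = highlight(0.0, 0.3, 0.6, 0.1)  # along V: falls off fast
print(along_u > along_v)  # True
```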
Transmittance
Transmittance controls the way light passes through a material. For example, a solid blue sphere
casts a black shadow, but one made of translucent blue plastic would cast a much lower density
blue shadow.
There is a separate Opacity option. Opacity determines how transparent the actual surface is when
it is rendered. Fusion allows adjusting both opacity and transmittance separately. At first, this might
be a bit counterintuitive to those who are unfamiliar with 3D software. It is possible to have a surface
that is fully opaque but transmits 100% of the light arriving upon it, effectively making it a luminous/
emissive surface.
Attenuation
Attenuation determines how much color is passed through the object. For an object to have
transmissive shadows, set the attenuation to (1, 1, 1), which means 100% of red, green, and blue light
passes through the object. Setting this color to RGB (1, 0, 0) means that the material transmits 100% of
the red arriving at the surface but none of the green or blue light. This can be used to create “stained
glass”-styled shadows.
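The attenuation math described here amounts to a per-channel multiply, which can be sketched as follows (function and variable names are illustrative):

```python
# Illustrative sketch: per-channel attenuation of light passing
# through a transmissive material.
def transmit(light_rgb, attenuation_rgb):
    return tuple(l * a for l, a in zip(light_rgb, attenuation_rgb))

white_light = (1.0, 1.0, 1.0)
print(transmit(white_light, (1.0, 1.0, 1.0)))  # (1.0, 1.0, 1.0): fully transmissive shadow
print(transmit(white_light, (1.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0): red "stained glass" shadow
```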
Alpha Detail
When the Alpha Detail slider is set to 0, the Alpha channel of the object is ignored, and the entire
object casts a shadow. If it is set to 1, the Alpha channel determines what portions of the object
cast a shadow.
Saturation
The Saturation slider controls the saturation of the color component transmitted to the shadow.
Setting this to 0.0 results in monochrome shadows.
Receives Lighting/Shadows
These checkboxes control whether the material is affected by lighting and shadows in the scene. If
turned off, the object is always fully lit and/or unshadowed.
Two-Sided Lighting
This effectively makes the surface two sided by adding a second set of normals facing the opposite
direction on the backside of the surface. This is normally off to increase rendering speed, but it can
be turned on for 2D surfaces or for objects that are not fully enclosed, to allow the reverse or interior
surfaces to be visible as well.
Normally, in a 3D application, only the front face of a surface is visible and the back face is culled, so
that if a camera were to revolve around a plane in a 3D application, when it reached the backside, the
plane would become invisible. Making a plane two sided in a 3D application is equivalent to adding
another plane on top of the first but rotated by 180 degrees so the normals are facing the opposite
direction on the backside. Thus, when you revolve around the back, you see the second image plane,
which has its normals facing the opposite way.
Fusion does exactly the same thing as 3D applications when you make a surface two sided.
The confusion about what two-sided lighting does arises because Fusion does not cull back-
facing polygons by default. If you revolve around a one-sided plane in Fusion you still see it from
the backside (but you are seeing the frontside duplicated through to the backside as if it were
transparent). Making the plane two sided effectively adds a second set of normals to the backside of
the plane.
NOTE: This can become rather confusing once you make the surface transparent, as the
same rules still apply and produce a result that is counterintuitive. If you view from the
frontside a transparent two-sided surface illuminated from the backside, it looks unlit.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail in the following “The Common Controls” section.
Settings Tab
The Common Settings tab can be found on most tools in Fusion. The following controls are specific
settings for 3D nodes.
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the setting’s tab icon, and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comments tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult
the scripting documentation.
3D Texture Nodes
This chapter details the 3D Texture nodes available when creating
3D composites in Fusion. The abbreviations next to each node
name can be used in the Select Tool dialog when searching for tools
and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Bump Map [3Bu] .......................................................................................................... 861
Inputs
The Bump Map node includes a single orange input for connecting a 2D image you want to use as the
bump map texture, or it can accept the output of the Create Bump Map node.
— ImageInput: The orange Image input is used to connect a 2D RGBA image for the bump
calculation or an existing bump map from the Create Bump map node.
A Bump Map is connected to the Bump Map material input on a material node.
Controls Tab
The Controls tab contains all parameters for modifying the input source and the appearance of
the bump map.
Filter Size
A custom filter generates the bump information. The drop-down menu sets the filter size.
Height Channel
Sets the channel from which to extract the grayscale information.
Clamp Z Normal
Clips the lower values of the blue channel in the resulting bump texture.
Height Scale
Changes the contrast of the resulting values in the bump map. Increasing this value yields a more
visible bump map.
Texture Depth
Optionally converts the resulting bump map texture into the desired bit depth.
Wrap Mode
Wraps the image at the borders, so the filter produces correct results when using seamless
tile textures.
Example textures: a Height Map, the resulting Bump Map, and a Normals Map
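The height-to-bump conversion these controls configure can be sketched with a minimal central-difference filter. The kernel, the clamp fallback, and all names here are illustrative assumptions; Fusion's actual kernels are selected by the Filter Size menu:

```python
# Illustrative sketch: deriving bump normals from a height map with
# a simple central-difference filter.
def bump_normals(height, height_scale=1.0, wrap=True):
    h, w = len(height), len(height[0])

    def sample(y, x):
        if wrap:                       # Wrap Mode: tile at the borders
            return height[y % h][x % w]
        y = min(max(y, 0), h - 1)      # otherwise clamp to the edge
        x = min(max(x, 0), w - 1)
        return height[y][x]

    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences approximate the surface slope;
            # Height Scale exaggerates or flattens the contrast.
            dx = (sample(y, x + 1) - sample(y, x - 1)) * height_scale
            dy = (sample(y + 1, x) - sample(y - 1, x)) * height_scale
            n = (-dx, -dy, 1.0)        # unnormalized texture-space normal
            length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

flat = bump_normals([[0.5] * 4 for _ in range(4)])
print(flat[0][0])  # a flat height map yields straight-up normals
```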
To understand the Catcher node, it helps to understand the difference between light-based
projections and texture-based projections. Choosing Light from the projection mode menu on the
Projector 3D or Camera 3D nodes simply adds the values of the RGB channels in the projected image
to the diffuse texture of any geometry that lies within the projection cone. This makes it impossible to
clip away geometry based on the Alpha channel of an image when using light mode projections.
Imagine a scenario where you want to project an image of a building onto an image plane as part of a
set extension shot. You first rotoscope the image to mask out the windows. This makes it possible to
see the geometry of the rooms behind the wall in the final composite. When this image is projected as
light, the Alpha channel is ignored, so the masked windows remain opaque.
By connecting the Catcher to the diffuse texture map of the material applied to the image plane,
and then switching the projection mode menu in the Projector 3D or Camera 3D node from Light or
Ambient Light mode to Texture mode, the projected image is applied as a texture map. When using
this technique for the example above, the windows would become transparent, and it would be
possible to see the geometry behind the window.
The main advantages of this approach over light projection are that the Catcher can be used to
project Alpha onto an object, and it doesn't require lighting to be enabled. Another advantage is that
the Catcher is not restricted to the diffuse input of a material, making it possible to project specular
intensity maps, or even reflection and refraction maps.
NOTE: The Catcher material requires a Projector 3D or Camera 3D node in the scene, set to
project an image in Texture mode on the object to which the Catcher is connected. Without
a projection, or if the projection is not set to Texture mode, the Catcher simply makes the
object transparent and invisible.
Inputs
The Catcher node has no inputs. The output of the node is connected to the diffuse color material
input of the Blinn, Cook Torrance, or other material node applied to the 3D geometry.
A Catcher node output is connected to the input of the geometry node that receives the texture projection
Inspector
Catcher controls
Controls Tab
The Options in the Controls tab determine how the Catcher handles the accumulation
of multiple projections.
Enable
Use this checkbox to enable or disable the node. This is not the same as the red switch in the upper-
left corner of the Inspector. The red switch disables the tool altogether and passes the image on
without any modification. The Enable checkbox is limited to the effect part of the tool. Other parts, like
scripts in the Settings tab, still process as normal.
Color Mode
The Color mode menu is used to control how the Catcher combines the light from multiple projectors.
It has no effect on the results when only one projector is in the scene. This control is designed
to work with the software renderer in the Renderer 3D node and has no effect when using the
OpenGL renderer.
Threshold
The Threshold can be used to exclude certain low values from the accumulation calculation.
For example, when using the Median Accumulation mode, a threshold of 0.01 would exclude any pixel
with a value of less than 0.01 from the median calculation.
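The described behavior can be sketched for a single pixel receiving values from several projectors. The function name and the inclusive threshold comparison are assumptions for illustration, not Fusion's internal code:

```python
import statistics

# Illustrative sketch: median accumulation of several projector
# values for one pixel, excluding values below a threshold.
def accumulate_median(values, threshold=0.01):
    kept = [v for v in values if v >= threshold]
    return statistics.median(kept) if kept else 0.0

# The near-black values are dropped before the median is taken.
print(accumulate_median([0.0, 0.005, 0.4, 0.6, 0.8]))  # 0.6
```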
Restrict by Projector ID
When active, the Catcher only receives light from projectors with a matching ID. Projectors with a
different ID are ignored.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the
MatID auxiliary channel if the corresponding option is enabled in the Renderer 3D node.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
CubeMap [3Cu]
A cube map is produced by mounting six cameras with 90-degree angles of view pointing up, down,
left, right, front, and back.
The node provides options to set the reference coordinate system and rotation for the resulting
texture map. The Cube Map node is typically used to produce environment maps for distant areas
(such as skies or horizons) or reflection and refraction maps.
Inputs
The Inputs on this node change based on the settings of the Layout menu in the Inspector. The single
input uses a 2D image for the entire cube, while six inputs can handle a different 2D image for each
side of a cube.
— CrossImage: The orange Cross Image input is visible by default or when the Layout menu in the
Inspector is set to either Vertical Cross or Horizontal Cross. The input accepts a 2D image.
— CubeMap.[DIRECTION]: These six multi-colored inputs are visible only when the Layout menu
in the Inspector is set to Separate Images. Each input accepts an image aligned to match the left,
right, top, bottom, front, and back faces.
A Cube Map node receives a cross image input, creating an environment for the Shape 3D
Controls Tab
Layout
The Layout menu determines the type and number of inputs for the cube map texture.
Valid options are:
— Separate Images: This option exposes six inputs on the node, one for each face of the cube. If
the separate images are not square or not of the same size, they are rescaled into the largest 1:1
image that can contain all of them.
— Vertical Cross: This option exposes a single input on the node. The image should be an
unwrapped texture of a cube containing all the faces organized into a Vertical Cross formation,
where the height is larger than the width. If the image aspect of the cross image is not 3:4, the
CubeMap node crops it down so it matches the applicable aspect ratio.
— Horizontal Cross: This option exposes a single input on the node. The image should be an
unwrapped texture of a cube containing all the faces organized into a Horizontal Cross formation,
where the width is larger than the height. If the image aspect of the cross image is not 4:3, the
CubeMap node crops it down so that it matches the applicable aspect ratio.
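As a sketch of how a vertical-cross image decomposes into faces, the following computes the pixel rectangle of each face for a 3:4 cross. The cell positions follow a common cube-cross convention and are an assumption, not taken from Fusion:

```python
# Illustrative sketch: where the six cube faces sit inside a
# vertical-cross image of aspect 3:4 (3 cells wide, 4 cells tall).
def vertical_cross_faces(width, height):
    fw, fh = width // 3, height // 4           # each face is one cell
    cells = {                                  # (column, row) per face
        "top": (1, 0), "left": (0, 1), "front": (1, 1),
        "right": (2, 1), "bottom": (1, 2), "back": (1, 3),
    }
    return {name: (c * fw, r * fh, fw, fh)     # x, y, w, h rectangles
            for name, (c, r) in cells.items()}

faces = vertical_cross_faces(300, 400)
print(faces["front"])  # (100, 100, 100, 100)
```

A horizontal cross works the same way with the cell grid transposed to 4 wide by 3 tall.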
Coordinate System
The coordinate system menu sets the position values used when converting the image into a texture.
— Model: This option orients the texture along the object local coordinate system.
— World: This option orients the resulting texture using the global or world coordinate system.
— Eye: This option aligns the texture map to the coordinate system of the camera or viewer.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Falloff [3Fa]
Falloff example
— Face On Material: The orange Face On material input accepts a 2D image or a 3D material. If a 2D
image is provided, it is turned into a diffuse texture map using the basic material shader. This
input is used for the material that is reflecting directly back to the camera
— Glancing Material: The green Glancing material input accepts a 2D image or a 3D material. If a
2D image is provided, it is turned into a diffuse texture map using the basic material shader. This
input is used for the material that is reflecting away from the camera and into the scene.
While the inputs for this node can be images, the output is always a material.
The Falloff node uses one input for the material facing the camera and
one for the material not directly facing the camera.
Inspector
Falloff controls
Color Variation
— Two Tone: Two regular Color controls define the colors for Glancing and Face On.
— Gradient: A Gradient control defines the colors for Glancing and Face On. This can be
used for a multitude of effects, like creating Toon Shaders, for example.
Face On Color
The Face On Color defines the color of surface parts facing the camera. If the Face On texture map is
provided, then the color value provided here is multiplied by the color values in the texture.
Reducing the material’s opacity decreases the color and Alpha values of the Face On material, making
the material transparent.
Glancing Color
The Glancing Color defines the color of surface parts more perpendicular to the camera. If the
Glancing material port has a valid input, then this input is multiplied by this color.
Reducing the material’s opacity decreases the color and Alpha values of the Glancing material, making
the material transparent.
Falloff
This value controls the transition between Glancing and Face On strength. It is very similar to a
gamma operation applied to a gradient, blending one value into another.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Fast Noise Texture node includes an optional input that can be used to connect a 2D image
or material.
— SourceMaterial: The Source Material input accepts a 2D image or a 3D material. The image is
then altered by the noise pattern.
A Fast Noise Texture node generates a seamless texture, taking advantage of UVW coordinates.
Controls Tab
The parameters of the Fast Noise Texture node control the appearance and, for 2D, the animation of
the noise.
Output Mode
— 2D: Calculates the noise texture based on 2D texture coordinates (UV). This setting allows
smoothly varying the noise pattern with animation.
— 3D: Calculates the noise texture based on 3D texture coordinates (UVW). Nodes like Shape 3D
automatically provide a third texture coordinate; otherwise, a 3D texture space can be created
using the UV Map node. The 3D setting does not support animation of the noise pattern.
Detail
Increase the value of this slider to produce a greater level of detail in the noise result. Larger values
add more layers of increasingly detailed noise without affecting the overall pattern. High values take
longer to render but can produce a more natural result (not all graphics cards support higher detail
levels in hardware).
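The octave-layering idea behind the Detail control can be sketched as follows, using a stand-in noise function. Everything here (the base noise, the halving amplitude, the doubling frequency) is an illustrative assumption, not Fusion's noise implementation:

```python
import math

# A stand-in for the underlying noise function; any function
# returning values in 0..1 would do for this sketch.
def base_noise(u, v):
    return 0.5 + 0.5 * math.sin(12.9898 * u + 78.233 * v)

def fractal_noise(u, v, detail):
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(int(detail) + 1):
        # Each extra octave is finer (higher frequency) and fainter
        # (lower amplitude), adding detail without changing the
        # overall pattern.
        total += amplitude * base_noise(u * frequency, v * frequency)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # normalized back into 0..1

print(0.0 <= fractal_noise(0.3, 0.7, 4) <= 1.0)  # True
```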
Brightness
This control adjusts the overall Brightness of the noise map.
Contrast
This control increases or decreases the overall Contrast of the noise map. It can exaggerate the effect
of the noise.
Scale
The scale of the noise map can be adjusted using the Scale slider, changing it from gentle variations
over the entire image to a tighter overall texture effect. This value represents the scale along
the UV axis.
Seethe
(2D only) The Seethe control smoothly varies the 2D noise pattern.
Seethe Rate
(2D only) As with the Seethe control above, the Seethe Rate also causes the noise map to evolve and
change. The Seethe Rate defines the rate at which the noise changes each frame, causing an animated
drift in the noise automatically, without the need for spline animation.
Discontinuous
Normally, the noise function interpolates between values to create a smooth continuous gradient of
results. You can enable the Discontinuous checkbox to create hard discontinuity lines along some of
the noise contours. The result is a dramatically different effect.
Invert
Enable the Invert checkbox to invert the noise, creating a negative image of the original pattern.
This is most effective when Discontinuous is also enabled.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Gradient 3D [3Gd]
— Texture Transform Node: The Texture Transform node can be used to adjust the
mapping per pixel.
The gradient defaults to a linear gradient that goes from -1 to +1 along the Z-axis. All primitives in the
Shape 3D node can output a third texture coordinate for UVW mapping.
Inputs
The Gradient node has no Inputs. The output of the node is connected to a material input on
3D geometry.
Inspector
Gradient 3D controls
Gradient Type
Determines the type or pattern used for the gradient.
Gradient 3D modes
Gradient Bar
The Gradient control consists of a bar where it is possible to add, modify, and remove color stops of
the gradient. Each triangular color stop on the Gradient bar represents a color in the gradient. It is
possible to animate the color as well as the position of the point. Furthermore, a From Image modifier
can be applied to the gradient to evaluate it from an image.
Interpolation Space
The gradient is linearly interpolated from point to point in RGB color space by default. This can
sometimes lead to unwanted colors. Choosing another color space may provide a better result.
Scale
Allows sizing of the gradient.
Offset
Allows panning through the gradient.
Repeat
Defines how the left and right borders of the gradient are treated.
— Once: When using the Gradient Offset control to shift the gradient, the border colors keep their
values. Shifting the default gradient to the left results in a white border on the left, while shifting it
to the right results in a black border on the right.
Sub Pixel
Determines the accuracy with which the gradient is created.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single image input on the Sphere Map node accepts a 2D image texture in an equirectangular
format (where the X-axis represents 0–360 degrees longitude, and the Y-axis represents –90 to +90
degrees latitude).
— ImageInput: The orange Image input accepts a 2D RGBA image. Preferably, this is an
equirectangular image that shows the entire vertical and horizontal angle of view up
to 360 degrees.
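The equirectangular convention can be sketched as a direction-to-UV mapping. The axis orientation and function names here are assumptions for illustration, not Fusion's internal conventions:

```python
import math

# Illustrative sketch: mapping a 3D direction to equirectangular UVs,
# where U spans 0-360 degrees of longitude and V spans -90 to +90
# degrees of latitude.
def equirect_uv(x, y, z):
    length = math.sqrt(x * x + y * y + z * z)
    longitude = math.atan2(x, -z)           # -pi..pi around the Y axis
    latitude = math.asin(y / length)        # -pi/2..pi/2 toward the poles
    u = longitude / (2.0 * math.pi) + 0.5   # 0..1 across the width
    v = latitude / math.pi + 0.5            # 0..1 across the height
    return u, v

print(equirect_uv(1.0, 0.0, 0.0))  # (0.75, 0.5): on the horizon, a quarter turn around
```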
Inspector
Controls Tab
The Controls tab in the Inspector modifies the mapping of the image input to the sphere map.
Angular Mapping
Adjusts the texture coordinate mapping so the poles are less squashed and areas in the texture get
mapped to equal areas on the sphere. It turns the mapping of the latitude lines from a hemispherical
fisheye to an angular fisheye. This mapping attempts to preserve area and makes it easier to paint on
or modify a sphere map since the image is not as compressed at the poles.
Rotation
Offers controls to rotate the texture map.
The node expects an image with an aspect ratio of 2:1. Otherwise, the image is clamped according to
the following rules:
— width > 2 * height: The width is fitted onto the sphere, and the poles display clamped edges.
— width < 2 * height: The height is fitted onto the sphere, and there is clamping about the
0-degree longitude line.
Common Controls
Settings tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
NOTE: If you pipe the texture directly into the sphere, it is also mirrored horizontally. You
can change this by using a Transform node first.
NOTE: Background pixels may have U and V values of 0.0, which set those pixels to
the color of the texture’s corner pixel. To restrict texturing to specific objects, use an
effect mask based on the Alpha of the object, or its Object or Material ID channel.
For more information, see Chapter 18, “Understanding Image Channels,” in the Fusion
Reference Manual.
Inputs
— Image Input: The orange image input expects a 2D image.
A Texture 2D node is used to set the 3D texture metadata for the input image.
Texture 2D controls
Controls Tab
The Controls tab of the Inspector includes the following options.
U/V Offset
These sliders can be used to offset the texture along the U and V coordinates.
U/V Scale
These sliders can be used to scale the texture along the U and V coordinates.
Wrap Mode
If a texture is transformed in the texture space (using the controls below or the UV Map node), then
it’s possible that areas beyond the image borders will be mapped on the object. The Wrap Mode
determines how the image is applied in these areas.
— Wrap: The image is tiled by wrapping around at its borders.
— Clamp: The color at the edges of the image is used for texturing. This mode is similar to the
Duplicate mode in the Transform node.
— Black: The image is clipped along its edges. A black color with Alpha = 0 is used instead.
— Mirror: The image is mirrored in both X and Y.
— Nearest: The simplest filtering technique is very fast but can cause artifacts when scaling
textures.
— Bilinear: A standard isotropic filtering technique for scaling textures into multiple resolutions.
Works well for magnification of textures.
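The wrap behaviors above can be sketched as operations on a single texture coordinate. This is a behavioral sketch under common texturing conventions, not Fusion's implementation:

```python
# Illustrative sketch: how the Wrap modes treat a texture coordinate
# that falls outside the 0..1 range.
def wrap_coord(t, mode):
    if mode == "Wrap":                      # tile the image
        return t % 1.0
    if mode == "Clamp":                     # repeat the edge color
        return min(max(t, 0.0), 1.0)
    if mode == "Mirror":                    # reflect back and forth
        t = t % 2.0
        return 2.0 - t if t > 1.0 else t
    if mode == "Black":                     # outside: transparent black
        return t if 0.0 <= t <= 1.0 else None
    raise ValueError(mode)

print(wrap_coord(1.25, "Wrap"))    # 0.25
print(wrap_coord(1.25, "Clamp"))   # 1.0
print(wrap_coord(1.25, "Mirror"))  # 0.75
print(wrap_coord(1.25, "Black"))   # None
```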
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other 3D nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Texture Transform node includes a single input that is used to connect the image or material you
want to transform.
— Material Input: The orange Material input accepts a 2D image or 3D material whose texture
coordinates are transformed using the controls in the Inspector.
NOTE: Not all Wrap modes are supported by all graphics cards.
Controls Tab
The Controls tab for the Texture Transform node includes many common transform controls that are
used to transform the texture using UVW coordinates.
Translation
The U, V, W translation sliders shift the texture along U, V, and W axes.
Rotation
Rotation Order buttons set the order in which the rotation is applied. In conjunction with the buttons,
the UVW dials define the rotation around the UVW axes.
Scale
U, V, W sliders scale the texture along the UVW axes.
Pivot
U, V, W Pivot sets the reference point for rotation and scaling.
Material ID
This slider sets the numeric identifier assigned to this material. This value is rendered into the MatID
auxiliary channel if the corresponding option is enabled in the renderer.
Settings Tab
The Common Settings tab can be found on most tools in Fusion. The following controls are specific
settings for 3D nodes.
Comment Tab
The Comment tab contains a single text control that is used to add comments and notes to the tool.
When a note is added to a tool, a small red dot icon appears next to the setting’s tab icon, and a text
bubble appears on the node. To see the note in the Node Editor, hold the mouse pointer over the
node for a moment. The contents of the Comments tab can be animated over time, if required.
Scripting Tab
The Scripting tab is present on every tool in Fusion. It contains several edit boxes used to add scripts
that process when the tool is rendering. For more details on the contents of this tab, please consult
the scripting documentation.
Fusion Page Effects | Chapter 32 3D Texture Nodes 884
Chapter 33
Blur Nodes
This chapter details the Blur nodes available in Fusion.
The abbreviations next to each node name can be used
in the Select Tool dialog when searching for tools and in
scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Blur [Blur] ........................................................................................................................... 886
Inputs
The two inputs on the Blur node are used to connect a 2D image and an effect mask that can be used
to limit the blurred area.
— Input: The orange input is used for the primary 2D image that is blurred.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the blur to only those
pixels within the mask. An effect mask is applied to the tool after the tool is processed.
Inspector
Blur controls
Controls Tab
The Controls tab contains the primary controls necessary for customizing the blur operation, including
five filter algorithms.
Filter
The Filter menu is where you select the type of filter used to create the blur.
— Box Blur: This option is faster than the Gaussian blur but produces a lower-quality result.
— Bartlett: This option is a more subtle, anti-aliased blur filter.
— Multi-box: Multi-box uses a Box filter layered in multiple passes to approximate a Gaussian
shape. With a moderate number of passes (e.g., four), a high-quality blur can be obtained, often
faster than the Gaussian filter and without any ringing.
— Gaussian: Gaussian applies a smooth, symmetrical blur filter, using a sophisticated constant-time
Gaussian approximation algorithm.
— Fast Gaussian: Fast Gaussian applies a smooth, symmetrical blur filter, using a sophisticated
constant-time Gaussian approximation algorithm. This mode is the default filter method.
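To see why a few box passes approach Gaussian quality, consider this sketch in plain Python (an illustration only, not Fusion code; the helper names are hypothetical):

```python
# Repeated box passes approximate a Gaussian: convolving a box kernel with
# itself several times tends toward a bell-shaped (Gaussian-like) kernel.

def box_blur_1d(samples, radius):
    """Average each sample with its neighbors within `radius` (edges clamped)."""
    n = len(samples)
    out = []
    for i in range(n):
        window = [samples[max(0, min(n - 1, j))]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def multi_box_blur_1d(samples, radius, passes):
    """Layer several box passes, as the Multi-box filter does."""
    for _ in range(passes):
        samples = box_blur_1d(samples, radius)
    return samples

# A single impulse: one box pass leaves a flat, box-like profile, while
# several passes produce a smooth, bell-shaped falloff with no ringing.
impulse = [0.0] * 20 + [1.0] + [0.0] * 20
one_pass = multi_box_blur_1d(impulse, radius=3, passes=1)
four_passes = multi_box_blur_1d(impulse, radius=3, passes=4)
```

After one pass the impulse spreads into a flat plateau; after four passes the profile is smooth and bell shaped, with no overshoot or ringing.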
NOTE: This is not the same as the RGBA checkboxes found under the common controls.
The node takes these selections into account before it processes the image, so deselecting
a channel causes the node to skip that channel when processing, speeding up the rendering
of the effect. In contrast, the channel controls under the Common Controls tab are applied
after the node has processed.
Lock X/Y
Locks the X and Y Blur sliders together for symmetrical blurring. This is enabled by default.
Blur Size
Sets the amount of blur applied to the image. When the Lock X and Y control is deselected,
independent control over each axis is provided.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Blend
The Blend slider determines the percentage of the affected image that is mixed with the original image. It
blends in more of the original image as the value gets closer to 0.
This control is a cloned instance of the Blend slider in the Common Controls tab. Changes made to this
control are simultaneously made to the one in the common controls.
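The mix the Blend slider performs can be sketched as a simple linear interpolation (an illustration in plain Python, not Fusion’s actual implementation):

```python
def blend(original, affected, blend_amount):
    """Mix the processed result back toward the original.
    blend_amount = 1.0 gives the fully affected image;
    blend_amount = 0.0 returns the original unchanged."""
    return [o + (a - o) * blend_amount for o, a in zip(original, affected)]

original = [0.2, 0.5, 0.8]   # hypothetical pixel values
blurred = [0.4, 0.5, 0.6]    # the same pixels after the effect
half_mix = blend(original, blurred, 0.5)  # halfway between the two
```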
Examples
Following is a comparison of Blur filters visualized as “cross-sections” of a filtered edge. As you can
see, Box creates a linear ramp, while Bartlett creates a somewhat smoother ramp. Multi-box and
Gaussian are indistinguishable unless you zoom in really close on the slopes. They both lead to even
smoother ramps, but as mentioned above, Gaussian overshoots slightly and may lead to negative
values if used on floating-point images.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Defocus node are for connecting a 2D image and an effect mask that can be
used to limit the simulated defocused area.
— Input: The orange input is used for the primary 2D image for defocusing.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the defocus to only
those pixels within the mask. An effect mask is applied to the tool after it is processed.
Inspector
Defocus controls
Filter
Use this menu to select the exact method applied to create the defocus. Gaussian applies a simplistic
effect, while Lens mode creates a more realistic defocus. Lens mode takes significantly longer
than Gaussian.
Lock X/Y
When Lock X/Y is selected, this performs the same amount of defocusing to both the X- and Y-axis of
the image. Deselect to obtain individual control.
Defocus Size
The Defocus Size control sets the size of the defocus effect. Higher values blur the image by greater
amounts and produce larger blooms.
Bloom Level
The Bloom Level control determines the intensity and size of the blooming applied to pixels that are
above the bloom threshold.
Bloom Threshold
Pixels with values above the set Bloom Threshold are defocused and have a glow applied (blooming).
Pixels below that value are only defocused.
The following four lens options are available only when the Filter is set to Lens.
— Lens Type: The basic shape used to create the “bad bokeh” effect. This can be refined further with
the Angle, Sides, and Shape sliders.
— Lens Angle: Defines the rotation of the shape. Best visible with NGon lens types. Because of the
round nature of a circle, this slider has no visible effect when the Lens Type is set to Circle.
— Lens Sides: Defines how many sides the NGon shapes have. Best visible with NGon lens
types. Because of the round nature of a circle, this slider has no visible effect when the Lens
Type is set to Circle.
— Lens Shape: Defines how pointed the NGons are. Higher values create a more pointed, starry
look. Lower values create smoother NGons. Best visible with NGon lens types and Lens Sides
between 5 and 10. Because of the round nature of a circle, this slider has no visible effect when
the Lens Type is set to Circle.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Directional Blur node are used to connect a 2D image and an effect mask which
can be used to limit the blurred area.
— Input: The orange input is used for the primary 2D image that has the directional blur applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the directional blur to
only those pixels within the mask. An effect mask is applied to the tool after it is processed.
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the
directional blur operation.
Type
This menu is used to select the type of directional blur to be applied to the image.
— Linear: Linear distorts the image in a straight line, resembling the scenery that appears in the
window of a speeding train.
— Radial: Radial creates a distortion that originates at some arbitrary center, radiating outward the
way that a view would appear if one were at the head of the train looking forward.
— Centered: The Centered button produces a similar result to linear, but the blur effect is equally
distributed on both sides of the original.
— Zoom: Zoom creates a distortion in the scale of the image smear to simulate the zoom streaking
of a camera filming with a slow shutter speed.
Center X and Y
This coordinate control and its related viewer crosshair affect the Radial and Zoom Motion Blur types
only. They are used to position where the blurring effect starts.
Length
Length adjusts the strength and heading of the effect. Values lower than zero cause the blur to head
in the direction opposite the Angle control. Values greater than the slider maximum may be typed into
the slider’s edit box.
Angle
In both Linear and Centered modes, this control modifies the direction of the directional blur. In the
Radial and Zoom modes, the effect is similar to the camera spinning while looking at the same spot. If
the Length slider is set to a value other than zero, this creates a whirlpool effect.
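For a linear blur, the Length and Angle values together define the smear direction. A minimal sketch (plain Python, not Fusion’s internals; the function name is hypothetical):

```python
import math

def blur_vector(length, angle_degrees):
    """Convert Length and Angle into a per-pixel smear offset for a linear
    directional blur. A negative length heads opposite the Angle control."""
    radians = math.radians(angle_degrees)
    return (length * math.cos(radians), length * math.sin(radians))

right = blur_vector(10.0, 0.0)     # smear 10 pixels along +X
up = blur_vector(10.0, 90.0)       # smear 10 pixels along +Y
reverse = blur_vector(-10.0, 0.0)  # negative Length reverses the heading
```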
Glow
This adds a Glow to the directional blur, which can be used to duplicate the effect of increased camera
exposure to light caused by longer shutter speeds.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition to use
the full frame of the image, effectively ignoring the current domain of definition. If the upstream
DoD is smaller than the frame, the remaining area in the frame is treated as black/transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Glow [Glo]
Inputs
The Glow node has three inputs: an orange one for the primary 2D image input, a blue one for an
effect mask, and a third white input for a Glow mask.
— Input: The orange input is used for the primary 2D image that has the glow applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the
source of the glow to only those pixels within the mask. An effect mask is applied to the tool
after it is processed.
— Glow Mask: The Glow node supports pre-masking using the white glow mask input. A Glow
pre-mask filters the image before applying the glow. The glow is then merged back over the
original image. This is different from a regular effect mask that clips the rendered result.
The Glow mask allows the glow to extend beyond the borders of the mask, while restricting the source
of the glow to only those pixels within the mask.
Inspector
Glow controls
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the glow operation. A
Color Scale section at the bottom of the Inspector can be used for tinting the glow.
Filter
Use this menu to select the method of Blur used in the filter. The selections are described below.
NOTE: This is not the same as the RGBA checkboxes found under the common controls.
The node takes these selections into account before it processes the image, so deselecting
a channel causes the node to skip that channel when processing, speeding up the rendering
of the effect. In contrast, the channel controls under the Common Controls tab are applied
after the node has processed.
Lock X/Y
When Lock X/Y is checked, both the horizontal and vertical glow amounts are locked. Otherwise,
separate amounts of glow may be applied to each axis.
Glow Size
Glow Size determines the size of the glow effect. Larger values expand the size of the glowing
highlights of the image.
Num Passes
Only available in Multi-box mode. Larger values lead to a smoother distribution of the effect, but also
increase render times. It is best to find a balance between desired quality and acceptable render times.
Glow
The Glow slider determines the intensity of the glow effect. Larger values tend to completely blow the
image out to white.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering.
This is profoundly important for nodes like Blur, which may require samples from portions of the
image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If
the upstream DoD is smaller than the frame, the remaining area in the frame is treated as
black/transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Blend
The Blend slider determines the percentage of the affected image that is mixed with the original image. It
blends in more of the original image as the value gets closer to 0.
This control is a cloned instance of the Blend slider in the Common Controls tab. Changes made to this
control are simultaneously made to the one in the common controls.
Apply Mode
Three Apply Modes are available when it comes to applying the glow to the image.
— Normal: Default. This mode simply adds the glow directly over top of the original image.
— Merge Under: Merge Under places the glow beneath the image, based on the Alpha channel.
— Threshold: This control clips the effect of the glow. A new range slider appears. Pixels in the
glowed areas with values below the low value are pushed to black. Pixels with values greater than
high are pushed to white.
— High-Low Range Control: Available only in Threshold mode. Pixels in the glowed areas
with values below the low value are pushed to black. Pixels with values greater than high are
pushed to white.
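The high-low clipping described above can be sketched as follows (an illustration; treating in-between values as passing through unchanged is an assumption):

```python
def threshold_clip(value, low, high):
    """Clip a glowed pixel: values below `low` are pushed to black, values
    above `high` are pushed to white. Values in between are assumed to
    pass through unchanged."""
    if value < low:
        return 0.0
    if value > high:
        return 1.0
    return value

pixels = [0.1, 0.4, 0.95]
clipped = [threshold_clip(p, low=0.2, high=0.9) for p in pixels]  # → [0.0, 0.4, 1.0]
```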
By clicking and holding the Pick button, then dragging the pointer over the viewer, you can select a
specific color from the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Sharpen node are used to connect a 2D image and an effect mask that can limit
the area affected by the sharpen.
— Input: The orange input is used for the primary 2D image for sharpening.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the sharpen to only
those pixels within the mask. An effect mask is applied to the tool after it is processed.
Inspector
Sharpen controls
NOTE: This is not the same as the RGBA checkboxes found under the common controls.
The node takes these selections into account before it processes the image, so deselecting
a channel causes the node to skip that channel when processing, speeding up the rendering
of the effect. In contrast, the channel controls under the Common Controls tab are applied
after the node has processed.
Lock X/Y
This locks the X and Y Sharpen sliders together for symmetrical sharpening. This is checked by default.
Amount
This slider sets the amount of sharpening applied to the image. When the Lock X/Y control is
deselected, independent control over each axis is provided.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Blend
The Blend slider determines the percentage of the affected image that is mixed with the original image. It
blends in more of the original image as the value gets closer to 0.
This control is a cloned instance of the Blend slider in the Common Controls tab. Changes made to this
control are simultaneously made to the one in the common controls.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
This node is perfect for atmospheric haze around planets, skin tones, and simulating dreamlike
environments.
Inputs
Like the Glow node, Soft Glow also has three inputs: an orange one for the primary image input, a
blue one for an effect mask, and a third white input for a Glow mask.
— Input: The orange input is used for the primary 2D image for the soft glow.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the soft glow to only
those pixels within the mask. An effect mask is applied to the tool after it is processed.
— Glow Mask: The Soft Glow node supports pre-masking using the white glow mask input. A Glow
pre-mask filters the image before applying the soft glow. The soft glow is then merged back over
the original image. This is different from a regular effect mask that clips the rendered result.
The Glow mask allows the soft glow to extend beyond the borders of the mask, while restricting the
source of the soft glow to only those pixels within the mask.
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the soft glow operation.
A color scale section at the bottom of the Inspector can be used for tinting the soft glow.
Filter
Use this menu to select the method of Blur used in the filter. The selections are described below.
NOTE: This is not the same as the RGBA checkboxes found under the common controls.
The node takes these selections into account before it processes the image, so deselecting
a channel causes the node to skip that channel when processing, speeding up the rendering
of the effect. In contrast, the channel controls under the Common Controls tab are applied
after the node has processed.
Threshold
This control is used to limit the effect of the soft glow. The higher the threshold, the brighter the pixel
must be before it is affected by the glow.
Lock X/Y
When Lock X/Y is checked, both the horizontal and vertical glow amounts are locked. Otherwise,
separate amounts of glow may be applied to each axis of the image.
Glow Size
This amount determines the size of the glow effect. Larger values expand the size of the glowing
highlights of the image.
Num Passes
Available only in Multi-box mode. Larger values lead to a smoother distribution of the effect, but also
increase render times. It is best to find a balance between desired quality and acceptable render times.
Clipping Mode
This option determines how edges are handled when performing domain-of-definition rendering. This
is profoundly important for nodes like Blur, which may require samples from portions of the image
outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Blend
The Blend slider determines the percentage of the affected image that is mixed with the original image.
It blends in more of the original image as the value gets closer to 0.
This control is a cloned instance of the Blend slider in the Common Controls tab. Changes made to this
control are simultaneously made to the one in the common controls.
By clicking and holding the Pick button, then dragging the pointer over the viewer, you can select a
specific color from the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
This filter extracts a range of frequencies from the image and blurs them to reduce detail. The blurred
result is then compared to the original image. Pixels with a significant difference between the original
and the blurred image are likely to be edge detail. Those pixels are then brightened to enhance them.
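The process above (blur, compare, brighten the differences) can be sketched in plain Python (an illustration only, not Fusion’s implementation; a simple box blur stands in for the node’s actual filter):

```python
def box_blur_1d(samples, radius):
    """Simple clamped box blur standing in for the node's blur step."""
    n = len(samples)
    return [
        sum(samples[max(0, min(n - 1, j))] for j in range(i - radius, i + radius + 1))
        / (2 * radius + 1)
        for i in range(n)
    ]

def unsharp_mask(samples, radius, gain, threshold):
    """Brighten pixels that differ significantly from their blurred neighborhood."""
    blurred = box_blur_1d(samples, radius)
    out = []
    for original, soft in zip(samples, blurred):
        difference = original - soft      # large difference => likely edge detail
        if abs(difference) > threshold:   # low-contrast areas are left alone
            out.append(original + difference * gain)
        else:
            out.append(original)
    return out

# A hard step edge: pixels on either side of the step are exaggerated,
# while the flat regions are untouched.
edge = [0.0] * 5 + [1.0] * 5
sharpened = unsharp_mask(edge, radius=2, gain=1.0, threshold=0.1)
```

The threshold keeps low-contrast regions untouched, while the gain exaggerates the difference on either side of the edge.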
Inputs
The two inputs on the Unsharp Mask node are used to connect a 2D image and an effect mask for
limiting the effect.
— Input: The orange input is used for the primary 2D image for the Unsharp Mask.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the Unsharp Mask to
only those pixels within the mask. An effect mask is applied to the tool after it is processed.
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the
Unsharp Mask operation.
NOTE: This is not the same as the RGBA checkboxes found under the common controls.
The node takes these selections into account before it processes the image, so deselecting
a channel causes the node to skip that channel when processing, speeding up the rendering
of the effect. In contrast, the channel controls under the Common Controls tab are applied
after the node has processed.
Lock X/Y
When Lock X/Y is checked, both the horizontal and vertical sharpen amounts are locked. Otherwise,
separate amounts of sharpening may be applied to each axis of the image.
Size
This control adjusts the size of the blur filter applied to the extracted image. The higher this value, the
more likely it is that pixels are identified as detail.
Gain
The Gain control adjusts how much gain is applied to pixels identified as detail by the mask. Higher
values create a sharper image.
Threshold
This control determines the frequencies from the source image to be extracted. Raising the value
eliminates lower-contrast areas from having the effect applied.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
There are two inputs on the Vari Blur node for the primary image: the blur map image, and an
effect mask.
— Input: The gold image input is a required connection for the primary image you wish to blur.
— Blur Image: The green input is also required, but it can accept a spline shape, text object, still
image, or movie file as the blur map image. Once connected, you can choose red, green, blue,
Alpha, or luminance channel to create the shape of the blur.
— Effect Mask: The optional blue effect mask input expects a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the Vari Blur to only those pixels within the mask. An effect mask is applied to the tool after
it is processed.
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the Vari Blur operation.
Method
Use this menu to select the method of Blur used in the filter. The selections are described below.
— Soften: This method varies from a simple Box shape to a Bartlett triangle to a decent-looking
Smooth blur as Quality is increased. It is a little better at preserving detail in less-blurred areas
than Multi-box.
— Multi-box: Similar to Soften, this gives a better Gaussian approximation at higher Quality settings.
— Defocus: Produces a flat, circular shape to blurred pixels that can approximate the
look of a defocus.
Quality
Increasing Quality gives smoother blurs, at the expense of speed. Quality set to 1 uses a very fast
but simple Box blur for all Method settings. A Quality of 2 is usually sufficient for low Blur Size values.
A Quality of 4 is generally good enough for most jobs unless Blur Size is particularly high.
Blur Channel
This selects which channel of the Blur Image controls the amount of blurring applied to each pixel.
Lock X/Y
When selected, only a Blur Size control is shown, and changes to the amount of blur are applied to
both axes equally. If the checkbox is cleared, individual controls appear for both X and Y Blur Size.
Blur Size
Increasing this control increases the overall amount of blur applied to each pixel. Pixels where the
Blur image is black or nonexistent are never blurred, regardless of the Blur Size.
Blur Limit
This slider limits the useable range from the Blur image. Some Z-depth images can have values that
go to infinity, which skew blur size. The Blur Limit is a way to keep values within range.
The vector map is typically two floating-point images: one channel specifies how far the pixel
is moving in X, and the other specifies how far the pixel is moving in Y. These channels may be
embedded in OpenEXR or RLA/RPF images, or may be provided as separate images using the node’s
Vectors input.
The vector channels should use a float16 or float32 color depth, to provide + and – values.
A value of 1 in the X channel would indicate that pixel has moved one pixel to the right, while a value of
–10 indicates ten pixels of movement to the left.
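Reading such a vector pass can be sketched as follows (an illustration, not Fusion code; the linear smear and the function name are simplifying assumptions):

```python
# Each pixel of a motion-vector pass stores a signed float: +1.0 in the X
# channel means the pixel moved one pixel to the right, -10.0 means ten
# pixels to the left.

def smear_positions(x, y, vec_x, vec_y, samples=5):
    """Return the positions a pixel would be sampled at when smeared
    along its motion vector (a simple linear smear, edges ignored)."""
    positions = []
    for i in range(samples):
        t = i / (samples - 1)  # 0.0 .. 1.0 along the vector
        positions.append((x + vec_x * t, y + vec_y * t))
    return positions

# A pixel at (100, 50) that moved 4 pixels right and 2 pixels down:
trail = smear_positions(100, 50, vec_x=4.0, vec_y=2.0)
# trail runs from (100.0, 50.0) to (104.0, 52.0)
```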
Inputs
The Vector Motion Blur node has three inputs for a 2D image, a motion vector pass, and an
effect mask.
— Input: The required orange input is for a 2D image that receives the motion blur.
— Vectors: The green input is also required. This is where you connect a motion vector AOV
rendered from a 3D application or an EXR file generated from the Optical Flow node in Fusion.
— Vector Mask: The white Vector Mask input is an optional input that masks the image before
processing.
— Effect Mask: The common blue input is used for a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
restricts the source of the motion blur to only those pixels within the mask. An effect mask is
applied to the tool after it is processed.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the Vector Motion
Blur operation.
X Channel
Use this menu to select which channel of the image provides the vectors for the movement of the
pixels along the X-axis.
Y Channel
Use this menu to select which channel of the image provides the vectors for the movement of the
pixels along the Y-axis.
Flip Channel
These checkboxes can be used to flip, or invert, the X and Y vectors. For instance, a value of 5 for a
pixel in the X-vector channel would become –5 when the X checkbox is enabled.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Blur nodes. These common controls are
described in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Blur category. The Settings
controls are even found on third-party Blur-type plugin tools. The controls are consistent and work the
same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
For example, if the Red button on a Blur tool is deselected, the blur is first applied to the image, and
then the red channel from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools where deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not in the mask (i.e. set to 0) to become
black/transparent.
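As a sketch of the multiplication this option performs (plain Python, with a hypothetical helper name):

```python
def multiply_by_mask(rgb, mask):
    """Multiply the RGB values by the mask channel's value: pixels where
    the mask is 0 become black/transparent."""
    return tuple(channel * mask for channel in rgb)

inside = multiply_by_mask((0.8, 0.6, 0.4), 1.0)   # fully inside the mask: unchanged
edge = multiply_by_mask((0.8, 0.6, 0.4), 0.5)     # soft mask edge: half intensity
outside = multiply_by_mask((0.8, 0.6, 0.4), 0.0)  # outside the mask: black
```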
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels,” in the Fusion Reference Manual.
Use GPU
The GPU menu has three settings. Disable turns off GPU hardware-accelerated rendering. Enabled
uses the GPU hardware for rendering the node. Auto uses a capable GPU if one is available and falls
back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Color Nodes
This chapter details the Color nodes available in Fusion.
The abbreviations next to each node name can be used
in the Select Tool dialog when searching for tools and in
scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Color Curves [CCv] ........................................................................................................ 946
Chromatic Aberration Removal [CAr] ......................................................................... 924
OCIO CDL Transform [OCD] ......................................................................................... 970
ACES Transform [ATr]
Inputs
The two inputs on the ACES Transform node are the input and effect mask.
— Input: The orange input connects the primary 2D image for the ACES transform.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the adjustment to only
those pixels within the mask. An effect mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab contains the few primary controls necessary for performing the ACES transform on
the input.
ACES Version
Lets you choose which version of ACES you want to use.
Input Transform
This menu lets you choose which IDT (Input Device Transform) to use to process the image. Typically
you would pick the camera that your footage was shot with here.
Output Transform
This menu lets you choose an ODT (Output Device Transform) with which to transform the image data
to the color space for your exported timeline.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Chromatic Adaptation node are the input and effect mask.
— Input: The orange input connects the primary 2D image for the chromatic adaptation.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the adjustment to only
those pixels within the mask. An effect mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab contains the primary controls necessary for adjusting the Chromatic Adaptation
parameters.
Method
The Method pop-up menu provides a variety of different transform methods to choose from,
defaulting to CAT02. Each option in the Method pop-up menu uses different measurement datasets
to create individual CAT matrices to guide this transformation. As a result, each method prioritizes
different levels of accuracy for different sets of colors. For example:
— CAT02 has a non-linear component that compensates for the tendency of extremely saturated
blues to go purple, a typical weakness of other methods. It usually gives the best result for the
widest variety of measured data sets and works best for emissive sources (displays) and dim
viewing environments.
— Bradford Linear is also a commonly used method, albeit one in which extremely saturated blues
will go purple during the transform. It works well for both emissive sources in dim environments
and for reflective sources (theater screens) and dark environments.
— Von Kries is one of the oldest methods in common use, although it’s also one in which extremely
saturated blues will go purple during the transform. This, as well as all other methods, are
available if you need to match work done in another image processing application.
NOTE: Be aware that all methods listed will match neutral colors perfectly; the only
differences lie in how different ranges of saturated color are transformed.
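The linear Bradford method described above can be sketched as a von Kries-style transform: convert the source and destination white points into a cone-response space, scale each response by the ratio of the two whites, and convert back. The matrix coefficients below are the published Bradford values, but the code is a simplified illustration, not Fusion's implementation.

```python
# Von Kries-style chromatic adaptation using the Bradford cone-response
# matrix (illustrative sketch; Fusion's internals may differ).

# Bradford cone-response matrix (standard published coefficients).
M = [[ 0.8951,  0.2664, -0.1614],
     [-0.7502,  1.7135,  0.0367],
     [ 0.0389, -0.0685,  1.0296]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def inverse3(m):
    # Inverse of a 3x3 matrix via the adjugate.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def adaptation_matrix(src_white_xyz, dst_white_xyz):
    # Scale each cone response by the ratio of the destination and
    # source white points, then transform back out of cone space.
    src_lms = mat_vec(M, src_white_xyz)
    dst_lms = mat_vec(M, dst_white_xyz)
    scale = [[dst_lms[0]/src_lms[0], 0, 0],
             [0, dst_lms[1]/src_lms[1], 0],
             [0, 0, dst_lms[2]/src_lms[2]]]
    return mat_mul(inverse3(M), mat_mul(scale, M))

# Adapt a D65-referenced color to a D50 white point.
D65 = [0.95047, 1.00000, 1.08883]
D50 = [0.96422, 1.00000, 0.82521]
A = adaptation_matrix(D65, D50)
adapted_white = mat_vec(A, D65)   # lands on the D50 white point
```

By construction, every method of this family maps the source white exactly onto the destination white, which is why the NOTE above says all methods match neutral colors perfectly; they differ only in how saturated colors are carried along.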
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Color Space Transform node are the input and effect mask.
— Input: The orange input connects the primary 2D image for the color space transform.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the adjustment to only
those pixels within the mask. An effect mask is applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab contains the primary controls necessary for performing the Color Space Transform
on the input.
Tone Mapping
Tone Mapping lets you enable tone mapping to accommodate workflows where you need to
transform one color space into another with a dramatically larger or smaller dynamic range by
automating an expansion or contraction of image contrast in such a way as to give a pleasing result
with no clipping.
— None: This setting disables Input DRT Tone Mapping. No tone mapping is applied to the
Input to Timeline Color Space conversion at all, resulting in a simple 1:1 mapping to the Timeline
Color Space.
— Clip: Hard clips all out-of-bounds values.
— Simple: Uses a simple curve to perform this transformation, compressing or expanding the
highlights and/or shadows of the timeline dynamic range to better fit the output dynamic range.
Note that the “Simple” option maps between approximately 5500 nits and 100 nits, so if you’re
mapping from an HDR source with more than 5500 nits to an SDR destination there may still be
some clipping of the highlights above 5500 nits.
— Luminance Mapping: Same as DaVinci, but more accurate when the Input Color Space of all your
media is in a single standards-based color space, such as Rec. 709 or Rec. 2020.
— DaVinci: This option tone maps the transform with a smooth luminance roll-off in the shadows
and highlights, and controlled desaturation of image values in the very brightest and darkest
parts of the image. This setting is particularly useful for wide-gamut camera media and is a good
setting to use when mixing media from different cameras.
— Saturation Preserving: This option has a smooth luminance roll-off in the shadows and
highlights but does so without desaturating dark shadows and bright highlights, so this is an
effective option for colorists who like to push color harder. However, because over-saturation in
the highlights of the image can look unnatural, two parameters are exposed to provide some
user-adjustable automated desaturation.
— Sat. Rolloff Start: Lets you set the threshold, in nits (cd/m²), at which saturation begins to
roll off along with highlight luminance.
— Sat. Rolloff Limit: Lets you set the threshold, in nits (cd/m²), at which the rolloff ends and
the image becomes totally desaturated.
— Use Custom Max Input/Output: Checking these boxes and adjusting the slider below allows
you to specify the minimum and maximum luminance of the input image in nits. Using these
two sliders together, you can set which value from the Input Gamma is mapped to which value
of the Output Gamma.
— Adaptation: Used to compensate for large differences in the viewer’s state of visual adaptation
when viewing a bright image on an HDR display versus seeing that same image on an SDR display.
For most “average” images this setting works best set between 0–10. However, when you’re
converting very bright images (for example, a snow scene at noon), then using a higher value will
yield more image detail within the highlights.
NOTE: While this node has ACES settings, it performs transforms to the ACES color space
colorimetrically, which is not actually correct for ACES workflows. For true ACES
workflows, use the ACES Transform node, which uses the transforms specified by the Academy.
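The behavior of a "Simple"-style curve can be sketched as a rational highlight rolloff that compresses a large source range (for example, up to 5500 nits) into a smaller output range (for example, 100 nits) without clipping below the maximum. The curve and its constant below are illustrative assumptions, not Fusion's actual transfer function.

```python
# Illustrative highlight-rolloff tone map (a Reinhard-style rational
# curve); not Fusion's actual "Simple" curve.

def simple_tone_map(nits, max_in=5500.0, max_out=100.0):
    a = 0.1                        # controls how early highlights roll off
    x = min(nits / max_in, 1.0)    # values above max_in clip, as noted above
    # Rational rolloff, normalized so that max_in maps exactly to max_out.
    return max_out * x * (1 + a) / (x + a)

# 0 nits stays black; 5500 nits lands on the 100-nit ceiling; everything
# in between is compressed smoothly rather than clipped.
```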
Advanced
This drop-down menu exposes the advanced features of the Color Space Transform node.
— Apply Forward OOTF: Check this box to convert the image from scene referred to display
referred color management.
— Apply Inverse OOTF: Check this box to convert the image from display referred to scene referred
color management.
— Use White Point Adaptation: Applies a chromatic adaptation transform to account for different
white points between color spaces.
— Uncheck this box if you simply want to view the input color space’s white point unaltered in
the output color space. For example, wanting to use a P3-D60 mastered clip inside a P3-D65
timeline for reference purposes.
— Check this box to apply the chromatic adaptation transform to convert the input white point
to match the output color space’s white point. For example, wanting a P3-D60 mastered clip to
cut in with other clips mastered in a P3-D65 timeline.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Gamut Limiter node are the input and effect mask.
— Input: The orange input connects the primary 2D image for the gamut limiter.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the adjustment to only
those pixels within the mask. An effect mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab contains the primary controls necessary for adjusting the Gamut Limiter parameters.
Current Gamut
Choose the timeline gamut currently being used by the image.
Current Gamma
Choose the timeline gamma currently being used by the image.
Limit Gamut
Choose the gamut you want to restrict the image to here.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
— Input: The orange input connects the primary 2D image to be adjusted.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the adjustment to only
those pixels within the mask. An effect mask is applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab contains the primary controls necessary for adjusting the Gamut Limiter parameters.
Gamma
A pop-up menu lets you specify what type of gamma the clip is supposed to have, so set this to
whatever matches that image (this may match the timeline color space, but it depends on how
you’re working).
Advanced
— Apply Forward OOTF: Check this box to convert the image from scene referred to display
referred color management.
— Apply Inverse OOTF: Check this box to convert the image from display referred to scene referred
color management.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Revival
A subcategory of the Color tools in Fusion that pertains to fixing common technical errors in
programs being finished, remastered, or restored.
Inputs
The two inputs on the Chromatic Aberration Removal node are the input and effect mask.
Inspector
Advanced Options
Advanced Options provide additional parameters for problem shots.
— Lens Center: Center X and Y parameters let you offset the center of the lens if you're dealing
with a reframed and re-rendered shot.
— Stronger Correction: Shows the location of image features that resemble fringing.
Estimation Options
The Estimation Options are used solely to highlight and identify areas of fringing. They do not affect
the final output of the image. They only become available when one of the Show Estimated Fringes
boxes is checked in the Aberration Correction section.
— R/C, G/P, B/Y Balance: These tools adjust the incoming balance between their respective
colors to better identify hard-to-see fringing.
— Brightness: Magnifies the fringe indicators that are displayed when you turn on either of the
Show Estimated Fringes checkboxes.
Aberration Correction
These tools let you make manual adjustments to correct aberration issues.
— R/C, G/P, B/Y Scale: Adjust these sliders to eliminate the fringing from their respective
colors.
— R/C, G/P, B/Y Edge: Adjust these sliders to compensate for the difference in fringing due to
the curvature of the lens' edges.
— Show Estimated Fringes: Checking this box shows just the estimated fringes over a gray
background. It also activates the Estimation Options sliders, which let you further highlight
and identify areas of fringing.
This makes it easy to make manual adjustments to correct the problem, using the Scale and Edge
controls to individually adjust Red/Cyan, Green/Purple, and Blue/Yellow fringing.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
This can be useful when compensating for variations in lighting, dealing with low-contrast images, or
visualizing the full color range of float images (although the viewer’s View Normalized Image option is
generally more suitable for this).
Inputs
The two inputs on the Auto Gain node are the input and effect mask.
— Input: The orange input connects the primary 2D image for the auto gain.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the auto gain
adjustment to only those pixels within the mask. An effect mask is applied to the tool after the tool
is processed.
Controls Tab
The Controls tab contains the few primary controls necessary for customizing the Auto Gain operation.
NOTE: Variations over time in the input image can cause corresponding variations in the
levels of the result. For example, if a bright object moves out of an otherwise dark shot, the
remaining scene gets suddenly brighter, as the remaining darker values get stretched to
white. This also applies to sudden depth changes when Do Z is applied; existing objects may
be pushed forward or backward when a near or far object enters or leaves the scene.
Do Z
Select the Do Z checkbox to apply the Auto Gain effect to the Z or Depth channels. This can be useful
for matching the ranges of one Z-channel to another, or to view a float Z-channel in the RGB values.
Range
This Range control sets the black point and white point in the image. All tonal values in the image
rescale to fit within this range.
EXAMPLE Create a horizontal gradient with the Background node. Set one color to dark
gray (RGB Values 0.2). Set the other color to light gray (RGB Values 0.8).
Add an Auto Gain node and set the Low value to 0.0 and the High value to 0.5. This causes
the brightest pixels to be pushed down to 0.5, and the darkest pixels get pushed to black.
The remainder of the pixel values scale between those limits.
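The rescale in the example above can be sketched directly: the image's darkest value maps to Low, its brightest to High, and everything in between scales linearly. This is an illustrative sketch, not Fusion's implementation.

```python
# Illustrative Auto Gain rescale: stretch the image's tonal range so its
# darkest value lands on `low` and its brightest on `high`.

def auto_gain(pixels, low=0.0, high=1.0):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return [low for _ in pixels]
    scale = (high - low) / (hi - lo)
    return [low + (p - lo) * scale for p in pixels]

# The manual's example: a 0.2-0.8 gradient remapped into the 0.0-0.5 range.
gradient = [0.2, 0.35, 0.5, 0.65, 0.8]
result = auto_gain(gradient, low=0.0, high=0.5)
# darkest (0.2) -> 0.0, brightest (0.8) -> 0.5, midpoint -> 0.25
```

This also illustrates the NOTE above: because `lo` and `hi` are recomputed from the current frame, a bright object leaving the shot changes the stretch applied to every remaining pixel.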
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
For this to work best, image processing should operate in 32-bit floating point.
Inputs
The two inputs on the Brightness Contrast node are the input and effect mask.
— Input: The orange input connects the primary 2D image for the brightness contrast.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the Brightness
Contrast adjustment to only those pixels within the mask. An effect mask is applied to the tool
after the tool is processed.
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the brightness, contrast
operations.
NOTE: This is not the same as the RGBA checkboxes found under the common controls.
The node takes these selections into account before it processes the image, so deselecting
a channel causes the node to skip that channel when processing, speeding up the rendering
of the effect. In contrast, the channel controls under the Common Controls tab get applied
after the node has processed.
Gain
The gain slider is a multiplier of the pixel value. A Gain of 1.2 makes a pixel that is R0.5 G0.5 B0.4 into
R0.6 G0.6 B0.48 (i.e., 0.4 * 1.2 = 0.48) while leaving black pixels unaffected. Gain affects higher values
more than it affects lower values, so the effect is most influential in the midrange and top range of
the image.
Lift
While Gain scales the color values around black, Lift scales the color values around white. The
distance between each pixel value and white is scaled by this control. A Lift of 0.5 makes a pixel that
is R0.0 G0.0 B0.0 into R0.5 G0.5 B0.5 while leaving white pixels unaffected. Lift affects lower values
more than it affects higher values, so the effect is most influential in the midrange and low range of the image.
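The Gain and Lift behavior described above reduces to two one-line formulas: Gain multiplies values away from black, while Lift interpolates values toward white. A sketch under those standard definitions, not Fusion's exact math:

```python
# Gain and Lift as formulas (illustrative sketch).

def gain(value, amount):
    # Multiplicative: black (0.0) is unaffected, bright values move most.
    return value * amount

def lift(value, amount):
    # Interpolate toward white: white (1.0) is unaffected, black moves most.
    return value * (1 - amount) + amount

# The manual's worked examples:
g = gain(0.4, 1.2)   # the B channel: 0.4 * 1.2 = 0.48
l0 = lift(0.0, 0.5)  # black lifted to 0.5
l1 = lift(1.0, 0.5)  # white left at 1.0
```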
Contrast
Contrast is the range of difference between the light to dark areas. Increasing the value of this slider
increases the contrast, pushing color from the midrange toward black and white. Reducing the
contrast causes the colors in the image to move toward midrange, reducing the difference between
the darkest and brightest pixels in the image.
Brightness
The value of the Brightness slider gets added to the value of each pixel in the image. This control’s
effect on an image is linear, so the effect is applied identically to all pixels regardless of value.
Saturation
Use this control to increase or decrease the amount of Saturation in the image. A saturation of 0 has
no color, reducing the image to grayscale.
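The Contrast, Brightness, and Saturation behavior described above can be sketched with the usual formulas: contrast scales values around a midrange pivot, brightness is a plain additive offset, and saturation blends each channel toward the pixel's luminance. The formulas and the Rec. 709 luminance weights are illustrative assumptions, not Fusion's exact implementation.

```python
# Illustrative Contrast/Brightness/Saturation formulas.

def contrast(value, amount, pivot=0.5):
    # Push values away from the midrange pivot (amount > 1 raises contrast;
    # amount < 1 pulls values toward the midrange).
    return (value - pivot) * amount + pivot

def brightness(value, offset):
    # Linear: the same offset is added to every pixel regardless of value.
    return value + offset

def saturation(r, g, b, amount):
    # Blend each channel toward luminance; amount 0 yields grayscale.
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(luma + (c - luma) * amount for c in (r, g, b))
```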
Leaving the high anchored at 1.0 and increasing the low is the same as inverting the image colors and
increasing the gain and inverting it back again. This pushes more of the image toward black without
affecting the whites at all.
Direction
Forward applies all values normally. Reverse effectively inverts all values.
Clip Black/White
The Clip Black and Clip White checkboxes clip out-of-range color values that can appear in an image
when processing in floating-point color depth. Out-of-range colors are below black (0.0) or above
white (1.0). These checkboxes have no effect on images processed at 8-bit or 16-bit per channel, as
such images cannot have out-of-range values.
Pre-Divide/Post-Multiply
Selecting the Pre-Divide/Post-Multiply checkbox causes the image pixel values to be divided
by the Alpha values before the color correction, and then re-multiplied by the Alpha value after
the correction.
This helps to prevent the creation of illegally additive images when color correcting images with
premultiplied Alpha channels.
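The round trip described above can be sketched as three steps: divide the color by Alpha to recover the straight (unpremultiplied) values, apply the correction, then multiply by Alpha again. The helper below is hypothetical, not Fusion's API.

```python
# Illustrative Pre-Divide/Post-Multiply round trip for one RGBA pixel.

def corrected_premultiplied(r, g, b, a, correct):
    if a == 0:
        return (r, g, b, a)             # fully transparent: nothing to divide by
    # Pre-divide: recover straight (unpremultiplied) color.
    straight = (r / a, g / a, b / a)
    # Apply the color correction to the straight values.
    corrected = tuple(correct(c) for c in straight)
    # Post-multiply: restore the premultiplied Alpha relationship.
    return tuple(c * a for c in corrected) + (a,)

# Example: doubling gain on a 50%-alpha premultiplied pixel.
out = corrected_premultiplied(0.2, 0.2, 0.2, 0.5, lambda c: c * 2.0)
# -> approximately (0.4, 0.4, 0.4, 0.5)
```

Correcting the straight values and re-multiplying keeps color from exceeding Alpha, which is what prevents the illegally additive edges mentioned above.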
Common Controls
Settings Tab
The Settings tab in the Inspector appears in other Color nodes. These common controls are described
in detail at the end of this chapter in “The Common Controls” section.
NOTE: Be aware of another similarly named Channel Boolean (3Bol), which is a 3D node
used to remap and modify channels of 3D materials. When modifying 2D channels, use the
Channel Booleans (with an “s”) node (Bol).
Inputs
There are four inputs on the Channel Booleans node in the Node Editor, but only the orange
Background input is required.
— Background: This orange input connects a 2D image that gets adjusted by the foreground input
image.
— Effect Mask: The blue effect mask input expects a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the channel booleans adjustment to only those pixels within the mask.
— Foreground: The green foreground input connects a 2D image that is used to adjust the
background input image.
— Matte: The white matte input can be used to combine external mattes with the foreground and
background operations.
Inspector
On the left side are target channels for the image connected into the orange background input.
The drop-down menu to the right lets you choose whether you want to modify the BG image with
its channels (suffix BG after list name) or with the channels from an image connected into the green
foreground input on the node (suffix FG in the drop-down list).
Operation
This menu selects the mathematical operation applied to the selected channels. The options are
as follows:
— Copy: Copy the value from one color channel to another. For example, copy the foreground red
channel into the background's Alpha channel to create a matte.
— Add: Add the color values from one color channel to another channel.
— Subtract: Subtract the color values of one color channel from another color channel.
— And: Perform a logical AND on the color values from color channel to color channel. The
foreground image generally removes bits from the color channel of the background image.
— Or: Perform a logical OR on the color values from color channel to color channel. The foreground
image generally adds bits from the color channel of the background image.
— Exclusive Or: Perform a logical XOR on the color values from color channel to color channel. The
foreground image generally flips bits from the color channel of the background image.
The default setting copies the channels from the foreground input. You can select any one of the four
color channels, as well as several auxiliary channels like Z-buffer, saturation, luminance, and hue.
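The per-channel routing described above can be sketched as a small lookup: each output channel picks a source channel and an operation is applied between the background and foreground values. The dict-based helper is hypothetical, purely to illustrate the routing, and covers only three of the operations listed above.

```python
# Illustrative channel routing for a Channel Booleans-style operation.

OPS = {
    "Copy":     lambda bg, fg: fg,
    "Add":      lambda bg, fg: bg + fg,
    "Subtract": lambda bg, fg: bg - fg,
}

def channel_booleans(bg, fg, routing, op="Copy"):
    # bg/fg are dicts of channel name -> value; routing maps each output
    # channel to the foreground channel it reads, e.g. {"a": "r"} routes
    # the foreground red channel into the output Alpha.
    out = dict(bg)
    f = OPS[op]
    for dst, src in routing.items():
        out[dst] = f(bg[dst], fg[src])
    return out

bg = {"r": 0.1, "g": 0.2, "b": 0.3, "a": 1.0}
fg = {"r": 0.8, "g": 0.5, "b": 0.2, "a": 0.6}
# Copy the foreground red channel into the background's Alpha (a matte).
matted = channel_booleans(bg, fg, {"a": "r"}, op="Copy")
```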
EXAMPLES To copy the Alpha channel of one image to its color channels, set the red,
green, and blue channels to Alpha BG. Set the Operation to Copy.
To copy the Alpha channel from another image, set operation type to Alpha FG.
To replace the existing Alpha channel of an image with the Alpha of another image, choose
“Do Nothing” for To Red, To Green, and To Blue and “Alpha FG” for To Alpha. Pipe the
image containing the Alpha into the foreground input on the Channel Booleans node. Set
Operation: “Copy.” The same operation is available in the Matte Control node.
To combine any mask into an Alpha channel of an image, choose “Do Nothing” for To Red,
To Green, and To Blue and “Matte” for To Alpha. Pipe the mask into the foreground input on
the Channel Booleans node. Set Operation: “Copy.”
To subtract the red channel’s pixels of another image from the blue channel, choose
“Do Nothing” for To Red and To Green and “Red FG” for To Blue. Pipe the image containing
the red channel to subtract into the foreground input on the Channel Booleans node. Set
Operation: “Subtract.”
Common Controls
Settings Tab
The Settings tab in the Inspector appears in other Color nodes. These common controls are described
in detail at the end of this chapter in “The Common Controls” section.
Controls in the Color Corrector node are separated into four tabs: Correction, Ranges, Options,
and Settings.
Inputs
The Color Corrector node includes four inputs in the Node Editor.
— Input: This orange input is the only required connection. It connects a 2D image for color
correction.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the color
corrector adjustment to only those pixels within the mask. An effect mask is applied to the tool
after the tool is processed.
— Match Reference: The green input is used to connect an image that can be a reference for
histogram matching.
— Match Mask: This optional white input accepts any mask much like an effect mask. However, this
mask defines the area to match during a Histogram Match. It offers more flexibility in terms of
shape than the built-in Match rectangle in the Inspector.
Range
This menu determines the tonal range affected by the color correction controls in this tab. The menu
can be set to Shadows, Midtones, Highlights, and Master, where Master is the default affecting the
entire image.
The selected range is maintained throughout the Colors, Levels, and Suppress sections of the
Color Corrector node.
Adjustments made to the image in the Master channel are applied to the image after any changes
made to the Highlight, Midtone, and Shadow ranges.
NOTE: The controls are independent for each color range. For example, adjusting the
Gamma control while in Shadows mode does not change or affect the value of the Gamma
control for the Highlights mode. Each control is independent and applied separately.
The tinting is represented in the color wheel color indicator that shows the color and strength of the
tint. The Highlight setting uses a black outline for the color indicator. The Midtones and Shadows use
gray color indicators. The Master color indicator is also black, but it has a white M in the center to
distinguish it from the others.
The mouse can position the color indicator for each range only when the applicable range is selected.
For example, the Highlight color indicator cannot be moved when the Master range is selected.
Holding down the Command or Ctrl key while dragging this indicator allows you to make finer adjustments
by reducing the control’s sensitivity to mouse movements. Holding down the Shift key limits the movement
of the color indicator to a single axis, allowing you to restrict the effect to either tint or strength.
Tint Mode
This menu is used to select the speed and quality of the algorithm used to apply the hue and
saturation adjustments. The default is Better, but for working with larger images, it may be desirable
to use a faster method.
Hue
This slider is a clone of the Hue control located under the color wheel. The slider makes it easier to
make small adjustments to the value with the mouse. The Hue control provides a method of shifting
the hue of the image (or selected color range) through the color spectrum. The control value has an
effective range between -0.1 and 1.0, which represents the angle of rotation in a clockwise direction.
A value of 0.25 would be 90 degrees (90/360) and would have the effect of shifting red toward blue,
green to red, and so on.
Hue shifting can be done by dragging the slider, entering a value directly into the text control, or by
placing the mouse above the outer ring of the color wheel and dragging the mouse up or down. The
outer ring always shows the shifted colors compared to the original colors shown in the center of
the wheel.
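The rotation described above, where a value of 0.25 corresponds to a 90-degree shift around the wheel, can be sketched with Python's standard colorsys module. The direction of rotation here is a convention of colorsys and may differ from Fusion's.

```python
# Illustrative hue rotation via HSV: add the shift to the hue component
# and wrap around the color wheel.
import colorsys

def shift_hue(r, g, b, amount):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + amount) % 1.0, s, v)

# Pure red rotated by 0.25, i.e. 90 degrees around the wheel:
shifted = shift_hue(1.0, 0.0, 0.0, 0.25)
# A full rotation of 1.0 returns the original color.
```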
Saturation
This slider is a clone of the Saturation control located under the color wheel. The slider makes it easier
to make small adjustments to the value with the mouse. The Saturation control is used to adjust the
intensity of the color values. A saturation of 0 produces gray pixels without any color component,
whereas a value of 1.0 produces no change in the chroma component of the input image. Higher
values generate oversaturated values with a high color component.
Saturation values can be set by dragging the slider, entering a value directly into the text control, or by
dragging the mouse to the left and right on the outer ring of the color wheel control.
Channel
This menu is set for the Histogram, Color, and Levels sections of the Color Corrector node. When the
red channel is selected, the controls in each mode affect the red channel only, and so on.
The controls are independent, so switching to blue does not remove or eliminate any changes made
to red, green, or Master. The animation and adjustments made to each channel are separate. This
menu simply determines what controls to display.
Gain
The Gain slider is a multiplier of the pixel value. A gain of 1.2 makes a pixel that is R0.5 G0.5 B0.4 into
R0.6 G0.6 B0.48 (i.e., 0.4 * 1.2 = 0.48), while leaving black pixels totally unaffected. Gain affects higher
values more than it affects lower values, so the effect is strongest in the midrange and top range of
the image.
Lift
While Gain scales the color values around black, Lift scales the color values around white. The
distance between each pixel value and white is scaled by this control. A Lift of 0.5 makes a pixel that
is R0.0 G0.0 B0.0 into R0.5 G0.5 B0.5, while leaving white pixels totally unaffected. Lift affects lower
values more than it affects higher values, so the effect is strongest in the midrange and low range of the image.
Gamma
Values higher than 1.0 raise the Gamma (mid gray), whereas lower values decrease it. The effect of
this node is not linear, and existing black or white points are not affected at all. Pure gray colors are
affected the most.
Brightness
The value of the Brightness slider is added to the value of each pixel in your image. This control's
effect on an image is linear, so the effect is applied identically to all pixels regardless of value.
Range
Identical to the Range menu when Color is selected in the Menu, the Range menu determines the
tonal range affected by the color correction controls in this tab. The menu can be set to Shadows,
Midtones, Highlights, and Master, where Master is the default affecting the entire image.
The selected range is maintained throughout the Colors, Levels, and Suppress sections of the Color
Corrector node.
Adjustments made to the image in the Master channel are applied to the image after any changes
made to the Highlights, Midtones, and Shadows ranges.
NOTE: The controls are independent for each color range. For example, adjusting the
Gamma control while in Shadows mode does not change or affect the value of the Gamma
control for the Highlights mode. Each control is independent and applied separately.
Channel
This menu is used to select and display the histogram for each color channel or for the
Master channel.
Histogram Display
A histogram is a chart that represents the distribution of color values in the scene. The chart reads
from left to right, with the leftmost values representing the darkest colors in the scene and the
rightmost values representing the brightest. The more pixels in an image with the same or similar
value, the higher that portion of the chart is.
Luminance is calculated per channel; therefore, the red, green, and blue channels all have their own
histogram, and the combined result of these comprises the Master Histogram.
To scale the histogram vertically, place the mouse pointer inside the control and drag the pointer up to
zoom in or down to zoom out.
— Input Histogram: This enables or disables the display of the input image’s histogram.
— Reference Histogram: This enables or disables the display of the reference image’s histogram.
— Output Histogram: This enables or disables the display of the histogram from the post-color-
corrected image.
Histogram Controls
These controls along the bottom of the histogram display are used to adjust the input image’s
histogram, compressing or shifting the ranges of the selected color channel.
The controls can be adjusted by dragging the triangles beneath the histogram display to the left
and right.
Shifting the High value toward the left (decreasing the value) causes the histogram to slant toward
white, shifting the image distribution toward white. The Low value has a similar effect in the opposite
direction, pushing the image distribution toward black.
Output Level
The Output Level control can apply clipping to the image, compressing the histogram. Decreasing the
High control reduces the value of pixels in the image, sliding white pixels down toward gray and gray
pixels toward black.
Adjusting the Low control toward High does the opposite, sliding the darkest pixels toward white.
If the low value were set to 0.1, pixels with a value of 0.0 would be set to 0.1 instead, and other values
would increase to accommodate the change. The best way to visualize the effect is to observe the
change to the output histogram displayed above.
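The remaps described above, where input Low/High choose the black and white points and Output Level compresses the result into a new range, follow the usual levels formula. A sketch under that standard formula, not Fusion's exact implementation:

```python
# Illustrative levels remap: normalize against the input range, clip,
# then rescale into the output range.

def levels(value, in_low=0.0, in_high=1.0, out_low=0.0, out_high=1.0):
    t = (value - in_low) / (in_high - in_low)
    t = min(max(t, 0.0), 1.0)          # clip out-of-range values
    return out_low + t * (out_high - out_low)

# Raising output Low to 0.1 lifts pure black to 0.1, as described above:
lifted_black = levels(0.0, out_low=0.1)   # 0.1
```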
Channel
This menu is used to select and display the histogram for each color channel or for the
Master channel.
Histogram Display
A histogram is a chart that represents the distribution of color values in the scene. The chart reads
from left to right, with the leftmost values representing the darkest colors in the scene and the
rightmost values representing the brightest. The more pixels in an image with the same or similar
value, the higher that portion of the chart is.
Luminance is calculated per channel; therefore, the red, green, and blue channels all have their own
histogram, and the combined result of these comprises the Master Histogram.
To scale the histogram vertically, place the mouse pointer inside the control and drag the pointer up to
zoom in or down to zoom out.
— Input Histogram: This enables or disables the display of the input image’s histogram.
— Reference Histogram: This enables or disables the display of the reference image’s histogram.
— Output Histogram: This enables or disables the display of the histogram from the post-color-
corrected image.
— Corrective Curve: This toggles the display of a spline used to visualize exactly how auto color
corrections applied using a reference image are affecting the image. This can be useful when
equalizing luminance between the input and reference images.
Histogram Type
Each of these menu options enables a different type of color correction operation.
— Keep: Keep produces no change to the image, and the reference histogram is ignored.
— Equalize: Selecting Equalize adjusts the source image so that all the color values in the image are
equally represented—in essence, flattening the histogram so that the distribution of colors in the
image becomes more even.
— Match/Equalize Luminance: This slider affects the degree that the Color Corrector node
attempts to affect the image based on its luminance distribution. When this control is zero (the
default), matching and equalization are applied to each color channel independently, and the
luminance, or combined value of the three color channels, is not affected.
If this control has a positive value when equalizing the image, the input image’s luminance
distribution is flattened before any color equalization is applied.
If this control has a positive value when the correction mode is set to Match, the luminance
values of the input are matched to the reference before any correction is applied to the R, G, and
B channels.
The Luminance and RGB controls can have a cumulative effect, and generally they are not both set
to full (1.0) simultaneously.
— Lock R/G/B: When this checkbox is selected, color matching is applied to all color channels
equally. When the checkbox is not selected, individual controls for each channel appear.
Equalize/Match R/G/B
The name of this control changes depending on whether the Equalize or Match modes have been
selected. The slider can be used to reduce the correction applied to the image to equalize or match it.
A value of 1.0 causes the full effect of the Equalize or Match to be applied, whereas lower values
moderate the result.
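The combination of equalization and a strength slider can be sketched as follows. This is an illustrative approximation of histogram equalization with a blend factor, not Fusion's actual algorithm; the function name and parameters are hypothetical:

```python
def equalize(values, strength=1.0, bins=256):
    """Histogram-equalize a list of normalized values, then blend the
    result with the original by 'strength' (1.0 = full effect)."""
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1
    # Cumulative distribution, normalized to 0..1: the remapping curve.
    cdf, total = [], 0
    for c in counts:
        total += c
        cdf.append(total / len(values))
    out = []
    for v in values:
        eq = cdf[min(int(v * bins), bins - 1)]
        out.append(v + strength * (eq - v))  # blend original toward equalized
    return out
```

A strength of 0.0 leaves the image untouched; 1.0 applies the full flattening of the histogram.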
Precision
This menu determines the color fidelity used when sampling the image to produce the histogram.
10-bit produces higher fidelity than 8-bit, and 16-bit produces higher fidelity than 10-bit.
Smooth Correction
Often, color equalization and matching operations introduce posterization in an image, which
occurs because gradients in the image have been expanded or compressed so that the dynamic
range between colors is not sufficient to display a smooth transition. This control can be used to
smooth the correction curve, blending some of the original histogram back into the result for a more
even transition.
Release Match
Click this button to release the current snapshot of the histogram and return to using the live
reference input.
To suppress a color in the selected range, drag the control that represents that color toward the
center of the color wheel. The closer the control is to the center, the more that color is suppressed
from the image.
Suppression Angle
Use the Suppression Angle control to rotate the controls on the suppression wheel and zero in on a
specific color.
Range
This menu selects the tonal range displayed in the viewers, helping to visualize the pixels in each
range. When the Result menu option is selected, the image displayed by the color corrector in the
viewers is the color corrected image. This is the default.
Selecting one of the other menu options switches the display to a grayscale image showing which
pixels are part of the selected range. White pixels represent pixels that are considered to be part of
the range, and black pixels are not in the range. For example, choosing Shadows would show pixels
considered to be shadows as white and pixels that are not shadows as black. Mid gray pixels are only
partly in the range and do not receive the full effect of any color adjustments to that range.
Channel
The Channel menu in this tab can be used to examine the range of a specific color channel. By default,
Fusion displays the luminance channel when the color ranges are examined.
Spline Display
The ranges are selected by manipulating the spline handles. There are four spline points, each with
one Bézier handle. The two handles at the top represent the start of the shadow and highlight ranges,
whereas the two at the bottom represent the end of the range. The Bézier handles are used to control
the falloff.
The midtones range has no specific controls since its range is understood to be the space between
the shadow and the highlight ranges.
The X and Y text controls below the spline display can be used to enter precise positions for the
selected Bézier point or handle.
Pre-Divide/Post-Multiply
Selecting this option divides the color channels by the value of the Alpha before applying the color
correction. After the color correction, the color values are re-multiplied by the Alpha to produce a
properly additive image. This is crucial when performing an additive merge or when working with CG
images generated with premultiplied Alpha channels.
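The divide/correct/re-multiply order described above can be sketched like this (an illustrative helper under the assumption of a premultiplied input; `correct` stands in for any per-value color operation):

```python
def predivide_postmultiply(rgb, alpha, correct):
    """Divide color by alpha, apply a correction function, then
    re-multiply by alpha to keep the image properly additive."""
    if alpha == 0:
        return list(rgb)  # nothing to unpremultiply
    straight = [c / alpha for c in rgb]       # pre-divide
    corrected = [correct(c) for c in straight]
    return [c * alpha for c in corrected]     # post-multiply
```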
Process Order
This menu is used to select whether adjustments to the image’s gamma are applied before or after
any changes made to the image's levels.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
The LUT view in the Color Curves node can be scaled using the + and - keys on the numeric keypad.
The color curves LUT fully supports out-of-range values—i.e., pixels with color values above 1.0 or
below 0.0.
The splines shown in this LUT view are also available from the Spline Editor, should greater precision
be required when adjusting the controls.
— Input: This orange input is the only required connection. It connects a 2D image that is adjusted
by the color curves.
— Effect Mask: The optional effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the color curves adjustment to only those pixels within the mask. An effect mask is applied to the
tool after it is processed.
— Reference Image: The optional green input is used to connect a second 2D image that can be
used for reference matching.
— Match Mask: This optional white input accepts any mask, much like an effect mask. However, this
mask defines the area to match during a Match operation. It offers more flexibility in terms of shape
than the built-in Match reference rectangle in the Inspector.
Inspector
Mode
The Mode menu switches among No Animation, Animated, and Dissolve. The default mode is No Animation,
where adjustments to the curves are static. Setting the mode to Animated provides a change spline for
each channel, allowing the color curve to be animated over time.
Dissolve mode is essentially obsolete and is included for compatibility reasons only.
Color Space
The splines in the LUT view represent color channels from a variety of color spaces. The default is Red,
Green, and Blue. The options in this menu allow an alternate color space to be selected.
— RGB (Red, Green, Blue): Fusion uses the RGB color space, and most nodes and displays interpret
the primary channels of an image as Red, Green, and Blue.
— YUV (Luma, Blue Chroma, and Red Chroma): The YUV color space is used in the analog
broadcast of PAL video. Historically, this format was often used to color correct images, because
of its familiarity to a large percentage of video engineers. Each pixel is described in terms of its
Luminance, Blue Chroma, and Red Chroma components.
— HLS (Hue, Luminance, and Saturation): Each pixel in the HLS color space is described in terms
of its Hue, Luminance, and Saturation components.
— YIQ (Luma, In Phase, and Quadrature): The YIQ color space is used in the analog broadcast of
NTSC video. This format is much rarer than YUV and almost never seen in production. Each pixel
is described in terms of its Luminance, Chroma (in-phase or red-cyan channel) and Quadrature
(magenta-green) components.
— CMY (Cyan, Magenta, and Yellow): Although more common in print, the CMY format is often
found in computer graphics from other software packages. Each pixel is described in terms of its
Cyan, Magenta, and Yellow components. CMY is nonlinear.
These controls do not restrict the effect of the node to a specific channel. They only select whether
the spline for that channel is editable. These controls are most often used to ensure that adding or
moving points on one channel’s spline do not unintentionally affect a different channel’s spline.
Spline Window
The Spline Window displays a standard curve editor for each RGBA channel. These splines can be
edited individually or as a group, depending on the color channels selected above.
The spline defaults to a linear range, from 0 in/0 out at the bottom left to 1 in/1 out at the top
right. At the default setting, each input color is output unchanged. If a point is added in the
middle at 0.5 in/0.5 out and then moved up, the midtones of the image become brighter.
In and Out
Use the In and Out controls to manipulate the precise values of a selected point. To change a value,
select a point and enter the in/out values desired.
Eyedropper (Pick)
Click the Eyedropper icon, also called the Pick button, and select a color from an image in the display
to automatically set control points on the spline for the selected color. The new points are drawn with
a triangular shape and can only be moved vertically (if the point is locked, only the Out value can change).
Points are only added to enabled splines. To add points only on a specific channel, disable the other
channels before making the selection.
One use for this technique is white balancing an image. Use the Pick control to select a pixel from the
image that should be pure gray. Adjust the points that appear so that the Out value is 0.5 to change
the pixel colors to gray.
Use the contextual menu’s Locked Pick Points option to unlock points created using the Pick option,
converting them into normal points.
Reference
The Reference section includes controls that handle matching to sample areas of the connected
reference image.
— Match Reference: The Match Reference button adds points on the curve to match an image
connected to the green reference image input. The number of points used to match the image is
based on the Number of Samples slider below.
— Sample Reference: Clicking the Sample Reference button samples the center scanline of the
background image and creates a LUT of its color values. The number of points used to match the
samples scanline is based on the Number of Samples slider below.
— Number of Samples: This slider determines how many points are used to match the curve to the
range in the reference image.
— Show Match Rectangle: Enabling this checkbox displays a rectangle in the viewer showing the
area on the reference image used during the match process. The match rectangle affects only
the result of the Match Reference operation; the Sample Reference operation always samples the center
scanline of the image.
— Match Center: The X and Y parameters allow you to reposition the match rectangle to sample a
different area when matching.
— Match Width: Width controls the width of the match rectangle.
— Match Height: Height controls the height of the match rectangle.
— Pre-Divide/Post-Multiply: Selecting this checkbox causes the image’s pixel values to be divided
by the Alpha values prior to the color correction, and then re-multiplied by the Alpha value after
the correction. This helps to avoid the creation of illegally additive images, particularly around the
edges of a blue/green key or when working with 3D-rendered objects.
Inputs
The Color Gain node includes two inputs: one for the main image and the other for an effect mask.
— Input: This orange input is the only required connection. It connects a 2D image that gets
adjusted by the color gain.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the color gain adjustment to only those pixels within the mask. An effect mask is applied to the
tool after it is processed.
Gain Tab
The Gain tab provides control of individual RGBA Lift/Gamma/Gain parameters. These controls can
quickly enable you to fix irregular color imbalances in specific channels.
Lock R/G/B
When selected, the Red, Green, and Blue channel controls for each effect are combined into one
slider. Alpha channel effects remain separate.
Gain RGBA
The Gain RGBA controls multiply the values of the image channel in a linear fashion. All pixels are
multiplied by the same factor, but the effect is larger on bright pixels and smaller on dark pixels. Black
pixels do not change because multiplying any number by 0 always gives 0.
Lift RGBA
While Gain scales the color values around black, Lift scales the color values around white: each
pixel's distance from white is scaled by this control. A Lift of 0.5 makes a pixel that is R0.0 G0.0 B0.0 into
R0.5 G0.5 B0.5, while leaving white pixels totally unaffected. Lift affects lower values more than it
affects higher values, so the effect is strongest in the midrange and low range of the image.
Gamma RGBA
The Gamma RGBA controls affect the brightness of the midrange in the image. The effect of this node
is nonlinear. White and black pixels in the image are not affected when gamma is modified, whereas
pure grays are affected most by changes to this parameter. Large changes to this control tend to push
midrange pixels into black or white, depending on the value used.
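The three operations described above can be sketched for a single normalized value. This is only an illustration of the behavior (gain anchored at black, lift anchored at white, gamma bending the midtones); Fusion's exact math and order of operations may differ:

```python
def lift_gamma_gain(value, lift=0.0, gamma=1.0, gain=1.0):
    """Apply gain, lift, and gamma to one normalized pixel value."""
    v = value * gain            # black (0) is unchanged by gain
    v = v + lift * (1.0 - v)    # white (1) is unchanged by lift
    if v > 0:
        v = v ** (1.0 / gamma)  # 0 and 1 are unchanged by gamma
    return v
```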
Saturation Tab
The Saturation tab includes controls for the intensity of the colors in the individual RGB channels.
RGB Saturation
When adjusting an individual channel, a value of 0.0 strips out all that channel’s color. Values greater
than one intensify the color in the scene, pushing it toward the primary color.
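Per-channel saturation can be sketched as scaling each channel's distance from the pixel's luminance. The Rec. 601 luma weights below are an assumption for illustration, not necessarily what Fusion uses:

```python
def rgb_saturation(r, g, b, sat_r=1.0, sat_g=1.0, sat_b=1.0):
    """Scale each channel's distance from the pixel's luminance:
    0.0 strips that channel's color, values above 1.0 intensify it."""
    luma = 0.299 * r + 0.587 * g + 0.114 * b  # assumed Rec. 601 weights
    return (luma + sat_r * (r - luma),
            luma + sat_g * (g - luma),
            luma + sat_b * (b - luma))
```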
Balance Tab
This tab in the Color Gain node offers controls for adjusting the overall balance of a color channel.
Independent color and brightness controls are offered for the High, Mid, and Dark ranges of
the image.
Colors are grouped into opposing pairs from the two dominant color spaces. Red values can be
pushed toward Cyan, Green values toward Magenta, and Blue values toward Yellow. Brightness can be
raised or lowered for each of the channels.
Hue Tab
Use the Hue tab of the Color Gain node to shift the overall hue of the image without affecting the
brightness or saturation. Independent controls of the High, Mid, and Dark ranges are offered by
three sliders.
The following is the order of the hues in the RGB color space: Red, Yellow, Green, Cyan, Blue,
Magenta and Red.
High/Mid/Dark Hue
Values above 0 push the hue of the image toward the right (red turns yellow). Values below 0 push the
hue toward the left (red turns magenta). At -1.0 or 1.0, the hue completes the cycle and returns to its
original value.
The default range of the hue sliders is -1.0 to +1.0. Values outside this range can be entered manually.
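The cyclic behavior of these sliders can be sketched with the standard library's `colorsys` module: a slider value of ±1.0 is one full rotation of the hue wheel, so the hue wraps back to its original value. This is an illustration of the described behavior, not Fusion's implementation:

```python
import colorsys

def shift_hue(r, g, b, amount):
    """Rotate a pixel's hue by 'amount' (the -1.0..+1.0 slider range,
    where +/-1.0 is a full cycle) without touching lightness or saturation."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb((h + amount) % 1.0, l, s)
```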
Ranges Tab
The Ranges tab contains the controls used to specify which pixels in an image are considered to be
shadows and which are considered to be highlights. The midrange is always calculated as pixels not
included in either the shadows or the highlights.
The midtones range has no specific controls since its range is understood to be the space between
the shadow and the highlight ranges. The X and Y text controls below the Spline display can be used to
enter precise positions for the selected Bézier point or handle.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Color Matrix node includes two inputs: one for the main image and the other for an effect mask.
— Input: This orange input is the only required connection. It connects a 2D image that is adjusted
by the color matrix.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the color matrix adjustment to only those pixels within the mask. An effect mask is applied to the
tool after it is processed.
Inspector
Controls Tab
Color Matrix multiplies the RGBA channels based on the values entered in a 4 x 4 grid. The fifth
column/row is an Add column.
Update Lock
When this control is selected, Fusion does not render the node. This is useful for setting up each value
of the node, and then turning off Update Lock to render it.
Matrix
This defines what type of operation actually takes place. The horizontal rows define the output values
of the node. From left to right, they are R, G, B, A, and Add. The vertical columns define the input
values. From top to bottom, they are R, G, B, A, and Add. The Add column allows simple adding of
values to the individual color channels.
— 1.0 means 100% of the Red channel input is copied to the Red channel output.
— 1.0 means 100% of the Green channel input is copied to the Green channel output.
— 1.0 means 100% of the Blue channel input is copied to the Blue channel output.
— 1.0 means 100% of the Alpha channel input is copied to the Alpha channel output.
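The multiplication described above can be sketched as a 4 x 5 matrix applied to an RGBA pixel, with a constant 1.0 driving the Add column (an illustrative helper, not Fusion code):

```python
def apply_color_matrix(pixel, matrix):
    """Each output channel is a weighted sum of R, G, B, A plus the
    Add column (driven by the trailing constant 1.0)."""
    r, g, b, a = pixel
    inputs = (r, g, b, a, 1.0)
    return tuple(sum(w * v for w, v in zip(row, inputs)) for row in matrix)

# The default (identity) matrix copies each channel straight through.
IDENTITY = [
    [1, 0, 0, 0, 0],  # R out
    [0, 1, 0, 0, 0],  # G out
    [0, 0, 1, 0, 0],  # B out
    [0, 0, 0, 1, 0],  # A out
]
```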
Invert
Enabling this option inverts the Matrix. Think of swapping channels around, doing other operations
with different nodes, and then copying and pasting the original ColorMatrix and setting it to Invert to
get your channels back to the original.
Example 1: Invert
If you want to do a simple invert or negative of the color values, but leave the Alpha channel
untouched, the matrix would look like this:
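Reconstructed from the description of the steps below, such a matrix scales R, G, and B by -1 and uses the Add column to add 1, leaving Alpha untouched. A minimal sketch (illustrative, not Fusion code):

```python
# Negative of R, G, and B with Alpha untouched: scale the color channels
# by -1, then use the Add column to shift the result back to positive.
NEGATIVE = [
    [-1, 0, 0, 0, 1],  # R out = 1 - R in
    [0, -1, 0, 0, 1],  # G out = 1 - G in
    [0, 0, -1, 0, 1],  # B out = 1 - B in
    [0, 0, 0, 1, 0],   # A out = A in
]

def invert_pixel(pixel):
    """Apply the negative matrix above to one RGBA pixel."""
    r, g, b, a = pixel
    inputs = (r, g, b, a, 1.0)
    return tuple(sum(w * v for w, v in zip(row, inputs)) for row in NEGATIVE)
```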
Note that we must add 1 to each channel to push the inverted values back into the positive range.
Let’s follow this example step by step by viewing the waveform of a 32-bit grayscale gradient.
1 The original grayscale gradient.
2 Setting R, G, and B to -1 inverts the values into the negative range.
3 Adding 1 to each channel keeps the inversion but moves the values back into a positive range.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Changing the color space from RGB causes most images to look odd, as Fusion’s viewers still interpret
the primary channels as Red, Green, and Blue. For example, viewing an image converted to YUV in one
of the viewers shows the Y channel as Red, the U channel as Green, and the V channel as Blue.
Several common elements of the Fusion interface refer to the RGB channels directly. The four buttons
commonly found on the Inspector’s Settings tab to restrict the effect of the node to a single color
channel are one example. When a conversion is applied to an image, the labels of these buttons
remain R, G, and B, but the values they represent are from the current color space. (For example,
Red is Hue, Green is Luminance, and Blue is Saturation for an RGB to HLS conversion. The Alpha value
is never changed by the color space conversion.)
— Input: This orange input is the only required connection. It connects a 2D image that is converted
by the color space operation.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the color space adjustment to only those pixels within the mask. An effect mask is applied to the
tool after it is processed.
Inspector
Controls Tab
The Controls tab in the Color Space node consists of two menus. The top Conversion menu
determines whether you are converting an image to RGB or from RGB. The bottom menu selects the
alternative color space you are either converting to or from.
Conversion
This menu has three options. The None option has no effect on the image. When To Color is selected,
the input image is converted to the color space selected in the Color Type control found below. When
To RGB is selected, the input image is converted back to the RGB color space from the type selected in
the Color Type menu (for example, YUV to RGB).
— HSV (Hue, Saturation, and Value): Each pixel in the HSV color space is described in terms of its
Hue, Saturation, and Value components. Value is defined as the quality by which we distinguish a
light color from a dark one, or its brightness. Decreasing Saturation roughly corresponds to adding
white to a paint chip on a palette. Decreasing Value roughly corresponds to adding black.
— YUV (Luma, Blue Chroma, and Red Chroma): The YUV color space is used in the analog
broadcast of PAL video. Historically, this format was often used to color correct images because
of its familiarity to a large percentage of video engineers. Each pixel is described in terms of its
Luminance, Blue Chroma, and Red Chroma components.
— YIQ (Luma, In Phase, and Quadrature): The YIQ color space is used in the analog broadcast of
NTSC video. This format is much rarer than YUV and almost never seen in production. Each pixel
is described in terms of its Luminance, Chroma (in-phase or red-cyan channel), and Quadrature
(magenta-green) components.
— CMY (Cyan, Magenta, and Yellow): Although more common in print, the CMY format is often
found in computer graphics from other software packages. Each pixel is described in terms of its
Cyan, Magenta, and Yellow components. CMY is nonlinear.
— HLS (Hue, Luminance, and Saturation): Each pixel in the HLS color space is described in terms
of its Hue, Luminance, and Saturation components. The differences between HLS and HSV color
spaces are minor.
— XYZ (CIE Format): This mode is used to convert a CIE XYZ image to and from RGB color spaces.
Unlike the other available color spaces, CIE XYZ is a weighted linear space rather than a nonlinear one.
Nonlinear in this context means that equal changes in value at different positions in the color
space may not necessarily produce the same magnitude of change visually to the eye.
Expressed simply, the CIE color space is a perceptual color system, with weighted values obtained
from experiments where subjects were asked to match an existing light source using three
primary light sources.
This color space is most often used to perform gamut conversion and color space matching
between image display formats because it contains the entire gamut of perceivable colors.
— Negative: The color channels are inverted. The color space remains RGBA.
— BW: The image is converted to black and white. The contribution of each channel to the luminance
of the image is adjustable via slider controls that appear when this option is selected. The default
values of these sliders represent the usual perceptual contribution of each channel to an image’s
luminance. The color space of the image remains RGBA.
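The BW conversion can be sketched as a weighted sum written back to all three color channels. The Rec. 601 weights below are an assumed stand-in for the sliders' defaults, which represent each channel's usual perceptual contribution:

```python
def to_bw(r, g, b, wr=0.299, wg=0.587, wb=0.114):
    """Convert a pixel to black and white using per-channel contribution
    weights; the image stays RGBA, so the gray value is written to all
    three color channels."""
    luma = wr * r + wg * g + wb * b
    return (luma, luma, luma)
```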
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
The Copy Aux node is mostly a convenience node, as the copying can also be accomplished with more
effort using a Channel Booleans node. Where Channel Booleans deals with individual channels, Copy
Aux deals with channel groups. By default, the Copy Aux node automatically promotes the depth of its
output to match the depth of the aux channel.
Copy Aux also supports static normalization ranges. The advantage of static normalization versus
the dynamic normalization that Fusion’s viewers do is that colors remain constant over time. For
example, if you are viewing Z or WorldPos values for a ball, you see a smooth gradient from white
to black. Now imagine that some other 3D object is introduced into the background at a certain
time. Dynamic normalization turns the ball almost completely white while the background object
is now the new black. Dynamic normalization also causes flicker problems while viewing vector/
disparity channels, which can make it difficult to compare the aux channels of two frames at different
times visually.
Inputs
The Copy Aux node includes two inputs: one for the main image and the other for an effect mask.
— Input: This orange input is the only required connection. It connects a 2D image for the Copy Aux
node operation.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the Copy Aux operation to only those pixels within the mask. An effect mask is applied to the tool
after the tool is processed.
Inspector
Controls Tab
The Controls tab is used to copy auxiliary channel groups into RGBA channels. Although Copy Aux has
quite a few options, most of the time you select only the channel to copy and ignore the remaining
functionality.
Mode
The Mode menu determines whether the auxiliary channel is copied into the RGBA color channel (Aux
to Color) or vice versa (Color to Aux). Using this option, you can use one Copy Aux node to bring an
auxiliary channel into color, do some compositing operations on it, and then use another Copy Aux
node to write the color back into the auxiliary channel. When the Mode is set to Color to Aux, all the
options in the Controls tab except the Aux Channel menu are hidden.
Aux Channel
The Aux Channel menu selects the auxiliary channel to be copied from or written to depending on
the current mode. When the aux channel abcd has one valid component, it is copied as aaa1, two valid
components as ab01, three valid components as abc1, and four components as abcd. For example, the
Z-channel is copied as zzz1, texture coordinates as uv01, and normals as nxnynz1.
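The component-expansion pattern described above can be sketched directly (a hypothetical helper, not Fusion code):

```python
def aux_to_color(components):
    """Expand an aux channel with 1-4 valid components into RGBA,
    following the pattern above: z -> (z, z, z, 1), (u, v) -> (u, v, 0, 1),
    (nx, ny, nz) -> (nx, ny, nz, 1), and 4 components copied as-is."""
    c = list(components)
    if len(c) == 1:
        return (c[0], c[0], c[0], 1.0)   # e.g. Z copied as zzz1
    if len(c) == 2:
        return (c[0], c[1], 0.0, 1.0)    # e.g. texture coords as uv01
    if len(c) == 3:
        return (c[0], c[1], c[2], 1.0)   # e.g. normals as nxnynz1
    return tuple(c[:4])
```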
Be careful when copying float channels into integer image formats, as they can get clipped if you do
not set up Copy Aux correctly. For this node, all aux channels are considered to be float32 except
ObjectID or MaterialID, which are considered to be int16.
Channel Missing
Channel Missing determines what happens if a channel is not present. For example, this determines
what happens if you chose to copy Disparity to Color and your input image does not have a Disparity
aux channel.
— Fail: The node fails and prints an error message to the console.
— Use Default Value: This fills the RGBA channels with the default value of zero for everything
except Z, which is -1e30.
Note that the Remapping options are per channel options. That means the default scale for normals
can be set to [-1, +1] > [0, 1] and for Z it can be set [-1000, 0] > [0, 1]. When you flip between normals
and Z, both options are remembered. One way this could be useful is that you can set up the
remapping ranges and save this as a setting that you can reuse. The remapping can be useful to
squash the aux channels into a static [0, 1] range for viewing or, for example, if you wish to compress
normals into the [0, 1] range to store them in an int8 image.
— From > Min: This is the value of the aux channel that corresponds to To > Min.
— From > Max: This is the value of the aux channel that corresponds to To > Max. It is possible to set
the max value less than the min value to achieve a flip/inversion of the values.
— Detect Range: This scans the current image to detect the min/max values and then sets the
From > Min/ From > Max Value controls to these values.
— Update Range: This scans the current image to detect the min/max values and then enlarges the
current [From > Min, From > Max] region so that it contains the min/max values from the scan.
— To > Min: This is the minimum output value, which defaults to 0.
— To > Max: This is the maximum output value, which defaults to 1.
— Invert: After the values have been rescaled into the [To > Min, To > Max] range, this
inverts/flips the range.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Gamut [Gmt]
— Input: This orange input is the only required connection. It connects a 2D image output that is
the source of the gamut conversion.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the Gamut operation to only those pixels within the mask. An effect mask is applied to the tool
after the tool is processed.
Inspector
Gamut controls
Source Space
Source Space determines the input color space of the image. When placed directly after a Loader
node in Fusion or a MediaIn node in DaVinci Resolve, you would select the applicable color space
based on how the image was created and check the Remove Gamma checkbox. The output of the
node would be a linearized image. You leave this setting at No Change when you are adding gamma
using the Output Space control and placing the node directly before the Saver node in Fusion or a
MediaOut node in DaVinci Resolve.
DCI-P3
The DCI-P3 color space is most commonly used in association with DLP projectors. It is frequently
provided as a color space available with DLP projectors and as an emulation mode for 10-bit LCD
monitors such as the HP Dreamcolor and Apple’s Pro Display XDR. This color space is defined in the
SMPTE-431-2 standard.
Custom
The Custom gamut allows you to describe the color space according to CIE 1931 primaries and white
point, which are expressed as XY coordinates, as well as by gamma, limit, and slope. For example,
describing the DCI-P3 gamut mentioned above as a Custom color space uses its SMPTE-431-2 primaries
and white point with a Gamma of 2.6.
To understand how these controls work, you could view the node attached to a gradient background
in Waveform mode and observe how different adjustments modify the output.
Output Space
Output Space converts the gamut to the desired color space. For instance, when working with
linearized images in a composite, you place the Gamut node just before the Saver node and use the
Output Space to convert to the gamut of your final output file. You leave this setting at No Change
when you want to remove gamma using the Source Space control.
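The remove/apply gamma round trip can be sketched with a simple power law. Real transfer functions (such as Rec. 709's) are often piecewise, so this is only a rough illustration of what the Remove Gamma and Output Space gamma steps do:

```python
def remove_gamma(value, gamma=2.4):
    """Linearize an encoded value with a simple power law (a rough
    approximation; real transfer functions are often piecewise)."""
    return value ** gamma

def apply_gamma(value, gamma=2.4):
    """Re-encode a linear value for output, the inverse of the above."""
    return value ** (1.0 / gamma)
```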
NOTE: When outputting to HD specification Rec. 709, Fusion uses the term Scene to refer
to a gamma of 2.4 and the term Display for a gamma of 2.2.
Pre-Divide/Post-Multiply
Selecting this checkbox causes the image’s pixel values to be divided by the Alpha values prior to the
color correction, and then re-multiplied by the Alpha value after the correction. This helps to avoid
the creation of illegally additive images, particularly around the edges of a blue/green key or when
working with 3D-rendered objects.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
The advantage of the Hue Curves node over other color correction nodes in Fusion is that the splines
can be manipulated to restrict the node’s effect to a very narrow portion of the image, or expanded
to include a wide-ranging portion of the image. Additionally, these curves can be animated to follow
changes in the image over time. Since the primary axis of the spline is defined by the image’s hue, it is
much easier to isolate a specific color from the image for adjustment.
Inputs
The Hue Curves node includes two inputs: one for the main image and the other for an effect mask to
limit the color correction area.
— Input: This orange input is the only required connection. It connects a 2D image for the Hue
Curves color correction.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the Hue Curves operation to only those pixels within the mask. An effect mask is applied to the
tool after it is processed.
Inspector
Controls Tab
The Controls tab consists of color attribute checkboxes that determine which splines are displayed
in the Spline window. The spline graph runs horizontally, with control points placed at each of the
primary colors. You can manipulate these control points to change the selected color attribute.
Mode
The Mode options change between No Animation and Animated Points modes. The default mode is
No Animation, where adjustments to the curves are applied consistently over time. Setting the Mode
to Animated Points or Dissolve allows the color curve to be animated over time.
Dissolve mode is essentially obsolete and is included for compatibility reasons only.
When using the Eyedropper icon, a point is created on all active splines, representing the
selected color.
Spline Window
This graph display is the main interface element of the Hue Curves node, which hosts the various
splines. In appearance, the node is very similar to the Color Curves node, but here the horizontal axis
represents the image’s hue, while the vertical axis represents the degree of adjustment. The Spline
window shows the curves for the individual channels. It is a miniature Spline Editor. In fact, the curves
shown in this window can also be found and edited in the Spline Editor.
The spline curves for all components are initially flat, with control points placed horizontally at each of
the primary colors. From left to right, these are: Red, Yellow, Green, Cyan, Blue, and Magenta. Because
of the cyclical design of the hue gradient, the leftmost control point in each curve is connected to the
rightmost control point of the curve.
Right-clicking in the graph displays a contextual menu containing options for resetting the curves,
importing external curves, adjusting the smoothness of the selected control points, and more.
In and Out
Use the In and Out controls to manipulate the precise values of a selected point. To change a value,
select a point and enter the In/Out values desired.
Eyedropper
Left-clicking and dragging from the Eyedropper icon changes the current mouse cursor to an
Eyedropper. While still holding down the mouse button, drag the cursor to a viewer to pick a pixel
from a displayed image. This causes control points, which are locked on the horizontal axis, to appear
on the currently active curves. The control points represent the position of the selected color on the
curve. Use the contextual menu’s Lock Selected Points toggle to unlock points and restore the option
of horizontal movement.
Points are only added to enabled splines. To add points only on a specific channel, disable the other
channels before making the selection.
Pre-Divide/Post-Multiply
Selecting this checkbox causes the image’s pixel values to be divided by the Alpha values prior to the
color correction, and then re-multiplied by the Alpha value after the correction. This helps when color
correcting images that include a premultiplied Alpha channel.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
— The OCIO CDL Transform node allows you to create, save, load, and apply a Color Decision
List (CDL) grade.
— The OCIO Color Space allows sophisticated color space conversions, based on an OCIO config file.
— The OCIO File Transform allows you to load and apply a variety of Lookup tables (LUTs).
Generally, the OCIO color pipeline is composed from a set of color transformations defined by OCIO-
specific config files, commonly named with a “.ocio” extension. These config files allow you to share
color settings within or between facilities. The path to the config file to be used is normally specified
by a user-created environment variable called “OCIO,” although some tools allow overriding this. If no
other *.ocio config files are located, the DefaultConfig.ocio file in Fusion’s LUTs directory is used.
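Since OCIO is an ordinary environment variable, it can be set in a shell profile or from a launcher script before Fusion starts. A minimal sketch of the idea (the config path below is a placeholder, not a real file):

```python
import os

# Fusion reads the config path from the OCIO environment variable, which is
# normally set at the operating-system level before the application launches.
# The path here is a placeholder; substitute your facility's actual config.
os.environ["OCIO"] = os.path.expanduser("~/ocio/config.ocio")
print(os.environ["OCIO"])
```

In practice you would export the variable from your shell startup file (or a render-farm wrapper) so every OCIO-aware tool at the facility sees the same config.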
For in-depth documentation of the format’s internals, please refer to the official pages on opencolorio.org.
Inputs
The OCIO CDL Transform node includes two inputs: one for the main image and the other for an
effect mask to limit the area where the CDL is applied.
— Input: This orange input is the only required connection. It connects a 2D image output for
the CDL grade.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the CDL grade to only those pixels within the mask. An effect mask is applied to the tool
after it is processed.
Controls Tab
The Controls tab for the OCIO CDL Transform contains primary color grading color correction controls
in a format compatible with CDLs. You can make R, G, B adjustments based on the Slope, Offset, and
Power. There is also overall Saturation control. You can also use the Controls tab to import and export
the CDL compatible adjustments.
Operation
This menu switches between File and Controls. In File mode, standard ASC-CDL files can be loaded. In
Controls mode, manual adjustments can be made to Slope, Offset, Power, and Saturation, and the CDL
file can be saved.
Direction
Toggles between Forward and Reverse. Forward applies the corrections specified in the node,
while Reverse tries to remove those corrections. Keep in mind that not every color correction can
be undone.
Imagine that all slope values have been set to 0.0, resulting in a fully black image. Reversing that
operation is not possible, neither mathematically nor visually.
Slope
Multiplies the color values. This is the same as Gain in the Brightness Contrast node.
Offset
Adds an offset to the color values. This is the same as Brightness in the Brightness Contrast node.
Power
Applies a Gamma Curve. This is an inverse of the Gamma function of the Brightness Contrast node.
Saturation
Enhances or decreases the color saturation. This works the same as Saturation in the Brightness
Contrast node.
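These four controls follow the standard ASC CDL transfer function: each channel is computed as (in × slope + offset) raised to the power value, followed by a saturation adjustment around Rec. 709 luma. A minimal sketch, assuming 0-1 float values (the function is illustrative, not Fusion's implementation):

```python
def asc_cdl(rgb, slope=(1, 1, 1), offset=(0, 0, 0), power=(1, 1, 1), saturation=1.0):
    """Per-channel ASC CDL: out = (in*slope + offset)**power,
    then saturation applied around Rec. 709 luma."""
    graded = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = max(v * s + o, 0.0)   # clamp negatives before the power function
        graded.append(v ** p)
    luma = 0.2126 * graded[0] + 0.7152 * graded[1] + 0.0722 * graded[2]
    return tuple(luma + saturation * (c - luma) for c in graded)

# Identity settings leave the pixel unchanged.
print(asc_cdl((0.5, 0.25, 0.75)))
```

This also makes the Direction control's caveat concrete: a slope of 0.0 maps every input to the same output, so no reverse transform can recover the original values.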
Export File
Allows the user to export the settings as a CDL file.
— The OCIO CDL Transform node allows you to create, save, load, and apply a Color Decision
List (CDL) grade.
— The OCIO Color Space allows sophisticated color space conversions, based on an OCIO config file.
— The OCIO File Transform allows you to load and apply a variety of Lookup tables (LUTs).
Generally, the OCIO color pipeline is composed from a set of color transformations defined by OCIO-
specific config files, commonly named with a “.ocio” extension. These config files allow you to share
color settings within or between facilities. The path to the config file to be used is normally specified
by a user-created environment variable called “OCIO,” though some tools allow overriding this. If no
other *.ocio config files are located, the DefaultConfig.ocio file in Fusion’s LUTs directory is used.
For in-depth documentation of the format’s internals, please refer to the official pages on
opencolorio.org.
The functionality of the OCIO Color Space node is also available as a View LUT node from the
View LUT menu.
Inputs
The OCIO Color Space node includes two inputs: one for the main image and the other for an effect
mask to limit the area where the color space conversion is applied.
— Input: This orange input is the only required connection. It connects a 2D image for the color
space conversion.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the color space conversion to only those pixels within the mask. An effect mask is applied to the
tool after it is processed.
An OCIO Color Space node applied to a Loader node and a Saver node in Fusion Studio
Inspector
Controls Tab
The Controls tab for the OCIO Color Space node allows you to convert an image from one color
space to another based on an OCIO config file. By default, it uses the config file included with Fusion;
however, the Controls tab does allow you to load your own config file as well.
OCIO Config
Displays a File > Open dialog to load the desired config file.
Source Space
Based on the config file, the available source color spaces are listed here.
The content of this list is based solely on the loaded profile and hence can vary immensely. If no other
OCIO config file is loaded, the DefaultConfig.ocio file in Fusion’s LUTs directory is used to populate
this menu.
Output Space
Based on the config file, the available output color spaces are listed here.
The content of this list is based solely on the loaded profile and hence can vary immensely. If no other
OCIO config file is loaded, the DefaultConfig.ocio file in Fusion’s LUTs directory is used to populate
this menu.
Look
Installed OCIO Color Transform Looks appear in this menu. If no looks are installed, this menu has
only None listed as an option.
— The OCIO CDL Transform node allows you to create, save, load,
and apply a Color Decision List (CDL) grade.
— The OCIO Color Space allows sophisticated color space conversions, based on an OCIO config file.
— The OCIO File Transform allows you to load and apply a variety of Lookup tables (LUTs).
Generally, the OCIO color pipeline is composed from a set of color transformations defined by OCIO-
specific config files, commonly named with a “.ocio” extension. These config files allow you to share
color settings within or between facilities. The path to the config file to be used is normally specified
by a user-created environment variable called “OCIO,” though some tools allow overriding this. If no
other *.ocio config files are located, the DefaultConfig.ocio file in Fusion’s LUTs directory is used.
For in-depth documentation of the format’s internals, please refer to the official pages on
opencolorio.org.
The functionality of the OCIO File Transform node is also available as a View LUT node from the
View LUT menu.
Inputs
The OCIO File Transform node includes two inputs: one for the main image and the other for an effect
mask to limit the area where the color space conversion is applied.
— Input: This orange input is the only required connection. It connects a 2D image for the LUT.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the applied LUT to only those pixels within the mask. An effect mask is applied to the tool after
it is processed.
Inspector
Controls Tab
The Controls tab for the OCIO File Transform node includes options to import the LUT, invert the
transform, and select the color interpolation method.
LUT File
Displays a File > Open dialog to load the desired LUT.
CCC ID
This is the ID key used to identify the specific file transform located within the ASC CDL color
correction XML file.
Direction
Toggles between Forward and Reverse. Forward applies the corrections specified in the node, while
Reverse tries to remove those corrections. Keep in mind that not every color correction can be
undone. Imagine that all slope values have been set to 0.0, resulting in a fully black image. Reversing
that operation is not possible, neither mathematically nor visually.
Interpolation
Allows the user to select the color interpolation to achieve the best quality/render time ratio. Nearest
is the fastest interpolation, while Best is the slowest.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
The Set Canvas Color node sets the color of the workspace outside the domain of definition (DoD).
For example, if you create a circular gradient, the DoD is a square around the circular gradient in
the viewer. Everything outside the DoD is understood to be black and therefore does not have
to be rendered. To change the area outside the DoD, attach the Set Canvas Color node after the
background and change the color.
NOTE: Position the mouse pointer in a black area outside the raster to view the RGB canvas
color in the status bar at the bottom left of the Fusion window.
Inputs
The Set Canvas Color node includes two inputs: one for the main image and a second for a
foreground.
— Input: This orange input is the only required connection. It accepts a 2D image that reveals the
canvas color if the image’s DoD is smaller than the raster.
— Foreground: The optional green foreground input allows the canvas color to be sampled from an
image connected to this input.
The Set Canvas Color node is often used for adjusting keys. In the example above, the Luma Keyer
is extracting a key, and therefore assigns the area outside the DoD, which is black, as an opaque
foreground. If the element is scaled down and composited, you do not see the background. To correct
this, insert a Set Canvas Color node before the keyed element is placed in the composite. For example,
LumaKey > Set Canvas Color > Transform > Merge.
Inspector
Controls Tab
The Controls tab for the Set Canvas Color is used for simple color selection. When the green
foreground is connected, the tab is empty.
Color Picker
Use these controls to adjust the Color and the Alpha value for the image’s canvas. It defaults to black
with zero Alpha.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Correction can be done by selecting a color temperature or by choosing a neutral color from the
original image that exhibits the color cast to be corrected.
IMPORTANT When picking neutral colors using the Custom method, make sure you are
picking from the source image, not the results of the White Balance node. This ensures that
the image doesn’t change while you are still picking, and that the White Balance node gets
an accurate idea of the original colors it needs to correct.
Inputs
The White Balance node includes two inputs: one for the main image and the other for an effect mask
to limit the area where the white balance is applied.
— Input: This orange input is the only required connection. It connects a 2D image for
the white balance.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the white balance to only those pixels within the mask. An effect mask is applied to the tool
after it is processed.
Balance Tab
Space
Use this menu to select the color space of the source image, if it is known. This can make the
correction more accurate since the node can take the natural gamma of the color space into account
as part of the correction. If the color space that the image uses is unknown, leave this menu at its
default value.
Method
The White Balance node can operate using one of two methods: a Custom method or a color
Temperature method.
— Custom: The Custom method requires the selection of a pixel from the scene that should have
been pure gray. The node uses this information to calculate the color correction required to
convert the pixel so that it actually is gray. When the correction is applied without an effect
mask connected and the Lock Black/Mid/White checkbox enabled, the node white balances the
entire shot.
— Temperature: The color Temperature method requires that the actual color temperature of the
shot be specified.
Black/Mid/White Reference
These controls appear only if the Custom method is selected. They are used to select a color from a
pixel in the source image. The White Balance node color corrects the image so that the selected color
is transformed to the color set in the Result Color Picker below. Generally, this is gray. A color that is
supposed to be pure gray but is not truly gray for one reason or another should be selected.
If the Lock Black/Mid/White checkbox is deselected, different references can be selected for each
color range.
For example, try to select pixels for the black and white references that are not clipped in any of the
color channels. At the high end, an example would be a pixel that is light pink with values of 255, 240,
240. The pixel is saturated/clipped in the red channel, although the color is not white. Similarly, a really
dark blue-gray pixel might be 0, 2, 10. It is clipped in the red channel as well, although it is not black.
Neither example would be a good choice as a reference pixel because there would not be enough
headroom left for the White Balance node.
Black/Mid/White Result
These controls appear only if the Custom method is selected. They are used to select the color that
the node uses to balance the reference color. This generally defaults to pure, midrange gray.
If the Lock Black/Mid/White checkbox is deselected, different results can be selected for each
color range.
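Conceptually, the Custom method computes per-channel gains that map the picked reference color onto the result color. A deliberately simplified single-range sketch, ignoring the Black/Mid/White split and any gamma handling the node performs internally:

```python
def white_balance_gain(reference, result=(0.5, 0.5, 0.5), eps=1e-12):
    """Per-channel gains mapping the picked reference color onto the
    desired result color (typically mid-gray). Simplified: Fusion's
    node works per tonal range; this sketch uses a single range."""
    return tuple(t / (r + eps) for t, r in zip(result, reference))

def apply_gain(rgb, gain):
    return tuple(c * g for c, g in zip(rgb, gain))

# A pixel that should have been gray but reads warm:
gain = white_balance_gain((0.6, 0.5, 0.4))
balanced = apply_gain((0.6, 0.5, 0.4), gain)
```

This also shows why clipped reference pixels are poor choices: a channel pinned at its maximum or minimum no longer carries the true ratio between channels, so the computed gain is wrong for the rest of the image.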
Temperature Reference
When the Method menu is set to Temperature, the Temperature reference control is used to set the
color temperature of the source image. If the Lock Black/Mid/White checkbox is deselected, different
references can be selected for each color range.
Temperature Result
Use this control to set the target color temperature for the image. If the Lock Black/Mid/White
checkbox is deselected, different results can be selected for each color range.
Ranges Tab
The Ranges tab can be used to customize the range of pixels in the image considered to be shadows,
midtones, and highlights by the node.
Spline Display
The ranges are selected by manipulating the spline handles. There are four spline points, each with
one Bézier handle. The two handles at the top represent the start of the shadow and highlight ranges,
whereas the two at the bottom represent the end of the range. The Bézier handles are used to control
the falloff.
The midtones range has no specific controls since its range is understood to be the space between
the shadow and the highlight ranges.
The X and Y text controls below the Spline display can be used to enter precise positions for the
selected Bézier point or handle.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Color nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Color category. The Settings
controls are even found on third-party color type plugin tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options that are also
covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
Red/Green/Blue/Alpha Channel Selector
These four buttons are used to limit the effect of the tool to specified color channels. This filter is
often applied after the tool has processed the image. For example, if the red button on a blur tool is
deselected, the blur is first applied to the image, and then the red channel from the original input is
copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to skip
processing that channel entirely. Tools that do this generally possess a set of identical RGBA buttons
on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs are
identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels not included in the mask (i.e., set to 0) to become black/
transparent.
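The operation itself is a straightforward per-pixel multiply. A minimal NumPy sketch (the function name is illustrative, not a Fusion API):

```python
import numpy as np

def multiply_by_mask(rgb, mask):
    """Multiply RGB by a single-channel mask; pixels where the mask
    is 0 become black."""
    return rgb * mask[..., None]   # broadcast the mask over the color channels

rgb = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.5, 1.0]])
out = multiply_by_mask(rgb, mask)
```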
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels,” in the Fusion Reference Manual.
Clipping Mode
This option sets the mode used to handle the edges of the image when performing domain of
definition rendering. This is profoundly important for nodes like Blur, which may require samples
from portions of the image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If
the upstream DoD is smaller than the frame, the remaining area in the frame is treated as
black/transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one whole frame exposure. Higher values are possible
and can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
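As a quick illustration of the shutter-angle arithmetic described above, the exposure window is simply the angle's fraction of 360 degrees multiplied by the frame duration:

```python
def shutter_open_time(shutter_angle, fps):
    """Exposure window implied by a shutter angle: 360 degrees keeps the
    shutter open for one full frame (1/fps seconds), 180 degrees for half
    a frame."""
    return (shutter_angle / 360.0) * (1.0 / fps)

print(shutter_open_time(180.0, 24.0))  # half a frame at 24 fps: 1/48 s
```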
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Composite Nodes
This chapter details the Dissolve and Merge nodes
available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Dissolve [Dx]��������������������������������������������������������������������������������������������������������������� 987
This quality makes it possible for you to use the Dissolve node as an automatic layer switching tool when
connected to background and foreground clips with different durations. Simply connect each clip
to the background and foreground inputs, respectively, and set the Background/Foreground slider
to the input of shorter duration, to determine which is “on top.” After the last frame of that clip has
ended, the Dissolve node automatically switches to the clip that’s connected to the other input.
Besides the default dissolve, the Gradient Wipe setting of the Operation menu allows you to create
arbitrary animated dissolve patterns based on the luminance of an image connected to the optional
Gradient Wipe input. You can use this capability with images of geometric shapes or gradients of
different kinds, movie clips of fire, water ripples, or rain, the Fast Noise node, or even particle systems
you create within the Fusion page to create a variety of unique and creative transitions. Soft-edged
effect masks may also be used to add to the possible effects.
Ultimately, animating the Background/Foreground control allows you to control the transition that’s
being used to switch from the foreground input to the background, or vice versa.
Inputs
The Dissolve node provides three image inputs, all of which are optional:
— Background: The first of two images you want to switch between or mix. Unlike most
other nodes, it is unnecessary to connect the background input before connecting the
foreground input.
— Foreground: The second of two images you want to switch between or mix. The Dissolve node
works best when both foreground and background inputs are connected to images with the same
resolution.
— Gradient Map: (Optional) The Gradient Map is required only when Gradient Wipe is selected.
Resolution Handling
It is recommended to make sure that all images connected to the foreground, background, and
gradient map inputs of the Dissolve node have the same resolution and the same pixel aspect. This is
not required, however; if you do mix resolutions, the result depends on how you set the Background/
Foreground slider.
— If the input images are different sizes, but the Background/Foreground slider is set to full
Foreground (all the way to the right) or full Background (all the way to the left), then the output
resolution will be identical to the image resolution of the corresponding node input.
— If input images of different sizes are mixed by setting the Background/Foreground slider
somewhere between, the output resolution will be set to the larger of the two input resolutions
to make sure there’s enough room to contain both images. In this case, you may experience
undesirable resolution changes when the slider moves from full foreground or background to
somewhere in between.
For example, if you try to dissolve between a 4K image (connected to the background) and an 8K
image (connected to the foreground), the output of the Dissolve node will be 4K when the slider
is set to full Background, but will suddenly jump to 8K when set to full Foreground or when mixed
somewhere between the foreground and background.
Inspector
Dissolve controls
Controls Tab
These are the main controls that govern the Dissolve node’s behavior.
— Operation Pop-Up: The Operation menu contains seven different methods for mixing
the Foreground and Background inputs. The two images are mixed using the value of the
Background/Foreground slider to determine the percentage each image contributes.
— Wipe Style: (SMPTE Wipe only) This drop-down list allows the selection of two wipe styles:
Horizontal - Left to Right and Vertical - Top to Bottom. The direction of the wipes can be reversed
by using the Invert Wipe checkbox.
— Invert Wipe: (SMPTE Wipe only) When checked, the direction of the wipe will be reversed.
— Softness: Use this control to soften the edge of the transition.
— Border: Select the Border to enable coloring of the transition’s edge and to reveal the associated
controls. The effect is to create a border around the transition edge.
— Border Softness: (Appears only when Border is turned on) The Border Softness slider controls
the width and density of the border. Higher values will create a denser border, and lower values
will create a thinner one.
— Border Color: (Appears only when Border is turned on) Use Border Color to select the color used
in the border.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in both the Dissolve and Merge nodes. These
common controls are described in detail at the end of this chapter in “The Common Controls” section.
The Merge node can perform both additive (premultiplied) and subtractive (non-premultiplied)
compositing, depending on how your compositions and media are set up. However, you also have the
flexibility of using the Additive/Subtractive slider to blend between additive and subtractive composite
results, which has the bonus of providing solutions for problem edges in some cases.
Ordinarily, the foreground and background input connections determine the layer order of images
composited with this node. However, you can also enable Z-Depth compositing if Z-channels are
available in the input images. Z-merging compares the depth value of each pixel in each layer to
determine which pixels should be in front and which should be behind.
Inputs
The Merge node provides three image inputs, all of which are optional:
— Background: The orange background input is for the first of two images you want to composite
together. You should connect the background input before connecting the foreground input. If
you connect an image to the background without connecting anything to the foreground input,
the Merge node will output the background image.
— Foreground: The green foreground input is for the second of two images you want to composite
together, which is typically a foreground subject that should be in front of the background. If you
connect an image to the foreground input without connecting anything to the background input
first, the Merge node won’t output anything.
— Effect Mask: (Optional) The effect mask input lets you mask a limited area of the output image
to be merged where the mask is white (where the foreground image shows in front of the
background), letting the background image show through by itself where the mask is black.
Resolution Handling
While you can connect images of any resolution to the background and foreground inputs of the
Merge node, the image that’s connected to the background input determines the resolution of
the output.
TIP: If you want to change the resolution of the image connected to the background, you
can use the Crop node to change the “canvas” resolution of the image without changing the
size of the original image, or you can use the Resize node to change both the resolution and
the size of the image.
Inspector
— Center X and Y: This control determines the position of the foreground image in the composite.
The default is 0.5, 0.5, which centers the foreground image in the exact center of the background
image. The value shown is always the actual position in normalized coordinates, multiplied by the
reference size. See below for a description of the reference size controls.
— Size: Use this control to increase or decrease the size of the foreground image before it is
composited over the background. The range of values for this slider is 0.0 to 5.0, but any value
greater than 0 can be entered manually. A size of 1.0 gives a pixel-for-pixel composition, where a
single pixel in the foreground is the same size as a single pixel in the background.
— Angle: Use this control to rotate the foreground image before it is combined with the background.
— Apply Modes: The Apply Mode setting determines the math used when blending or combining
the foreground and background pixels.
— Normal: The default Normal merge mode uses the foreground’s Alpha channel as a mask to
determine which pixels are transparent and which are not. When this is active, another menu
shows possible operations, including Over, In, Held Out, Atop, and XOr.
— Screen: Screen merges the images based on a multiplication of their color values. The Alpha
channel is ignored, and layer order becomes irrelevant. The resulting color is always lighter.
Screening with black leaves the color unchanged, whereas screening with white will always
produce white. This effect creates a similar look to projecting several film frames onto the
same surface. When this is active, another menu shows possible operations, including Over,
In, Held Out, Atop, and XOr.
— Dissolve: Dissolve mixes two image sequences together. It uses a calculated average of the
two images to perform the mixture.
— Darken: Darken looks at the color information in each channel and selects the background or
foreground image’s color value, whichever is darker, as the result color. Pixels lighter than the
merged colors are replaced, and pixels darker than the merged color do not change.
— Multiply: Multiplies the values of each color channel. Because the values are scaled from 0 to 1,
this gives the appearance of darkening the image. White has a value of 1, so the result is
unchanged; gray has a value of 0.5, so the result is a darker image, half as bright.
— Color Burn: Color Burn uses the foreground’s color values to darken the background image.
This is similar to the photographic dark room technique of burning by increasing the exposure
of an area of a print.
— Geometric: This blend mode is good for HDR images that have out-of-range colors above 1.
For positive values, the result is two times the foreground color times the background color,
divided by the sum of the foreground and background colors.
Out = 2*Fc*Bc / (Fc+Bc)
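The per-channel math behind a few of these Apply modes can be sketched in Python (an illustrative reduction, not Fusion's internal implementation; values are normalized floats, or above 1.0 for HDR):

```python
# Illustrative per-channel math for three of the Apply modes described above.

def screen(fc, bc):
    # Screen multiplies the inverted values, so the result is always
    # at least as light as either input.
    return 1.0 - (1.0 - fc) * (1.0 - bc)

def multiply(fc, bc):
    # Multiply darkens: white (1.0) leaves the other value unchanged,
    # gray (0.5) halves it.
    return fc * bc

def geometric(fc, bc):
    # Out = 2*Fc*Bc / (Fc+Bc), defined here only for positive sums.
    return 2.0 * fc * bc / (fc + bc) if (fc + bc) > 0 else 0.0
```

Note how the tests in the prose hold: screening with black leaves the color unchanged, and screening with white always produces white.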
— Operator Modes: This menu is used to select the Operation mode of the merge. Changing the
Operation mode changes how the foreground and background are combined to produce a result.
This pop-up menu is visible only when the Merge node’s Apply mode is set to either Normal or Screen.
For an excellent description of the math underlying the Operation modes, read “Compositing
Digital Images” by Thomas Porter and Tom Duff, ACM SIGGRAPH Computer Graphics proceedings,
1984, pages 253-259. Essentially, the math is as described below. Note that some modes not listed
in the Operator drop-down menu (Under, In, Held In, Below) are easily obtained by swapping the
foreground and background inputs (with Command-T or Ctrl-T) and choosing a corresponding
mode. The formula used to combine pixels in the merge is always (fg * x) + (bg * y). The different
operations determine exactly what x and y are, as shown in the description for each mode.
— Over: The Over mode adds the foreground layer to the background layer by replacing the
pixels in the background with the pixels from the foreground wherever the foreground’s Alpha
channel is greater than 0.
x = 1, y = 1-[foreground Alpha]
— In: The In mode multiplies the Alpha channel of the background input against the pixels in
the foreground. The color channels of the foreground input are ignored. Only pixels from the
foreground are seen in the final output. This essentially clips the foreground using the mask
from the background.
x = [background Alpha], y = 0
— Atop: Atop places the foreground over the background only where the background has a matte.
x = [background Alpha], y = 1-[foreground Alpha]
— XOr: XOr combines the foreground with the background wherever either the foreground or
the background has a matte, but never where both have a matte.
x = 1-[background Alpha], y = 1-[foreground Alpha]
— Conjoint: The Conjoint mode combines the images based on a comparison of the Alpha
channels of the foreground and background images; this is helpful for soft-edged and
motion-blurred Alpha, where the Alpha is not solid.
x = 1, y = 1-[foreground Alpha]/[background Alpha] (y = 0 where the foreground Alpha exceeds the background Alpha)
— Disjoint: The Disjoint mode combines the images based on a comparison of the Alpha channels
of the foreground and background images; this is helpful for combining layers without producing
out-of-range Alpha, and it gives premultiplied edges the correct Alpha combination.
x = 1, y = 1 where [foreground Alpha]+[background Alpha] < 1; otherwise y = (1-[foreground Alpha])/[background Alpha]
— Mask: The Mask mode will output the background image multiplied by the foreground Alpha.
x = 0, y = [foreground Alpha]
— Stencil: The Stencil mode will output the background image multiplied by the inverse
foreground Alpha.
x = 0, y = 1-[foreground Alpha]
— Under: The Under mode is the same operation as the Over mode but swaps the foreground
and background images in the operation.
x = 1-[background Alpha], y = 1
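The (fg * x) + (bg * y) formula can be sketched per pixel like this; the x and y pairs for the Mask, Stencil, and Under entries are inferred from the prose descriptions rather than quoted from Fusion itself:

```python
# Per-pixel sketch of the merge equation (fg * x) + (bg * y).
# af and ab are the foreground and background Alpha values.

def merge(fg, bg, af, ab, mode):
    x, y = {
        "Over":    (1.0,      1.0 - af),
        "In":      (ab,       0.0),
        "Atop":    (ab,       1.0 - af),
        "XOr":     (1.0 - ab, 1.0 - af),
        "Mask":    (0.0,      af),        # background scaled by fg Alpha
        "Stencil": (0.0,      1.0 - af),  # background scaled by inverse fg Alpha
        "Under":   (1.0 - ab, 1.0),       # Over with fg/bg roles swapped
    }[mode]
    return fg * x + bg * y
```

For example, an opaque foreground in Over mode fully covers the background, while Mask mode discards the foreground color entirely.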
— Subtractive/Additive slider: This slider controls whether Fusion performs an Additive merge,
a Subtractive merge, or a blend of both. This slider defaults to Additive merging for most
operations, assuming the input images are premultiplied (which is usually the case). If you don’t
understand the difference between Additive and Subtractive merging, here’s a quick explanation.
— An Additive merge is necessary when the foreground image is premultiplied, meaning that
the pixels in the color channels have been multiplied by the pixels in the Alpha channel. The
result is that transparent pixels are always black, since any number multiplied by 0 equals 0.
The merge obscures the background (by multiplying it with the inverse of the foreground
Alpha) and then simply adds the pixels from the foreground.
— A Subtractive merge is necessary if the foreground image is not pre-multiplied.
The compositing method is similar to an additive merge, but the foreground image is first
multiplied by its own Alpha to eliminate any background pixels outside the Alpha area.
While the Additive/Subtractive option could easily have been a checkbox to select one mode or
the other, the Merge node lets you blend between the Additive and Subtractive versions of the
merge operation, a blend that is occasionally useful for dealing with problem composites whose
edges call attention to themselves as too bright or too dark.
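A per-pixel sketch of how the slider blends the two merge styles (illustrative only, not Fusion's internals; af is the foreground Alpha):

```python
# Sketch of blending between the Additive and Subtractive merge results
# with a slider value s (1.0 = Additive, 0.0 = Subtractive).

def additive_over(fg, bg, af):
    # fg is assumed premultiplied: the background is obscured by the
    # inverse foreground Alpha, then the foreground is simply added.
    return fg + bg * (1.0 - af)

def subtractive_over(fg, bg, af):
    # fg is not premultiplied, so it is first multiplied by its own Alpha.
    return fg * af + bg * (1.0 - af)

def merge_over(fg, bg, af, s=1.0):
    a = additive_over(fg, bg, af)
    b = subtractive_over(fg, bg, af)
    return b + (a - b) * s
```

Intermediate slider values give edge results between the too-bright Additive and too-dark Subtractive extremes.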
— Alpha Gain slider: Alpha Gain linearly scales the values of the foreground’s Alpha channel. In
Subtractive merges, this controls the density of the composite, similarly to Blend. In Additive
merges, this effectively reduces the amount that the background is obscured, thus brightening
the overall result. In an Additive merge with Alpha Gain set to 0.0, the foreground pixels are simply
added to the background.
— Burn In slider: The Burn In control adjusts the amount of Alpha used to darken the background,
without affecting the amount of foreground added in. At 0.0, the merge behaves like a straight
Alpha blend, whereas at 1.0, the foreground is effectively added onto the background (after Alpha
multiplication if in Subtractive mode). This gives the effect of the foreground image brightening
the background image, as with Alpha Gain. For Additive merges, increasing the Burn In gives an
identical result to decreasing Alpha Gain.
— Blend slider: This is a cloned instance of the Blend slider in the Common Controls tab. Changes
made to this control are simultaneously made to the one in the common controls. The Blend slider
mixes the result of the node with its input, blending back the effect at any value less than 1.0. In
this case, it will blend the background with the merged result.
Additional Controls
The remaining controls let you fine-tune the results of the above settings.
— Filter Method: For input images that are being resized, this setting lets you choose the filter
method used to interpolate image pixels when resizing clips. The default setting is Linear.
Different settings work better for different kinds of resizing. Most of these filters are useful only
when making an image larger. When shrinking images, it is common to use the Linear filter;
however, the Catmull-Rom filter will apply some sharpening to the results and may be useful for
preserving detail when scaling down an image.
— Nearest Neighbor: This skips or duplicates pixels as needed. This produces the fastest but
crudest results.
— Box: This is a simple interpolation resize of the image.
— Linear: This uses a simplistic filter, which produces relatively clean and fast results.
— Quadratic: This filter produces a nominal result. It offers a good compromise between
speed and quality.
— Cubic: This produces better results with continuous-tone images. If the images have fine
detail in them, the results may be blurrier than desired.
— Catmull-Rom: This produces good results with continuous-tone images that are resized
down. Produces sharp results with finely detailed images.
— Gaussian: This is very similar in speed and quality to Bi-Cubic.
— Mitchell: This is similar to Catmull-Rom but produces better results with finely detailed
images. It is slower than Catmull-Rom.
— Lanczos: This is very similar to Mitchell and Catmull-Rom but is a little cleaner and also slower.
— Sinc: This is an advanced filter that produces very sharp, detailed results; however, it may
produce visible “ringing” in some situations.
Resize Filters from left to right: Nearest Neighbor, Box, Linear, Quadratic,
Cubic, Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel
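To illustrate the difference between the simplest filters, here is a minimal 1D resampling sketch (real resize filters operate in 2D with wider kernels; this is not Fusion's implementation):

```python
# Minimal 1D illustration of the Nearest Neighbor and Linear filters.

def resample_nearest(samples, t):
    # t is a position in sample units; pick the closest sample.
    i = min(int(t + 0.5), len(samples) - 1)
    return samples[i]

def resample_linear(samples, t):
    # Blend the two neighboring samples by the fractional position.
    i = min(int(t), len(samples) - 2)
    f = t - i
    return samples[i] * (1.0 - f) + samples[i + 1] * f
```

Nearest Neighbor skips or duplicates whole samples, while Linear smoothly interpolates between them.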
— Edges Buttons: Four buttons let you choose how to handle the space around images that are
smaller than the current DoD of the canvas as defined by the resolution of the background image.
— Canvas: The area outside the frame is set to the current color/opacity of the canvas. If
you want to change this value, you can attach a Set Canvas Color node between the image
connected to the foreground input and the foreground input itself, using Set Canvas Color to
choose a color and/or transparency setting with which to fill the canvas.
— Wrap: Creates a “video wall” effect by duplicating the foreground image as a grid.
— Duplicate: Duplicates the outermost pixels along the edge of the foreground image, stretching
them up, down, left, and right from each side to reach the end of the DoD.
— Mirror: Similar to duplicate, except every other iteration of the foreground image is flipped
and flopped to create a repeating pattern.
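The edge modes can be illustrated as 1D coordinate mappings (an approximation of the behavior described above; the Canvas mode simply returns the canvas color and is omitted):

```python
# Map an out-of-range pixel coordinate x back into a source image of the
# given width, per edge mode. The same mapping applies on each axis.

def wrap(x, width):
    # Tile the image repeatedly, like a video wall.
    return x % width

def duplicate(x, width):
    # Clamp to the nearest edge pixel, stretching it outward.
    return max(0, min(x, width - 1))

def mirror(x, width):
    # Reflect every other tile so the pattern repeats seamlessly.
    period = 2 * width
    x = x % period
    return x if x < width else period - 1 - x
```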
— Invert Transform: Select the Invert Transform control to invert any position, rotation, or scaling
transformation. This option is useful when connecting the merge to the position of a tracker for
match moving.
— Flatten Transform: The Flatten Transform option prevents this node from concatenating its
transformation with subsequent nodes. The node may still concatenate transforms from its input,
but it will not concatenate its transformation with the node at its output.
— Reference Size: The controls under Reference Size do not directly affect the image. Instead, they
allow you to control how Fusion represents the position of the Merge node’s center.
Normally, coordinates are represented as values between 0 and 1, where 1 is a distance equal to
the full width or height of the image. This allows resolution independence, because the size of the
image can be changed without having to change the value of the center.
If you specify the dimensions of the background image in the Reference Size controls, this
changes the way the center control values are displayed so that it shows the actual pixel positions
in its X and Y fields.
Internally, the Merge node still stores this value as a number between 0 to 1 and, if the center
control’s value was to be queried via scripting or the center control was to be published for use
by other nodes, the original normalized value would be retrieved. The change is only visible in the
value shown for merge center in the node control.
— Use Frame Format Settings: Select this to force the merge to use the composition’s current
frame format settings to set the reference width and reference height values.
— Width and Height: Set these sliders to the width and height of the image to change the way
that Fusion displays the values of the Merge node’s center control.
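The conversion between the stored normalized center and the displayed pixel position can be sketched as:

```python
# The center is always stored normalized (0-1); the Reference Size only
# changes how the value is displayed, as shown by this round trip.

def to_display(center, ref_width, ref_height):
    # Normalized (x, y) -> pixel position shown in the Inspector.
    return (center[0] * ref_width, center[1] * ref_height)

def from_display(pixels, ref_width, ref_height):
    # Pixel position typed by the user -> stored normalized value.
    return (pixels[0] / ref_width, pixels[1] / ref_height)
```

Querying the control via scripting would return the normalized value, regardless of the reference size.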
Channels Tab
The Channels tab has controls that let the Merge node use Z-channels embedded within each image
to define what’s in front and what’s behind during a Merge operation. The following controls let you
customize the result.
— Perform Depth Merge: Off by default. When turned on, the Z-channel of both images will be
used to determine the composite order. Alpha channels are still used to define transparency, but
the values of the Z-Depth channels will determine the ordering of image elements, front to back.
If a Z-channel is not available for either image, the setting of this checkbox will be ignored, and
no depth compositing will take place. If Z-Depth channels are available, turning this checkbox off
disables their use within this operation.
— Foreground Z-Offset: This slider sets an offset applied to the foreground image’s Z value. Click
the Pick button to pick a value from a displayed image’s Z-channel, or enter a value using the slider
or input boxes. Raising the value causes the foreground image’s Z-channel to be offset further
away along the Z-axis, whereas lowering the value causes the foreground to move closer.
— Subtractive/Additive: When Z-compositing, it is possible for image pixels from the background
to be composited in the foreground of the output because the Z-buffer for that pixel is closer than
the Z of the foreground pixel. This slider controls whether these pixels are merged in an Additive
or a Subtractive mode, in exactly the same way as the comparable slider in the Merge tab.
When merged over a background of a different color, the original background will still be visible in
the semitransparent areas. An Additive merge will maintain the transparencies of the image but
will add their values to the background.
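A per-pixel sketch of the depth merge described above (the assumption that a smaller Z value means closer to the camera is illustrative and exposed as a parameter):

```python
# Per-pixel depth merge sketch: the Z channels decide which image is in
# front, then a normal Alpha composite is applied in that order.

def depth_merge(fg, bg, smaller_z_is_closer=True):
    # fg and bg are (color, alpha, z) tuples for one pixel.
    (fc, fa, fz), (bc, ba, bz) = fg, bg
    fg_in_front = fz < bz if smaller_z_is_closer else fz > bz
    if not fg_in_front:
        # The background pixel is closer, so it composites on top.
        (fc, fa, fz), (bc, ba, bz) = bg, fg
    # Standard premultiplied Over once the order is decided.
    return fc + bc * (1.0 - fa)
```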
MultiMerge [MMrg]
As your composite increases in complexity, you can end up with a large number of standard Merge
nodes scattered throughout the node tree. MultiMerge can be used to combine these Merge nodes
together, both as an organizational tool and as a way to control the order of operations in a
layer-based, rather than node-based, environment. You could conceivably composite and organize
hundreds of separate layers using one MultiMerge node on your node tree.
Each foreground input is added to the Layer List in the Inspector, and the list is displayed in a
hierarchy from top to bottom as a stack. The layers are arranged so the layer closest to the
foreground is at the top of the list, and the one closest to the background is at the bottom. Each
layer has its own separate and independent Merge controls (see the Merge node above) that appear
in the lower half of the Inspector, allowing you to individually adjust the position, size, composite
modes, etc., for each source input.
Inputs
The MultiMerge node provides these image inputs:
— Background: The orange background input is for the first of the images you want to composite
together. You should connect the background input before connecting the foreground input.
— Foreground: The foreground inputs are for each subsequent image you want to composite together,
which is typically a foreground subject that should be in front of the background. Connecting a
pipe from a new source onto the MultiMerge node automatically creates a new foreground input
on the node and a new layer in the Layer List. Foreground inputs are all colored white.
— Effect Mask: (Optional) The effect mask input lets you mask a limited area of the output image
to be merged where the mask is white (where the foreground image shows in front of the
background), letting the background image show through by itself where the mask is black.
Inspector
Layer List
For each foreground input connected to the MultiMerge node, a new layer is created sequentially
in the Layer List. The Layer List is hierarchically sorted from top to bottom, with layers on top being
above any layers below them in the image. You can customize this layer structure in several ways.
To Select a Layer:
Click on the layer in the Layer List to make that layer active. The active layer’s Merge controls will
appear at the bottom of the Inspector, and the pipe connecting to the layer’s input on the MultiMerge
node will glow slightly in the node tree.
You can multi-select layers by holding the Shift key down while clicking to select a range of layers, or
the Command key to select multiple non-adjacent layers.
Right-clicking on any layer opens up a contextual menu with the following options:
— Go To Source: Selects the tool that is connected to the layer’s input on the MultiMerge node and
opens the tool’s parameters in the Inspector.
— Split Here: Automatically creates another connected MultiMerge node and splits the existing
layers between the two. The layer that was selected for the split, and all layers above it, will be
spun off into a new MultiMerge node. The original MultiMerge node will be connected to the
background input of the new MultiMerge node.
To Rename a Layer:
By default, each layer is named based on the input tool’s name. To change the Layer’s name to
something more meaningful, double-click on the Layer’s name and type a new name in the text field.
You can also right-click on a Layer’s name and select Rename Layer from the contextual menu.
To Disable/Enable a Layer:
Checking or unchecking the box next to the Layer’s name will enable or disable that layer respectively.
Disabling the layer turns off that input’s contribution to the overall composite in the MultiMerge but
does not delete it.
To Replace a Layer:
By design, if you disconnect the pipe from one of the MultiMerge’s inputs, the layer will still remain
in its place in the Layer List, but with a strike-through on its name to let you know that nothing is
currently connected. This allows you to quickly iterate and audition several input sources (clips, graphics, etc.) to
the same layer without having to constantly rearrange the order of your Layer List.
Merge Controls
Each layer has a separate and independent set of Merge controls that appear here when the layer is
selected. For detailed information on how the Merge controls work, see the previous section in this
chapter titled Merge [Mrg].
Inspector
Settings Tab
The Settings tab in the Inspector can be found on both tools in the Composite category. The Settings
controls are even found on third-party color type plugin tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options that are also
covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
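The Blend control amounts to a simple per-pixel mix, which can be sketched as:

```python
# Sketch of the Blend control: the output is a mix between the tool's
# unprocessed input and its processed result.

def blend_output(original, processed, blend):
    # blend = 0.0 -> the original input; blend = 1.0 -> the full effect.
    return original * (1.0 - blend) + processed * blend
```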
Red/Green/Blue/Alpha Channel Selector
These four buttons restrict the effect of the tool to the selected color channels. For example, if the
red button on a blur tool is deselected, the blur will first be applied to the image, and then the red
channel from the original input will be copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls
tabs are identical.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels,” in the Fusion Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the GPU hardware. Auto uses a
capable GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower left corner of the node when the full tile is
displayed or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the node editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Ambient Occlusion [SSAO]
The Ambient Occlusion node generates global lighting effects in 3D-rendered scenes as a post effect.
It quickly approximates computationally expensive ray-traced global illumination. As a post effect, it is
subject to aliasing issues similar to those of the Shader, Texture, and Volume Fog nodes, so artifacts
may appear in certain situations.
Usage
The AO node rarely works out of the box and requires some tweaking. The setup process involves
adjusting the Kernel Radius and Number of Samples to get the desired effect.
The Kernel Radius depends on the natural “scale” of the scene. Initially, there might appear to be no
AO at all. In most cases, the Kernel Radius is too small or too big, and working values must be found.
Inputs
There are three inputs on the AO node. The standard effect mask is used to limit the AO effect. The
Input and Camera connections are required. If either of these is not supplied, the node does not
render an image on output.
— Input: This orange input accepts a 2D RGBA image, Z-Depth, and Normals.
— Camera: The green camera input can take either a 3D Scene or a 3D Camera that
rendered the 2D image.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the Ambient Occlusion to only those pixels within the mask. An effects mask is applied to the tool
after the tool is processed.
Inspector
Controls Tab
The Controls tab includes all the main controls for compositing with AO. It controls the quality and
appearance of the effect.
Output Mode
— Color: Using the Color menu option combines the incoming image with Ambient Occlusion applied.
— AO: This option outputs the pure Ambient Occlusion as a grayscale image. White corresponds
to regions in the image that should be bright, while black corresponds to regions that should be
darker. This allows you to create a lighting equation by combining separate ambient/diffuse/
specular passes. Having the AO as a separate buffer allows creative freedom to combine the
passes in various ways.
The AO factor is determined by the unoccluded rays that reach the sphere.
— Hemisphere: Rays are cast toward a hemisphere oriented to the surface’s normal. This option
is more realistic than Sphere and should be used unless there is a good reason otherwise. Flat
surfaces receive 100% ambient intensity, while other parts are darkened.
— Sphere: Rays are cast toward a sphere centered about the point being shaded. This option is
provided to produce a stylistic effect. Flat surfaces receive 50% ambient intensity, while other
parts are made darker or brighter.
Number of Samples
Increase the samples until artifacts in the AO pass disappear. Higher values can generate better
results but also increase render time.
Kernel Radius
The Kernel Radius controls the size of the filter kernel in 3D space. For each pixel, it controls how
far one searches in 3D space for occluders. The Filter Kernel should be adjusted manually for each
individual scene.
If made too small, nearby occluders can be missed. If made too large, the quality of the AO decreases
and the samples must be increased dramatically to get the quality back.
This value is dependent on the scene Z-depth. That means with huge Z values in the scene, the kernel
size must be large as well. With tiny Z values, a small kernel size like 0.1 should be sufficient.
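As a rough illustration of these ideas, here is a deliberately simplified 1D screen-space AO sketch (real SSAO samples in 3D and uses the normals as well; this is not Fusion's algorithm):

```python
# Simplified 1D screen-space AO: neighbors within the kernel radius
# whose depth is closer to the camera count as occluders, and the AO
# factor is the unoccluded fraction of the samples taken.

def ambient_occlusion(depth, i, kernel_radius, bias=0.01):
    samples = 0
    occluded = 0
    for j in range(max(0, i - kernel_radius),
                   min(len(depth), i + kernel_radius + 1)):
        if j == i:
            continue
        samples += 1
        if depth[j] < depth[i] - bias:  # neighbor is in front: occluder
            occluded += 1
    return 1.0 - occluded / samples if samples else 1.0
```

This also shows why the Kernel Radius depends on the scene's Z scale: a radius that never reaches any occluders produces no visible AO at all.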
Lift/Gamma/Tint
You can use the lift, gamma, and tint controls to adjust the AO for artistic effects.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
TIP: Combining multiple AO passes with different kernel radii can produce better effects.
Supersampling: To render anti-aliasing with Ambient Occlusion, enable HiQ for the Z and
Normals pass in the Renderer 3D.
Viewer Dependence: AO methods work in viewer space, and the results are viewer
dependent. This means the amount of darkening can vary depending on the view location,
when in reality it should be constant. If at a point on an object the AO is 0.5, moving the
camera could change it to 0.4.
Baking of AO: The OpenGL UV renderer can be used to bake AO into the textures
on models.
Inputs
The Depth Blur node includes three inputs: one for the main image, one for a blur image, and another
for an effect mask to limit the area where the depth blur is applied.
— Input: This orange input is the only required connection. It accepts a 2D image that includes
a Z channel. The Z channel is used to determine the blur amount in different regions of the image.
— Blur Image: If the Blur Image input is connected, channels from the image are used to control
the blur. This allows general 2D per-pixel blurring effects.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the depth blur to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Inspector
Controls Tab
The Controls tab includes parameters for adjusting the amount of blur applied and the depth of the
blurred area. It also includes options for selecting channels other than the Z channel for the blur map.
Filter
This menu selects the filter used for the blur.
— Box: This applies a basic depth-based box blur effect to the image.
— Soften: This applies a depth-based general softening filter effect.
— Super Soften: This applies a depth-based high-quality softening filter effect.
Lock X/Y
When toggled on, this control locks the X and Y Blur sliders together for symmetrical blurring.
Blur Size
This slider is used to set the strength of the horizontal and vertical blurring.
Focal Point
This control is visible only when the Blur channel menu is set to use the Z channel.
Use this control to select the distance of the simulated point of focus. Lowering the value causes the
Focal Point to be closer to the camera; raising the value causes the Focal Point to be farther away.
Depth of Field
This control is used to determine the depth of the area in focus. The focal point is positioned in the
middle of the region, and all pixels with a Z-value within the region stay in focus. For example, if the
focal point were selected from the image and set to a value of 300, and the depth of field is set to 200,
any pixel with a Z-value between 200 and 400 would remain in focus.
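The worked example above can be sketched in Python; the in-focus test follows the description, while the falloff outside the focal region is purely illustrative:

```python
# With a Focal Point of 300 and a Depth of Field of 200, Z values from
# 200 to 400 stay sharp; the blur grows outside that range. The falloff
# rate used here is an assumption for illustration, not Fusion's math.

def blur_amount(z, focal_point, depth_of_field, blur_size):
    near = focal_point - depth_of_field / 2.0
    far = focal_point + depth_of_field / 2.0
    if near <= z <= far:
        return 0.0  # inside the focal region: no blur
    distance = (near - z) if z < near else (z - far)
    return min(blur_size, distance * blur_size / focal_point)
```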
Z Scale
Scales the Z-buffer value by the selected amount. Raising the value causes the distances in the
Z-channel to expand. Lowering the value causes them to contract. This is useful for exaggerating the
depth effect. It can also be used to soften the boundaries of the blur. Some images with small depth
values may require the Z-scale to be set quite low, below 1.0.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Fog node includes three inputs: one for the main image with a Z channel, one for a blur image,
and another for an effect mask to limit the area where the depth blur is applied.
— Input: This orange input is the only required connection. It accepts a 2D image that includes
a Z channel. The Z channel is used to determine the fog amount in different regions of the image.
— Blur Image: The green second image input connects an image that is used as the source of the
fog. If no image is provided, the fog consists of a single color. Generally, a noise map of some sort
is connected here.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the fog to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Fog controls
Controls Tab
The Controls tab includes parameters for adjusting the density and color of the fog.
The Near Plane is used to select the depth where the fog thins out to nothing. The Far Plane is used to
select the depth at which the fog becomes opaque.
Z Depth Scale
This option scales the Z-buffer values by the selected amount. Raising the value causes the distances
in the Z-channel to expand, whereas lowering the value causes the distances to contract. This is useful
for exaggerating the fog effect.
Fog Color
This option displays and controls the current fog color. Alpha adjusts the fog’s transparency value.
Fog Opacity
Use this control to adjust the opacity on all channels of the fog.
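Putting these controls together, a per-pixel fog sketch might look like this (the linear ramp between the planes is an assumption for illustration):

```python
# Depth fog sketch: fog fades in from the Near Plane (no fog) to the
# Far Plane (fully opaque), then the fog color is mixed in, scaled by
# the fog opacity. The Z Depth Scale is applied to Z first.

def apply_fog(color, z, near, far, fog_color, opacity=1.0, z_scale=1.0):
    z = z * z_scale
    t = (z - near) / (far - near)
    t = max(0.0, min(1.0, t)) * opacity
    return color * (1.0 - t) + fog_color * t
```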
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Shader node includes three inputs: one for the main image with normal map channels, one for a
reflection map, and another for an effect mask to limit the area where the depth blur is applied.
— Input: This orange input is the only required connection. It accepts a 2D image that includes a
normals channel.
— Reflection Map Image: The green reflection map image input projects an image onto all
elements in the scene or to elements selected by the Object and Material ID channels in
the Common Controls. Reflection maps work best as 32-bit floating point, equirectangular
formatted images.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the shader to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
The Shader node using normals from a Renderer 3D and a reflection input
Shader controls
Controls Tab
The Controls tab for the Shader node includes parameters for adjusting the overall surface reaction
to light sources. You can modify the ambient, diffuse, specular, and reflection properties of the image
connected to the orange image input.
Light Tab
The Light tab includes parameters for basic lighting brightness and reflections.
Ambient
Ambient controls the Ambient color present in the scene or the selected object. This is a base level of
light added to all pixels, even in completely shadowed areas.
Diffuse
This option controls the Diffuse color present in the scene or for the selected object. This is the
normal color of the object, reflected equally in all directions.
Specular
This option controls the Specular color present in the scene or for the selected object. This is the color
of the glossy highlights reflected toward the eye from a light source.
Reflection
This option controls the Reflection contribution in the scene or for the selected object. High levels
make objects appear mirrored, while low levels overlay subtle reflections giving a polished effect. It
has no effect if no reflection map is connected.
Reflection Type
This menu determines the type of reflection mapping used to project the image in the second input.
— Screen: Screen causes the reflection map to appear as if it were projected on to a screen behind
the point of view.
— Spherical: Spherical causes the reflection map to appear as if it were projected on to a huge
sphere around the whole scene.
— Refraction: Refraction causes the reflection map to appear as if it were refracting or distorting
according to the geometry in the scene.
Polar Height
Polar Height controls the top to bottom angle of the light generated and mapped by the Shader node
for the scene or the selected object.
Shader Tab
The Shader tab is used to adjust the falloff of the Diffuse and Specular light and the tint color of the
specular highlight.
In and Out
These options are used to display and edit point values on the spline.
Specular Color
Use the Diffuse curve to manipulate the diffuse shading and the Specular curve to affect the specular
shading. Drag a box over several points to group-select them. Right-clicking displays a menu with
options for adjusting the spline curves.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Deep Pixel nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
NOTE: Background pixels may have U and V values of 0.0, which set those pixels to the
color of the texture’s corner pixel. To restrict texturing to specific objects, use an effect
mask based on the Alpha of the object or its Object or Material ID channel.
Inputs
The Texture node includes three inputs: one for the main image with UV map channels, one for
a texture map image, and another for an effect mask to limit the area where the replacement texture
is applied.
— Input: This orange input accepts a 2D image that includes UV channels. If the UV channels are not
in the images, this node has no effect.
— Texture: The green texture map input provides the texture that is wrapped around objects,
replacing the current texture.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the texture to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Inspector
Texture controls
Texture Tab
The Texture tab controls allow you to flip, swap, scale, and offset the UV texture image connected to
the texture input.
Swap UV
When this checkbox is selected, the U and V channels of the source image are swapped.
Rotate 90
The texture map image is rotated 90 degrees when this checkbox is enabled.
U and V Scale
These controls change the scaling of the U and V coordinates used to map the texture. Changing
these values effectively enlarges and shrinks the texture map as it is applied.
U and V Offset
Adjust these controls to offset the U and V coordinates. Changing the values causes the texture to
appear to move along the geometry of the object.
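The Swap UV, Rotate 90, Scale, and Offset controls can be thought of as a transform applied to each pixel’s UV pair before the texture lookup. A minimal sketch, assuming UVs in the 0–1 range and one possible 90-degree convention (the function name is illustrative):

```python
def transform_uv(u, v, swap=False, rotate90=False,
                 u_scale=1.0, v_scale=1.0, u_offset=0.0, v_offset=0.0):
    """Apply the Texture tab controls to one UV pair (illustrative sketch)."""
    if swap:
        u, v = v, u            # swap the U and V channels
    if rotate90:
        u, v = v, 1.0 - u      # one possible 90-degree rotation convention
    # Scaling the coordinates enlarges/shrinks the mapped texture;
    # offsetting them slides the texture along the geometry.
    u = u * u_scale + u_offset
    v = v * v_scale + v_offset
    return u, v

print(transform_uv(0.25, 0.75, swap=True))  # → (0.75, 0.25)
```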
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Deep Pixel category. The Settings
controls are even found on third-party Deep Pixel-type plugin tools. The controls are consistent and
work the same way for each tool although some tools do include one or two individual options that
are also covered here.
The Red, Green, Blue, and Alpha buttons select which channels the tool affects. For example, if the
red button on a Blur tool is deselected, the blur is first applied to the image, and
then the red channel from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not in the mask (i.e., set to 0) to become black/
transparent.
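In other words, each RGB value is scaled by the mask value at the same pixel, so a mask value of 0 produces black. A one-line sketch:

```python
def multiply_by_mask(rgb, mask):
    """Scale RGB by the mask value; pixels outside the mask (0) become black."""
    r, g, b = rgb
    return (r * mask, g * mask, b * mask)

print(multiply_by_mask((0.8, 0.4, 0.2), 0.0))  # → (0.0, 0.0, 0.0)
```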
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information, see Chapter 18, “Understanding Image Channels,” in the Fusion
Reference Manual.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one whole frame exposure. Higher values are possible
and can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
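As a rough sketch, the Quality, Shutter Angle, and Center Bias controls together determine where along the predicted motion each sample is taken (times in fractions of a frame; this mirrors the descriptions above, not Fusion’s actual sampler):

```python
def motion_blur_sample_times(quality=2, shutter_angle=180.0, center_bias=0.0):
    """Return sample times (in frames) around the current frame (sketch).

    quality: samples taken to either side of the object's actual position.
    shutter_angle: 360 is equivalent to the shutter being open a full frame.
    center_bias: shifts the center of the blur, creating motion-trail effects.
    """
    half_window = (shutter_angle / 360.0) / 2.0
    return [center_bias + half_window * i / quality
            for i in range(-quality, quality + 1)]

print(motion_blur_sample_times(quality=2, shutter_angle=360.0))
# → [-0.5, -0.25, 0.0, 0.25, 0.5]
```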
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Effect Nodes
This chapter details the Effect nodes in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Duplicate [Dup]�������������������������������������������������������������������������������������������������������� 1024
TV [TV]������������������������������������������������������������������������������������������������������������������������� 1051
Inputs
The two inputs on the Duplicate node are used to connect a 2D image and an effect mask, which can
be used to limit the area where duplicated objects appear.
— Input: The orange input is used for the primary 2D image that is duplicated.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the duplicated
objects to appear only in those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Duplicate controls
Controls Tab
The Controls tab includes all the parameters you can use to create, offset, and scale copies of the
object connected to the input on the node.
Copies
Use this slider to set the number of copies made. Each copy is a copy of the last copy. So, when set
to 5, the parent is copied, then the copy is copied, then the copy of the copy is copied, and so on.
This allows for some interesting effects when transformations are applied to each copy using the
following controls.
Time Offset
Use the Time Offset slider to offset any animations that are applied to the original image by a set
amount per copy. For example, set the value to -1.0 and use a square set to rotate on the Y-axis as the
source. The first copy shows the animation from a frame earlier. The second copy shows animation
from a frame before that, and so forth. This can be used with great effect on textured planes, for
example, where successive frames of a clip can be shown.
Center
The X and Y Center controls set the offset position applied to each copy. An X offset of 1 would offset
each copy 1 unit along the X-axis from the last copy.
Size
The Size control determines how much scaling to apply to each copy.
Angle
The Angle control sets the amount of Z rotation applied to each copy. The angle adjustment is linear
based on the location of the pivot point.
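Because each copy is a copy of the previous one, the Center, Size, and Angle values compound with every copy. The sketch below accumulates these per-copy transforms, using illustrative default values; Fusion’s internal order of operations may differ:

```python
def duplicate_transforms(copies, center=(0.1, 0.0), size=0.9, angle=15.0):
    """Accumulate the per-copy offset, scale, and rotation (sketch)."""
    transforms = []
    x = y = 0.0
    scale = 1.0
    rotation = 0.0
    for _ in range(copies):
        x = round(x + center[0], 4)     # each copy is offset from the LAST copy
        y = round(y + center[1], 4)
        scale = round(scale * size, 4)  # scaling compounds per copy
        rotation += angle               # Z rotation compounds per copy
        transforms.append((x, y, scale, rotation))
    return transforms

# The third copy has moved 0.3 units, is scaled to 0.9**3, and rotated 45 degrees:
print(duplicate_transforms(3)[-1])  # → (0.3, 0.0, 0.729, 45.0)
```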
Apply Mode
The Apply Mode setting determines the math used when blending or combining duplicated objects
that overlap.
— Normal: The default mode uses the foreground object’s Alpha channel as a mask to determine
which pixels are transparent and which are not. When this is active, another menu shows possible
operations, including Over, In, Held Out, Atop, and XOr.
— Screen: Screen blends the objects based on a multiplication of their color values. The Alpha
channel is ignored, and layer order becomes irrelevant. The resulting color is always lighter.
Screening with black leaves the color unchanged, whereas screening with white always produces
white. This effect creates a similar look to projecting several film frames onto the same surface.
When this is active, another menu shows possible operations, including Over, In, Held Out,
Atop, and XOr.
— Dissolve: Dissolve mixes overlapping objects. It uses a calculated average of the objects to
perform the mixture.
— Multiply: Multiplies the values of a color channel. This gives the appearance of darkening the
object as the values are scaled from 0 to 1. White has a value of 1, so the result would be the
same. Gray has a value of 0.5, so the result would be a darker object or, in other words, an object
half as bright.
Operator
This menu is used to select the Operation mode used when the duplicate objects overlap. Changing
the Operation mode changes how the overlapping objects are combined. This drop-down menu is
visible only when the Apply mode is set to Normal.
The formula used to combine pixels in the Duplicate node is always (fg object * x) + (bg object * y).
The different operations determine what x and y are, as shown in the description for each mode.
— Over: The Over mode adds the foreground object to the background object by replacing the
pixels in the background with the pixels from the foreground wherever the foreground object’s
Alpha channel is greater than 0.
x = 1, y = 1 - [foreground object Alpha]
— In: The In mode multiplies the Alpha channel of the background object against the pixels in the
foreground object. The color channels of the foreground object are ignored. Only pixels from the
foreground object are seen in the final output. This essentially clips the foreground object using
the mask from the background object.
x = [background Alpha], y = 0
— Held Out: Held Out is essentially the opposite of the In operation. The pixels in the foreground
object are multiplied against the inverted Alpha channel of the background object.
x = 1 - [background Alpha], y = 0
— Atop: Atop places the foreground object over the background object only where the background
object has a matte.
x = [background Alpha], y = 1 - [foreground Alpha]
— XOr: XOr combines the foreground object with the background object wherever either the
foreground or the background have a matte, but never where both have a matte.
x = 1 - [background Alpha], y = 1 - [foreground Alpha]
Subtractive/Additive
This slider controls whether Fusion performs an Additive composite, a Subtractive composite, or
a blend of both when the duplicate objects overlap. This slider defaults to Additive assuming the
input image’s Alpha channel is premultiplied (which is usually the case). If you don’t understand the
difference between Additive and Subtractive compositing, here’s a quick explanation.
An Additive blend operation is necessary when the foreground image is premultiplied, meaning that
the pixels in the color channels have been multiplied by the pixels in the Alpha channel. The result
is that transparent pixels are always black since any number multiplied by 0 always equals 0. This
obscures the background (by multiplying with the inverse of the foreground Alpha), and then adds the
pixels from the foreground.
A Subtractive blend operation is necessary if the foreground image is not premultiplied. The
compositing method is similar to an additive composite, but the foreground image is first multiplied
by its Alpha, to eliminate any background pixels outside the Alpha area.
While the Additive/Subtractive option is often an either/or mode in most other applications, the
Duplicate node lets you blend between the Additive and Subtractive versions of the compositing
operation. This can be useful for dealing with problem composites with bright or dark edges.
For example, using Subtractive merging on a premultiplied image may result in darker edges,
whereas using Additive merging with a non-premultiplied image causes any non-black area outside
the foreground’s Alpha to be added to the result, thereby lightening the edges. By blending between
Additive and Subtractive, you can tweak the edge brightness to be just right for your situation.
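As a sketch, the slider can be modeled as a linear blend between the two merge formulas for a single channel (illustrative only, not Fusion’s exact implementation):

```python
def merge(fg, fg_a, bg, subtractive_additive=1.0):
    """Blend between subtractive (0.0) and additive (1.0) compositing.

    additive assumes fg is already premultiplied by its Alpha;
    subtractive premultiplies fg first, then composites the same way.
    """
    additive = fg + bg * (1.0 - fg_a)
    subtractive = fg * fg_a + bg * (1.0 - fg_a)
    t = subtractive_additive
    return subtractive * (1.0 - t) + additive * t

# A premultiplied foreground composited additively over a gray background:
print(merge(fg=0.5, fg_a=0.5, bg=0.4, subtractive_additive=1.0))
```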
Alpha Gain
Alpha Gain linearly scales the Alpha channel values of objects in front. This effectively reduces the
amount that the objects in the background are obscured, thus brightening the overall result. When
the Subtractive/Additive slider is set to Additive with Alpha Gain set to 0.0, the foreground pixels are
simply added to the background.
When the Subtractive/Additive slider is set to Subtractive, this controls the density of the composite,
similarly to Blend.
Blur
Adds a blurring effect to the duplicated layers.
— Lock Blur: Locks the X and Y Blur sliders together for symmetrical blurring. This is enabled by
default. When the Lock Blur control is deselected, independent control over each axis is provided.
— Blur: Sets the amount of blur applied to the duplicated layers in the tool. The Blur amount will not
compound based on the number of duplications.
— Glow: Adds a glow effect to the blur of the duplicated layers.
— Blend: The Blend slider determines the percentage of the affected image that is mixed with the
original image. It blends in more of the original image as the value gets closer to 0.
— RGBA Scale: Allows adjusting the strength of the individual Red, Green, Blue, and Alpha channels
to the blur of the duplicated layers.
Burn In
The Burn In control adjusts the amount of Alpha used to darken the objects that fall behind other
objects, without affecting the amount of foreground objects added. At 0.0, the blending behaves like a
straight Alpha blend, in contrast to a setting of 1.0 where the objects in the front are effectively added
on to the objects in the back (after Alpha multiplication if in Subtractive mode). This gives the effect
of the foreground objects brightening the objects in the back, as with Alpha Gain. In fact, for Additive
blends, increasing the Burn In gives an identical result to decreasing Alpha Gain.
Blend
This blend control is different from the Blend slider in the Common Settings tab. Changes made to this
control apply the blend between objects. The Blend slider fades the results of the last object first, the
penultimate after that, and so on. The blend is divided between 0 and 1, with 1 rendering all objects
fully opaque and 0 showing only the original object.
Merge Under
This checkbox reverses the layer order of the duplicated elements, making the last copy the
bottommost layer and the first copy the topmost layer.
Duplicate Jitter tab
Random Seed
The Random Seed slider and Reseed button are used to generate a random starting point for the
amount of jitter applied to the duplicated objects. Two Duplicate nodes with identical settings but
different random seeds produce two completely different results.
Center X and Y
Use these two controls to adjust the amount of variation in the X and Y position of the
duplicated objects.
Axis X and Y
Use these two controls to adjust the amount of variation in the rotational pivot center of the
duplicated objects. This affects only the additional jitter rotation, not the rotation produced by the
Rotation settings in the Controls tab.
X Size
Use this control to adjust the amount of variation in the Scale of the duplicated objects.
Angle
Use this dial to adjust the amount of variation in the Z rotation of the duplicated objects.
Gain
The Gain RGBA controls apply a random linear scaling to the values of each image channel.
Blend
Changes made to this control randomize the blend between objects.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
There are three Inputs on the Highlight node: one for the image, one for the effects mask, and
another for a highlight mask.
— Input: The orange input is used for the primary 2D image that gets the highlight applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input restricts the highlight to be
within the pixels of the mask. An effects mask is applied to the tool after the tool is processed.
— Highlight Mask: The Highlight node supports pre-masking using the white highlight mask input.
The image is filtered before the highlight is applied. The highlight is then merged back over the
original image. Unlike regular effect masks, it does not crop off highlights from source pixels when
the highlight extends past the edges of the mask.
Highlight controls
Controls Tab
The Controls tab includes parameters for the highlight style except for color, which is handled in the
Color Scale tab.
Curve
The Curve value changes the drop-off over the length of the highlight. Higher values cause the
brightness of the flares to drop off closer to the center of the highlight, whereas lower values drop off
farther from the center.
Length
This designates the length of the flares from the highlight.
Number of Points
This determines the number of flares emanating from the highlight.
Angle
Use this control to rotate the highlights.
Merge Over
When enabled, the effect is overlaid on the original image. When disabled, the output is the highlights
only. This is useful for downstream color correction of the highlights.
By clicking and holding the Pick button, then dragging the pointer over the viewer, you can select a
specific color from the image.
Alpha Scale
Moving the Alpha slider down makes highlight falloff more transparent.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
In the real world, lens flares occur when extremely bright light sources in the scene are reflected off
elements inside the lens of the camera. One might see lens flares in a shot when viewing a strong light
source through a camera lens, such as the sun or another bright light.
— Input: The required orange input is used for the primary 2D image that gets the hot spot applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input restricts the hot spot to be
within the pixels of the mask. An effects mask is applied to the tool after the tool is processed.
— Occlusion: The green Occlusion input accepts an image to provide the occlusion matte. The matte
is used to block the hot spot, causing it to “wink.” The white pixels in the image occlude the hot
spot. Gray pixels partially suppress the hot spot.
Inspector
Primary Strength
This control determines the brightness of the primary hot spot.
Aspect
This controls the aspect of the spot. A value of 1.0 produces a perfectly circular hot spot. Values
above 1.0 elongate the circle horizontally, and values below 1.0 elongate the circle vertically.
Aspect Angle
This control can be used to rotate the primary hot spot.
Secondary Strength
This control determines the strength, which is to say the brightness, of the secondary hot spot. The
secondary hot spot is a reflection of the primary hot spot. It is always positioned on the opposite side
of the image from the primary hot spot.
Secondary Size
This determines the size of the secondary hot spot.
Apply Mode
This control determines how the hot spot affects the underlying image.
— Add (Burn): This causes the spots created to brighten the image.
— Subtract (Dodge): This causes the spots created to dim the image.
— Multiply (Spotlight): This causes the spots created to isolate a portion of the image with light and
to darken the remainder of the image.
Occlude
This menu is used to select which channel of the image connected to the Hot Spot node’s Occlusion
input is used to provide the occlusion matte. Occlusion can be controlled from Alpha or R, G, or B
channels of any image connected to the Occlusion input on the node’s tile.
Lens Aberration
Aberration changes the shape and behavior of the primary and secondary hot spots.
— In and Out Modes: Elongates the shape of the hot spot into a flare. The hot spot stretches toward
the center when set to In mode and stretches toward the corners when set to Out mode.
— Flare In and Flare Out Modes: This option is a lens distortion effect that is controlled by the
movement of the lens effect. Flare In causes the effect to become more severe, the closer the hot
spot gets to the center. Flare Out causes the effect to increase as the hot spot gets closer to the
edges of the image.
— Lens: This mode emulates a round, ringed lens effect.
Color Tab
The Color tab is used to modify the color of the primary and secondary hot spots.
Color Mode
This menu allows you to choose between animated or static color modifications using the small curves
editor in the Inspector.
— None: The default None setting retains a static curve adjustment for the entire range.
— Animated Points: This setting allows the color curves in the spline area to be animated over
time. Once this option is selected, moving to the desired frame and making a change in the Spline
Editor sets a keyframe.
— Dissolve mode: Dissolve mode is mostly obsolete and is included for compatibility reasons only.
The vertical axis represents the intensity or strength of the color channel. The horizontal axis
represents the hot spot position along the radius, from the left outside edge to the inside right edge.
The default curve indicates that the red, green, blue, and Alpha channels all have a linear falloff.
Mix Spline
The Mix spline is used to determine the influence that the Radial controls have along the radius of the
hot spot. The horizontal axis represents the position along the circle’s circumference, with 0 being 0
degrees and 1 being 360 degrees.
NOTE: Right-clicking in the LUT displays a contextual menu with options related to
modifying spline curves.
For more information on the LUT Editor, see Chapter 7, “Using Viewers,” in the Fusion Reference Manual.
Radial Tab
Radial On
This control enables the Radial splines. Otherwise, the radial matte created by the splines is not
applied to the hot spot, and the Mix spline in the color controls does not affect the hot spot.
Radial Mode
Similar to the Color mode menu, this menu allows you to choose between animated or static radial hot
spot modifications using the small curves editor in the Inspector.
— No Animation: The default setting retains a static curve adjustment for the entire range.
— Animated Points: This setting allows the radial curves in the spline area to be animated over
time. Once this option is selected, moving to the desired frame and making a change in the Spline
Editor sets a keyframe.
— Interpolated Values: This setting is mostly obsolete and is included for compatibility reasons only.
Length Angle
This control rotates the effect of the Radial Length spline around the circumference of the hot spot.
Density Angle
This control rotates the effect of the Radial Density spline around the circumference of the hot spot.
NOTE: Right-clicking in the spline area displays a contextual menu containing options
related to modifying spline curves.
A complete description of LUT Editor controls and options can be found in Chapter 45, “LUT Nodes,” in
the Fusion Reference Manual or Chapter 105 in the DaVinci Resolve Reference Manual.
Element Size
This determines the size of element reflections.
Element Position
This determines the distance of element reflections from the axis. The axis is calculated as a line
between the hot spot position and the center of the image.
Element Type
Use this group of buttons to choose the shape and density of the element reflections. The presets
available are described below.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
There are two Inputs on the Pseudo Color node: one for an image and one for an effects mask.
— Input: The orange input is used for the primary 2D image that gets its color modified.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input restricts the pseudo color to
be within the pixels of the mask. An effects mask is applied to the tool after the tool is processed.
Inspector
Color Checkbox
When enabled, the Pseudo Color node affects this color channel.
Wrap
When enabled, waveform values that exceed allowable parameter values are wrapped to the
opposite extreme.
Soft Edge
This slider determines the soft edge of color transition.
Waveform
This selects the type of waveform to be created by the generator. Four waveforms are available: Sine,
Triangle, Sawtooth, and Square.
Frequency
This controls the frequency of the waveform selected. Higher values increase the number of
occurrences of the variances.
Phase
This modifies the Phase of the waveform. Animating this control produces color cycling effects.
Mean
This determines the level of the waveform selected. Higher values increase the overall brightness of
the channel until the allowed maximum is reached.
Amplitude
Amplitude increases or decreases the overall power of the waveform.
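The resulting channel value can be sketched as a parametric waveform driven by these controls. The mapping below assumes the input channel value drives the waveform’s position; the formulas are illustrative, not Fusion’s exact generator:

```python
import math

def pseudo_color(value, waveform="Sine", frequency=1.0, phase=0.0,
                 mean=0.5, amplitude=0.5, wrap=False):
    """Generate a channel value from the waveform controls (sketch)."""
    t = (value * frequency + phase) % 1.0
    if waveform == "Sine":
        w = math.sin(2.0 * math.pi * t)
    elif waveform == "Triangle":
        w = 4.0 * abs(t - 0.5) - 1.0
    elif waveform == "Sawtooth":
        w = 2.0 * t - 1.0
    else:  # Square
        w = 1.0 if t < 0.5 else -1.0
    out = mean + amplitude * w             # Mean sets the level, Amplitude the power
    if wrap:
        out %= 1.0                         # wrap overshoot to the opposite extreme
    else:
        out = min(max(out, 0.0), 1.0)      # otherwise clip to the allowed range
    return out

print(pseudo_color(0.25))  # sine peak: 0.5 + 0.5 * sin(pi/2) = 1.0
```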
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
There are two inputs on the Rays node: one for the image and one for the effects mask.
— Input: The orange input is used for the primary 2D image that gets the rays applied to it.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the rays to be
within the pixels of the mask. An effects mask is applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the rays.
Blend
Sets the percentage of the original image that’s blended with the light rays.
Decay
Sets the length of the light rays.
Weight
Sets the falloff of the light rays.
Exposure
Sets the intensity level of the light rays.
Threshold
Sets the luminance limit at which the light rays are produced.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in the
“The Common Controls” section at the end of this chapter.
Shadow [Sh]
Input
The three inputs on the Shadow node are used to connect a 2D image that causes the shadow.
A depth map input and an effect mask can be used to limit the area where the shadow appears. Typically, the
output of the shadow is then merged over the actual background in the composite.
— Input: The orange input is used for the primary 2D image with Alpha channel that is the source of
the shadow.
— Depth: The green Depth map input takes a 2D image as its input and extracts a depth matte
from a selected channel. The light Position and Distance controls can then be used to modify the
appearance of the shadow based on depth.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the area where the shadow appears. An effects mask is applied to the tool after the tool is processed.
NOTE: The Shadow node is designed to create simple 2D drop shadows. Use a Spot Light
node and an Image Plane 3D node for full 3D shadow casting.
Inspector
Shadow Offset
This control sets the X and Y position of the shadow. When the Shadow node is selected, you can also
adjust the position of the Shadow Offset using the crosshair in the viewer.
Softness
Softness controls how blurry the shadow’s edges appear.
Shadow Color
Use this control to select the color of the shadow. The most realistic shadows are usually not totally
black and razor sharp.
Light Position
This control sets the position of the light relative to the shadow-casting object. The Light Position is
only taken into consideration when the Light Distance slider is not set to infinity (1.0).
Light Distance
This slider varies the apparent distance of the light between infinity (1.0) and zero distance from the
shadow-casting object. The advantage of setting the Light Distance is that the resulting shadow is
more realistic-looking, with the further parts of the shadow being longer than those that are closer.
Z Map Channel
This menu is used to select which color channel of the image connected to the node’s Depth Map
input is used to create the shadow’s depth map. Selections exist for the RGB and A, Luminance, and
Z-buffer channels.
Output
This menu determines if the output image contains the image with shadow applied or the
shadow only.
The shadow only method is useful when color correction, perspective, or other effects need to be
applied to the resulting shadow before it is merged back with the object.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Input
The two inputs on the Trails node are used to connect a 2D image and an effect mask that can be used
to limit the area where trails appear.
— Input: The orange input is used for the primary 2D image that receives the trails applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the area where the
trails effect appears. An effects mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab contains all the primary controls necessary for customizing the trails.
Restart
This control clears the image buffer and displays a clean frame, without any of the ghosting effects.
Preroll
This makes the Trails node pre-render the effect by the number of frames on the slider.
Reset/Preroll on Render
When this checkbox is enabled, the Trails node resets itself when a preview or final render is initiated.
It pre-rolls the designated number of frames.
Preroll Frames
This determines the number of frames to pre-roll.
Lock RGBA
When deselected, this checkbox allows the Gain of the color channels to be controlled independently.
This allows for tinting of the Trails effect.
Rotate
The Rotate control rotates the image in the buffer before the current frame is merged into the effect.
The offset is compounded between each element of the trail. This is different than each element of the
trail rotating on its pivot point. The pivot remains over the original object.
Offset X/Y
These controls offset the image in the buffer before the current frame is merged into the effect.
Control is given over each axis independently. The offset is compounded between each element of
the trail.
Scale
The Scale control resizes the image in the buffer before the current frame is merged into the effect.
The size is compounded between each element of the trail.
Blur Size
The Blur Size control applies a blur to the trails in the buffer before the current frame is merged into
the effect. The blur is compounded between each element of the trail.
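Because these buffer transforms compound, each successive trail element accumulates the Offset, Scale, and Rotate adjustments again. A minimal sketch of this compounding behavior (a conceptual illustration only, not Fusion's internal implementation):

```python
def trail_transforms(elements, dx, dy, scale):
    # Each trail element accumulates the offset and scale one more time,
    # so element i is shifted i * (dx, dy) and scaled by scale ** i.
    return [((i * dx, i * dy), scale ** i) for i in range(elements)]
```

For example, with an offset of one unit per element and a scale of 2, the third element is shifted twice as far as the second and is scaled four times as large as the original.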
Apply Mode
The Apply Mode setting determines the math used when blending or combining the trailing objects
that overlap.
— Normal: The default mode uses the foreground object’s Alpha channel as a mask to determine
which pixels are transparent and which are not. When this is active, another menu shows possible
operations, including Over, In, Held Out, Atop, and XOr.
— Screen: Screen blends the objects based on an inverted multiplication of their color values. The Alpha
channel is ignored, and layer order becomes irrelevant. The resulting color is always lighter. Screening with
black leaves the color unchanged, whereas screening with white always produces white. This effect
creates a similar look to projecting several film frames onto the same surface. When this is active,
another menu shows possible operations, including Over, In, Held Out, Atop, and XOr.
— Dissolve: Dissolve mixes overlapping objects. It uses a calculated average of the objects to
perform the mixture.
— Multiply: Multiplies the values of a color channel. This gives the appearance of darkening the
object as the values are scaled from 0 to 1. White has a value of 1, so multiplying by white leaves
the result unchanged. Gray has a value of 0.5, so the result is a darker object or, in other words,
an object half as bright.
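The per-pixel math behind the Screen, Dissolve, and Multiply modes can be sketched as follows, using normalized 0-1 values (a simplified illustration, not Fusion's exact implementation):

```python
def screen(a, b):
    # inverted multiply: the result is always at least as light as either input;
    # screening with black (0) leaves the value unchanged, with white (1) gives white
    return 1.0 - (1.0 - a) * (1.0 - b)

def multiply(a, b):
    # scales one value by the other, darkening unless one input is white (1.0)
    return a * b

def dissolve(a, b, mix=0.5):
    # a simple averaged mix of the two values
    return a * (1.0 - mix) + b * mix
```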
Operator
This menu is used to select the Operation mode used when the trailing objects overlap. Changing the
Operation mode changes how the overlapping objects are combined to produce a result. This drop-
down menu is visible only when the Apply mode is set to Normal.
The formula used to combine pixels in the trails node is always (fg object * x) + (bg object * y).
The different operations determine what x and y are, as shown in the description for each mode.
— Over: The Over mode adds the foreground object to the background object by replacing the
pixels in the background with the pixels from the foreground wherever the foreground object’s
Alpha channel is greater than 0.
x = 1, y = 1 - [foreground object Alpha]
— In: The In mode multiplies the Alpha channel of the background object against the pixels in the
foreground object. The color channels of the foreground object are ignored. Only pixels from the
foreground object are seen in the final output. This essentially clips the foreground object using
the mask from the background object.
x = [background Alpha], y = 0
— Held Out: Held Out is essentially the opposite of the In operation. The pixels in the foreground
object are multiplied against the inverted Alpha channel of the background object.
x = 1 - [background Alpha], y = 0
— Atop: Atop places the foreground object over the background object only where the background
object has a matte.
x = [background Alpha], y = 1 - [foreground Alpha]
— XOr: XOr combines the foreground object with the background object wherever either the
foreground or the background have a matte, but never where both have a matte.
x = 1 - [background Alpha], y = 1 - [foreground Alpha]
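These x and y weights plug directly into the (fg object * x) + (bg object * y) formula above. As a sketch, assuming premultiplied fg and bg pixel values (an illustration of the weights, not Fusion's internal code):

```python
def combine(fg, fg_a, bg, bg_a, op):
    # (x, y) weights for (fg * x) + (bg * y), per the operator descriptions above
    weights = {
        "Over":     (1.0,        1.0 - fg_a),
        "In":       (bg_a,       0.0),
        "Held Out": (1.0 - bg_a, 0.0),
        "Atop":     (bg_a,       1.0 - fg_a),
        "XOr":      (1.0 - bg_a, 1.0 - fg_a),
    }
    x, y = weights[op]
    return fg * x + bg * y
```

For example, with a fully opaque foreground, Over simply replaces the background, while XOr produces nothing wherever both mattes are solid.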
Subtractive/Additive
This slider controls whether Fusion performs an Additive composite, a Subtractive composite, or
a blend of both when the trailing objects overlap. This slider defaults to Additive assuming the
input image’s Alpha channel is premultiplied (which is usually the case). If you don’t understand the
difference between Additive and Subtractive compositing, below is a quick explanation.
For example, using Subtractive merging on a premultiplied image may result in darker
edges, whereas using Additive merging with a non-premultiplied image causes any non-
black area outside the foreground’s Alpha to be added to the result, thereby lightening the
edges. By blending between Additive and Subtractive, you can tweak the edge brightness
to be just right for your situation.
When the Subtractive/Additive slider is set to Subtractive, this controls the density of the composite,
similar to Blend.
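The two composite styles can be sketched per pixel as follows, with the slider value blending between the two formulas (a simplified illustration, not Fusion's exact implementation):

```python
def merge(fg, fg_a, bg, subtractive_additive=1.0):
    # slider = 1.0 -> Additive (premultiplied fg):  fg + bg * (1 - fg_a)
    # slider = 0.0 -> Subtractive (straight fg):    fg * fg_a + bg * (1 - fg_a)
    # intermediate values blend the foreground term between the two
    fg_term = fg * (subtractive_additive + fg_a * (1.0 - subtractive_additive))
    return fg_term + bg * (1.0 - fg_a)
```

This makes the edge behavior described above visible: with a semitransparent foreground pixel, the Subtractive result is darker because the foreground is multiplied by its own Alpha a second time.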
Burn In
The Burn In control adjusts the amount of Alpha used to darken the objects that trail under other
objects, without affecting the amount of foreground objects added. At 0.0, the blending behaves like
a straight Alpha blend. At 1.0, the objects in the front are effectively added onto the objects in the
back (after Alpha multiplication if in Subtractive mode). This gives the effect of the foreground objects
brightening the objects in the back, as with Alpha Gain. In fact, for Additive blends, increasing the
Burn In gives an identical result to decreasing Alpha Gain.
Merge Under
When enabled, the current image is placed under the generated trail, rather than the usual over-top
operation. The layer order of the trailing elements is also reversed, making the last trail the topmost layer.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
TV [TV]
The TV node
TV Node Introduction
The TV node is a simple node designed to mimic some of the typical flaws seen in analog television
broadcasts and screens. This Fusion-specific node is mostly obsolete when using DaVinci Resolve
because of the more advanced Analog Damage ResolveFX.
— Input: The orange input is used for the primary 2D image that gets the TV distortion applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the area where the TV
effect appears. An effects mask is applied to the tool after the tool is processed.
Inspector
TV node controls
Controls Tab
The Controls tab is the first of three tabs used to customize the analog TV distortion. The Controls tab
modifies the scan lines and image distortion of the effect.
Scan Lines
This slider is used to emulate an interlaced look by dropping lines out of the image. Dropped lines are
set to black with a transparent Alpha. A value of 1 (the default) drops every second line. A value of 2
shows one line, then drops the next two, and repeats. A value of 0 turns the effect off.
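The line-dropping pattern can be sketched as follows (an illustration of the pattern only, not Fusion's renderer):

```python
def kept_lines(height, scan_lines):
    # scan_lines = n keeps one line, then drops the next n, repeating;
    # 0 disables the effect so every line is kept
    if scan_lines == 0:
        return list(range(height))
    return [y for y in range(height) if y % (scan_lines + 1) == 0]
```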
Vertical
Use this slider to apply a simple Vertical offset to the image.
Skew
This slider is used to apply a diagonal offset to the image. Positive values skew the image to the
top left. Negative values skew the image to the top right. Pixels pushed off frame wrap around and
reappear on the other side of the image.
Amplitude
The Amplitude slider can be used to introduce smooth sine wave-type deformation to the edges of
the image. Higher values increase the intensity of the deformation. Use the Frequency control to
determine how often the distortion is repeated.
Frequency
The Frequency slider sets the frequency of the sine wave used to produce distortion along the edges
of the image when the Amplitude control is greater than 0.
Offset
Use Offset to adjust the position of the sine wave, causing the deformation applied to the image via
the Amplitude and Frequency controls to shift across the image.
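Together, Amplitude, Frequency, and Offset describe a sine-based horizontal displacement per row of the image. A conceptual sketch (assuming normalized row coordinates; the exact units Fusion uses are not documented here):

```python
import math

def row_displacement(y, amplitude, frequency, offset):
    # horizontal shift for row y (normalized 0..1); animating offset slides
    # the sine wave so the deformation travels across the image
    return amplitude * math.sin(2.0 * math.pi * (frequency * y + offset))
```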
Noise Tab
The Noise tab is the second of three tabs used to customize the analog TV distortion. The Noise tab
modifies the noise in the image to simulate a weak analog antenna signal.
Power
Increase the value of this slider above 0 to introduce noise into the image. The higher the value, the
stronger the noise.
Size
Use this slider to scale the noise map larger.
Random
If this thumbwheel control is set to 0, the noise map is static. Change the value over time to cause the
static to change from frame to frame.
Bar Strength
At the default value of 0, no bar is drawn. The higher the value, the darker the area covered by the
bar becomes.
Bar Size
Increase the value of this slider to make the bar taller.
Bar Offset
Animate this control to scroll the bar across the screen.
Common Controls
Settings Tab
The Settings tab controls are common to all Effect nodes, so their descriptions can be found in the
following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Effects category. The Settings
controls are even found on third-party Effects-type plugin tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options, which are
also covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Commonly, this causes the tool to skip processing entirely, copying the input straight to the output.
Red/Green/Blue/Alpha Channel Selector
These four buttons limit the effect of the tool to the selected color channels. For example, if the red
button on a Blur tool is deselected, the blur is first applied to the image, and then the red channel
from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to skip
processing that channel entirely. Tools that do this possess a set of identical RGBA buttons on the
Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs are identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on coverage and background channels, see Chapter 18, “Understanding Image
Channels,” in the Fusion Reference Manual.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is mostly important for nodes like Blur, which may require samples from portions of the image outside
the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition.
If the upstream DoD is smaller than the frame, the remaining area in the frame is treated as
black/transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping. Any data required
to process the node’s effect that would usually be outside the upstream DoD is treated as
black/transparent.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
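One way to picture how Quality, Shutter Angle, and Center Bias interact is to compute the times at which the image would be sampled along the predicted motion. This is a hypothetical model for illustration only; Fusion's actual sampling is internal:

```python
def sample_times(quality, shutter_angle, center_bias=0.0):
    # A 360-degree shutter spans one full frame of motion. Quality q yields
    # q samples to either side of the object's actual position, and Center
    # Bias shifts the whole sample window to create motion-trail looks.
    span = shutter_angle / 360.0
    n = 2 * quality
    center = center_bias * span / 2.0
    return [center + span * (i / (n - 1) - 0.5) for i in range(n)]
```

With quality 2 and a 360-degree shutter, the samples stretch half a frame to either side of the object's position; a nonzero Center Bias slides that window toward one side, which is what produces a trail.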
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Film Nodes
This chapter details the Film nodes in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Cineon Log [Log] .......................................................................................................... 1059
Inputs
There are two inputs on the Cineon Log node: one for the log image and one for the effects mask.
— Input: The orange input is used for the primary 2D image that gets the highlight applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input restricts the log
conversion to be within the pixels of the mask. An effects mask is applied to the tool after the tool
is processed.
Controls Tab
The Controls tab includes settings for converting from log gamma to linear or from linear to log. You
first select the Mode and then the Log Type. For instance, choose Log to Lin from the Mode menu,
and then select BMD Film if you are compositing with a RAW clip from a Blackmagic Design camera.
Those settings output a linear image ready for compositing.
Depth
The Depth menu is used to select the color depth used to process the input image. The default option
is Auto. Auto determines the color depth based on the file format loaded. For example, JPEG files
automatically process at 8 bit because the JPEG file format does not store color depths greater than
8. Blackmagic RAW files load at Float, etc. If the color depth of the format is undetermined, the default
depth defined in the Frame Format preferences is used.
Mode
The Mode menu offers two options: one for converting log images to linear and one for converting
linear images to logarithmic.
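As a point of reference, the classic Cineon 10-bit log-to-linear conversion looks like this. This is the common textbook form with its default black and white points; the exact math Fusion applies depends on the selected Log Type and its settings:

```python
def cineon_log_to_lin(code, black=95, white=685, gamma=0.6):
    # maps a 10-bit Cineon code value to linear light, with the white
    # point mapping to 1.0 and the black point mapping to 0.0
    gain = 1.0 / (1.0 - 10.0 ** ((black - white) * 0.002 / gamma))
    offset = gain - 1.0
    return gain * 10.0 ** ((code - white) * 0.002 / gamma) - offset
```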
Log Type
The Log Type menu allows you to select the source of the file. Typically, you select the camera used to
create the image, although the Josh Pines option is specific to film scan workflows. This menu contains
the following camera log types:
Lock RGB
When enabled, the settings in this tab affect all color channels equally.
Disable this control to convert the red, green, and blue channels of the image using separate settings
for each channel.
When processing in floating-point color space, both negative and high out-of-range values are
preserved. When using 16-bit or 8-bit mode, the out-of-range values are clipped.
Applying a soft clip of any value other than 1 causes the node to process at 16-bit integer, eliminating
all out-of-range values that do not fit within the soft clip.
Common Controls
Settings Tab
The Settings tab controls are common to all Film nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Black Rolloff
Since a mathematical log() operation on a value of zero or lower results in invalid values,
Fusion clips values below 1e-38 (a decimal point followed by 37 zeros and then a 1) to 0 to ensure correct results. This
is almost never an issue, since values that small have no visual impact on an image. To see
such tiny values, you would have to add three Brightness Contrast nodes, each with a gain
set to 1,000,000. Even then, the values would hover very close to zero.
We have seen processes where, instead of cropping these minimal values, they are instead
scaled, so that values between 0.0 and 1e-16 are remapped to the range 1e-18 to 1e-16. The
idea is to crush the majority of the visual range in a float image into values very near to
zero, then expand them again, forcing a gentle ramp in the extreme black values. Should
you find yourself facing a color pipeline using this process, here is how you can mimic it
with the help of a Custom node.
can mimic it with the help of a Custom node.
The process involves converting the log image to linear with a very small gamma and
a wider than normal black level to white level (e.g., conversion gamma of 0.6, black of
10, white of 1010). This crushes most of the image’s range into very small values. This is
followed by a Custom node (described below), and then by a linear to log conversion that
reverses the process but uses a slightly higher black level. The difference between the black
levels defines the falloff range.
The Custom node should use the following equation in the red, green, and blue
expressions:
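The original expression is not reproduced in this extract. As a sketch of the scaling it describes, written in Python rather than Fusion's expression syntax (values between 0.0 and 1e-16 remapped into the 1e-18 to 1e-16 range, everything else passed through):

```python
def black_rolloff(c):
    # remap tiny values onto a gentle ramp instead of cropping them;
    # values at or above 1e-16 pass through unchanged
    if c < 1e-16:
        return 1e-18 + (c / 1e-16) * (1e-16 - 1e-18)
    return c
```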
Falloff Comparison
Inputs
There are two inputs on the Film Grain node: one for the image and one for the effects mask.
— Input: The orange input is used for the primary 2D image that gets the grain applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the grain to be within
the pixels of the mask. An effects mask is applied to the tool after the tool is processed.
Inspector
Complexity
The Complexity setting indicates the number of “layers” of grain applied to the image. With a
complexity of 1, only one grain layer is calculated and applied to the image. When complexity is set
to 4, the node calculates four separate grain layers and applies the mean combined result of each
pass to the final image. Higher complexities produce visually more sophisticated results, without the
apparent regularity often perceivable in digitally-produced grain.
Alpha Multiply
When the Alpha Multiply checkbox is enabled, the Film Grain node multiplies its results by the source
image’s Alpha channel. This is necessary when working with post-multiplied images to ensure that the
grain does not affect areas of the image where the Alpha is 0.0 (transparent).
NOTE: Since it is impossible to say what the final value of semitransparent pixels in the
image will be until after they are composited with their background, you should avoid applying
log-processed grain to the elements until after they have been composited. This ensures
that the strength of the grain is accurate.
Log Processing
When this checkbox is enabled (default), the grain applied to the image has its intensity applied
nonlinearly to match the grain profile of most film. Roughly speaking, the intensity of the grain
increases exponentially from black to white. When this checkbox is disabled, the grain is applied
uniformly, regardless of the brightness of the affected pixel.
One of the primary features of grain in film is that the appearance of the grain varies radically with
the exposure, so that there appears to be minimal grain present in the blacks, with the amount and
deviation of the grain increasing as the pixel’s exposure increases. In a film negative, the darkest
portions of the developed image appear entirely opaque, and this obscures the grain. As the negative
becomes progressively clearer, more of the grain becomes evident in the result. Chemical differences
in the R, G, and B layers’ response to light also cause each color component of the film to present a
different grain profile, typically with the blue channel presenting the most significant amount of grain.
As a result, an essential control in the Film Grain node is the Log Processing checkbox, which should
be enabled when matching film, and disabled when working with images that require a more linear
grain response. Having this checkbox enabled closely mimics the results of preceding the old Grain
node with a Linear to Log conversion and following with a Log to Linear conversion immediately after.
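That lin-to-log, grain, log-to-lin sandwich can be approximated as follows. This is a conceptual sketch using a plain base-10 log as a stand-in for the actual conversions; it only illustrates why log-space grain is more visible at higher exposures:

```python
import math

def grain_log_processed(p, noise, eps=1e-4):
    # apply the grain deviation in a log representation so its visible
    # amplitude grows with exposure: small in the blacks, larger in the whites
    logp = math.log10(max(p, eps))
    return 10.0 ** (logp + noise)
```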
Seed
The Seed slider and Reseed button are presented whenever a Fusion node relies on a random result.
Two nodes with the same seed values produce the same random results. Click on the Reseed button
to randomly select a new seed value, or adjust the slider to select a new seed value manually.
Time Lock
Enabling Time Lock stops the random seed from generating new grain on every frame.
Size
The grain size is calculated relative to the size of a pixel. Consequently, changing the resolution of the
image does not impact the relative appearance of the grain. The default grain size of 1.0 produces
grain kernels that cover roughly 2 pixels.
Strength
Grain is expressed as a variation from the original color of a pixel. The stronger the grain’s strength,
the wider the possible variation from the original pixel value. For example, given a pixel with an
original value of p, and a Grain node with complexity = 1 size = 1; roughness = 0; log processing = off;
the grain produces an output value of p +/- strength. In other words, a pixel with a value of 0.5 with a
grain strength of 0.02 could end up with a final value between 0.48 and 0.52.
Once again, that’s a slight oversimplification, especially when the complexity exceeds 1. Enabling the
Log Processing checkbox also causes that variation to be affected such that there is less variation in
the blacks and more variation in the whites of the image.
NOTE: When visualizing the effect of the grain on the image, the more mathematically
inclined may find it helps to picture a sine wave, where each lobe of the sine wave covers
1 pixel when the Grain Size is 1.0. The Grain Size controls the frequency of the sine
wave, while the Grain Strength controls its amplitude. Again, this is something of an
oversimplification.
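The simplified single-layer model described above (output = p +/- strength) can be sketched as follows (an illustration of the bounds only, not Fusion's grain generator):

```python
import random

def grain_pixel(p, strength, rng):
    # single grain layer with log processing off: the output deviates
    # from the original value by at most +/- strength
    return p + (rng.random() * 2.0 - 1.0) * strength
```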
Roughness
The Roughness slider applies low frequency variation to give the impression of clumping in the grain.
Try setting the roughness to 0, and observe that the grain produced has a very even luminance
variation across the whole image. Increase the roughness to 1.0 and observe the presence of “cellular”
differences in the luminance variation.
Offset
The Offset control helps to match the intensity of the grain in the deep blacks by offsetting the values
before the intensity (strength) of the grain is calculated. So an offset of 0.1 would cause a pixel with a
value of 0.1 to receive grain as if its value was 0.2.
Common Controls
Settings Tab
The Settings tab controls are common to all Film nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Grain [Grn]
— Input: The orange input is used for the primary 2D image that gets the grain applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the grain to be within
the pixels of the mask. An effects mask is applied to the tool after the tool is processed.
A Grain node used to add grain back for a more realistic composite
Inspector
Grain controls
Controls Tab
The Controls tab includes all the parameters for modifying the appearance of the grain.
Power
This slider determines the strength of the grain. A higher value increases visibility, making the grain
more prevalent.
Grain Softness
This slider controls the blurriness or fuzziness of the grain. Smaller values make the grain
sharper or coarser.
Grain Size
This slider determines the size of the grain particles. Higher values increase the grain size.
Grain Spacing
This slider determines the density or amount of grain per area. Higher values cause the grain to
appear more spaced out.
Aspect Ratio
This slider adjusts the aspect of the grain so that it can be matched with anamorphic images.
Alpha-Multiply
When enabled, this checkbox multiplies the image by the Alpha, clearing the black areas of any
grain effect.
Spread Tab
The Spread tab uses curves for the red, green, and blue channels to control the amount of grain over
each channel’s tonal range.
RGB Checkboxes
The red, green, and blue checkboxes enable each channel’s custom curve, allowing you to control how
much grain appears in each channel. To mimic usual film responses, more grain would appear in the
blue channel than the red, and the green channel would receive the least. Right-clicking in the spline
area displays a contextual menu containing options related to modifying spline curves.
For more information on the LUT Editor’s controls see Chapter 45, “LUT Nodes,” in the Fusion
Reference Manual.
In and Out
This control provides direct editing of points on the curve by setting In/Out point values.
Bell-Shaped Spread
Setting a bell shape is often a good starting point to create a more realistic-looking grain.
Here we have a non-uniform distribution with different amounts of grain in the red, green,
and blue channels.
In both examples, the grain’s power has been exaggerated to show the effect a bit better.
Common Controls
Settings Tab
The Settings tab controls are common to all Film nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
There are two Inputs on the Light Trim node: one for the 2D image and one for the effects mask.
— Input: The orange input is used for the primary Log 2D image that gets its exposure adjusted.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the exposure
change to be within the pixels of the mask. An effects mask is applied to the tool after the tool
is processed.
Inspector
Lock RGBA
When selected, the Lock RGBA control collapses control of all image channels into one slider.
This selection is on by default. To manipulate the various color channels independently, deselect
this checkbox.
Trim
This slider shifts the exposure in printer points, as used in film optical and lab printing. 8 points
equals one stop of exposure.
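Since 8 points equal one stop, and one stop is a doubling of linear light, the equivalent linear gain can be sketched as (an arithmetic illustration, assuming the adjustment is expressed as linear gain):

```python
def trim_gain(points):
    # 8 printer points equal one stop, so the linear gain is 2^(points / 8)
    return 2.0 ** (points / 8.0)
```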
Common Controls
Settings Tab
The Settings tab controls are common to all Film nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
To use this node, view the image and look at the red channel. Then increase the Red Softness until the
grain appears to be gone. Next, increase the sharpness until the detail reappears, but stop before the
grain reappears. Repeat for the green and blue channels.
Inputs
There are two inputs on the Remove Noise node: one for the 2D image and one for the effects mask.
— Input: The orange input is used for the primary 2D image that gets noise removed.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the noise removal
change to be within the pixels of the mask. An effects mask is applied to the tool after the tool
is processed.
Inspector
Controls Tab
The Controls tab switches the noise removal between two methods: Color and Chroma. When the
Method is set to Color, the Controls tab adjusts the amount of blur and sharpness individually for each
RGB channel. When the Method is set to Chroma, the blur and sharpness is adjusted based on Luma
and Chroma controls.
Method
This menu is used to choose whether the node processes color using the Color or Chroma method.
This also gives you a different set of control sliders.
Lock
This checkbox links the Softness and Detail sliders of each channel together.
Common Controls
Settings Tab
The Settings tab controls are common to all Film nodes, so their descriptions can be found in the
following “The Common Controls” section.
Inspector
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Commonly, this causes the tool to skip processing entirely, copying the input straight to the output.
Red/Green/Blue/Alpha Channel Selector
These four buttons limit the effect of the tool to the selected color channels. For example, if the red
button on a Blur tool is deselected, the blur is first applied to the image, and then the red channel
from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels not included in the mask (i.e., set to 0) to become black/
transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels,” in the Fusion Reference Manual.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is mostly important for nodes like Blur, which may require samples from portions of the image outside
the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition to use
the full frame of the image, effectively ignoring the current domain of definition. If the upstream
DoD is smaller than the frame, the remaining area in the frame is treated as black/transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Filter Nodes
This chapter details the Filter nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Create Bump Map [CBu]��������������������������������������������������������������������������������������� 1077
Inputs
The Create Bump Map node includes two inputs: one for the main image and the other for an effect
mask to limit the area where the bump map is created.
— Input: The orange input takes the RGBA channels from an image to calculate the bump map.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the creation of the bump map to only those pixels within the mask. An effects mask is applied to
the tool after the tool is processed.
A Create Bump Map node produces a bump map as an RGB image for further image processing
Controls Tab
The Controls tab contains all parameters for creating the bump map.
Filter Size
This menu sets the filter size for creating the bump map. You can set the filter size at 3 x 3 pixels or
5 x 5 pixels, thus determining the radius of the pixels sampled. The larger the size, the more time it
takes to render.
Height Source
The Height Source menu selects the channel for extracting the grayscale information.
Clamp Normal.Z
This slider clips the lower values of the blue channel in the resulting bump texture.
Wrap Mode
This menu determines how the image wraps at the borders, so the filter produces a correct result
when using seamless tiling textures.
Height Scale
The Height Scale control modifies the contrast of the resulting values in the bump map. Increasing this
value yields a more visible bump map.
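The way these controls interact can be sketched as a central-difference gradient computation over the height source. This is an illustrative approximation only, not Fusion's actual filter; the function name, the normal encoding, and the way Clamp Normal.Z is applied here are assumptions.

```python
# Illustrative sketch: derive a bump texture from a grayscale height
# map using central differences. Not Fusion's actual algorithm; the
# normal encoding and clamp handling are assumptions.

def create_bump_map(height, height_scale=1.0, clamp_z=0.0):
    """height: 2D list of grayscale values; returns (nx, ny, nz) per pixel."""
    h, w = len(height), len(height[0])
    bump = [[(0.0, 0.0, 1.0)] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Central differences approximate the surface slope;
            # Height Scale exaggerates or flattens it (contrast).
            dx = (height[y][x + 1] - height[y][x - 1]) * height_scale
            dy = (height[y + 1][x] - height[y - 1][x]) * height_scale
            length = (dx * dx + dy * dy + 1.0) ** 0.5
            # Clamp Normal.Z clips low blue-channel (Z) values.
            nz = max(1.0 / length, clamp_z)
            bump[y][x] = (-dx / length, -dy / length, nz)
    return bump
```

A flat height map yields straight-up normals (0, 0, 1); a slope tilts them, and raising the height scale tilts them further, which is why the bump becomes more visible.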
NOTE: The definitions below are provided to clarify some of the terminology used in the
Create Bump Map node and other similar types of nodes.
The Custom filter uses an array (or grid) of either 3 x 3, 5 x 5, or 7 x 7 values. (Note: The array in the
Inspector always shows a 7 x 7 grid; however, setting the Matrix Size to 3 x 3 uses only the center
9 cells.) The center of the array represents the current pixel, and entries nearby represent adjacent
pixels. A value of 1 applies the full value of the pixel to the filter. A value of 0 ignores the pixel’s value.
A value greater than 1 multiplies the pixel’s effect on the result. Negative values can also be entered,
where the value of the pixel is subtracted from the average. Only integer values can be entered;
0.x is not valid.
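The weighting described above can be sketched as a small convolution. This is a minimal illustration, assuming the weighted sum is divided by the sum of the weights when that sum is nonzero; function and parameter names are illustrative, not Fusion's API.

```python
# Sketch of the Custom Filter's array weighting as a convolution.
# Assumption: the result is divided by the sum of the weights when
# that sum is nonzero (a simple form of normalization).

def custom_filter(img, matrix):
    n = len(matrix)      # 3, 5, or 7
    r = n // 2           # neighborhood radius around the current pixel
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            total = 0.0
            weight = 0
            for j in range(n):
                for i in range(n):
                    # The center cell is the current pixel; neighbors
                    # contribute according to their integer weights.
                    total += img[y + j - r][x + i - r] * matrix[j][i]
                    weight += matrix[j][i]
            out[y][x] = total / weight if weight else total
    return out
```

With the identity matrix (a single 1 in the center), every pixel keeps its own value; the emboss matrix shown later in this section sums to 1, so it needs no extra normalization.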
Inputs
The Custom Filter node includes two inputs: one for the main image and the other for an effect mask
to limit the area where the custom filter is applied.
— Input: The orange input takes the RGBA channels from an image to calculate the custom filter.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the custom filter to only those pixels within the mask. An effects mask is applied to the tool after
the tool is processed.
Inspector
Controls Tab
The Controls tab is used to set the filter size and then use the filter matrix to enter convolution
filter values.
The Controls tab also provides a set of RGBA channel checkboxes that select which channels the node
processes. This is not the same as the RGBA checkboxes found under the Common Controls: the node
takes the Controls tab checkboxes into account before it processes. Deselecting a channel causes the
node to skip that channel when processing, speeding up the rendering of the effect. In contrast, the
channel controls under the Common Controls tab are applied after the node has processed.
Matrix Size
This menu is used to set the size of the filter at 3 x 3 pixels, 5 x 5 pixels, or 7 x 7 pixels, thus setting the
radius of the pixels sampled. The larger the size, the more time it takes to render.
Update Lock
When this control is selected, Fusion does not render the filter. This is useful for setting up each value
of the filter, and then turning off Update Lock and rendering the filter.
The default Matrix size is 3 x 3. Only the pixels immediately adjacent to the current pixel are analyzed.
If a larger Matrix size is set, more of the text boxes in the grid are enabled for input.
Normalize
This controls the amount of filter normalization that is applied to the result. Zero gives a normalized
image. Positive values brighten or raise the level of the filter result. Negative values darken or lower
the level.
Floor Level
This adds or subtracts a minimum, or Floor Level, to the result of the filtered image. Zero does not add
anything to the image. Positive values add to the filtered image, and negative values subtract from
the image.
Examples
Original Image Example
...has zero effect from its neighboring pixels, and the resulting image would be unchanged.
Original image
Softening Example
Emboss Example
The example below subtracts five times the value from the top left and adds five times
the value from the lower right.
-5 0 0
0 1 0
0 0 5
If parts of the processed image are very smooth in color, the neighboring values are
very similar.
In parts of the image where the pixels are different (e.g., an edge), the results are
different and tend to highlight or emboss edges in the image.
Exposure Example
Relief Example
... and adjusting Floor Level to a positive value creates a Relief filter.
Common Controls
Settings Tab
The Settings tab controls are common to all Filter nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
The Erode Dilate node includes two inputs: one for the main image and the other for an effect mask to
limit the area where the erode or dilate is applied.
— Input: The orange input takes the RGBA channels from an image to calculate the custom filter.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the erode or dilate to only those pixels within the mask. An effects mask is applied to the tool after
the tool is processed.
Controls Tab
The Controls tab includes the main Amount slider that determines whether you are performing an
erode by entering a negative value or a dilate by entering a positive value.
The Controls tab also provides a set of RGBA channel checkboxes that select which channels the node
processes. This is not the same as the RGBA checkboxes found under the Common Controls: the node
takes the Controls tab checkboxes into account before it processes. Deselecting a channel causes the
node to skip that channel when processing, speeding up the rendering of the effect. In contrast, the
channel controls under the Common Controls tab are applied after the node has processed.
Lock X/Y
Deselecting the Lock X/Y checkbox separates the Amount slider into Amount X and Amount Y sliders,
allowing a different value for the effect on each axis.
Amount
A negative value for Amount causes the image to erode. Eroding simulates the effect of an
underexposed frame, shrinking the image by growing darker areas of the image so that they eat away
at brighter regions.
A positive value for Amount causes the image to dilate, similar to the effect of overexposing a camera.
Regions of high luminance and brightness grow, eating away at the darker regions of the image. Both
techniques eradicate fine detail in the image and tend to posterize fine gradients.
The Amount slider scale is based on the input image width. An amount value of 1 = image width. So, if
you want to erode or dilate by exactly 1 pixel on an HD image, you would enter 1/1920, or 0.00052083.
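Because the slider is normalized to image width, converting a pixel count to an Amount value is simple division. The helper below is illustrative, not part of Fusion.

```python
# Amount = pixels / image width (the slider's normalized scale).

def erode_dilate_amount(pixels, image_width):
    return pixels / image_width

# One pixel of erode/dilate on an HD-width (1920 pixel) image:
amount = erode_dilate_amount(1, 1920)   # -> 0.00052083...
```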
Common Controls
Settings Tab
The Settings tab controls are common to all Filter nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
The Filter node includes two inputs: one for the main image and the other for an effect mask to limit
the area where the filter is applied.
— Input: The orange input is used for the primary 2D image that gets the filter applied.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the filter to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Filter controls
Controls Tab
The Controls tab is used to set the filter type, the channels the filter is applied to, and the amount it
blends with the original image.
Filter Type
The Filter Type menu provides a selection of filter types described below.
— Relief: This appears to press the image into metal, such as an image on a coin. The image appears
to be bumped and overlaid on gray.
— Emboss Over: Embosses the image over the top of itself, with adjustable highlight and shadow
height and direction.
— Noise: Uniformly adds noise to images. This is often useful for 3D computer-generated images
that need to be composited with live action, as it reduces the squeaky-clean look that is inherent
in rendered images. The frame number acts as the random generator seed. Therefore, the effect
is different on each frame and is repeatable.
— Defocus: This filter type blurs the image.
— Sobel: Sobel is an advanced edge detection filter. Used in conjunction with a Glow filter, it creates
impressive neon light effects from live-action or 3D-rendered images.
— Laplacian: Laplacian is a very sensitive edge detection filter that produces a finer edge than the
Sobel filter.
— Grain: Adds noise to images similar to the grain of film (mostly in the midrange). This is useful
for 3D computer-generated images that need to be composited with live action as it reduces the
squeaky-clean look that is inherent in rendered images. The frame number acts as the random
generator seed. Therefore, the effect is different on each frame and is repeatable.
Power
Values range from 1 to 10. Power proportionately increases the amount by which the selected filter
affects the image. This does not apply to the Sobel or Laplacian filter type.
Median
Depending on which Filter Type is selected, the Median control may appear. It varies the Median filter’s
effect. A value of 0.5 produces the true median result, as it finds the middle values. A value of 0.0 finds
the minimums, and 1.0 finds the maximums. This applies to the Median setting only.
Seed
This control is visible only when applying the Grain or Noise filter types. The Seed slider can be used
to ensure that the random elements of the effect are seeded with a consistent value. The randomizer
always produces the same result, given the same seed value.
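The behavior described here, where the same seed always reproduces the same random values, can be illustrated with any seeded generator; Python's standard library is used below purely as an example.

```python
import random

# Seeding twice with the same value reproduces the same "random" run.
random.seed(7)
first = [random.random() for _ in range(3)]

random.seed(7)
second = [random.random() for _ in range(3)]

assert first == second   # identical results from an identical seed
```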
Animated
This control is visible only when applying the Grain or Noise filter types. Select the checkbox to cause
the noise or grain to change from frame to frame. To produce static noise, deselect this checkbox.
Common Controls
Settings Tab
The Settings tab controls are common to all Filter nodes, so their descriptions can be found in
“The Common Controls” section at the end of this chapter.
Inputs
The Rank Filter node includes two inputs: one for the main image and the other for an effect mask to
limit the area where the filter is applied.
— Input: The orange input is used for the primary 2D image that gets the Rank filter applied.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the rank filter to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Inspector
Rank Filter controls
Controls Tab
The Controls tab is used to set the size and rank value of the filter.
Size
This control determines the size in pixels of the area sampled by the filter. A value of 1 samples 1 pixel
in each direction, adjacent to the center pixel. This produces a total of 9 pixels, including the center
sampled pixel. Larger values sample from a larger area.
Low Size settings are excellent for removing salt and pepper style noise, while larger Size settings
produce an effect similar to watercolor paintings.
Rank
The Rank slider determines which value from the sampled pixels is chosen. A value of 0 is the lowest
value (darkest pixel), and 1 is the highest value (brightest pixel).
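A rank filter can be sketched as sorting each pixel's neighborhood and picking a value by position. This is an illustration of the concept, assuming Size is the sample radius and Rank selects an index in the sorted list; the names are not Fusion's API.

```python
# Sketch of a rank filter on one channel: sort the neighborhood and
# pick by rank (0 = darkest, 0.5 = median, 1 = brightest).

def rank_filter(img, size=1, rank=0.5):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(size, h - size):
        for x in range(size, w - size):
            samples = sorted(
                img[y + j][x + i]
                for j in range(-size, size + 1)
                for i in range(-size, size + 1)
            )
            # Size 1 samples 9 pixels; rank 0.5 picks index 4, the
            # true median, which suppresses salt-and-pepper noise.
            out[y][x] = samples[round(rank * (len(samples) - 1))]
    return out
```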
Example
Below is a before and after example of a Rank filter with Size set to 7 and a Rank of 0.7 to
create a watercolor effect.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Filter category. The Settings
controls are even found on third-party filter-type plugin tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options, which are
also covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
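Per pixel, the Blend control behaves like a linear mix between the incoming and processed images. The sketch below models that behavior; it is not Fusion's implementation.

```python
# Blend as a per-channel linear interpolation between the tool's
# input and its processed output.

def blend(original, processed, blend_amount):
    return original * (1.0 - blend_amount) + processed * blend_amount

blend(0.2, 0.8, 0.0)   # -> 0.2, identical to the incoming image
blend(0.2, 0.8, 1.0)   # -> 0.8, the fully processed output
```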
A set of RGBA channel buttons in the Settings tab determines which channels the tool's result is
applied to. For example, if the red button on a Blur tool is deselected, the blur is first applied to the
image, and then the red channel from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to skip
processing that channel entirely. Tools that do this generally possess a set of identical RGBA buttons
on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs are
identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
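The effect is a straightforward per-pixel multiply; the sketch below is illustrative only.

```python
# Multiply by Mask: each RGB channel is scaled by the mask value, so
# pixels outside the mask (mask = 0) become black/transparent.

def multiply_by_mask(rgb, mask):
    return tuple(channel * mask for channel in rgb)

multiply_by_mask((0.8, 0.4, 0.2), 0.0)   # -> (0.0, 0.0, 0.0)
```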
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels,” in the Fusion Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Flow Nodes
This chapter details the Sticky Note and Underlay
features available in Fusion.
The abbreviations next to each feature name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Sticky Note [Nte]���������������������������������������������������������������������������������������������������� 1094
Usage��������������������������������������������������������������������������������������������������������������������������� 1094
Usage���������������������������������������������������������������������������������������������������������������������������� 1095
Usage
To create a Sticky Note, click in an empty area of the Node Editor where you want a Sticky Note to
appear. Then, from the Effects Library, click the Sticky Note effect located in the Tools > Flow category
or press Shift-Spacebar and search for the Sticky Note in the Select Tool window.
Like Groups, Sticky Notes are created in a smaller, collapsed form. They can be expanded by double-
clicking on them. Once expanded, they can be resized using any side or corner of the note or moved
by dragging on the name header. To collapse the Sticky Note again, click the icon in the top-left corner.
To rename, delete, copy, or change the color of the note, right-click over the note and choose from the
contextual menu. Using this menu, you can also lock the note to prevent editing.
To edit the text in a Sticky Note, first expand it by double-clicking anywhere on the note, and then click
below its title bar. If the note is not locked, you can edit the text.
The Underlay
Underlay Introduction
Underlays are a convenient method of visually organizing areas of a composition. As with Groups,
Underlays can improve the readability of a comp by separating it into labeled functional blocks. While
Groups are designed to streamline the look of a comp by collapsing complex layers down to single
nodes, Underlays highlight, rather than hide, and do not restrict outside connections.
Usage
As with Sticky Notes, an Underlay can be added to a comp by selecting it from the Flow category in
the Effects Library or searching for it in the Select Tool window. The Underlay is added to the Node
Editor with its title bar centered on the last-clicked position.
Underlays can be resized using any side or corner. Resizing an Underlay does not affect the nodes it contains.
Underlays can also be used as simple selection groups. Activating an Underlay, by clicking its title, will
select all the tools contained wholly within it as well, allowing the entire set to be moved, duplicated,
passed through, and so on.
To rename an Underlay, first ensure that nodes contained within the Underlay are not selected. Then,
Option-click on the Underlay title to select the Underlay without selecting the nodes it contains. Once
selected, right-click over the title and choose Rename. Underlays can be assigned a color using the
same right-click contextual menu.
Flow Organizational
Nodes
This chapter details the Groups, Macro, and Pipe Router nodes,
which are designed to help organize your compositions, making the
node tree easier to see and understand.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Groups������������������������������������������������������������������������������������������������������������������������� 1097
Usage���������������������������������������������������������������������������������������������������������������������������� 1097
Macro��������������������������������������������������������������������������������������������������������������������������� 1098
Usage���������������������������������������������������������������������������������������������������������������������������� 1098
Usage���������������������������������������������������������������������������������������������������������������������������� 1100
Router��������������������������������������������������������������������������������������������������������������������������� 1100
Groups Introduction
Groups are used to keep complex node trees organized. You can select any number of nodes in the
node tree and then group them to create a single node icon in the Node Editor. Groups are non-
destructive and can be opened at any time.
Usage
— To group nodes, select them in the Node Editor, and then right-click over any of the selected
nodes and choose Group from the contextual menu.
— To edit the individual nodes in a group, right-click and choose Expand Group from the contextual
menu. All individual nodes contained in the group are displayed in a floating node tree window.
When opened, groups hover over existing elements, allowing editing of the enclosed nodes.
— To remove or decompose a group and retain the individual nodes, right-click the group and
choose Ungroup.
Macro Introduction
Macros can be used to combine multiple nodes and expose a user-definable set of controls.
They are meant as a fast and convenient way of building custom nodes.
Usage
To create a Macro, select the nodes intended for the macro. The order in which the nodes are selected
becomes the order in which they are displayed in the Macro Editor. Right-click on any of the selected
nodes and choose Macro > Create Macro from the contextual menu.
Macro Editor
The Macro Editor allows you to specify and rename the controls that are exposed in the final
macro tool.
In the example below, the tool is named Light_Wrap at the top. The Blur slider for Matte Control 1 is
enabled and renamed to Softness, as it will appear in the Inspector.
To add the macro to your node tree, right-click anywhere on the node tree and select Macro >
[NameOfYourMacro] from the contextual menu.
To save a title macro so it appears in the Edit page Effects Library, save the macro to:
— macOS: Users > UserName > Library > Application Support > Blackmagic Design >
DaVinci Resolve > Fusion > Templates > Edit > Titles
— Windows: C Drive > Users > UserName > AppData > Roaming > Blackmagic Design >
DaVinci Resolve > Support > Fusion > Templates > Edit > Titles
As another example, you could take a single Channel Boolean, set it to Add mode, and make it into a
macro exposing no controls at all, thus creating the equivalent of an Add Mix node like the one that
can be found in programs like Nuke.
Router Introduction
Routers can be used to neatly organize your comps by creating “elbows” in your node tree, so the
connection lines do not overlap nodes, making them easier to understand. Routers do not have any
influence on render times.
Usage
Router
To insert a router along a connection line, Option- or Alt-click on the line. The router can then be
repositioned to arrange the connections as needed.
Although routers have no actual controls, they still can be used to add comments to a comp.
Fuses
This chapter introduces Fuses, which are scriptable plugins that can
be used within Fusion.
Contents
Fuses [Fus]������������������������������������������������������������������������������������������������������������������ 2210
A Fuse node
Fuses Introduction
Fuses are plugins. The difference between a Fuse and an Open FX plugin is that a Fuse is created
using a Lua script. Fuses can be edited within Fusion or DaVinci Resolve, and the changes you make
compile on-the-fly.
Using a Lua script makes it easy for even non-programmers to prototype and develop custom nodes.
A new Fuse can be added to a composition, edited and reloaded, all without having to close the
current composition. They can also be used as modifiers to manipulate parameters, curves, and text
very quickly. ViewShader Fuses can make use of the GPU for faster performance. This makes Fuses
much more convenient than an Open FX plugin that uses Fusion’s OFX SDK. However, this flexibility
comes at a cost. Since a Fuse is compiled on-the-fly, it can be significantly slower than the identical
node created using the Open FX SDK.
As an example, Fuses could generate a mask from the over-exposed areas of an image, or create initial
particle positions based on the XYZ position stored within a text file.
Please contact Blackmagic Design for access to the SDK (Software Developer Kit) documentation.
Installing Fuses
Fuses are installed in the Fusion:\Fuses path map. By default, this folder is located at
Users/User_Name/Library/Application Support/Blackmagic Design/Fusion (or DaVinci Resolve)/Fuses
on macOS, or C:\Users\User_Name\AppData\Roaming\Blackmagic Design\Fusion (or DaVinci Resolve)\
Fuses on Windows. Files must use the extension .fuse, or they will be ignored by Fusion.
NOTE: Any changes made to a Fuse’s script do not immediately affect other copies of the
same Fuse node already added to a composition. To use the updated Fuse script on all
similar Fuses in the composition, either close and reopen the composition, or click on the
Reload button in each Fuse’s Inspector.
When a composition containing a Fuse node is opened, the currently saved version of the Fuse script
is used. The easiest way to ensure that a composition is running the current version of a Fuse is to
close and reopen the composition.
Generator Nodes
This chapter details the Generator nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Background [Bg]������������������������������������������������������������������������������������������������������� 1105
Follower����������������������������������������������������������������������������������������������������������������������� 1134
Inputs
There is one input on the Background node for an effect mask input.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the background color to only those pixels within the mask.
Inspector
Type
This control is used to select the style of background generated by the node. Four selections are
available:
Horizontal/Vertical/Four Corner
When the Type menu is set to Horizontal, Vertical, or Four Corner, two- or four-color swatches are
displayed where the left/right, top/bottom, or four corners of the gradient colors can be set.
Gradient
When the Type menu is set to Gradient, additional controls are displayed where the gradient colors’
direction can be customized.
Gradient Type
This menu selects the form used to draw the gradient. There are six choices:
— Linear: Draws the gradient along a straight line from the starting color stop to the
ending color stop.
— Reflect: Draws the gradient by mirroring the linear gradient on either side of the starting point.
— Square: Draws the gradient by using a square pattern when the starting point is at the
center of the image.
— Cross: Draws the gradient using a cross pattern when the starting point is at the center
of the image.
— Radial: Draws the gradient in a circular pattern when the starting point is at the center
of the image.
— Angle: Draws the gradient in a counterclockwise sweep when the starting point is at the
center of the image.
You can add, move, copy, and delete colors from the gradient using the gradient bar.
To modify one of the colors, select the triangle below the color on the bar.
Interpolation Space
This menu determines what color space is used to calculate the colors between color stops.
Offset
The Offset control is used to offset the position of the gradient relative to the start and end
markers. This control is most useful when used in conjunction with the repeat and ping-pong modes
described below.
Repeat
This menu includes three options used to set the behavior of the gradient when the Offset control
scrolls the gradient past its start and end positions. Selecting Once keeps the color continuous for
offset. Selecting Repeat loops around to the start color when the offset goes beyond the end color.
Selecting Ping-pong repeats the color pattern in reverse.
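One way to picture the three behaviors is as a remapping of the gradient position once the offset pushes it outside the 0 to 1 range. This sketch is an interpretation of the descriptions above, not Fusion's code.

```python
# Remap a gradient position t after Offset pushes it outside 0..1.

def remap(t, mode):
    if mode == "Once":
        return min(max(t, 0.0), 1.0)   # clamp: the end colors continue
    if mode == "Repeat":
        return t % 1.0                  # wrap around to the start color
    if mode == "Ping-pong":
        t = t % 2.0                     # mirror every other cycle
        return 2.0 - t if t > 1.0 else t
```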
Sub-Pixel
The Sub-Pixel menu controls the sub-pixel precision used when the edges of the gradient become
visible in repeat mode, or when the gradient is animated. Higher settings will take significantly longer
to render but are more precise.
Inputs
There is a single input on the Day Sky node for an effect mask to limit the area where the day sky
simulation is applied.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the Day Sky to only those pixels within the mask.
Inspector
Controls Tab
The Controls tab is used to set the location and time of the daylight simulation. This will determine the
overall look that is generated.
Location
The Latitude and Longitude sliders are used to specify the location used to create the Day Sky
simulation.
Turbidity
Turbidity causes light to be scattered and absorbed instead of transmitted in straight lines through
the simulation. Increasing the turbidity will give the sky simulation a murky feeling, as if smoke or
atmospheric haze were present.
Generally, this option should be deselected only if the resulting image will later be color corrected as
part of a floating-point color pipeline.
Exposure
Use this control to select the exposure used for tone mapping.
Advanced Tab
The Advanced tab provides more specific controls over the brightness and width of the different
ranges in the generated sky.
Horizon Brightness
Use this control to adjust the brightness of the horizon relative to the sky.
Luminance Gradient
Use this control to adjust the width of the gradient separating the horizon from the sky.
Backscattered Light
Use this control to increase or decrease the backscatter light in the simulation.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two map inputs on the Fast Noise node allow you to use masks to control the value of the noise
detail and brightness controls for each pixel. These two optional inputs can allow some interesting
and creative effects. There is also a standard effect mask input for limiting the Fast Noise size.
— Noise Detail Map: A soft-edged mask connected to the gray Noise Detail Map input will give a flat
noise map (zero detail) where the mask is black, and full detail where it is white, with intermediate
values smoothly reducing in detail. It is applied before any gradient color mapping. This can
be very helpful for applying maximum noise detail in a specific area, while smoothly falling off
elsewhere.
— Noise Brightness Map: A mask connected to this white input can be used to control the noise
map completely, such as boosting it in certain areas, combining it with other textures, or if Detail
is set to 0, replacing the Perlin Noise map altogether.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the Fast Noise to only those pixels within the mask.
Noise Tab
The Noise tab controls the shape and pattern of the noise for the Fast Noise node.
Discontinuous
Normally, the noise function interpolates between values to create a smooth continuous gradient of
results. Enable this checkbox to create hard discontinuity lines along some of the noise contours. The
result will be a dramatically different effect.
Inverted
Select this checkbox to invert the noise, creating a negative image of the original pattern. This is most
effective when Discontinuous is also enabled.
Center
Use the Center coordinate control to pan and move the noise pattern.
Detail
Increase the value of this slider to produce a greater level of detail in the noise result. Larger values
add more layers of increasingly detailed noise without affecting the overall pattern. High values take
longer to render but can produce a more natural result.
Brightness
This control adjusts the overall brightness of the noise map, before any gradient color mapping is
applied. In Gradient mode, this has a similar effect to the Offset control.
Contrast
This control increases or decreases the overall contrast of the noise map, prior to any gradient
color mapping. It can exaggerate the effect of the noise and widen the range of colors applied in
Gradient mode.
Angle
Use the Angle control to rotate the noise pattern.
Seethe
Adjust this thumbwheel control to interpolate the noise map against a different noise map.
This will cause a crawling shift in the noise, as if it were drifting or flowing. This control must be
animated to affect the noise over time, or you can use the Seethe Rate control below.
Seethe Rate
As with the Seethe control above, the Seethe Rate also causes the noise map to evolve and change.
The Seethe Rate defines the rate at which the noise changes each frame, causing an animated drift in
the noise automatically, without the need for spline animation.
Color Tab
The Color tab allows you to adjust the gradient colors used in the generated noise pattern.
Gradient
The Advanced Gradient control in Fusion is used to provide more control over the color gradient used
with the noise map.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Mandelbrot [Man]
Inputs
The one input on the Mandelbrot node is for an effect mask to limit the area where the fractal noise
is applied.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the fractals to only those pixels within the mask.
Noise Tab
The Noise tab controls the shape and pattern of the noise for the Mandelbrot node.
Position X and Y
This chooses the image’s horizontal and vertical position or seed point.
Zoom
Zooms in or out of the pattern. Every magnification is recalculated, so there is no practical limit
to the zoom.
Escape Limit
Defines a point where the calculation of the iteration is aborted. Low values lead to blurry halos.
Iterations
This determines the number of iterations used to calculate the set. When animated, it simulates the set growing.
Rotation
This rotates the pattern. Every new angle requires recalculation of the image.
Grad Method
Use this control to determine the type of gradation applied at the borders of the pattern.
Continuous Potential
This causes the edges of the pattern to blend to the background color.
Iterations
This causes the edges of the pattern to be solid.
Gradient Curve
This affects the width of the gradation from the pattern to the background color.
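The Escape Limit and Iterations controls map onto the two parameters of the classic escape-time algorithm, and the Continuous Potential grad method corresponds to the standard smooth-coloring formula. A generic sketch, not Fusion's internal code (the smooth term assumes an escape limit of at least 4 so the logarithms stay defined):

```python
import math

def mandelbrot_shade(cx, cy, iterations=64, escape_limit=4.0):
    """Escape-time iteration for the point c = cx + cy*i.
    Returns (escaped, value): `value` uses the continuous-potential
    formula so the border blends smoothly instead of showing hard bands."""
    z = 0j
    c = complex(cx, cy)
    for n in range(iterations):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) > escape_limit:
            # fractional iteration count ("Continuous Potential");
            # the "Iterations" grad method would use the hard integer n instead
            smooth = n + 1 - math.log2(math.log2(abs(z)))
            return True, smooth / iterations
    # never escaped within the iteration budget: treated as inside the set
    return False, 1.0
```

A low escape limit aborts the iteration early for points near the border, which is why low values produce the blurry halos mentioned above.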
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in other generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Plasma [Plas]
Inputs
The one input on the Plasma node is for an effect mask to limit the area where the plasma pattern
is applied.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the plasma to only those pixels within the mask.
Inspector
Circles Tab
The Circles tab controls the shape and pattern generated by the Plasma node.
Scale
The Scale control is used to adjust the size of the pattern created.
Operation
The options in this menu determine the mathematical relationship among the four circles whenever
they intersect.
Circle Type
Select the type of circle to be used.
Circle Center
Reports and changes the position of the circle center.
Circle Scale
Determines the size of the circle to be used for the pattern.
Phase
Phase changes the color phase of the entire image. When animated, this creates psychedelic
color cycles.
R/G/B/A Phases
Changes the phase of the individual color channels and the Alpha. When animated, this creates color
cycling effects.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These
common controls are described in detail at the end of this chapter in “The Common Controls” section.
Text+ [Txt+]
Any TrueType, OpenType, or PostScript Type 1 font installed on the computer can be used to create text.
Support for multibyte and Unicode characters allows text generation in any language, including right
to left and vertically oriented text.
This node generates a 2D image. To produce extruded 3D text with optional beveling, see the
Text 3D node.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the text to only those pixels within the mask.
Inspector
Styled Text
The edit box in this tab is where the text to be created is entered. Any common character can be typed
into this box. The common OS clipboard shortcuts (Command-C or Ctrl-C to copy, Command-X or Ctrl-X
to cut, Command-V or Ctrl-V to paste) will also work; however, right-clicking in the edit box displays a
custom contextual menu. More information on these modifiers can be found at the end of this section.
Font
Two Font menus are used to select the font family and typeface, such as Regular, Bold, and Italic.
Color
Sets the basic fill color of the text. This is the same control displayed in the Shading tab color swatch.
Size
This control is used to increase or decrease the size of the text. This is not like selecting a point size in
a word processor. The size is relative to the width of the image.
Tracking
The Tracking parameter adjusts the uniform spacing between each character of text.
Line Spacing
Line Spacing adjusts the distance between each line of text. This is sometimes called leading in word-
processing applications.
V Anchor
The vertical anchor controls consist of three buttons and a slider. The three buttons are used to align
the text vertically to the top of the text, middle of the text, or bottom baseline. The slider can be used
to customize the alignment. Setting the vertical anchor will affect how the text is rotated as well as the
location for line spacing adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
H Anchor
The horizontal anchor controls consist of three buttons and a slider. The three buttons justify the text
alignment to the left edge, middle, or right edge of the text. The slider can be used to customize the
justification. Setting the horizontal anchor will affect how the text is rotated as well as the location for
tracking adjustments. This control is most often used when the Layout type is set to
Frame in the Layout tab.
H Justify
The horizontal justify slider allows you to customize the justification of the text from the H Anchor
setting to full justification so it is aligned evenly along the left and right edges. This control is most
often used when the Layout type is set to Frame in the Layout tab.
Direction
This menu provides options for determining the Direction in which the text is to be written, either
horizontally or vertically in either direction. This allows certain Asian languages to flow properly
during animation.
Line Direction
These menu options are used to determine the text flow from top to bottom, bottom to top, left to
right, or right to left.
Write On
This range control is used to quickly apply simple Write On and Write Off effects to the text. To create
a Write On effect, animate the End portion of the control from 0 to 1 over the length of time required.
To create a Write Off effect, animate the Start portion of the range control from 0 to 1.
Tab Spacing
The controls in the Tabs section are used to configure the horizontal screen positions of eight
separate tab stops. Any tab characters in the text will conform to these positions.
You can add tabs directly in the Styled Text input as you type. You can also add tabs by copying text
containing them from another document, such as TextEdit on macOS or Notepad on Windows, and pasting it into the text box.
Alignment
Each tab can be set either left aligned, right aligned, or centered. This slider ranges from -1.0 to 1.0,
where -1.0 is a left-aligned tab, 0.0 is a centered tab and 1.0 is a right-aligned tab. Clicking the tab
handles in the viewer will toggle the alignment of the tab among the three states.
Reading Direction
These options allow you to set the reading direction of the text, either automatically or manually. You
can specify Left to Right languages like English, German, etc. or Right to Left languages like Arabic
and Hebrew.
Force Monospaced
This slider control can be used to override the kerning (spacing between characters) defined in the
font. Setting this slider to zero (the default value) will cause Fusion to rely entirely on the kerning
defined with each character. A value of one will cause the spacing between characters to be
completely even, or monospaced.
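The slider behaves like a linear blend between the font's native character advances and a uniform width. A sketch of that blend (the uniform target width Fusion actually uses is not documented here; the widest advance is assumed purely for illustration):

```python
def monospace_blend(advances, strength):
    """Blend each character's native advance width toward a uniform width.
    strength 0.0 keeps the font's own spacing; 1.0 is fully monospaced.
    Target width is assumed to be the widest advance (illustrative only)."""
    target = max(advances)
    return [a + (target - a) * strength for a in advances]
```

Intermediate slider values therefore produce spacing partway between the font's kerning and fully even spacing.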
Use Ligatures
If your font supports ligatures, you can activate them here by choosing All Scripts. Ligatures combine
individual letters into single glyphs, such as ff and fl. If you're animating individual text letters, you
often want the ligature letters separated rather than rendered as a single glyph, so None is the default
for Latin characters. Ligatures are required to render some languages, such as Arabic, correctly; for
those, use the Non-Latin setting.
Stylistic Set
If your font includes stylistic sets, you can select them in the drop-down menu.
Font Features
This allows you to enter four-letter OpenType feature tags to activate certain font features. For example, "smcp"
will show small capitals, and "frac" will display fractions as ½ instead of 1/2. Not all features are
supported by every font. A full list of OpenType feature codes can be found here: https://docs.microsoft.com/en-us/typography/opentype/spec/featurelist
Layout Tab
The controls used to position the text are located in the Layout tab. One of four layout types can be
selected using the Type drop-down menu.
— Point: Point layout is the simplest of the layout modes. Text is arranged around an
adjustable center point.
— Frame: Frame layout allows you to define a rectangular frame used to align the text. The
alignment controls are used for justifying the text vertically and horizontally within the boundaries
of the frame.
— Circle: Circle layout places the text around the curve of a circle or oval. Control is offered over
the diameter and width of the circular shape. When the layout is set to this mode, the Alignment
controls determine whether the text is positioned along the inside or outside of the circle’s edge,
and how multiple lines of text are justified.
— Path: Path layout allows you to shape your text along the edges of a path. The path can be used
simply to add style to the text, or it can be animated using the Position on Path control that
appears when this mode is selected.
Center X, Y, and Z
These controls are used to position the center of the layout element in space. X and Y are onscreen
controls, and Center Z is a slider in the node controls.
Size
This slider is used to control the scale of the layout element.
Perspective
This slider control is used to add or remove perspective from the rotations applied by the Angle X, Y,
and Z controls.
Rotation
Rotation consists of a series of buttons allowing you to select the order in which 3D rotations are
applied to the text. Angle dials can be used to adjust the angle of the Layout element along any axis.
Fit Characters
This menu control is visible only when the Layout type is set to Circle. This menu is used to select how
the characters are spaced to fit along the circumference.
Position on Path
The Position on Path control is used to control the position of the text along the path. Values less than
0 or greater than 1 will cause the text to move beyond the path in the same direction as the vector of
the path between the last two keyframes.
Background Color
The text generated by this node is normally rendered with a transparent background. This Color Picker
control can be used to set a background color.
For more information, see Chapter 11, “Animating with Motion Paths,” in the Fusion Reference Manual.
Transform Tab
The Transform tab is used to move, rotate, shear and scale text based on a character, word, or line.
— Characters: Each character of text is transformed along its own center axis.
— Words: Each word is transformed separately on the word’s center axis.
— Lines: Each line of the text is transformed separately on that line’s center axis.
Spacing
The Spacing slider is used to adjust the space between each line, word, or character. Values less than 1
will usually cause the characters to begin overlapping.
Pivot X, Y, and Z
This provides control over the exact position of the axis. By default, the axis is positioned at the
calculated center of the line, word, or character. The Axis control works as an offset, such that a value
of 0.1, 0.1 in this control would cause the axis to be shifted downward and to the right for each of the
text elements. Positive values in the Z-axis slider will move the axis of rotation further along the axis
(away from the viewer). Negative values will bring the axis of rotation closer.
Rotation
These buttons are used to determine the order in which transforms are applied. X, Y, and Z would
mean that the rotation is applied to X, then Y, and then Z.
X, Y, and Z
These controls can be used to adjust the angle of the text elements in any of the three dimensions.
Shear X and Y
Adjust these sliders to modify the slanting of the text elements along the X- and Y-axis.
Size X and Y
Adjust these sliders to modify the size of the text elements along the X- and Y-axis.
Shading Element
The eight number values in the menu are used to select the element affected by adjustments in this tab.
Enabled
Select this checkbox to enable or disable each layer of shading elements. Element 1, which is the
fill color, is enabled by default. The controls for a shading element will not be displayed unless this
checkbox is selected.
Sort By
This menu allows you to sort the shading elements by number priority, with 1 being the topmost
element and 8 being the bottommost element, or Z depth, based on the Z Position parameter.
Name
This text label can be used to assign a more descriptive name to each shading element you create.
Appearance
The four Appearance buttons determine how the shading element is applied to the text. Different
controls will appear below depending on the appearance type selected.
— Text Fill: The shading element is applied to the entire text. This is the default mode.
— Text Outline: The shading element is drawn as an outline around the edges of the text.
— Border Fill: The shading element fills a border surrounding the text. Five additional controls are
provided with this shading mode.
— Border Outline: The Border Outline mode draws an outline around the border that surrounds
the text. It offers several additional controls.
Opacity
The Opacity slider controls the overall transparency of the shading element. It is usually better to
assign opacity to a shading element than to adjust the Alpha of the color applied to that element.
Blending
This menu is used to select how the renderer deals with an overlap between two characters in the text.
Thickness
(Outline only) Thickness adjusts the thickness of the outline. Higher values equal thicker outlines.
Join Style
(Outline only) These buttons provide options for how the corners of the outline are drawn. Options
include Sharp, Rounded, and Beveled.
Line Style
(Outline only) This menu offers additional options for the style of the line. Besides the default solid
line, a variety of dash and dot patterns are available.
Level
(Border Fill only) This is used to control the portion of the text border filled.
Round
(Border Fill and Border Outline only) This slider is used to round off the edges of the border.
Color Types
Besides solid shading, it is also possible to use a gradient fill or map an external image onto the text.
This menu is used to determine if the color of the shading element is derived from a user-selected
color or gradient, or if it comes from an external image source. Different controls will be displayed
below depending on the Color Type selected.
— Solid: When the Type menu is set to Solid mode, color selector controls are provided to select the
color of the text.
— Image: The output of a node in the node tree will be used to texture the text. The node used is
chosen using the Color Image control that is revealed when this option is selected.
— Gradient: When the Type menu is set to Gradient, additional controls are displayed where the
gradient colors and direction can be customized.
You can add, move, copy, and delete colors using the gradient bar.
Image Source
(Image Mode only) The Image Source menu includes three options for acquiring the image used to
fill the text.
— Tool: Displays a Color image text field where you can add a tool from the node tree as
the fill for text.
— Clip: Provides a Browse button to select a media file from your hard drive as the fill for text.
— Brush: Displays a Color Brush menu where you can select one of Fusion’s paint brush bitmaps as
the fill for text.
Image Sampling
(Image Mode only) This menu is used to select the sampling type for shading rendering and
transformations. The default of Pixel shading is sufficient for 90% of tasks. To reduce detectable
aliasing in the text, set the sampling type to Area. This is slower but may produce better-quality
results. A setting of None will render faster, but with no additional sampling applied so the quality
will be lower.
Image Edges
(Image Mode only) This menu is used to choose how transformations applied to image shading
elements are handled when they wrap off the text’s edges.
Shading Mapping
(Image Mode only) This menu is used to select whether the entire image is stretched to fill the text or
scaled to fit, maintaining the aspect ratio but cropping part of the image as needed.
Mapping Size
(Image and Gradient Modes only) This control scales the image or gradient.
Mapping Aspect
(Image and Gradient Modes only) This control vertically stretches or shrinks the image or gradient.
Mapping Level
(Image and Gradient Modes only) The Mapping Level menu is used to select how the image is mapped
to the text.
Softness X and Y
These sliders control the softness of the text outline used to create the shading element. Control is
provided for the X- and Y-axis independently.
Softness Glow
This slider will apply a glow to the softened portion of the shading element.
Softness Blend
This slider controls the amount that the result of the softness control is blended back with the original.
It can be used to tone down the result of the soften operation.
Priority Back/Front
Only enabled when the Sort By menu is set to Priority, this slider overrides the priority setting and
determines the layer’s order for the shading elements. Slide the control to the right to bring an
element closer to the front. Move it to the left to tuck one shading element behind another.
Offset X, Y, and Z
These controls are used to apply offset from the text’s global center (as set in the Layout tab) for the
shading elements. A value of X 0.0, Y 0.1 in the coordinate controls would place the shading element
centered, with 10 percent of the image further down the screen along the Y-axis. Positive values in the
Z-Offset slider control will push the center further away from the camera, while negative values will
bring it closer to the camera.
Pivot X, Y, and Z
These controls are used to set the exact position of the axis for the currently selected shading
element. By default, the axis is positioned at the calculated center of the line, word, or character.
Rotation X, Y, and Z
These controls are used to adjust the angle of the currently selected shading element in any of the
three dimensions.
Shear X and Y
Adjust these sliders to modify the slanting of the currently selected shading element along the X
and Y axis.
Size X and Y
Adjust these sliders to modify the size of the currently selected shading element along the
X and Y axis.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Text+ Toolbar
When the Text node is selected, a toolbar will appear in the viewer. Each button is described below
from left to right.
Text+ toolbar
To animate the position of each character, right-click on the control label Manual Font Kerning in the
Inspector’s Advanced Controls and select Animate from the contextual menu. A new key will be set on
the animation spline each time a character is moved. All characters are animated with the same spline,
as with polyline mask animation.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are duplicated in many Generator nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Text+ Modifiers
Text+ modifiers
Text modifiers can be assigned by right-clicking in the Styled Text box and selecting a modifier from
the contextual menu. Once a modifier is selected, its controls are found in the Modifiers tab at the
top of the Inspector.
NOTE: Character Level Styling can only be directly applied to Text+ nodes, not to Text 3D
nodes. However, styled text from a Text+ node can be applied to a Text 3D node by copying
the Text+, right-clicking on the Text 3D, and choosing Paste Settings.
Inspector
Text Tab
The Styled Text box in the Modifiers tab displays the same text shown in the Tools tab of the Text+ Inspector.
However, individual characters you want to modify cannot be selected in the Styled Text box; they
must be selected in the viewer. Once text is selected in the viewer, the Text tab includes familiar text
formatting options that will apply only to the selected characters.
Controls
This modifier has no controls.
Follower
The Follower modifier allows sequencing text animations. The modifier is applied by right-clicking
in the Styled Text field of a Text+ node and selecting Follower. In the Modifiers tab, you start by
animating the parameters of the text (note that changing any parameter in the Modifiers tab will
not be visible unless a keyframe is added.) Then, in the Timing tab you set the animation’s delay
between characters.
Inspector
Timing Tab
Once the text is animated using the controls in the Modifiers tab, the Timing tab is used to choose the
direction and offset of the animation.
Range
The Range menu is used to select whether all characters should be influenced or only a selected
range. To set the range, you can drag-select over the characters directly in the viewer.
Order
The Order menu determines in which direction the characters are influenced. Notice that empty
spaces are counted as characters as well. Available options include:
— Left to right: The animation ripples from left to right through all characters.
— Right to left: The animation ripples from right to left through all characters.
— Inside out: The animation ripples symmetrically from the center point of the
characters toward the margin.
— Outside in: The animation ripples symmetrically from the margin toward the center
point of the characters.
Delay Type
Determines what sort of delay is applied to the animation. Available options include:
— Between Each Character: The more characters there are in your text, the longer the animation
takes to complete. A setting of 1 means the first character starts the animation, the second
character starts 1 frame later, the third character starts 1 frame after the second, and so on.
— Between First and Last Character: No matter how many characters are in your text, the
animation will always be completed in the selected amount of time.
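The difference between the two delay types can be sketched as a per-character start-frame schedule (a hypothetical helper for illustration, not part of Fusion's API):

```python
def start_frames(num_chars, delay, delay_type):
    """Start frame for each character in a left-to-right ripple.
    'between_each': each character starts `delay` frames after the previous
    one, so total duration grows with the character count.
    'first_to_last': all offsets are squeezed into `delay` frames total,
    regardless of how many characters there are."""
    if delay_type == "between_each":
        step = delay
    else:  # "first_to_last"
        step = delay / max(1, num_chars - 1)
    return [i * step for i in range(num_chars)]
```

With Between Each Character, ten characters take twice as long to finish as five; with Between First and Last Character, both finish at the same frame.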
For a detailed description on the various parameters, see the Text+ node documentation.
Text Scramble
The Text Scramble modifier randomly replaces characters with others from a user-definable set. It can be
applied by right-clicking in the Styled Text field of a Text+ node and selecting Text Scramble.
The Controls for the Text Scramble are then adjusted in the Modifiers tab.
Inspector
Randomness
Defines how many characters are exchanged randomly. A value of 0 will change no characters at all.
A value of 1 will change all characters in the text. Animating this thumbwheel to go from 0 to 1 will
gradually exchange all characters.
Input Text
This reflects the original text in the Text+ Styled Text field. Text can be entered either here or in the
Text+ node.
Animate on Time
When enabled, the characters are scrambled randomly on every new frame. This switch has no effect
when Randomness is set to 0.
Animate on Randomness
When enabled, the characters are scrambled randomly on every new frame, when the Randomness
thumbwheel is animated.
Substitute Chars
This field contains the characters used to scramble the text.
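The interaction of Randomness and Substitute Chars can be sketched as follows (a hypothetical illustration of the behavior described above; preserving whitespace is an assumption here, not documented):

```python
import random

def scramble(text, randomness, substitute_chars, seed=0):
    """Replace roughly a `randomness` fraction of characters with random
    picks from `substitute_chars`; 0 changes nothing, 1 changes everything.
    Re-seeding per frame gives the 'Animate on Time' behavior."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        # whitespace is left alone here (an assumption for readability)
        if not ch.isspace() and rng.random() < randomness:
            out.append(rng.choice(substitute_chars))
        else:
            out.append(ch)
    return "".join(out)
```

Animating the randomness value from 0 to 1 gradually exchanges all characters, as the Randomness description above notes.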
Text Timer
The Text Timer makes the Text+ node either a Countdown, a Timer, or a Clock. This is useful for
onscreen real-time displays or to burn in the creation time of a frame into the picture. It can be
applied by right-clicking in the Styled Text field of a Text+ node and selecting Text Timer.
Inspector
Mode
This menu sets the mode the timer is working in. The choices are CountDown, Timer, and Clock. In
Clock mode, the current system time will be displayed.
Start
Starts the Counter or Timer. Toggles to Stop once the timer is running.
Reset
Resets the Counter and Timer to the values set by the sliders.
Time Code
The Time Code only works on Text+ nodes. It sets the Styled Text to become a counter based on the
current frame. This is quite useful for automating burn-ins for daily renderings.
It can be applied by right-clicking in the Styled Text field of a Text+ node and selecting Time Code.
Inspector
Controls Tab
The Controls tab for the Time Code modifier is used to set up the time code display that is generated
by this modifier.
Drop Frame
Activate this checkbox to match the time code with footage that has drop frames—for example,
certain NTSC formats.
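Drop-frame time code skips two frame labels (not actual frames) at the start of every minute, except every tenth minute, so the displayed time stays in step with 29.97 fps video. A sketch of the standard conversion, not Fusion's internal code:

```python
def drop_frame_timecode(frame, fps=30):
    """Frame number -> 29.97 fps drop-frame time code string.
    Labels ;00 and ;01 are skipped each minute except minutes 0, 10, 20..."""
    drops_per_min = 2                       # 4 for 59.94 fps material
    frames_per_min = fps * 60 - drops_per_min           # 1798
    frames_per_10min = fps * 600 - drops_per_min * 9    # 17982
    tens, rem = divmod(frame, frames_per_10min)
    if rem < fps * 60:
        mins_extra = 0                      # first minute of the block: no drop
    else:
        mins_extra = 1 + (rem - fps * 60) // frames_per_min
    frame += drops_per_min * (tens * 9 + mins_extra)
    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

For example, the frame after 00:00:59;29 is labeled 00:01:00;02, while the ten-minute mark remains 00:10:00;00.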
Inspector
Image Tab
The controls in this tab are used to set the resolution, color depth, and pixel aspect of the image
produced by the node.
Process Mode
Use this menu control to select the Fields Processing mode used by Fusion to render changes
to the image. The default option is determined by the Has Fields checkbox control in the Frame
Format preferences.
Global In and Out
Use this control to specify the range of frames during which this node is active. The node will not
produce an image on frames outside this range.
Width/Height
This pair of controls is used to set the Width and Height dimensions of the image to be created
by the node.
Pixel Aspect
This control is used to specify the Pixel Aspect ratio of the created images. An aspect ratio of 1:1 would
generate a square pixel with the same dimensions on either side (like a computer display monitor),
and an aspect of 0.9:1 would create a slightly rectangular pixel (like an NTSC monitor).
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing
the file formats defined in the preferences Frame Format tab. Selecting any of the listed
options will set the width, height, and pixel aspect to the values for that format, accordingly.
Depth
The Depth drop-down menu is used to set the pixel color depth of the image created by the Creator
node. 32-bit pixels require 4X the memory of 8-bit pixels but have far greater color accuracy. Float
pixels allow high dynamic range values outside the normal 0…1 range, for representing colors that are
brighter than white or darker than black.
Source Color Space
— Auto: Automatically reads and passes on the metadata that may be in the image.
— Space: Displays a Color Space Type menu where you can choose the correct color
space of the image.
Source Gamma Space
— Auto: Automatically reads and passes on the metadata that may be in the image.
— Space: Displays a Gamma Space Type menu where you can choose the correct gamma
curve of the image.
— Log: Brings up the Log/Lin settings, similar to the Cineon tool. For more information,
see Chapter 38, “Film Nodes,” in the Fusion Reference Manual.
Remove Curve
Depending on the selected Gamma Space or on the Gamma Space found in Auto mode, the Gamma
Curve is removed from, or a log-lin conversion is performed on, the material, effectively converting it
to a linear output space.
Settings Tab
The Settings Tab in the Inspector can be found on every tool in the Color category. The Settings
controls are even found on third-party Color-type plugin tools. The controls are consistent and work
the same way for each tool, although some tools do include one or two individual options, which are
also covered here.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming. Normally,
this will cause the tool to skip processing entirely, copying the input straight to the output.
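Conceptually, Blend is a per-channel linear mix between the tool's unprocessed input and its processed output; a minimal sketch:

```python
def blend_pixel(original, processed, blend):
    """blend = 0.0 returns the input unchanged; 1.0 returns the fully
    processed result; values between mix the two linearly, per channel."""
    return [o + (p - o) * blend for o, p in zip(original, processed)]
```

At blend = 0.0 the output equals the input exactly, which is why the tool can skip processing entirely in that case.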
Red/Green/Blue/Alpha
These buttons determine which color channels are affected by the tool; deselected channels are copied
from the input back over the output. For example, if the red button on a Blur tool is deselected, the blur
will first be applied to the image, and then the red channel from the original input will be copied back
over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information see Chapter 18, “Understanding Image Channels,” in the Fusion Reference Manual.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of 2
will cause Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one whole frame exposure. Higher values are possible
and can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
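One plausible way to read these parameters together (an illustration of the concepts above, not Fusion's documented internals): Quality sets the sample count on each side of the frame, Shutter Angle sets the fraction of a frame the samples span, and Center Bias slides that window:

```python
def blur_sample_times(frame, quality=2, shutter_angle=180.0, center_bias=0.0):
    """Times at which the image would be sampled to build the motion blur.
    `quality` samples fall on each side of the nominal frame time; the
    window spans shutter_angle/360 of a frame; center_bias (-1..1) slides
    the window to create motion-trail effects."""
    half = shutter_angle / 360.0 / 2.0      # 360 degrees = one whole frame
    n = quality * 2                         # total samples, both sides
    center = frame + center_bias * half
    return [center + half * (2 * i / (n - 1) - 1) for i in range(n)]
```

A 360-degree shutter spreads samples across a full frame of motion, matching the "whole frame exposure" described above; larger angles push samples even further apart for exaggerated streaks.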
Comments
The Comments field is used to add notes to a tool. Click in the field and type the text. When a note is
added to a tool, a small red square appears in the lower-left corner of the node when the full tile is
displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the note
in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
I/O Nodes
This chapter details the input and output of media using Loader
and Saver nodes within Fusion Studio as well as the MediaIn and
MediaOut nodes in DaVinci Resolve.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
Contents
Loader Node [Ld] .......................................................................................................... 1145
When using Fusion Studio, the Loader node is the node you use to select and load footage from your
hard drive into the Node Editor. There are three ways to add a Loader node, and consequently a clip,
to the Node Editor.
— Add the Loader from the Effects Library or toolbar (Fusion Studio only), and then use the Loader's
file browser to bring a clip into the Node Editor.
— Drag clips from an OS window directly into the Node Editor, creating a Loader node in the
Node Editor.
— Choose File > Import > Footage (Fusion Studio only), although this method creates a new
composition as well as adds the Loader node to the Node Editor.
When a Loader is added to the Node editor, a File dialog is displayed automatically to allow the
selection of a clip from your hard drives.
NOTE: You can disable the automatic display of the file browser by disabling Auto Clip
Browse in the Global > General Preferences.
Once clips are brought in using the Loader node, the Loader is used for trimming, looping, and
extending the footage, as well as setting the field order, pixel aspect, and color depth. The Loader is
arguably the most important tool in Fusion Studio.
Inputs
The single input on the Loader node is for an effect mask to crop the image brought in by the Loader.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the loaded image to
appear only within the mask. An effects mask is applied to the tool after the tool is processed.
Inspector
File Tab
The File tab for the Loader includes controls for trimming, creating a freeze frame, looping, and
reversing the clip. You can also reselect the clip that the Loader links to on your hard drive.
If the Global In and Out values are decreased to the point where the range between the In and Out
values is smaller than the number of available frames in the clip, Fusion automatically trims the clip
by adjusting the Trim range control. If the Global In/Out values are increased to the point where the
range between the In and Out values is larger than the number of available frames in the clip, Fusion
automatically lengthens the clip by adjusting the Hold First/Last Frame controls. Extended frames are
visually represented in the range control by changing the color of the held frames to green in the control.
Filename
The Filename field shows the file path of the clip imported to the Node Editor by the Loader node.
Clicking on the Browse button opens a standard file browser. The path to the footage can also be
typed directly using the field provided. The text box supports filename completion. As the name of a
directory or file is typed in the text box, Fusion displays a pop-up that lists possible matches. Use the
arrow keys to select the correct match and complete the path.
NOTE: Loading image sequences is common practice for compositing, whether the image
sequence comes from a 3D renderer or a digital cinema camera. If the last part of a file’s
name is a number (not counting the file extension), Fusion automatically scans the directory
looking for files that match the sequence. For example, the following filenames would be
valid sequences:
image.0001.exr, image.0002.exr, image.0003.exr
or
image151.exr, image152.exr, image153.exr
The following would not be considered a sequence, since the last characters before the
file extension are not numeric:
image.0001f.exr
It is not necessary to select the first file in the sequence. Fusion searches the entire folder
for files matching the sequence in the selected filename. Also, Fusion determines the length
of the sequence based on the first and last numeric value in the filenames. Missing frames
are ignored. For example, if the folder contains two files with the following names:
image.0001.exr, image.0100.exr
Fusion sees this as a file sequence with 100 frames, not an image sequence containing
two frames. The Missing Frames drop-down menu is used to choose how Fusion handles
missing frames.
The Trim In/Trim Out control’s context menu can also be used to force a specific clip length
or to rescan the folder. Both controls are described in greater detail below.
Occasionally, you want to load only a single frame out of a sequence—e.g., a photograph
from a folder containing many other files as well. By default, Fusion detects those as a
sequence, but if you hold Shift while dragging the file from the OS window to the Node
Editor, Fusion takes only that specific file and disregards any sequencing.
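The sequence-detection rule described in the note above can be sketched as follows. This is an illustration, not Fusion's actual code: filenames whose last characters before the extension are digits are grouped, and the sequence length is taken from the first and last numbers found, so gaps count as missing frames.

```python
import re
from pathlib import PurePath

def sequence_range(filenames):
    """Group filenames whose last characters before the extension are
    digits, and report each group's numeric range. Gaps between the first
    and last number count as missing frames, not a shorter sequence."""
    groups = {}
    for name in filenames:
        path = PurePath(name)
        match = re.match(r"^(.*?)(\d+)$", path.stem)
        if match is None:
            continue  # trailing characters are not numeric: not a sequence
        key = (match.group(1), path.suffix)
        groups.setdefault(key, []).append(int(match.group(2)))
    return {key: (min(nums), max(nums)) for key, nums in groups.items()}

print(sequence_range(["image.0001.exr", "image.0100.exr"]))
# {('image.', '.exr'): (1, 100)} -> treated as a 100-frame sequence
```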
Trim
The Trim range control is used to trim frames from the start or end of a clip. Adjust the Trim In to
remove frames from the start and adjust Trim Out to specify the last frame of the clip. The values used
here are offsets. A value of 5 in Trim In would use the fifth frame in the sequence as the start, ignoring
the first four frames. A value of 95 would stop loading frames after the 95th.
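The Trim offsets can be illustrated with a short sketch (the function and parameter names are ours): a Trim In of 5 makes the fifth frame the new start, and a Trim Out of 95 stops loading after the 95th frame.

```python
def apply_trim(frames, trim_in=1, trim_out=None):
    """Return the frames kept after applying Trim In/Out offsets.
    trim_in is the 1-based frame used as the new start (5 skips the first
    four frames); trim_out is the 1-based last frame to load."""
    last = len(frames) if trim_out is None else trim_out
    return frames[trim_in - 1:last]

clip = list(range(1, 101))      # a 100-frame clip
print(apply_trim(clip, 5, 95))  # frames 5 through 95
```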
Reverse
Select this checkbox to reverse the footage so that the last frame is played first, and the first frame is
played last.
Loop
Select this checkbox to loop the footage until the end of the project. Any lengthening of the clip using
Hold First/Last Frame or shortening using Trim In/Out is included in the looped clip.
Missing Frames
The Missing Frames menu determines the Loader behavior when a frame is missing or is unable to
load for any reason.
— Fail: The Loader does not output any image unless a frame becomes available. Rendering is
aborted.
— Hold Previous Output: The last valid frame is held until a frame becomes available again. This
fails if no valid frame has been seen—for example, if the first frame is missing.
— Output Black: Outputs a black frame until a valid frame becomes available again.
— Wait: Fusion waits for the frame to become available, checking every few seconds. This is useful
for rendering a composition simultaneously with a 3D render. All rendering ceases until the
frame appears.
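The four Missing Frames behaviors can be sketched as follows (the function and mode strings are illustrative, not Fusion's API); `load` stands in for any loader that returns an image, or None when a frame is unavailable.

```python
def resolve_frame(load, frame, mode, last_good=None):
    """Sketch of the Missing Frames behaviors. `load` returns an image,
    or None when the frame is missing or unreadable."""
    image = load(frame)
    if image is not None:
        return image
    if mode == "Fail":
        raise RuntimeError(f"frame {frame} missing; rendering aborted")
    if mode == "Hold Previous Output":
        if last_good is None:  # fails if no valid frame has been seen
            raise RuntimeError("no valid frame seen yet")
        return last_good
    if mode == "Output Black":
        return "black frame"
    # "Wait" would poll load(frame) every few seconds until it succeeds
    raise NotImplementedError(mode)
```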
You can either enter the Comp:\ path manually into the filename field of a Loader, or turn on
the Enable Reverse Mapping of Paths checkbox in the Path Map preferences, which makes
Fusion substitute the Comp:\ path automatically.
So as long as all your source footage is stored in subfolders of your Comp folder, Fusion
finds that footage regardless of the actual hard drive or network share name.
You could, for example, copy an entire shot from the network to your local drive, set up
your Loaders and Savers to use the Comp variable, work all your magic locally (i.e., set up
your composition), and then copy just the composition back to the server and issue a
network render. All render slaves automatically find the source footage.
Some examples:
Your composition is stored in:
X:\Project\Shot0815\Fusion\Shot0815.comp
Footage stored in:
X:\Project\Shot0815\Fusion\Greenscreen\0815Green_0000.dpx
is addressed as:
Comp:\Greenscreen\0815Green_0000.dpx
Footage stored in:
X:\Project\Shot0815\Footage\Greenscreen\0815Green_0000.dpx
is addressed as:
Comp:\..\Footage\Greenscreen\0815Green_0000.dpx
Observe how the two dots .. set the directory to go up one folder, much like CD .. in a
Command Shell window.
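The path expansion shown in the examples above can be sketched in Python (this is an illustration of the mapping, not Fusion's actual path-map code; the function name is ours):

```python
import ntpath  # the manual's examples use Windows-style paths

COMP_PREFIX = "Comp:\\"

def expand_comp_path(comp_file, mapped_path):
    """Expand a Comp:\\ path relative to the folder containing the .comp
    file; ".." segments walk up one folder, as in the examples above."""
    if not mapped_path.startswith(COMP_PREFIX):
        return mapped_path
    comp_dir = ntpath.dirname(comp_file)
    relative = mapped_path[len(COMP_PREFIX):]
    return ntpath.normpath(ntpath.join(comp_dir, relative))

print(expand_comp_path(r"X:\Project\Shot0815\Fusion\Shot0815.comp",
                       r"Comp:\..\Footage\Greenscreen\0815Green_0000.dpx"))
# X:\Project\Shot0815\Footage\Greenscreen\0815Green_0000.dpx
```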
Import Tab
The Import tab includes settings for the frame format and how to deal with fields, pixel aspect, 3:2 pull
down/pull up conversion, and removing gamma curve types for achieving a linear workflow.
Process Mode
Use this menu to select the Fields Processing mode used by Fusion when loading the image. The
Has Fields checkbox control in the Frame Format preferences determines the default option, and the
default height as well. Available options include:
— Full frames
— NTSC fields
— PAL/HD fields
— PAL/HD fields (reversed)
— NTSC fields (reversed)
The two reversed options load fields in the opposite order and thus result in the fields being
swapped both in time and in vertical order.
Use the Swap Field Dominance checkbox (described below) to swap fields in time only.
Depth
The Depth menu is used to select the color depth used to process footage from this Loader.
The default option is Format.
— Format: The color depth is determined by the color depth supported in the file format loaded.
For example, JPEG files automatically process at 8 bit because the JPEG file format does not
store color depths greater than 8 bits per channel. EXR files load at Float. If the color depth of the format is
undetermined, the default depth defined in the Frame Format preferences is used. Formats that
support multiple color depths are set to the appropriate color depth automatically.
Pixel Aspect
This menu is used to determine the image’s pixel aspect ratio.
— From File: The loader conforms to the image aspect detected in the saved file. There are a few
formats that can store aspect information. TIFF, JPEG, and OpenEXR are examples of image
formats that may have the pixel aspect embedded in the file’s header. When no aspect ratio
information is stored in the file, the default frame format method is used.
— Default: Any pixel aspect ratio information stored in the header of the image file is ignored. The
pixel aspect set in the composition’s frame format preferences is used instead.
— Custom: Select this option to override the preferences and set a pixel aspect for the clip manually.
Selecting this button causes the X/Y Pixel Aspect control to appear.
Import Mode
This menu provides options for removing pull-up from an image sequence. Pull-up is a reversible
method of combining frames used to convert 24 fps footage into 30 fps. It is commonly used to
broadcast NTSC versions of films.
— Normal: This passes the image along without applying pull-up or pull-down.
— Pull Up: This removes existing 3:2 pull-down applied to the image sequence, converting from
30 fps back to 24 fps.
— Pull Down: The footage has pull-down applied, converting 24 fps footage to 30 fps by creating
five frames out of every four. The process mode of a Loader set to Pull Down should always
be Full Frames.
First Frame
This menu appears when the Import Mode is set to either Pull Up or Pull Down. It is used to determine
which frame of the 3:2 sequence is used as the first frame of the loaded clip.
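The 2:3 cadence described above can be sketched as follows. This is an illustration of the frame pattern only (the function name is ours, and real pulldown operates on interlaced fields): each group of four film frames yields five video frames.

```python
from itertools import cycle

def pulldown_23(film_frames):
    """2:3 pulldown: each group of four film frames (A B C D) contributes
    2, 3, 2, and 3 fields; the ten fields pair into five video frames
    (AA BB BC CD DD), converting 24 fps to 30 fps."""
    fields = []
    for frame, count in zip(film_frames, cycle([2, 3, 2, 3])):
        fields.extend([frame] * count)
    # pair consecutive fields into interlaced video frames
    return [tuple(fields[i:i + 2]) for i in range(0, len(fields) - 1, 2)]

print(pulldown_23(["A", "B", "C", "D"]))
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```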
Post-Multiply by Alpha
Enabling this option causes the color value of each pixel to be multiplied by the Alpha channel for
that pixel. This option can be used to convert subtractive (non-premultiplied) images to additive
(premultiplied) images.
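The conversion is a straightforward per-pixel multiply; a minimal sketch (the function name is ours, and values are assumed to be normalized floats):

```python
def post_multiply(r, g, b, a):
    """Multiply each color value by the Alpha channel, converting a
    subtractive (non-premultiplied, 'straight') pixel into an additive
    (premultiplied) one."""
    return (r * a, g * a, b * a, a)

print(post_multiply(1.0, 0.5, 0.25, 0.5))  # (0.5, 0.25, 0.125, 0.5)
```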
Color Space Type
This menu is used to determine the color space of the footage.
— Auto: Passes along any metadata that might be in the incoming image.
— Space: Allows the user to set the color space based on the recording device used to capture
content or software settings used when rendering the content in another application.
Curve Type
This menu is used to determine the gamma curve of the footage. Once the Gamma Curve Type is set,
you can choose to remove the curve to help achieve a linear workflow.
— Auto: Passes along any metadata that might be in the incoming image.
— Space: Allows the user to set the gamma curve based on the recording device used to capture
content or software settings used when rendering the content in another application.
— Log: Displays the Log/Lin settings, similar to the Cineon Log node. For more information on the
Log settings, see Chapter 38, “Film Nodes,” in the Fusion Reference Manual.
Remove Curve
Depending on the selected Curve Type or on the Gamma Space found in Auto mode, the associated
Gamma Curve is removed from, or a log-lin conversion is performed on, the material, effectively
converting it to a linear output space.
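As one concrete case of curve removal, the standard sRGB transfer function can be inverted as below. This is a generic sketch (the function name is ours); log encodings and other curve types use different math.

```python
def srgb_to_linear(value):
    """Remove the standard sRGB gamma curve (piecewise transfer function)
    from a normalized value, converting display-referred footage to
    linear light."""
    if value <= 0.04045:
        return value / 12.92
    return ((value + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.5))  # ~0.214: sRGB mid-gray is darker in linear light
```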
Format Tab
The Format tab contains file format-specific controls that dynamically change based on the selected
Loader and the file it links to. Some formats contain a single control or no controls at all. Others like
Camera RAW formats contain RAW-specific debayering controls. A partial format list is provided below
for reference.
— OpenEXR: EXR provides a compact and flexible format to support high dynamic range images
(float). The format also supports a variety of extra channels and metadata. The Format tab for
OpenEXR files provides a mechanism for mapping any non-RGBA channels to the channels
supported natively in Fusion. Using the Format tab, you can enter the name of a channel
contained in the OpenEXR file into any of the edit boxes next to the Fusion channel name.
A command line utility for dumping the names of the channels can be found at
https://www.openexr.com.
— QuickTime: QuickTime files can potentially contain multiple tracks. Use the format options to
select one of the tracks.
— Cinema DNG: CinemaDNG is an open format capable of high-resolution raw image data with a
wide dynamic range. It was one of the formats recorded by Blackmagic Design cameras before
switching over to the BRAW format.
— Photoshop PSD Format: Fusion can load any one of the individual layers stored in the PSD file,
or the completed image with all layers. Transformation and adjustment layers are not supported.
To load all layers in a PSD file with appropriate blend modes, use File > Import > PSD.
Common Controls
Settings Tab
The Settings tab controls are common to both Loader and Saver nodes, so their descriptions can be
found in “The Common Controls” section at the end of this chapter.
MediaIn Node [MI]
The MediaIn node is the foundation of every composition you create in DaVinci Resolve’s Fusion page.
In most cases, it replaces the Loader node used in Fusion Studio for importing clips. There are four
ways to add a MediaIn node to the Node Editor.
— In the Edit or Cut page, position the playhead over a clip in the Timeline, and then click the Fusion
page button. The clip from the Edit or Cut page Timeline is represented as a MediaIn node in the
Node Editor.
— Drag clips from the Media Pool into the Node Editor, creating a MediaIn node in the Node Editor.
— Drag clips from an OS window directly into the Node Editor, creating a MediaIn node
in the Node Editor.
— Choose Fusion > Import > PSD when importing PSD files into the Node Editor. Each PSD layer is
imported as a separate MediaIn node.
NOTE: Although a MediaIn tool is located in the I/O section of the Effects Library, it is not
used as a method to import clips.
When clips are brought in from the Media Pool, dragged from the OS window, or via the Import
PSD menu option, you can use the MediaIn node’s Inspector for trimming, looping, and extending the
footage, as well as setting the source’s color and gamma space.
Inputs
The single input on the MediaIn node is for an effect mask to crop the image brought in by
the MediaIn.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the source image to
appear only within the mask. An effects mask is applied to the tool after the tool is processed.
Two MediaIn nodes: one from the Edit page Timeline and one from the Media Pool
Inspector
Image Tab
When brought in from the Media Pool or dragged from an OS window, the MediaIn node’s Image tab
includes controls for trimming, creating a freeze frame, looping, and reversing the clip. You can also
reselect the clip the MediaIn links to on your hard drive. A subset of these controls is available when
the MediaIn node is brought in from the Edit or Cut page Timeline.
If the Global In and Out values are decreased to the point where the range between the In and Out
values is smaller than the number of available frames in the clip, Fusion automatically trims the clip
by adjusting the Trim range control. If the Global In/Out values are increased to the point where the
range between the In and Out values is larger than the number of available frames in the clip, Fusion
automatically lengthens the clip by adjusting the Hold First/Last Frame controls.
To slide the clip in time or move it through the project without changing its length, place the mouse
pointer in the middle of the range control and drag it to the new location, or enter the value manually
in the Global In value box.
Process Mode
Use this menu to select the Fields Processing mode used by Fusion when loading the image. The
Has Fields checkbox control in the Frame Format preferences determines the default option, and the
default height as well. Available options include:
— Full frames
— NTSC fields
— PAL/HD fields
— PAL/HD fields (reversed)
— NTSC fields (reversed)
The two reversed options load fields in the opposite order and thus result in the fields being
swapped both in time and in vertical order.
Media Source
Selects where the media is linked from, allowing you to access Edit track composite results.
Layer
Used to identify the layer in a PSD file or compound clip. When a PSD file is brought in from the Media
Pool, the drop-down menu allows you to select an individual layer for output instead of the entire
PSD composite.
Trim
The Trim range control is used to trim frames from the start or end of a clip. Adjust the Trim In to
remove frames from the start and adjust Trim Out to specify the last frame of the clip. The values used
here are offsets. A value of 5 in Trim In would use the fifth frame in the sequence as the start, ignoring
the first four frames. A value of 95 would stop loading frames after the 95th frame.
Reverse
Select this checkbox to reverse the footage so that the last frame is played first, and the first frame is
played last.
Source Color Space
— Auto: Uses the Timeline color space, or whichever color space is assigned by Resolve Color
Management (RCM) if it’s enabled.
— Space: Space lets you choose a specific setting from a Color Space drop-down menu, while a
visual “horseshoe” graph lets you see a representation of the color space you’ve selected.
Source Gamma Space
— Auto: Uses the Timeline gamma, or whichever gamma is assigned by Resolve Color Management
(RCM) if it’s enabled.
— Space: Lets you choose a specific setting from a Gamma Space drop-down menu, while a visual
graph lets you see a representation of the gamma setting you’ve selected.
— Log: Displays the Log Type drop-down menu where you can choose a specific log encoding
profile. A visual graph shows a representation of the log setting chosen from the menu. Additional
Lock RGB, Level, Soft Clip, Film Stock Gamma, Conversion Gamma, and Conversion table options
are presented to finesse the gamma output.
— Remove Curve: The associated gamma curve is removed from, or a log-lin conversion is
performed on, the material, effectively converting it to a linear output space.
— Pre-Divide/Post-Multiply: Lets you convert “straight” Alpha channels into premultiplied Alpha
channels, when necessary.
Audio Tab
The Inspector for the MediaIn node contains an Audio tab, where you can choose to solo the audio
from the clip or hear all the audio tracks in the Timeline.
If the audio is out of sync when playing back in Fusion, the Audio tab’s Sound Offset wheel allows you
to slip the audio in subframe increments. The slipped audio is only modified in the Fusion page.
All other pages retain the original audio placement.
To hear audio from a clip brought in through the Media Pool, do the following:
1 Select the clip in the Node Editor.
2 In the Inspector, click the Audio tab and select the clip name from the Audio Track
drop-down menu.
If more than one MediaIn node exists in the comp, the audio last selected in the Inspector
is heard. You can use the Speaker icon in the toolbar to switch between the MediaIn node
audio files.
3 Right-click the Speaker icon in the toolbar, then choose the MediaIn for the clip you want to hear.
To purge the audio cache after any change to the audio playback:
— Click the Purge Audio Cache button in the Inspector.
The audio is updated the next time you play back the composition.
MediaOut Node [MO]
When using Resolve Color Management or ACES, each MediaOut node converts the output image
back to the Timeline color space for handoff to the Color page.
NOTE: Additional MediaOut nodes can be added to the Node Editor from the Effects
Library. Additional MediaOut nodes are used to pass mattes to the Color page.
Inputs
The single input on the MediaOut node is where you connect the final composite image you want
rendered back into the Edit page.
— Input: The orange input is a required input. It accepts any 2D image that you want rendered back
into the Edit page.
MediaOut1 node rendering to the Edit page, and MediaOut2 sending mattes to the Color page
Saver Node [Sv]
NOTE: The Saver node in DaVinci Resolve is only used for exporting EXR files.
The Saver node can also be used to add scratch track audio to your composition, which can be heard
during interactive playback.
Inputs
The single input on the Saver node is for the final composition you want to render.
— Image Input: The orange input is used to connect the resulting image you want rendered.
Saver node added to the end of a node tree to render the composition
File Tab
The Saver File tab is used to set the location and output format for the rendered file.
Filename
The Filename dialog is used to select the name and path of the rendered image output. Click on the
Browse button to open a file browser and select a location for the output.
Sequence numbering is automatically added to the filename when rendering a sequential image file
format. For example, if c:\renders\image.exr is entered as the filename and 30 frames of output are
rendered, the files are automatically numbered as image0000.exr, image0001.exr, image0002.exr,
and so on. Four-digit padding is automatically used for numbers lower than 10000.
You can specify the number of digits to use for padding by explicitly entering the digits into
the filename.
For example, image000000.exr would apply 6-digit padding to the numeric sequence, image.001.exr
would use 3-digit padding, and image1.exr would use none.
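The padding rule can be sketched as below (the function name is ours, not Fusion's): trailing digits before the extension set the padding width, and a name with no digits gets four-digit padding.

```python
import re

def sequenced_filename(filename, frame):
    """Derive a per-frame output name: trailing digits before the
    extension set the padding width; with no digits, four-digit padding
    is applied (for frame numbers below 10000)."""
    head, digits, ext = re.match(r"^(.*?)(\d*)(\.[^.]+)$", filename).groups()
    width = len(digits) if digits else 4
    return f"{head}{frame:0{width}d}{ext}"

print(sequenced_filename("image.exr", 7))      # image0007.exr
print(sequenced_filename("image.001.exr", 7))  # image.007.exr
print(sequenced_filename("image1.exr", 7))     # image7.exr (no padding)
```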
Output Format
This menu is used to select the image format to be saved. Be aware that selecting a new format
from this menu does not change the extension used in the filename to match. Modify the filename
manually to match the expected extension for that format to avoid a mismatch between name and
image format.
Save Frames
This control selects between two modes of rendering: Full Renders Only or High Quality Interactive.
— Full Renders Only: This is the common setting for most situations. Images are saved to disk when
a final render is started using the Start Render button in the Time Ruler.
— High Quality Interactive: This render mode is designed for real-time rendering when painting
and rotoscoping. Fusion saves each frame to disk as it is processed interactively. When used
correctly, this feature can eliminate the need to perform a final render after rotoscoping.
Frame Offset
This thumbwheel control can be used to set an explicit start frame for the number sequence applied
to the rendered filenames. For example, if Global Start is set to 1 and frames 1-30 are rendered, files
are normally numbered 0001-0030. If the Sequence Start Frame is set to 100, the rendered output
would be numbered 100-129.
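The renumbering is simple arithmetic; a minimal sketch (the function and parameter names are illustrative, not Fusion's API):

```python
def output_frame_number(render_frame, global_start=1, sequence_start=None):
    """With no Frame Offset the file number matches the render frame;
    a sequence start frame renumbers the output relative to Global Start."""
    if sequence_start is None:
        return render_frame
    return sequence_start + (render_frame - global_start)

print(output_frame_number(30, global_start=1, sequence_start=100))  # 129
```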
Export Tab
Process Mode
Use this menu to select the Fields Processing mode used by Fusion when saving the images or movie
file to disk. The Has Fields checkbox control in the Frame Format preferences determines the default
option, and the default height as well. Available options include:
— Full frames
— NTSC fields
— PAL/HD fields
— PAL/HD fields (reversed)
— NTSC fields (reversed)
The two reversed options save fields in the opposite order and thus result in the fields being
swapped both in time and in vertical order.
Export Mode
This menu is used to render out the file normally or apply a SMPTE standard 3:2 pulldown to the
footage, converting the footage from 24 fps to 30 fps.
Clipping Mode
This menu, sometimes considered source image clipping, defines how the edges of the image
should be treated.
— Frame: The default Frame setting clips to the parts of the image visible within its frame
dimensions. It breaks any infinite-workspace behavior. If the upstream DoD is smaller than the
frame, the remaining area in the frame is treated as black/transparent.
— None: This setting does not perform any source image clipping at all. This means that any data
that would normally be needed outside the upstream DoD is treated as black/transparent.
Be aware that this might create very large images that can consume a considerable amount of
disk space, so you should use this option only if really needed.
For more information about ROI, DoD, and Infinite Workspace, see Chapter 7, “Using Viewers,” in the
Fusion Reference Manual.
Color Space Type
This menu is used to determine the color space of the rendered file.
— Auto: Passes along any metadata that might be in the rendered image.
— Space: Allows the user to set the color space based on the output format.
Curve Type
This menu is used to select a Gamma curve of the rendered file. Once the gamma curve type is set,
you can choose to apply the curve for output.
— Auto: Passes along any metadata that might be in the incoming image.
— Space: Allows the user to set the gamma curve based on the selected file format.
— Log: Displays the Log/Lin settings, similar to the Cineon Log node. For more detail on the Log
settings, see Chapter 38, “Film Nodes,” in the Fusion Reference Manual.
Apply Curve
Depending on the selected Curve Type or on the Gamma Space found in Auto mode, the associated
Gamma Curve is applied, effectively converting from a linear working space.
Audio Tab
The audio functionality is included in Fusion Studio for scratch track (aligning effects to audio and clip
timing) purposes only. Final renders should almost always be performed without audio. The smallest
possible audio files should be used, as Fusion loads the entire audio file into memory for efficient
display of the waveform in the Timeline. The audio track is included in the saved image if a QuickTime
file format is selected. Fusion currently supports playback of WAV audio.
Source Filename
You can enter the file path and name of the audio clip you want to use in the Source Filename field.
You can also click the Browse button to open a file browser window and locate the audio scratch track.
Select the WAV file of choice, and then in the keyframes panel expand the Saver bar to view the audio
waveform. Drag the pointer over the audio wave in the Timeline layout to hear the track.
Sound Offset
Drag the control left or right to slide the Timeline position of the audio clip, relative to other nodes in
the Node Editor.
Legal Tab
The Legal tab includes settings for creating “broadcast safe” saturation and video range files
for output.
Video Type
Use this menu to select the standard to be used for broadcast legal color correction. NTSC, NHK, or
PAL/SECAM can be chosen.
Action
— Adjust to Legal: This causes the images to be saved with legal colors relevant to the
Video Type selected.
— Indicate as Black: This causes the illegal colors to be displayed as black in the views.
— Indicate as White: This causes the illegal colors to be displayed as white in the views.
— No Changes: This causes the images to be saved unaffected.
Adjust Based On
This menu is used to choose whether Fusion legalizes the image to 75% or 100% amplitude. Very
few broadcast markets permit 100% amplitude; for the most part, this should be left at 75%.
Soft Clip
The Soft Clip control is used to draw values that are out of range back into the image. This is done
by smoothing the conversion curve at the top and bottom of the curve, allowing more values
to be represented.
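One common way to smooth the top of the conversion curve is shown below. This is a generic sketch under our own parameterization (the knee point and function name are assumptions, not Fusion's implementation): values below the knee pass through, while values above it are compressed asymptotically toward 1.0 instead of being clipped hard.

```python
def soft_clip(value, knee=0.9):
    """Soft clip sketch: below the knee the value is unchanged; above it,
    the overshoot is compressed so the curve approaches 1.0 smoothly,
    keeping out-of-range values representable."""
    if value <= knee:
        return value
    headroom = 1.0 - knee
    overshoot = value - knee
    return knee + headroom * (overshoot / (overshoot + headroom))

print(soft_clip(0.5), soft_clip(1.0), soft_clip(2.0))
```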
Format Tab
The Format tab contains information, options, and settings specific to the image format being saved.
The controls for an EXR sequence are entirely different from the ones displayed when a MOV file is saved.
If Data Is Linear is enabled, then the DPX is marked in its header as containing linear
data. In turn, that means that when the DPX is loaded back into Fusion, or into other
apps that evaluate the header, those apps think the data is linear and do not perform any
log‑lin conversion.
Common Controls
Settings Tab
The Settings tab controls are common to both Loader and Saver nodes, so their descriptions can be
found in “The Common Controls” section at the end of this chapter.
Inspector
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
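The Blend control is a per-pixel linear interpolation between input and output; a minimal sketch (the function name is ours, and images are simplified to flat lists of values):

```python
def blend_images(original, processed, blend):
    """Per-pixel linear blend between the tool's input and its processed
    output: 0.0 returns the untouched input, 1.0 the full effect."""
    return [(1.0 - blend) * o + blend * p
            for o, p in zip(original, processed)]

print(blend_images([0.2, 0.4], [0.8, 0.6], 0.0))  # identical to the input
```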
Red/Green/Blue/Alpha Channel Selector
These buttons limit the effect of the tool to the selected color channels.
For example, if the red button on a Blur tool is deselected, the blur is first applied to the image, and
then the red channel from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on Coverage and Background Color channels, see Chapter 18, “Understanding
Image Channels,” in the Fusion Reference Manual.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
LUT Nodes
This chapter details the LUT nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
Contents
File LUT [FLU] ........................................................................ 1171
This approach has two advantages. The first is that the only part of the LUT stored in the composition
is the path to the file. Since LUT files can be large, this can dramatically reduce the file size of a
composition when several LUTs are present. The second advantage is that it becomes possible
to adjust all File LUT nodes using the same file at the same time, just by changing the contents of
the LUT. This can be useful when the same LUT-based color correction is applied in many different
compositions.
Inputs
The File LUT node includes two inputs: one for the main image and the other for an effect mask to
limit the area where the LUT is applied.
— Input: This orange input is the only required connection. It accepts a 2D image to which the
LUT is applied.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the applied LUT to only those pixels within the mask. An effects mask is applied to the tool after
the tool is processed.
A File LUT node applied at the end of a node tree as a colorist’s look
Controls Tab
The Controls tab includes options for loading a LUT and making adjustments to the gain, color space,
and Alpha channel, if one exists.
LUT File
This field is used to enter the path to the LUT file. Clicking the Browse button opens a file browser
window to locate the LUT file instead of entering it manually into the LUT File field. Currently, this node
supports LUTs exported from Fusion in .LUT and .ALUT formats, DaVinci Resolve’s .CUBE format, and
several 3D LUT formats. The node fails with an error message on the Console if it is unable to find or
load the specified file.
Pre-Gain
This slider applies a gain adjustment before the LUT is applied. This can be useful for pulling in
highlights before the LUT clips them.
Post-Gain
This slider is a gain adjustment after the LUT is applied.
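To illustrate where the two gains sit relative to the lookup, here is a minimal 1D per-channel LUT sketch with linear interpolation. The clamping to 0.0–1.0 after pre-gain is an assumption of this example (it is what lets pre-gain pull highlights back below the clip point); it is not a statement about Fusion's internals:

```python
def apply_lut_channel(value, lut, pre_gain=1.0, post_gain=1.0):
    """Apply pre-gain, look up a 1D LUT with linear interpolation,
    then apply post-gain. `lut` lists output values for inputs evenly
    spaced over 0.0-1.0; input is clamped to that range after pre-gain.
    """
    v = min(max(value * pre_gain, 0.0), 1.0)
    pos = v * (len(lut) - 1)
    i = int(pos)
    j = min(i + 1, len(lut) - 1)
    frac = pos - i
    return (lut[i] * (1.0 - frac) + lut[j] * frac) * post_gain

identity = [0.0, 0.5, 1.0]  # 3-point identity curve
print(apply_lut_channel(0.25, identity))               # 0.25
print(apply_lut_channel(2.0, identity, pre_gain=0.5))  # clamps to 1.0
```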
Color Space
This menu is used to change the color space the LUT is applied in. The default is to apply the curves
described in the LUT to the RGB color space, but options for YUV, HLS, HSV, and others are also
available.
Pre-Divide/Post-Multiply
Selecting the Pre-Divide/Post-Multiply checkbox causes the image pixel values to be divided by the
Alpha values before applying the LUT, and then re-multiplied by the Alpha value after the correction.
This helps to prevent the creation of illegally additive images, particularly around the edges of a blue/
green key or when working with 3D-rendered objects.
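The order of operations can be sketched as follows, with a hypothetical per-channel correction function standing in for the LUT (this is an illustration of the pre-divide/post-multiply idea, not a Fusion API):

```python
def correct_with_predivide(rgb, alpha, correct):
    """Divide color by Alpha, apply the correction, then re-multiply
    by Alpha. This keeps premultiplied edge pixels (e.g., antialiased
    edges of a key or of a 3D render) from being over-brightened.
    `correct` is a hypothetical per-channel function standing in for
    the LUT.
    """
    if alpha == 0.0:
        return rgb  # fully transparent: nothing to un-premultiply
    unpremult = [c / alpha for c in rgb]
    corrected = [correct(c) for c in unpremult]
    return tuple(c * alpha for c in corrected)

# A gamma-style lift on a half-transparent edge pixel:
lift = lambda c: c ** 0.5
print(correct_with_predivide((0.25, 0.25, 0.25), 0.5, lift))
# approx (0.354, 0.354, 0.354); applying the same lift directly to the
# premultiplied 0.25 would return 0.5 per channel, over-brightening
# the edge.
```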
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Feeding the original LUT Cube Creator image into the node results in an unaltered, or 1:1, LUT file, and
nothing is displayed in the viewer.
You can, however, modify, grade, and color correct the original cube image with as many nodes as
you like and feed the result into the LUT Cube Analyzer. This creates a LUT that reproduces your
color pipeline.
Inputs
The LUT Cube Analyzer includes a single orange input.
— Input: The orange input is used to take the output of any node modifying an image that
originated with the LUT Cube Creator.
Generating a LUT starts with the LUT Cube Creator and ends with a LUT Cube Analyzer.
Controls Tab
The Controls tab for the LUT Cube Analyzer node is used to select the desired LUT output format,
specify a filename, and write the 3D LUT to disk.
Type
Select the desired output format of the 3D LUT.
Filename
Enter the path where you want the file saved and enter the name of the LUT file. Alternatively, you can
click the Browse button to open a file browser to select the location and filename.
Write File
Press this button to generate the 3D LUT file based on the settings above.
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Feeding the original image into the node would result in an unaltered, or 1:1, output.
Inputs
The LUT Cube Apply has three inputs: a green input where the output of the LUT Cube Creator is
connected, an orange input for the image the LUT is applied to, and a blue effect mask input.
— Input: This orange input accepts a 2D image to which the LUT is applied.
— Reference Image: The green input is used to connect the output of the LUT Cube Creator or a
node that is modifying the image originating in the LUT Cube Creator.
— Effect Mask: The optional effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the LUT Cube Apply to only those pixels within the mask. An effects mask is applied to the tool
after the tool is processed.
The LUT generated by the LUT Cube Creator is applied to an image using the LUT Cube Apply node.
Inspector
There are no controls for the LUT Cube Apply node. The LUT connected to the green foreground input
is applied to the image connected to the orange background input without having to write an actual
3D LUT using the LUT Cube Analyzer.
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Feeding the original LUT Cube Creator image into the LUT Cube Analyzer node results in an unaltered,
or 1:1, LUT file, and nothing is displayed in the viewer.
Inputs
There are no inputs on the LUT Cube Creator. The purpose of the node is to generate an image that
can be used to create a LUT.
Generating a LUT starts with the LUT Cube Creator and ends with a LUT Cube Analyzer.
Inspector
Type
The Type menu is used to create a pattern of color cubes.
Typical Size settings for color cubes are 33 (33 x 33 x 33) or 65 (65 x 65 x 65). These numbers are the
number of samples on each side of the cube. A 33 x 33 x 33 cube has exactly 35,937 color samples.
NOTE: Higher resolutions yield more accurate results but are also more memory and
computationally expensive.
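The sample count grows with the cube of the Size setting, which is where the memory and computation cost comes from. A sketch of enumerating the samples of an identity cube (the sample ordering here is illustrative, not a specific LUT file format):

```python
def identity_cube(size):
    """Generate the (r, g, b) samples of an identity 3D LUT cube.

    `size` is the number of samples per side (e.g., 33 or 65); the
    result holds size**3 triples evenly spaced over 0.0-1.0.
    """
    step = 1.0 / (size - 1)
    return [(r * step, g * step, b * step)
            for b in range(size)
            for g in range(size)
            for r in range(size)]

print(len(identity_cube(33)))  # 35937
print(len(identity_cube(65)))  # 274625
```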
Settings Tab
The Settings tab in the Inspector is also duplicated in other LUT nodes. These common controls are
described in the following “The Common Controls” section.
Inspector
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated
rendering using the graphics card in your computer. Enabled uses the hardware. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available in the Settings tab of every tool in Fusion. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Mask Nodes
This chapter details the Mask nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Bitmap Mask [Bmp] ................................................................ 1181
The Bitmap mask node is not required for effect masks. For effects masks, the Common Settings
tab for the masked node displays controls to select which channel of the mask image is used to
create the mask.
However, Bitmap mask nodes may still be required to connect to other mask inputs on some nodes,
such as Garbage Mattes and Pre-Masks. Also, using a Bitmap mask node between the mask source
and the target node provides additional options that would not be available when connecting directly,
such as combining masks, blurring the mask, or clipping its threshold.
Inputs
The Bitmap mask node includes two inputs in the Node Editor.
— Input: The orange input accepts a 2D image from which the mask will be created.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the image connected to the orange input converts into the
Bitmap mask.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause
the edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
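The note above about Multi-box matching Box at one pass and approaching Gaussian at four passes reflects a general property: repeatedly applying a box filter converges on a Gaussian profile. A 1D sketch of the idea (illustrative only, not Fusion's filter code):

```python
def box_blur_1d(values, radius):
    """One box-blur pass: each sample becomes the average of its
    neighborhood, with the window clamped at the ends of the list."""
    n = len(values)
    out = []
    for i in range(n):
        window = values[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def multi_box_blur_1d(values, radius, passes):
    """Repeated box passes approximate a Gaussian, which is why four
    or more Multi-box passes look Gaussian at lower cost."""
    for _ in range(passes):
        values = box_blur_1d(values, radius)
    return values

# A hard mask edge feathers progressively with each pass:
edge = [0.0] * 5 + [1.0] * 5
soft = multi_box_blur_1d(edge, 1, 4)
```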
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
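Per pixel, most of these modes are simple arithmetic on the two mask values. A sketch of several of them, with values assumed to be 0.0–1.0; the clamping shown for Add and Subtract is an assumption of this example, not a statement about Fusion's internals:

```python
def combine_masks(mode, input_val, new_val):
    """Combine one pixel of the incoming effect-mask value with the
    value of the mask created in the node, per the Paint mode."""
    ops = {
        "Add":      min(input_val + new_val, 1.0),  # clamp assumed
        "Subtract": max(input_val - new_val, 0.0),  # clamp assumed
        "Minimum":  min(input_val, new_val),
        "Maximum":  max(input_val, new_val),
        "Average":  (input_val + new_val) / 2.0,
        "Multiply": input_val * new_val,
        "Copy":     new_val,
        "Ignore":   input_val,
    }
    return ops[mode]

print(combine_masks("Average", 1.0, 0.5))  # 0.75
print(combine_masks("Maximum", 0.2, 0.8))  # 0.8
```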
Fit Input
This menu is used to select how the image source is treated if it does not fit the dimensions of the
generated mask.
In the example below, a 720 x 576 image source (yellow) is used to generate a 1920 x 1080 mask (gray).
— Crop: If the image source is smaller than the generated mask, it will be placed according to
the X/Y controls, masking off only a portion of the mask. If the image source is larger than the
generated mask, it will be placed according to the X/Y controls and cropped off at the borders of
the mask.
— Stretch: The image source will be stretched in X and Y to accommodate the full dimensions of the
generated mask. This might lead to visible distortions of the image source.
— Inside: The image source will be scaled uniformly until one of its dimensions (X or Y) fits the
inside dimensions of the mask. Depending on the relative dimensions of the image source and
mask background, either the image source’s width or height may be cropped to fit the respective
dimensions of the mask.
— Height: The image source will be scaled uniformly until its height (Y) fits the height of the
mask. Depending on the relative dimensions of the image source and mask, the image source’s
X-dimension might not fit the mask’s X-dimension, resulting in either cropping of the image
source in X or the image source not covering the mask’s width entirely.
— Outside: The image source will be scaled uniformly until one of its dimensions (X or Y) fits the
outside dimensions of the mask. Depending on the relative dimensions of the image source
and mask, either the image source’s width or height may be cropped or not fit the respective
dimension of the mask.
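The uniform scale factor behind Inside, Outside, and Height can be derived from the two axis ratios. A geometric sketch of that choice (an illustration of the fitting logic, not Fusion code):

```python
def fit_scale(src_w, src_h, dst_w, dst_h, mode):
    """Uniform scale factor used to fit a source into a destination.

    'Inside' scales until one dimension fits within the destination;
    'Outside' scales until the source covers the destination (the
    other dimension may be cropped); 'Height' matches heights only.
    """
    sx, sy = dst_w / src_w, dst_h / src_h
    if mode == "Inside":
        return min(sx, sy)
    if mode == "Outside":
        return max(sx, sy)
    if mode == "Height":
        return sy
    raise ValueError(mode)

# 720 x 576 source into a 1920 x 1080 mask (the example above):
print(fit_scale(720, 576, 1920, 1080, "Inside"))   # 1.875
print(fit_scale(720, 576, 1920, 1080, "Outside"))  # the larger ratio, ~2.67
```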
Center X and Y
These controls adjust the position of the Bitmap mask.
Channel
The Channel menu determines the channel of the input image used to create the mask. Choices
include the red, green, blue, and alpha channels, the hue, luminance, or saturation values, or the
auxiliary coverage channel of the input image (if one is provided).
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other Mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
A B-Spline mask is identical to a Polygon mask in all respects except one. Where Polygon masks use
Bézier splines, this mask node uses B-Splines. Where Bézier splines employ a central point and two
handles to manage the smoothing of the spline segment, a B-Spline requires only a single point. This
means that a B-Spline shape requires far fewer control points to create a nicely smoothed shape.
When first added to a node, the B-Spline mask consists of only a Center control, which is visible
onscreen. Points are added to the B-Spline by clicking in the viewer. Each new point is connected
to the last one created, but instead of the spline going directly through each control point, B-Spline
control points only influence the spline shape. The control point pulls the spline in its direction to
create a smooth curve.
Like the Polygon mask tool, the B-Spline mask auto-animates. Adding this node to the Node Editor
adds a keyframe to the current frame. Moving to a new frame and changing the shape creates a new
keyframe and interpolates between the two defined shapes.
Inputs
The B-Spline mask node includes a single effect mask input.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the B-Spline appears after drawing it in the viewer.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in
the mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering
the level of the Circle mask lowers the values of all of the pixels in the mask channel, even
though the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause
the edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the
lowest (minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the
highest (maximum) value.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted. When
disabled, the spline is drawn as just an outline whose thickness is determined by the Border Width slider.
Center X and Y
These controls adjust the position of the B-Spline mask.
Size
Use the Size control to adjust the scale of the B-Spline effect mask, without affecting the relative
behavior of the points that compose the mask or setting a keyframe in the mask animation.
X, Y, and Z Rotation
Use these three controls to adjust the rotation angle of the mask along any axis.
Fill Method
The Fill Method menu offers two different techniques for dealing with overlapping regions of a
polyline. If overlapping segments in a mask are causing undesirable holes to appear, try switching the
setting of this control from Alternate to Non Zero Winding.
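The two fill rules answer "is this point inside the shape?" differently for self-overlapping polylines. A sketch of both rules using ray casting (illustrative only; Fusion's rasterizer is not shown):

```python
def crossings(polygon, x, y):
    """Cast a ray from (x, y) toward +x; record +1 for each upward
    edge crossed and -1 for each downward edge."""
    hits = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 <= y) != (y2 <= y):  # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                hits.append(1 if y2 > y1 else -1)
    return hits

def inside_alternate(polygon, x, y):
    """Alternate (even-odd) rule: inside when the crossing count is
    odd. Overlapping regions toggle, which can punch holes."""
    return len(crossings(polygon, x, y)) % 2 == 1

def inside_non_zero_winding(polygon, x, y):
    """Non Zero Winding rule: inside unless the signed crossings
    cancel out. Same-direction overlaps stay filled."""
    return sum(crossings(polygon, x, y)) != 0
```

For a self-intersecting star drawn in one direction, the center region has winding number 2: the Alternate rule reports it as a hole, while Non Zero Winding fills it, which is the fix suggested above.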
Right-clicking on this label will display a contextual menu that offers options for removing or re-adding
animation to the mask, or publishing and connecting the masks.
Adding Points
Adding Points to a B-Spline effect mask is relatively simple. Immediately after adding the node to the
Node Editor, there are no points, but the tool will be in Click Append mode. Click once in the viewer
wherever a point is required for the mask. Continue clicking to draw the shape of the mask.
When the shape is complete, click on the initial point again to close the mask.
When the shape is closed, the mode of the polyline changes to Insert and Modify. This allows you to
add and adjust additional points on the mask by clicking the spline segments. To lock down the mask’s
shape and prevent accidental changes, switch the Polyline mode to Done using the Polyline toolbar or
contextual menu.
B-Spline Toolbar
When a B-Spline mask is selected in the Node Editor, a toolbar appears above the viewer with buttons
for easy access to the modes. Position the pointer over any button in the toolbar to display a tooltip
that describes that button’s function.
You can change the way the toolbar is displayed by right-clicking on the toolbar and selecting from the
options displayed in the toolbar’s contextual menu.
The functions of the buttons in this toolbar are explained in depth in the Polylines section.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Ellipse mask node includes a single effect mask input.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Controls Tab
The Controls tab is used to refine how the ellipse appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in
the mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering
the level of the Circle mask lowers the values of all of the pixels in the mask channel, even
though the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted.
When disabled, the spline is drawn as just an outline whose thickness is determined by the Border
Width slider.
Center X and Y
These controls adjust the position of the Ellipse mask.
Width
This control allows independent control of the ellipse mask’s Width. In addition to the slider in the
mask’s controls, interactively drag the width (left or right edge) of the mask on the viewer using the
pointer. Any changes will be reflected in this control.
To change the mask’s size without affecting the aspect ratio, drag the onscreen control between the
edges (diagonal). This will modify both the width and height proportionately.
Angle
Change the rotational angle of the mask by moving the Angle control left or right. Values can be
entered into the number fields provided. Alternately, use the onscreen controls by dragging the little
circle at the end of the dashed angle line to interactively adjust the rotation of the ellipse.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Each stroke can have a duration that lasts for the entire project, a single frame, or an arbitrary number
of fields. The strokes can have independent durations in the Keyframes Editor for easy manipulation
of time. Alternatively, Multistrokes is a faster but non-editable way to do many mask cleanup
paint tasks.
Inputs
The Paint mask node includes a single effect mask input.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
As the Controls tab in the Mask Paint node is fundamentally identical to the Paint node, for more
detail, see Chapter 51, “Paint Node,” in the Fusion Reference Manual. The only difference between
the two nodes is that, as Mask Paint operates on single-channel mask images, there is no Channel
Selector control, and all color controls have only a single Alpha value. The Mask tab, however, includes
several parameters that are different from the Paint tool, so they are covered below.
Mask Tab
The Mask tab is used to refine the basic mask parameters that do not fall into the category of
“painting.” These include how multiple masks are combined, overall softness control, and level control.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in
the mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering
the level of the Circle mask lowers the values of all of the pixels in the mask channel, even
though the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Like the B-Spline mask tool, the Polygon mask auto-animates. Adding this node to the Node Editor
adds a keyframe to the current frame. Moving to a new frame and changing the shape creates a new
keyframe and interpolates between the two defined shapes.
Inputs
The Polygon mask node includes a single effect mask input.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Controls Tab
The Controls tab is used to refine how the polyline appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in
the mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering
the level of the Circle mask lowers the values of all of the pixels in the mask channel, even
though the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause
the edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the Solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
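The Paint modes above can be summarized as per-pixel operations on the input-mask value `a` and the new-mask value `b` (both 0.0 to 1.0). This is an illustrative sketch of the arithmetic as described, not Fusion's internal code; in particular, Merge is modeled here as simply keeping the stronger coverage:

```python
def combine(a, b, mode):
    """Combine an input-mask value `a` with a new-mask value `b` (0.0-1.0)."""
    ops = {
        "Merge":    lambda: max(a, b),                # assumption: stronger coverage wins
        "Add":      lambda: min(a + b, 1.0),
        "Subtract": lambda: max(a - b, 0.0),
        "Minimum":  lambda: min(a, b),
        "Maximum":  lambda: max(a, b),
        "Average":  lambda: (a + b) / 2.0,
        "Multiply": lambda: a * b,
        "Replace":  lambda: b if b > 0.0 else a,      # zero areas leave the input untouched
        "Invert":   lambda: a + b * (1.0 - 2.0 * a),  # b = 1 fully inverts; gray partially
        "Copy":     lambda: b,
        "Ignore":   lambda: a,
    }
    return ops[mode]()

print(combine(0.25, 1.0, "Invert"))  # the covered input value flips toward 0.75
```

The Invert line shows why gray new-mask values only partially invert: it is a linear blend between `a` and `1 - a`, weighted by `b`.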
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted.
When disabled, the spline is drawn as just an outline whose thickness is determined by the Border
Width slider.
Size
Use the Size control to adjust the scale of the polygon spline effect mask, without affecting the relative
behavior of the points that compose the mask or setting a keyframe in the mask animation.
X, Y, and Z Rotation
Use these three controls to adjust the rotation angle of the mask along any axis.
Fill Method
The Fill Method menu offers two different techniques for dealing with overlapping regions of a
polyline. If overlapping segments in a mask are causing undesirable holes to appear, try switching the
setting of this control from Alternate to Non Zero Winding.
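The difference between the two rules can be sketched with a point-in-polygon test. A horizontal ray is cast from the point being tested: Alternate (even-odd) counts crossings, so a region the polyline covers twice reads as a hole, while Non Zero Winding sums crossing directions, so a same-direction overlap stays filled. A minimal sketch, not Fusion's renderer:

```python
def crossings(polygon, px, py):
    """Signed and unsigned crossings of a rightward ray from (px, py)."""
    signed = unsigned = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 <= py) != (y2 <= py):                       # edge spans the ray's height
            x_at = x1 + (py - y1) * (x2 - x1) / (y2 - y1)  # x where the edge crosses
            if x_at > px:
                unsigned += 1
                signed += 1 if y2 > y1 else -1
    return signed, unsigned

def inside(polygon, px, py, rule="Alternate"):
    signed, unsigned = crossings(polygon, px, py)
    return unsigned % 2 == 1 if rule == "Alternate" else signed != 0

# Two same-direction squares traced as one polyline; they overlap around (1.5, 1.5).
doubled = [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0),
           (1, 1), (3, 1), (3, 3), (1, 3), (1, 1)]
print(inside(doubled, 1.5, 1.5, "Alternate"))  # False: the overlap becomes a hole
print(inside(doubled, 1.5, 1.5, "NonZero"))    # True: winding count is 2, still filled
```

This is the situation the paragraph describes: switching the fill rule removes the unwanted hole without editing the polyline.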
Right-clicking on this label displays a contextual menu that offers options for removing or re-adding
animation to the mask, or publishing and connecting the masks together.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Adding Points
Adding Points to a polygonal effect mask is relatively simple. Immediately after adding the node to the
Node Editor, there are no points, but the tool will be in Click Append mode. Click once in the viewer
wherever a point is required for the mask. Continue clicking to draw the shape of the mask. When the
shape is complete, click on the initial point again to close the mask.
When the shape is closed, the mode of the polyline will change to Insert and Modify. This allows for
the adjusting and adding of additional points to the mask by clicking on segments of the polyline. To
lock down the mask’s shape and prevent accidental changes, switch the Polyline mode to Done using
the Polyline toolbar or contextual menu.
When a Polygon (or B-Spline) mask is added to a node, a toolbar appears above the viewer, offering
easy access to modes. Hold the pointer over any button in the toolbar to display a tooltip that
describes that button’s function.
— Click: Click is the default option when creating a polyline (or B-Spline) mask. It is a Bézier style
drawing tool. Clicking sets a control point and appends the next control point when you click again
in a different location.
Change the way the toolbar is displayed by right-clicking on the toolbar and selecting from the options
displayed in the toolbar’s contextual menu. The functions of the buttons in this toolbar are explained
in depth in the Polylines chapter.
Inputs
The Ranges mask node includes two inputs in the Node Editor.
— Input: The orange input accepts a 2D image from which the mask will be created.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the image connected to the orange input converts into the
ranges mask.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in
the mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering
the level of the Circle mask lowers the values of all of the pixels in the mask channel, even
though the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
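The Multi-box behavior follows from the central limit theorem: repeated box passes converge on a Gaussian-shaped kernel. A 1-D sketch with clamped edges, illustrative rather than Fusion's filter code:

```python
def box_blur(values, radius):
    """One box pass: each output is the mean of a (2*radius + 1) window."""
    n = len(values)
    return [
        sum(values[max(0, min(n - 1, i + k))] for k in range(-radius, radius + 1))
        / (2 * radius + 1)
        for i in range(n)
    ]

def multi_box(values, radius, passes):
    """Repeated box passes; by about 4 passes the response is close to Gaussian."""
    for _ in range(passes):
        values = box_blur(values, radius)
    return values

impulse = [0.0] * 4 + [1.0] + [0.0] * 4
print(multi_box(impulse, 1, 4))  # a smooth, bell-shaped spread around the center
```

A single pass gives the flat-topped Box response and two passes the triangular Bartlett response, which is why the manual notes those settings are identical.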
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause
the edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, the checkbox affects all
pixels, regardless of whether the new mask covers them or not.
Center X and Y
These controls adjust the position of the ranges mask.
Fit Input
This menu is used to select how the image source is treated if it does not fit the dimensions of the
generated mask.
For example, below, a 720 x 576 image source (yellow) is used to generate a 1920 x 1080 mask (gray).
— Crop: If the image source is smaller than the generated mask, it is placed according to the X/Y
controls, masking off only a portion of the mask. If the image source is larger than the generated
mask it is placed according to the X/Y controls and cropped off at the borders of the mask.
— Stretch: The image source is stretched in X and Y to accommodate the full dimensions of the
generated mask. This might lead to visible distortions of the image source.
— Inside: The image source is scaled uniformly until one of its dimensions (X or Y) fits the inside
dimensions of the mask. Depending on the relative dimensions of the image source and mask
background, either the image source’s width or height may be cropped to fit the respective
dimension of the mask.
— Width: The image source is scaled uniformly until its width (X) fits the width of the mask.
Depending on the relative dimensions of the image source and mask, the image source’s Y
dimension might not fit the mask’s Y dimension, resulting in either cropping of the image source
in Y or the image source not covering the mask’s height entirely.
— Height: The image source is scaled uniformly until its height (Y) fits the height of the mask.
Depending on the relative dimensions of the image source and mask, the image source’s X
dimension might not fit the mask’s X dimension, resulting in either cropping of the image source
in X or the image source not covering the mask’s width entirely.
— Outside: The image source is scaled uniformly until one of its dimensions (X or Y) fits the outside
dimensions of the mask. Depending on the relative dimensions of the image source and mask,
either the image source’s width or height may be cropped or not fit the respective dimension of
the mask.
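The scale factors implied by each mode can be sketched as follows, using the manual's 720 x 576 source and 1920 x 1080 mask example. This is a minimal illustration of the geometry, not Fusion's code, and the Width mode is included here by analogy with Height:

```python
def fit_scale(src_w, src_h, mask_w, mask_h, mode):
    """Return the (x, y) scale applied to the image source for a Fit Input mode."""
    if mode == "Crop":
        return 1.0, 1.0                                  # no scaling; position and crop
    if mode == "Stretch":
        return mask_w / src_w, mask_h / src_h            # non-uniform; may distort
    scale = {
        "Inside":  min(mask_w / src_w, mask_h / src_h),  # fits entirely within the mask
        "Outside": max(mask_w / src_w, mask_h / src_h),  # covers the mask completely
        "Width":   mask_w / src_w,
        "Height":  mask_h / src_h,
    }[mode]
    return scale, scale                                  # uniform scaling

sx, sy = fit_scale(720, 576, 1920, 1080, "Inside")
print(sx, sy)  # limited by height: 1080 / 576 = 1.875 on both axes
```

For this source, Inside is limited by the height ratio, so the scaled image does not cover the mask's full width; Outside uses the width ratio instead and crops vertically.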
Channel
The Channel menu determines the Channel of the input image used to create the mask. Choices
include the red, green, blue, and alpha channels; the hue, luminance, or saturation values; or the
auxiliary coverage channel of the input image (if one is provided).
Shadows/Midtones/Highlights
These buttons are used to select which range is output by the node as a mask. White pixels represent
pixels that are considered to be part of the range, and black pixels are not included in the range. For
example, choosing Shadows would show pixels considered to be shadows as white, and pixels that are
not shadows as black. Mid gray pixels are only partly in the range and do not receive the full effect of
any color adjustments to that range.
The midtones range has no specific control, since its range is understood to be the space between the
shadow and highlight ranges. In other words, after low and high masks have been applied, midtones
are everything else.
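That "everything else" relationship can be sketched as complementary weights: once the shadow and highlight selections are made, midtones take whatever remains. The thresholds and linear falloff below are purely illustrative stand-ins for Fusion's spline-based ranges:

```python
def range_weights(luma, shadow_end=0.33, highlight_start=0.66):
    """Split a luminance value into shadow/midtone/highlight weights that sum to 1."""
    shadow = max(0.0, min(1.0, (shadow_end - luma) / shadow_end))
    highlight = max(0.0, min(1.0, (luma - highlight_start) / (1.0 - highlight_start)))
    midtone = 1.0 - shadow - highlight        # midtones are everything else
    return shadow, midtone, highlight

print(range_weights(0.1))   # mostly shadow, the remainder midtone
print(range_weights(0.5))   # entirely midtone
```

A pixel with a fractional shadow weight is the "mid gray" case the paragraph mentions: it receives only part of any adjustment applied to that range.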
The X and Y text controls below the Mini Spline Editor can be used to enter precise positions for the
selected Bézier point or handle.
Presets
This sets the splines to two commonly-used configurations. The Simple button gives a straightforward
linear-weighted selection, while the Smooth button uses a more natural falloff.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Rectangle mask node includes a single effect mask input.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the rectangle appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause
the edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the Solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, this checkbox affects all
pixels, regardless of whether the new mask covers them.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted.
When disabled, the spline is drawn as just an outline whose thickness is determined by the Border
Width slider.
Center X and Y
These controls adjust the position of the Rectangle mask.
Corner Radius
Corner Radius allows the corners of the Rectangle mask to be rounded. A value of 0.0 produces no
rounding at all, so the rectangle has sharp corners. A value of 1.0 applies the maximum amount
of rounding to the corners.
Angle
Change the rotation angle of an effect mask by moving the Angle control left or right. Values can be
entered in the provided input boxes. Alternatively, use the onscreen controls by dragging the little
circle at the end of the dashed angle line to interactively adjust the rotation of the rectangle.
Common Controls
Image and Settings Tabs
The Image and Settings tabs in the Inspector are also duplicated in other mask nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The Triangle mask node includes a single effect mask input.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Controls Tab
The Controls tab is used to refine how the triangle appears after drawing it in the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in
the mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering
the level of the Circle mask lowers the values of all the pixels in the mask channel, even
though the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Border Width
The Border Width control adjusts the thickness of the mask’s edge. When the Solid checkbox is
toggled on, the border thickens or narrows the mask. When the mask is not solid, the mask shape
draws as an outline, and the width uses the Border Width setting.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted: white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, this checkbox affects all
pixels, regardless of whether the new mask covers them.
Solid
When the Solid checkbox is enabled, the mask is filled to be transparent (white) unless inverted.
When disabled, the spline is drawn as just an outline whose thickness is determined by the Border
Width slider.
When adding a Wand mask to a node, a crosshair appears in the viewers. This crosshair should be
positioned in the image to select the color used to create the Wand mask. The mask is created by
examining the pixel color beneath the selection point and adding that color to the mask. The mask
then expands to examine the pixels surrounding the selection point. Surrounding pixels are added to
the mask if they are the same color. The mask stops expanding when no connecting pixels fall within
the color range of the mask.
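This region-growing behavior is essentially a flood fill from the selection point. A minimal single-channel sketch of the idea (illustrative only; Fusion compares full colors within the Range setting):

```python
from collections import deque

def wand_mask(pixels, start, tolerance=0):
    """Grow a mask from `start`, adding connected pixels whose value is within
    `tolerance` of the seed color (single-channel values for brevity)."""
    h, w = len(pixels), len(pixels[0])
    sy, sx = start
    seed = pixels[sy][sx]
    mask = [[0] * w for _ in range(h)]
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and not mask[y][x] and abs(pixels[y][x] - seed) <= tolerance:
            mask[y][x] = 1
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

image = [
    [9, 9, 1, 1],
    [9, 9, 1, 9],
    [1, 1, 1, 9],
]
print(wand_mask(image, (0, 0)))  # only the connected 9-region at the top-left is selected
```

Note that the matching 9s on the right edge are excluded: they fall within the color range but are not connected to the selection point, which is why the expansion stops.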
Inputs
The Wand mask node includes two inputs in the Node Editor.
— Input: The orange input accepts a 2D image from which the mask is created.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input combines the masks.
How masks are combined is handled in the Paint mode menu in the Inspector.
Inspector
Controls Tab
The Controls tab is used to refine how the mask appears after the Wand makes a selection in
the viewer.
Level
The Level control sets the transparency level of the pixels in the mask channel. When the value is 1.0,
the mask is completely opaque (unless it has a soft edge). Lower values cause the mask to be partially
transparent. The result is identical to lowering the Blend control of an effect.
NOTE: Lowering the level of a mask lowers the values of all pixels covered by the mask in
the mask channel. For example, if a Circle mask is placed over a Rectangle mask, lowering
the level of the Circle mask lowers the values of all the pixels in the mask channel, even
though the Rectangle mask beneath it is still opaque.
Filter
This control selects the filtering algorithm used when applying Soft Edge to the mask.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Soft Edge
Use the Soft Edge slider to blur (feather) the mask, using the selected filter. Higher values cause
the edge to fade off well beyond the boundaries of the mask. A value of 0.0 creates a crisp, well-
defined edge.
Paint Mode
Connecting a mask to the effect mask input displays the Paint mode menu. The Paint mode is used
to determine how the incoming mask for the effect mask input and the mask created in the node
are combined.
— Merge: Merge is the default for all masks. The new mask is merged with the input mask.
— Add: The mask’s values add to the input mask’s values.
— Subtract: In the intersecting areas, the new mask values subtract from the input mask’s values.
— Minimum: Comparing the input mask’s values and the new mask, this displays the lowest
(minimum) value.
— Maximum: Comparing the input mask’s values and the new mask, this displays the highest
(maximum) value.
— Average: This calculates the average (half the sum) of the new mask and the input mask.
— Multiply: This multiplies the values of the input mask by the new mask’s values.
— Replace: The new mask completely replaces the input mask wherever they intersect. Areas that
are zero (completely black) in the new mask do not affect the input mask.
— Invert: Areas of the input mask that are covered by the new mask are inverted; white becomes
black and vice versa. Gray areas in the new mask are partially inverted.
— Copy: This mode completely discards the input mask and uses the new mask for all values.
— Ignore: This mode completely discards the new mask and uses the input mask for all values.
Invert
Selecting this checkbox inverts the entire mask. Unlike the Invert Paint mode, this checkbox affects all
pixels, regardless of whether the new mask covers them.
Color Space
The Color Space button group determines the color space used when selecting the source color for
the mask. The Wand mask can operate in RGB, YUV, HLS, or LAB color spaces.
Channel
The Channel button group is used to select whether the color that is masked comes from all three
color channels of the image, the alpha channel, or an individual channel only.
The exact labels of the buttons depend on the color space selected for the Wand mask operation.
If the color space is RGB, the options are R, G, or B. If YUV is the color space, the options are Y, U, or V.
Range
The Range slider controls the range of colors around the source color that are included in the mask.
If the value is left at 0.0, only pixels of the same color as the source are considered part of the mask.
The higher the value, the more similar colors in the source are included wholly in the mask.
Inspector
Image Tab
The controls in this tab set the resolution and clipping method used by the generated mask.
Custom
By default, the width, height, and pixel aspect of the mask created are locked to values defined
in the composition’s Frame Format preferences. If the Frame Format preferences change, the
resolution of the mask produced changes to match. Selecting Custom from the Output Size menu
overrides this link, which can be useful for building a composition at a different resolution than the
eventual target resolution for the final render, and reveals the following controls.
— Width and Height: This pair of controls is used to set the Width and Height dimensions of the
mask to be created.
— Pixel Aspect: This control is used to specify the Pixel Aspect ratio of the created mask. An aspect
ratio of 1:1 would generate a square pixel with the same dimensions on either side (like a
computer monitor), and an aspect of 0.91 would create a slightly rectangular pixel (like an
NTSC monitor).
— Depth: The Depth drop-down menu is used to set the pixel color depth of the image created by
the mask. 32-bit pixels require four times the memory of 8-bit pixels but have far greater accuracy.
Float pixels allow high dynamic range values outside the normal 0..1 range, for representing
colors that are brighter than white or darker than black.
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing
the file formats defined in the preferences Frame Format tab. Selecting any of the listed
options sets the width, height, and pixel aspect to the values for that format.
Clipping Mode
This option determines how the domain of definition rendering handles edges. The Clipping mode
is most important when blur or softness is applied, which may require samples from portions of the
image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If
the upstream DoD is smaller than the frame, the remaining area in the frame is treated as
black/transparent.
— None: Setting this option to None does not perform any source image clipping. Any data required
to process the node’s effect that would usually be outside the upstream DoD is treated as
black/transparent.
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Mask category. The Settings
controls are even found on third-party plugin tools. The controls are consistent and work the same
way for each tool, although some tools do include one or two individual options, which are also
covered here.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
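The relationship between Quality and Shutter Angle can be sketched as choosing sample times across the shutter window, where 360 degrees equals one full frame of exposure. The exact sample placement and Center Bias scaling below are assumptions for illustration, not Fusion's internals:

```python
def blur_sample_times(quality, shutter_angle=180.0, center_bias=0.0):
    """Times (in frames, 0 = the current frame) at which the image is sampled.
    `quality` samples are taken on each side of the actual motion."""
    window = shutter_angle / 360.0      # fraction of a frame the shutter is open
    n = 2 * quality                     # samples on both sides of the motion
    offset = center_bias * window / 2.0 # bias shifts the window, creating trails
    return [(-0.5 + (i + 0.5) / n) * window + offset for i in range(n)]

print(blur_sample_times(2, 360.0))  # four samples spread across one full frame
```

Raising Quality adds samples inside the same window (smoother blur, longer renders), raising Shutter Angle widens the window (longer streaks), and a nonzero Center Bias shifts all samples to one side, which is what produces a motion trail.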
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Matte Nodes
This chapter details the Matte nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Alpha Divide [ADv] .................................................................................. 1223
Inputs
The Alpha Divide node includes two inputs in the Node Editor.
— Input: The orange input accepts a 2D image with premultiplied alpha.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the alpha divide occurs. An effects mask is applied to the tool after the tool is processed.
An Alpha Divide node is inserted before color correcting an image with premultiplied alpha.
Inspector
This node has no controls.
Inputs
The Alpha Multiply node includes two inputs in the Node Editor.
— Input: The orange input accepts a 2D image with a “straight” or non-premultiplied alpha.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the alpha multiply occurs. An effects mask is applied to the tool after the tool is processed.
An Alpha Multiply node is inserted after color correcting an image with premultiplied alpha.
Inspector
This node has no controls.
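The Alpha Divide/Alpha Multiply pair can be sketched per pixel. Color correcting on straight (divided) values avoids artifacts at soft alpha edges, after which the result is re-premultiplied for compositing. A minimal illustration, not Fusion's implementation:

```python
def alpha_divide(r, g, b, a):
    """Premultiplied -> straight color, guarding against zero alpha."""
    if a == 0.0:
        return r, g, b, a
    return r / a, g / a, b / a, a

def alpha_multiply(r, g, b, a):
    """Straight -> premultiplied color, ready for compositing."""
    return r * a, g * a, b * a, a

# Round trip around a color correction (a simple 2x gain here):
r, g, b, a = alpha_divide(0.25, 0.1, 0.05, 0.5)      # straight color values
r, g, b, a = alpha_multiply(r * 2, g * 2, b * 2, a)  # corrected, re-premultiplied
print(r, g, b, a)
```

Without the round trip, the gain would also scale the alpha-weighted edge pixels twice, producing the bright or dark fringing these two nodes exist to prevent.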
NOTE: When working with blue- or green-screen shots, it is best to use the Delta Keyer or
Primatte node, rather than the more general purpose Chroma Keyer node.
Inputs
The Chroma Keyer node includes four inputs in the Node Editor.
— Input: The orange input accepts a 2D image that contains the color you want to be
keyed for transparency.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be made transparent. The garbage matte is applied directly
to the alpha channel of the image.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be fully opaque.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the keying occurs. An effects mask is applied to the tool after the tool is processed.
Inspector
Key Type
The Key Type menu determines the selection method used for the matte creation.
— Chroma: The Chroma method creates a matte based on the RGB values of the
selected color range.
— Color: The Color method creates a matte based on the hue of the selected color range.
Color Range
Colors are made transparent by selecting the Chroma Keyer node in the node tree, and then dragging
a selection around the colors in the viewer. The range controls update automatically to represent the
current color selection. You can tweak the range sliders slightly, although most often selecting colors
in the displays is all that is required.
Soft Range
This control softens the selected color range, adding additional colors into the matte.
Image Tab
The Image tab primarily handles removing spill color on the foreground subject. Color spill occurs
when light containing the color you are removing is reflected onto the foreground subject.
Spill Color
This menu selects the color used as the base for all spill suppression techniques.
Spill Suppression
This slider sets the amount of spill suppression applied to the foreground subject.
Spill Method
This menu selects the strength of the algorithm used to apply spill suppression to the image.
Fringe Gamma
This control is used to adjust the brightness of the fringe or halo that surrounds the keyed image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape forces the fringe toward the external edge of the image or toward the inner edge of the
fringe. Its effect is most noticeable while the Fringe Size slider’s value is large.
Matte Tab
The Matte tab refines the softness, density, and overall fit of the resulting matte.
Filter
This control selects the filtering algorithm used when applying blur to the matte.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
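The reason a multi-box filter approaches Gaussian quality is the central limit theorem: convolving a box kernel with itself repeatedly converges toward a bell curve. As a rough illustration only (not Fusion's actual filter code), a 1D sketch in Python:

```python
import numpy as np

def box_blur_1d(signal, radius):
    """One pass of a box (mean) filter with the given radius."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

def multi_box_blur_1d(signal, radius, passes):
    """Repeated box blurs; by the central limit theorem the effective
    kernel approaches a Gaussian as the number of passes grows."""
    out = signal.astype(np.float64)
    for _ in range(passes):
        out = box_blur_1d(out, radius)
    return out

# Blurring an impulse reveals the effective kernel shape.
impulse = np.zeros(31)
impulse[15] = 1.0
one_pass = multi_box_blur_1d(impulse, radius=3, passes=1)   # flat-topped box
four_pass = multi_box_blur_1d(impulse, radius=3, passes=4)  # bell-shaped, near-Gaussian
```

At one pass the result is the flat box response; by four passes the impulse response is visibly bell-shaped, which is why four or more passes are usually indistinguishable from a true Gaussian.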
Blur
Matte Blur blurs the edge of the matte based on the Filter menu setting. A value of zero results in a
sharp, cutout-like hard edge. The higher the value, the more blur applied to the matte.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering.
This is profoundly important when blurring the matte, which may require samples from portions of
the image outside the current domain.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
This control is usually used in conjunction with the Matte Blur to take the hard edge of a matte and
reduce fringing. Since this control affects only semitransparent areas, it will have no effect on a
matte’s hard edge.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to become more opaque, and lower values cause the gray areas to become more
transparent. Completely black or white regions of the matte remain unaffected.
Since this control affects only semitransparent areas, it will have no effect on a matte’s hard edge.
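Why the hard edge is untouched follows from the math of a power curve: 0.0 and 1.0 map to themselves under any exponent. A minimal sketch (an assumed transfer function, not Fusion's exact code):

```python
import numpy as np

def matte_gamma(matte, gamma):
    """Hypothetical matte-gamma transfer: a plain power curve.
    0.0 and 1.0 map to themselves, so fully transparent and fully
    opaque pixels are untouched; only the grays shift."""
    m = np.clip(matte, 0.0, 1.0)
    return m ** (1.0 / gamma)

matte = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
more_opaque = matte_gamma(matte, 2.0)       # grays lifted toward 1.0
more_transparent = matte_gamma(matte, 0.5)  # grays pushed toward 0.0
```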
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte. All values within
the range maintain their relative transparency values.
This control is often used to reject salt and pepper noise in the matte.
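One plausible reading of this behavior (a sketch, not Fusion's actual implementation) is a linear remap between the two handles, clipped at both ends, which is what rejects isolated near-black and near-white noise pixels:

```python
import numpy as np

def matte_threshold(matte, low, high):
    """Assumed threshold remap: values below `low` become 0, values
    above `high` become 1, and values in between are rescaled
    linearly so they keep their relative transparency."""
    out = (matte - low) / (high - low)
    return np.clip(out, 0.0, 1.0)

noisy = np.array([0.02, 0.10, 0.50, 0.95, 0.99])
cleaned = matte_threshold(noisy, low=0.05, high=0.97)
```

The 0.02 speck is forced to transparent and the 0.99 speck to opaque, while mid-gray values survive with their ordering intact.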
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel created by the keyer is inverted, causing all
transparent areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid Mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold
out keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha
channel it creates for the image. This option is usually enabled and is on by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes
of merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes,” in the Fusion
Reference Manual.
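The Additive/Subtractive distinction reduces to where the alpha multiply happens. As an illustrative sketch (simplified over-compositing math, not the Merge node's actual code):

```python
import numpy as np

def merge(fg_rgb, bg_rgb, alpha, premultiplied):
    """An Additive merge assumes the foreground color is already
    multiplied by its alpha: fg + bg * (1 - a). A Subtractive merge
    multiplies the straight foreground during the merge:
    fg * a + bg * (1 - a)."""
    if premultiplied:
        return fg_rgb + bg_rgb * (1.0 - alpha)       # Additive
    return fg_rgb * alpha + bg_rgb * (1.0 - alpha)   # Subtractive

fg = np.array([0.8, 0.2, 0.1])
bg = np.array([0.0, 0.0, 1.0])
a = 0.5
# Post-multiplying first and merging additively...
additive = merge(fg * a, bg, a, premultiplied=True)
# ...gives the same pixel as merging the straight image subtractively.
subtractive = merge(fg, bg, a, premultiplied=False)
```

This is why disabling Post-Multiply Image means switching the downstream Merge to Subtractive: otherwise the alpha multiply is either applied twice or not at all.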
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Clean Plate
Inputs
The Clean Plate node includes three inputs in the Node Editor.
— Input: The orange input accepts a 2D image that contains the green or blue screen.
— Garbage Matte: The white garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be excluded from the clean plate. For a clean plate, garbage
mattes should contain areas that are not part of the blue or green screen.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the clean plate is generated. An effects mask is applied to the tool after the tool is processed.
Plate Tab
The Plate tab contains the primary tools for creating a clean plate. Using this tab, you drag over the
areas in the viewer, and then use the Erode and Grow Edges sliders to create the clean plate.
Method
The Method menu selects the type of color selection you use when sampling colors in the viewer.
— Color: Color uses a difference method to separate the background color. This works well on
screen colors that are even.
— Ranges: Ranges uses a chroma range method to separate the background color. This is a better
option for shadowed screen or screens that have different colors.
Matte Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold becomes black or transparent in the matte.
Any value above the upper threshold becomes white or opaque in the matte. All values within the
range maintain their relative transparency values. This control is often used to reject salt and pepper
noise in the matte.
Erode
The Erode slider decreases the size of the screen area. It is used to eat away at small non-screen color
pixels that may interfere with creating a smooth green- or blue-screen clean plate.
Crop
Crop trims in from the edges of the image.
Fill
The Fill checkbox fills in remaining holes with color from the surrounding screen color.
Time Mode
— Sequence: Generates a new clean plate every frame.
— Hold Frame: Holds the clean plate at a single frame.
Mask Tab
The Mask tab is used to invert the mask connected to the garbage mask input on the node.
The garbage mask can be applied to clear areas before growing edges or filling remaining holes.
Invert
Invert uses the transparent parts of the mask to clear the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Delta Keyer
Inputs
The Delta Keyer node includes five inputs in the Node Editor.
— Input: The orange input accepts a 2D image that contains the color you want to be keyed for
transparency.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be made transparent. The garbage matte is applied directly
to the alpha channel of the image.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be fully opaque.
— Clean Plate: Accepts the resulting image from the Clean Plate node.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the keying occurs. An effects mask is applied to the tool after the tool is processed.
Inspector
Key Tab
The Key tab is where most keying begins. It is used to select the screen color.
Background Color
This is the color of the blue or green screen, sometimes called the screen color. To create the key with
the Delta Keyer, use the background color Eyedropper to select the screen color from the image.
Pre-Blur
Applies a blur before generating the alpha. This can help with certain types of noise, edge
enhancements, and artifacts in the source image.
Gain
Gain increases the influence of the screen color, causing those areas to become more transparent.
Balance
A color difference keyer, like the Delta Keyer, compares the differences between the dominant
channel determined by the selected background color and the other two channels. Adjusting balance
determines the proportions of the other two channels. A value of 0 uses the minimum of the other
two channels, where a value of 1 uses the maximum. A value of 0.5 uses half of each.
Once you have a more even screen selection, you can move to the Matte tab.
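As a rough illustration of what a color-difference comparison can look like (a generic blue-screen sketch with assumed math, not the Delta Keyer's actual algorithm):

```python
import numpy as np

def color_difference_alpha(r, g, b, balance=0.5, gain=1.0):
    """Generic blue-screen color-difference key. The dominant
    (screen) channel is compared against a blend of the other two:
    balance 0 uses their minimum, 1 their maximum, 0.5 half of each.
    A larger difference means more transparency."""
    other = (1.0 - balance) * np.minimum(r, g) + balance * np.maximum(r, g)
    key = np.clip(gain * (b - other), 0.0, 1.0)
    return 1.0 - key  # alpha: 0 = fully keyed (transparent)

# A saturated blue-screen pixel keys out almost completely...
screen_alpha = color_difference_alpha(0.1, 0.1, 0.9)
# ...while a neutral foreground pixel stays fully opaque.
subject_alpha = color_difference_alpha(0.5, 0.5, 0.5)
```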
Soft Range
The Soft Range extends the range of selected color and rolloff of the screen color.
Erode
Erode contracts the edge of the pre matte, so the edge detail does not clip.
Blur
This softens the edges of the pre matte.
Matte Tab
The Matte tab refines the alpha of the key, combined with any solid and garbage masks connected to
the node. When using the matte tab, set the viewer to display the alpha channel of the Delta Keyer’s
final output.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte. All values within
the range maintain their relative transparency values.
Erode/Dilate
Expands or contracts the matte.
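Expanding and contracting a matte is conventionally done with grayscale morphology: a sliding-window maximum grows the white areas, and a sliding-window minimum shrinks them. A sketch using SciPy (illustrative only, not Fusion's implementation):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def erode_dilate(matte, amount, size=3):
    """Positive `amount` expands the matte (sliding-window maximum);
    negative `amount` contracts it (sliding-window minimum)."""
    if amount > 0:
        return grey_dilation(matte, size=(size, size))
    if amount < 0:
        return grey_erosion(matte, size=(size, size))
    return matte

matte = np.zeros((7, 7))
matte[2:5, 2:5] = 1.0                 # a 3x3 solid block
expanded = erode_dilate(matte, +1)    # grows to a 5x5 block
contracted = erode_dilate(matte, -1)  # shrinks to a single pixel
```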
Blur
Softens the matte.
Clean Foreground
Fills slightly transparent (light gray) areas of the matte.
Clean Background
Clips the bottom dark range of the matte.
Replace Mode
Determines how matte adjustments restore color to the image.
— None: No color replacement. Matte processing does not affect the color.
— Source: The color from the original image.
— Hard Color: A solid color.
— Soft Color: A solid color weighted by how much background color was originally removed.
Replace Color
The color used with the Hard Color and Soft Color replace modes.
Fringe Tab
The Fringe tab handles the majority of spill suppression in the Delta Keyer. Spill suppression is a form
of color correction that attempts to remove the screen color from the fringe of the matte.
Spill is the transmission of the screen color through the semitransparent areas of the alpha channel.
In the case of blue- or green-screen keying, this usually causes the color of the background to become
apparent in the edges of the foreground subject.
Spill Suppression
When this slider is set to 0, no spill suppression is applied to the image. Increasing the slider increases
the strength of the spill method.
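One common spill-suppression approach (shown here as a generic sketch, not the Delta Keyer's actual method) is to clamp the screen channel wherever it exceeds the other two:

```python
import numpy as np

def suppress_green_spill(rgb, amount):
    """Generic green-spill suppression: wherever green exceeds both
    other channels, pull it down toward their maximum. `amount` 0
    leaves the image untouched; 1 removes the excess green fully."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    limit = np.maximum(r, b)
    suppressed = np.where(g > limit, g - amount * (g - limit), g)
    out = rgb.copy()
    out[..., 1] = suppressed
    return out

pixel = np.array([[0.4, 0.8, 0.3]])  # green spill on a skin tone
no_change = suppress_green_spill(pixel, 0.0)
full = suppress_green_spill(pixel, 1.0)
```

At full strength the spilled green is reduced to the red level, neutralizing the green cast while leaving the red and blue channels alone.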
Fringe Gamma
This control can be used to adjust the brightness of the fringe or halo that surrounds the keyed image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape presses the fringe toward the external edge of the image or pulls it toward the inner
edge of the fringe. Its effect is most noticeable while the Fringe Size value is large.
This is useful for correcting semitransparent pixels that still contain color from the original
background to match the new background.
Tuning Tab
The Tuning tab is an advanced tab that allows you to determine the size of the shadow, midtone,
and highlight ranges. By modifying the ranges, you can select the strength of the matte and spill
suppression based on tonal values.
Simple/Smooth
The Simple button sets the range to be linear. The Smooth button sets a smooth tonal gradient for
the ranges.
— Shadows: Adjusts the strength of the key in darker areas of the background.
— Midtones: Adjusts the strength of the key in midtone areas of the background.
— Highlights: Adjusts the strength of the key in brighter areas of the background.
Mask Tab
The Mask tab determines how the solid and garbage mattes are applied to the key.
Solid Mask
— Ignore: Does not combine the alpha from the source image.
— Add: Solid areas of the source image alpha are made solid in the solid mask.
— Subtract: Transparent areas of the source image alpha are made transparent in the solid mask.
— None: No color replacement. The solid mask does not affect the color.
— Source: The color from the original image.
— Hard Color: A solid color.
— Soft Color: A solid color weighted by how much background color was originally removed.
— Invert: Inverts the solid mask, before it is combined with the source alpha.
Garbage Mask
— Invert: Normally, solid areas of a garbage mask remove the image. When inverted, the
transparent areas of the mask remove the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Depth Map
The resulting Depth Map Alpha channel is visualized as a black and white image, with white being the
area that is affected by the resulting changes and black areas remaining unchanged.
Inputs
— Input: The yellow input accepts a 2D image that contains the shot you wish to analyze for depth.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the depth map is generated. An effects mask is applied to the tool after the tool is processed.
Inspector
— Mode: Depth Map is a very computationally intensive effect. The quality setting allows a Faster
mode to speed up the responsiveness to adjustments, while the default Better mode gives the
best results and should be turned on when the adjustments are finished.
— Depth Map Preview: By default this box is checked and shows you the current Depth Map for
making adjustments. When this check box is disabled, the resulting Alpha can then be used for
grading on other nodes.
— Invert: Checking this box reverses the Depth Map, switching its transparent and opaque regions.
— Adjust Map Levels: When deselected (default), all scaling is turned off, allowing you to adjust the
full range of the Depth Map. When enabled, this option clips the Depth Map’s levels to 0 and 1.
This functions as a preview of what will happen to the Depth Map when used as an Alpha channel
where the values are always clipped to 0 and 1. Checking this box also activates the tools below.
— Far Limit: This control adjusts the black levels of the Depth Map.
— Near Limit: This control adjusts the white levels of the Depth Map.
— Gamma: This control adjusts the intermediate depth values to be brighter or dimmer compared
to the fixed black and white levels.
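Taken together, the Far Limit, Near Limit, and Gamma controls behave like a standard levels adjustment on the depth values. A sketch under that assumption (not the node's actual code):

```python
import numpy as np

def adjust_map_levels(depth, far_limit, near_limit, gamma):
    """Assumed levels remap: `far_limit` becomes black, `near_limit`
    becomes white, values are clipped to 0..1 (as an Alpha channel
    would be), and the in-between values are bent by a gamma curve."""
    normalized = (depth - far_limit) / (near_limit - far_limit)
    clipped = np.clip(normalized, 0.0, 1.0)
    return clipped ** (1.0 / gamma)

depth = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
levels = adjust_map_levels(depth, far_limit=0.2, near_limit=0.8, gamma=1.0)
```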
Map Finesse
These controls modify the resulting Depth Map’s Alpha channel for use in grading.
— Post Processing: This control turns the Map Finesse tools on or off.
— Post-Filter: This control blends the map to the smooth areas and edges of the image. It is used to
prevent later grading effects from visibly varying within the region.
— Contract/Expand: This control dilates or erodes the overall shape at the edges; useful for fine
tuning the boundary between the affected and unaffected regions of the map.
— Blur: This control softens the boundary of the map, allowing it to blend more smoothly into the
resulting image.
Difference Keyer
Although the process sounds reasonable at first glance, subtle variations in the camera position from
shot to shot usually make it difficult to get clean results. Think of the futile attempt of trying to key
smoke in front of a brick wall and using a clean plate of the brick wall as your difference input. Part of
the wall’s structure is always visible in this keying method. Instead, a Difference Keyer is often used to
produce a rough matte that is combined with other nodes to produce a more detailed matte.
Inputs
The Difference Keyer node includes five inputs in the Node Editor.
— Background: The orange background input accepts a 2D image that contains just the set without
your subject.
— Foreground: The green foreground input accepts a 2D image that contains the shot with your
subject in the frame.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be made transparent.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be fully opaque.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the difference matte occurs. An effects mask is applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab in the Difference Keyer contains all the parameters for adjusting the quality of
the matte.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right. Adjusting them defines a range of difference values between the
images to create a matte.
Any difference below the lower threshold setting becomes black or transparent in the matte.
Any difference above the upper threshold setting becomes white or opaque in the matte. All values
within the range maintain their relative transparency values.
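The core of a difference key can be sketched as the largest per-channel absolute difference between the two plates, pushed through the threshold range (an illustrative sketch, not Fusion's actual code):

```python
import numpy as np

def difference_matte(fg, bg, low, high):
    """Per-pixel matte from the largest absolute channel difference
    between the clean background plate and the foreground shot,
    remapped through the lower/upper thresholds."""
    diff = np.abs(fg - bg).max(axis=-1)
    return np.clip((diff - low) / (high - low), 0.0, 1.0)

bg = np.array([[[0.2, 0.3, 0.4], [0.2, 0.3, 0.4]]])  # empty set
fg = np.array([[[0.2, 0.3, 0.4], [0.9, 0.6, 0.5]]])  # subject in the right pixel
matte = difference_matte(fg, bg, low=0.05, high=0.5)
```

Identical pixels fall below the lower threshold and key out; the pixel where the subject appears differs strongly and becomes fully opaque. This also shows why the technique is fragile: any camera drift makes every pixel differ slightly.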
Filter
This control selects the filtering algorithm used when applying a blur to the matte.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Blur
This blurs the edge of the matte using the method selected in the Filter menu. A value of zero results
in a sharp, cutout-like hard edge. The higher the value, the more blur.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame is treated as black/
transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would usually be outside the upstream
DoD is treated as black/transparent.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
This control is usually used in conjunction with the blur to take the hard edge of a matte and reduce
fringing. Since this control affects only semitransparent areas, it has no effect on a matte’s hard edge.
Invert
Selecting this checkbox inverts the matte, causing all transparent areas to be opaque and all opaque
areas to be transparent.
Solid Matte
Solid Mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold
out keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is
often used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha
channel it creates for the image. This option is usually enabled and is on by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes
of merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes,” in the Fusion
Reference Manual.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Luma Keyer
Inputs
The Luma Keyer node includes four inputs in the Node Editor.
— Input: The orange input accepts a 2D image that contains the luminance values you want to be
keyed for transparency.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be made transparent. The garbage matte is applied directly
to the alpha channel of the image.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be fully opaque.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the luminance key occurs. An effects mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab in the Luma Keyer contains all the parameters for adjusting the quality of the matte.
Channel
This menu selects the color channel used for creating the matte. Select from the Red, Green, Blue,
Alpha, Hue, Luminance, Saturation, and Depth (Z-buffer) channels.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right. Adjusting them defines a range of luminance values to create a matte.
A value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte.
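The channel-then-threshold pipeline can be sketched as follows (illustrative only; the Rec. 709 luma weights are an assumption, and only a few of the available channels are shown):

```python
import numpy as np

def luma_key(rgb, channel, low, high):
    """Build a matte from one channel of the image, then remap it
    through the lower/upper thresholds."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    sources = {
        "Red": r,
        "Green": g,
        "Blue": b,
        # Assumed Rec. 709 luma weighting:
        "Luminance": 0.2126 * r + 0.7152 * g + 0.0722 * b,
    }
    value = sources[channel]
    return np.clip((value - low) / (high - low), 0.0, 1.0)

frame = np.array([[[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]])  # black and white pixels
matte = luma_key(frame, "Luminance", low=0.1, high=0.9)
```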
Filter
This control selects the filtering algorithm used when applying a blur to the matte.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between
speed and quality.
— Multi-box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition to use
the full frame of the image, effectively ignoring the current domain of definition. If the upstream
DoD is smaller than the frame, the remaining area in the frame is treated as black/transparent.
— Domain: Setting this option to Domain respects the upstream domain of definition when applying
the node’s effect. This can have adverse clipping effects in situations where the node employs a
large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that is usually outside the upstream DoD is
treated as black/transparent.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
This control is usually used in conjunction with the blur to take the hard edge of a matte and reduce
fringing. Since this control affects only semitransparent areas, it has no effect on a matte’s hard edge.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher
values cause the gray areas to be more opaque, and lower values cause the gray areas to be more
transparent. Wholly black or white regions of the matte remain unaffected.
Invert
Selecting this checkbox inverts the matte, causing all transparent areas to be opaque and all opaque
areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold
out keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often
used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to the keyer.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha
channel it creates for the image. This option is usually enabled and is on by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes
of merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes,” in the Fusion
Reference Manual.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
(Left) Multiple strokes isolating the wood grain of the guitar, while ignoring the
musician’s arm; (Right) The finished shot, with a warmer wood grain
— To draw a positive stroke to add an object to a mask: Load the Magic Mask node in a viewer,
and left-click and drag the stroke across the object you want to choose. Alternatively you can
change the Stroke Mode to Add in the Inspector and left-click and drag to draw a new positive
stroke. Positive strokes are colored blue.
— To draw a negative stroke to remove an object from a mask: Load the Magic Mask node
in a viewer, and Option-left-click and drag the stroke across the object you want to remove.
Alternatively you can change the Stroke Mode to Subtract in the Inspector and left-click and drag
to draw a new negative stroke. Negative strokes are colored red.
— To delete a stroke (or group of strokes) to remove them from the mask: Shift-drag around
a stroke or group of strokes to select them (green), and press the delete key to remove them.
Alternatively you can change the Stroke Mode to Select in the Inspector and left-click and drag a
selection window to select one or more strokes. Clicking the delete button removes them.
Drawing positive strokes will select areas of similar contrast and color, allowing you to link complex
shapes together. Generally you will need more strokes to accurately define a complex object, due to the
greater variety of the shapes involved. Stroke position is usually more important than stroke length.
Drawing negative strokes removes areas from the object that you don’t want to isolate. This can
be something simple, like removing the wheels of a car from a mask, or more complicated like
removing specific books from a mask of a bookshelf. Stroke position is usually more important than
stroke length.
(Left) Multiple strokes isolating the car’s body, while removing the wheels and cabin; (Right) The
finished shot, fed through a color corrector node to change the color of just the car’s body
— Input: The orange input accepts a 2D image that contains the subject you want to isolate
with a mask.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be made transparent. The garbage matte is applied directly
to the Alpha channel of the image.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be fully opaque.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the mask is generated. An effects mask is applied to the tool after the tool is processed.
Inspector
— Tracking Controls: These buttons control the tracking direction, from left to right:
— Track Reverse: Continuously tracks from the current frame all the way to the beginning of the clip.
— Track Reverse One Frame: Tracks one frame backwards and stops. Useful if you’re
tracking frame-by-frame to watch the progress of a particularly complicated bit of motion.
If something goes wrong, you can back up to the last frame where the stroke was able to
properly track the subject, and drag the stroke to a better location using the pointer to make
it follow the subject properly. If necessary, you can go a frame at a time, dragging the stroke
to a better position every time it fails to follow the feature you’re using it to isolate.
— Stop Tracking: Stops tracking in cases where there's a problem with the track and you want
to make a change.
— Track Forward Then Reverse: Tracks from the current frame all the way to the end of the clip,
then returns to the original tracking point and tracks backwards to the beginning of the clip.
— Track Forward One Frame: Tracks one frame forward and stops. Useful if you’re tracking
frame-by-frame to watch the progress of a particularly complicated bit of motion. If necessary,
you can go a frame at a time, dragging the stroke to a better position every time it fails to
follow the feature you’re using it to isolate.
— Track Forward: Continuously tracks from the current frame all the way to the end of the clip.
— Go To Frame: These buttons snap the playhead to the selected frame, from left to right:
— First Frame of Tracked Area: Moves the playhead to the first tracked frame of a range of
tracked frames in preparation for tracking backwards if there are untracked frames at the
beginning of the clip.
— Reference Frame: Moves the playhead to the frame on which you initially drew the strokes.
— Last Frame of Tracked Area: Moves the playhead to the last tracked frame of a range of
tracked frames in preparation for tracking forwards if there are untracked frames at the end
of the clip.
— Stroke Mode: These buttons let you change and modify the strokes drawn in the Viewer:
— Add: Lets you add an additional stroke (blue) that determines what in the frame will be
included in the mask.
— Subtract: Lets you add an additional stroke (red) that determines what in the frame will be
excluded from the mask.
— Select: Lets you draw a selection rectangle around single or multiple strokes to select
them (green).
— Delete: Deletes all selected strokes (green).
— Disk Cache: These buttons allow you to control what’s stored in the disk cache.
— Regenerate All: Rebuilds the disk cache.
— Clear: Deletes the frames in the current cache.
— Reference Time: The Reference Time determines the frame where the initial strokes are drawn.
It is also the time from which tracking begins. Once set, the reference frame cannot be changed
without destroying all existing tracking information.
— Processed Frames: Displays the range of frames that have already been tracked. Start is the
earliest frame tracked and end is the last frame tracked. These fields are not user editable.
— Mode: Two options let you choose a tradeoff between quality and performance. Faster quickly
generates a lower-quality mask suitable for garbage matting. Better generates a higher-quality,
more detailed mask but is more processor-intensive.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the Alpha
channel it creates for the image. This option is enabled by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes
of merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes,” in the Fusion
Reference Manual.
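The relationship between Post-Multiply and the Merge node's Additive/Subtractive modes can be sketched per pixel in normalized 0.0–1.0 values. This is standard compositing math shown for illustration, not Fusion's internal code; the function names are hypothetical.

```python
# Post-multiply: scale the color channels by the alpha the keyer produced.
def post_multiply(r, g, b, a):
    return r * a, g * a, b * a, a

# Additive merge: the over operation for a PREMULTIPLIED foreground channel.
def merge_additive(fg, bg, a):
    return fg + bg * (1.0 - a)

# Subtractive merge: the over operation for a STRAIGHT (non-premultiplied)
# foreground channel, which must be scaled by alpha during the merge.
def merge_subtractive(fg, bg, a):
    return fg * a + bg * (1.0 - a)

# Both paths agree when premultiplication and merge mode are matched:
r, g, b, a = post_multiply(0.8, 0.6, 0.4, 0.5)
assert abs(merge_additive(r, 0.2, a) - merge_subtractive(0.8, 0.2, a)) < 1e-12
```

Mismatching the two (a straight image through an Additive merge, or a premultiplied image through a Subtractive merge) double-counts or drops the alpha scaling, which is why the checkbox and the Merge mode must be set together.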
Matte Tab
The Matte tab refines the alpha of the key, combined with any solid and garbage masks connected
to the node. When using the Matte tab, set the viewer to display the Alpha channel of Magic Mask’s
final output.
— Filter: Selects the Filter that is used when blurring the matte.
— Box Blur: This option applies a Box Blur effect to the whole image. This method is faster than
the Gaussian blur but produces a lower-quality result.
— Bartlett: Bartlett applies a more subtle, anti-aliased blur filter.
— Multi-Box: Multi-Box uses a box filter layered in multiple passes to approximate a Gaussian
shape. With a moderate number of passes (e.g., four), a high-quality blur can be obtained,
often faster than the Gaussian filter and without any ringing.
— Gaussian: Gaussian applies a smooth, symmetrical blur filter, using a sophisticated constant-
time Gaussian approximation algorithm.
— Fast Gaussian: Fast Gaussian applies a smooth, symmetrical blur filter, using a sophisticated
constant-time Gaussian approximation algorithm. This mode is the default filter method.
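The Multi-Box idea above can be sketched in a few lines: repeated box blurs converge toward a Gaussian profile (a consequence of the central limit theorem). This is a pure-Python 1D illustration of that principle only; Fusion's actual filter implementation is not public and will differ.

```python
def box_blur_1d(values, radius):
    """Average each sample with its neighbors within `radius` (edges clamped)."""
    n = len(values)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n - 1, i + radius)
        window = values[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

def multi_box_blur_1d(values, radius, passes=4):
    """Approximate a Gaussian blur by stacking several box-blur passes."""
    for _ in range(passes):
        values = box_blur_1d(values, radius)
    return values

# A hard matte edge softens progressively with each pass.
edge = [0.0] * 8 + [1.0] * 8
soft = multi_box_blur_1d(edge, radius=2, passes=4)
```

With one pass the result is identical to a plain box blur; by around four passes the falloff is visually close to a true Gaussian, which is why the manual describes Multi-Box as a fast Gaussian approximation without ringing.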
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the Alpha channel of the image. Generally, solid mattes are used to hold
out keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the Alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is
often used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Matte Control [Mat]
Typically, you add this node to copy a color channel or alpha channel from the foreground input to the
background input, or to combine alpha channels from the two images.
Inputs
The Matte Control node includes five inputs in the Node Editor.
— Background: The orange background input accepts a 2D image that receives the foreground
image alpha channel (or some other channel you want to copy to the background).
— Foreground: The green foreground input accepts a 2D image that contains an alpha channel (or
some other channel) you want to be applied to the background image.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the foreground/background combination that fall within the matte to be made transparent.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the foreground/background combination that fall within the matte to be fully opaque.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the matte control occurs. An effects mask is applied to the tool after the tool is processed.
Inspector
Matte Tab
The Matte tab combines and modifies alpha or color channels from an image in the foreground input
with the background image.
Combine
Use this menu to select which operation is applied. The default is set to None for no operation.
— None: This causes the foreground image to be ignored.
— Combine Red: This combines the foreground red channel with the background alpha channel.
— Combine Green: This combines the foreground green channel with the background alpha channel.
— Combine Blue: This combines the foreground blue channel with the background alpha channel.
— Combine Alpha: This combines the foreground alpha channel with the background
alpha channel.
Combine Operation
Use this menu to select the method used to combine the foreground channel with the background.
— Copy: This copies the foreground source over the background alpha, overwriting any existing
alpha in the background.
— Add: This adds the foreground source to the background alpha.
— Subtract: This subtracts the foreground source from the background alpha.
— Inverse Subtract: This subtracts the background alpha from the foreground source.
— Maximum: This compares the foreground source and the background alpha and takes the value
from the pixel with the highest value.
— Minimum: This compares the foreground source and the background alpha and takes the value
from the pixel with the lowest value.
— And: This performs a logical AND on the two values.
— Or: This performs a logical OR on the values.
— Merge Over: This merges the foreground source channel over the background alpha channel.
— Merge Under: This merges the foreground source channel under the background alpha channel.
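Most of the Combine Operation modes are standard per-channel compositing formulas, sketched below for normalized 0.0–1.0 values (fg is the foreground source channel, bg is the background alpha). These are the conventional definitions, shown for illustration rather than taken from Fusion's code; the bitwise And/Or modes are omitted because they operate on the channel's integer representation.

```python
def combine(fg, bg, mode):
    """Per-pixel sketch of the Matte Control Combine Operation modes."""
    ops = {
        "Copy": lambda: fg,                          # overwrite bg alpha
        "Add": lambda: min(1.0, fg + bg),            # clipped sum
        "Subtract": lambda: max(0.0, bg - fg),       # fg removed from bg
        "Inverse Subtract": lambda: max(0.0, fg - bg),
        "Maximum": lambda: max(fg, bg),
        "Minimum": lambda: min(fg, bg),
        "Merge Over": lambda: fg + bg * (1.0 - fg),  # fg composited over bg
        "Merge Under": lambda: bg + fg * (1.0 - bg), # fg composited under bg
    }
    return ops[mode]()

print(combine(0.25, 0.5, "Merge Over"))  # 0.25 + 0.5 * 0.75 = 0.625
```

Note how Merge Over and Merge Under are the same formula with the roles of the two channels swapped, which matches the manual's description of the foreground channel going over or under the background alpha.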
Filter
Selects the Filter that is used when blurring the matte.
— Box Blur: This option applies a Box Blur effect to the whole image. This method is faster than the
Gaussian blur but produces a lower-quality result.
— Bartlett: Bartlett applies a more subtle, anti-aliased blur filter.
— Multi-Box: Multi-Box uses a box filter layered in multiple passes to approximate a Gaussian
shape. With a moderate number of passes (e.g., four), a high-quality blur can be obtained, often
faster than the Gaussian filter and without any ringing.
— Gaussian: Gaussian applies a smooth, symmetrical blur filter, using a sophisticated constant-time
Gaussian approximation algorithm. In extreme cases, this algorithm may exhibit ringing; see
below for a discussion of this. This mode is the default filter method.
Blur
This blurs the edge of the matte using a standard, constant speed Gaussian blur. A value of zero
results in a sharp, cutout-like hard edge. The higher the value, the more blur is applied to the matte.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering.
This is profoundly important when blurring the matte, which may require samples from portions of
the image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If
the upstream DoD is smaller than the frame, the remaining area in the frame is treated as
black/transparent.
Contract/Expand
This shrinks or grows the matte similar to an Erode Dilate node. Contracting the matte reveals more of
the foreground input, while expanding the matte reveals more of the background input. Values above
0.0 expand the matte, and values below 0.0 contract it.
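Since the manual compares Contract/Expand to an Erode Dilate node, the control can be read as a morphological min filter (contract) or max filter (expand) over the matte. The 1D pure-Python sketch below illustrates that reading; the radius stands in for the slider's magnitude and is an assumption for illustration.

```python
def contract_expand_1d(matte, amount):
    """amount > 0 expands the matte (dilate); amount < 0 contracts it (erode)."""
    radius = abs(amount)
    pick = max if amount > 0 else min  # dilate takes the max, erode the min
    n = len(matte)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n - 1, i + radius)
        out.append(pick(matte[lo:hi + 1]))
    return out

matte = [0, 0, 0, 1, 1, 1, 0, 0]
expanded = contract_expand_1d(matte, 1)    # white region grows by 1 pixel
contracted = contract_expand_1d(matte, -1) # white region shrinks by 1 pixel
```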
Gamma
This raises or lowers the values of the matte in the semitransparent areas. Higher values cause
the gray areas to become more opaque, and lower values cause the gray areas to become more
transparent. Completely black or white regions of the matte remain unaffected.
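One way to read the Gamma control is as a power curve applied to the matte, which moves the semitransparent gray values while leaving 0.0 and 1.0 fixed. The exact mapping below is an assumption for illustration, not Fusion's published curve.

```python
def matte_gamma(alpha, gamma):
    """Power-curve sketch: gamma > 1 lifts grays toward opaque, gamma < 1
    pushes them toward transparent; pure black and white are unchanged."""
    return alpha ** (1.0 / gamma)

assert matte_gamma(0.0, 2.0) == 0.0   # black stays black
assert matte_gamma(1.0, 2.0) == 1.0   # white stays white
print(matte_gamma(0.25, 2.0))         # 0.5 -> grays become more opaque
```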
Threshold
Any value below the lower threshold becomes black or transparent in the matte. Any value above the
upper threshold becomes white or opaque in the matte. All values within the range maintain their
relative transparency values.
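The Threshold behavior can be sketched as a simple clamp: values below the lower handle clip to transparent, values above the upper handle clip to opaque, and in-range values pass through so their relative transparency is maintained. This reading is an assumption for illustration (another plausible reading rescales the in-range values linearly).

```python
def threshold(alpha, low, high):
    """Clamp matte values outside the [low, high] range; pass the rest through."""
    if alpha < low:
        return 0.0   # below the lower threshold: fully transparent
    if alpha > high:
        return 1.0   # above the upper threshold: fully opaque
    return alpha     # in-range values keep their transparency
```

This is why the control is effective at rejecting salt-and-pepper noise: isolated near-black or near-white speckles snap to clean values while the matte's gradients survive.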
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel of the image is inverted, causing all transparent
areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold
out areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is
often used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the node to multiply the color channels of the image against the alpha
channel it creates for the image. This option is enabled by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes
of merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes,” in the Fusion
Reference Manual.
Spill Tab
The Spill tab handles spill suppression in the Matte Control. Spill suppression is a form of color
correction that attempts to remove the screen color from the fringe of the matte.
Spill is the transmission of the screen color through the semitransparent areas of the alpha channel.
In the case of blue- or green-screen keying, this usually causes the color of the background to become
apparent in the edges of the foreground subject.
Spill Color
This menu selects the color used as the base for all spill suppression techniques.
Spill Suppression
When this slider is set to 0, no spill suppression is applied to the image. Increasing the slider increases
the strength of the spill method.
Spill Method
This selects the strength of the algorithm used to apply spill suppression to the image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape presses the fringe toward the external edge of the image or pulls it toward the inner
edge of the fringe. Its effect is most noticeable while the Fringe Size value is large.
This is useful for correcting semitransparent pixels that still contain color from the original
background to match the new background.
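A common despill heuristic for green screens conveys the idea behind these controls: limit the green channel so it never exceeds the average of red and blue, blended by a suppression strength in 0.0–1.0. This is a well-known generic technique shown for illustration only; the actual spill methods in the Matte Control and Primatte are proprietary, and the function below is hypothetical.

```python
def despill_green(r, g, b, suppression):
    """Reduce green spill toward a spill-free estimate, (r + b) / 2.
    suppression = 0.0 leaves the pixel untouched; 1.0 fully clamps it."""
    limit = (r + b) / 2.0          # spill-free estimate of the green channel
    spill = max(0.0, g - limit)    # how far green exceeds that estimate
    return r, g - spill * suppression, b

print(despill_green(0.4, 0.9, 0.2, 1.0))  # green pulled down toward (r+b)/2
```

Pixels whose green already sits at or below the estimate are left alone, which is why despill mainly affects the green-contaminated fringe rather than the whole subject.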
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Primatte [Pri]
NOTE: Primatte is distributed and licensed by IMAGICA Corp. of America, Los Angeles, CA,
USA. Primatte was developed by and is a trademark of IMAGICA Corp., Tokyo, Japan.
Inputs
The Primatte node includes six inputs in the Node Editor. Unlike every other tool in Fusion, the primary
orange input is labeled as the Foreground input, since it accepts the green-screen or blue-screen
image. The background input on the Primatte node is the green input; this is an optional input that
allows Primatte to create the final merged composite.
— Foreground Input: The orange input accepts a 2D image that contains blue or green screen.
— Background Input: The green (optional) input accepts a 2D image layered as the background in
the composite. If no image is connected, Primatte outputs the keyed foreground. Connecting an
image to the background input activates Primatte’s advanced edge blending options.
— Replacement Image: The magenta (optional) input accepts a 2D image used as a source of
Primatte’s spill suppression color correction.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be made transparent. The garbage matte is applied directly
to the alpha channel of the image.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be fully opaque.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the keying occurs. An effects mask is applied to the tool after the tool is processed.
NOTE: Connecting the background input without connecting the replacement image input
uses the background image as the replacement image for spill suppression.
Primatte Tab
The core functionality for Primatte is found in the Primatte tab. The basic workflow is based on
selecting one of the operational mode buttons and then scrubbing over areas in the viewer.
Auto Compute
The Auto Compute button is likely the first button pressed when starting to key your footage. Primatte
automagically analyzes the original foreground image, determines the backing color, and sets it as the
central backing color. Then, using that information, another analysis determines the foreground areas.
A Clean FG Noise operation is performed using the newly determined foreground areas, and Primatte
renders the composite.
NOTE: The Auto Compute button may make the next three buttons—Select Background
Color, Clean Background Noise, and Clean Foreground Noise—unnecessary and make
your keying operation much more straightforward. Clicking Auto Compute automatically
senses the backing screen color, eliminates it, and even gets rid of some foreground and
background noise. If you get good results, then jump ahead to the Spill Removal tools.
If you don’t get satisfactory results, continue from this point using the three buttons
described below.
Spill Sponge
The Spill Sponge is the quickest method for removing color spill on your subject. Click the Spill Sponge
button and scrub the mouse pointer over a screen color pixel, and the screen color disappears from
the selected color region and is replaced by a complementary color, a selected color, or a color from a
replacement image. These options are set in the Replace tab. Additionally, use the tools under the Fine
Tuning tab or use the Spill(+) and Spill(-) features to adjust the spill.
Matte Sponge
Sometimes in the Primatte operation, a 100% opaque foreground area (all white) becomes slightly
transparent (gray). To clean those transparent areas, click the Matte Sponge button and scrub over
the transparent pixels. All the spill-suppression information remains intact.
Restore Detail
Clicking Restore Detail and scrubbing over background regions in the viewer turns completely
transparent areas translucent. This operation is useful for restoring lost hair details, thin wisps of
smoke, and the like.
Spill(+)
Clicking the Spill(+) button returns the color spill to the sampled pixel color (and all colors like it) in
the amount of one Primatte increment. This tool can be used to move the sampled color more in the
direction of the color in the original foreground image. It can be used to nullify a Spill(-) step.
Matte(+)
Clicking the Matte(+) button makes the matte more opaque for the sampled pixel color (and all colors
like it) in the amount of one Primatte increment. If the matte is still too translucent or thin, another
click using this operational mode tool makes the sampled color region even more opaque. This can be
used to thicken smoke or make a shadow darker to match shadows in the background imagery. It can
only make these adjustments to the density of the color region on the original foreground image. It
can be used to nullify a Matte(-) step.
Matte(-)
Clicking the Matte(-) button makes the matte more translucent for the sampled pixel color (and all
colors like it) in the amount of one Primatte increment. If the matte is still too opaque, another click
using this operational mode tool makes the sampled color region even more translucent. This can be
used to thin out smoke or make a shadow thinner to match shadows in the background imagery.
Detail(+)
When this button is selected, the foreground detail becomes less visible for the sampled pixel color
(and all colors like it) in the amount of one Primatte increment. If there is still too much detail, another
click using this operational mode tool makes more of it disappear. This can be used to remove smoke
or wisps of hair from the composite. Sample where detail is visible, and it disappears. This is for
moving color regions into the 100% background region. It can be used to nullify a Detail(-) step.
Detail(-)
When this button is selected, foreground detail becomes more visible for the sampled pixel color
(and all colors like it) in the amount of one Primatte increment. If detail is still missing, another click
using this operational mode tool makes detail more visible. This can be used to restore lost smoke or
wisps of hair. Sample where the smoke or hair just disappears and it returns to visibility. Use this for
restoring color regions that were moved into the 100% background region. It may start to bring in
background noise if shooting conditions were not ideal on the foreground image.
Algorithms
There are three keying algorithms available in the Primatte keyer:
— Primatte: The Primatte algorithm mode delivers the best results and supports both the
Solid Color and the Complement Color spill suppression methods. This algorithm uses three
multifaceted polyhedrons (as described later in this section) to separate the 3D RGB colorspace.
It is also the default algorithm mode and, because it is computationally intensive, it may take the
longest to render.
— Primatte RT: Primatte RT is the simplest algorithm and therefore the fastest. It uses only a
single planar surface to separate the 3D RGB colorspace (as described later in this section) and,
as a result, does not separate the foreground from the backing screen as carefully as the above
Primatte algorithm. Another disadvantage of the Primatte RT algorithm is that it does not work
well with less saturated backing screen colors, and it does not support the Complement Color spill
suppression method.
— Primatte RT+: Primatte RT+ falls between the other two algorithms. It uses a six-plane surface
to separate the 3D RGB colorspace, so it renders faster than the Primatte algorithm but does not
separate the foreground from the backing screen as carefully. Like Primatte RT, it does not work
well with less saturated backing screen colors, and it does not support the Complement Color spill
suppression method.
Hybrid Rendering
After sampling the backing screen color and producing acceptable edges around the foreground
object, you sometimes find a transparent area within the foreground subject. This can occur when
the foreground subject contains a color that is close to the backing screen color. Removing this
transparency with the Clean FG Noise mode can cause the edge of the foreground subject to pick up a
fringe that is close to the backing screen color. Removing the fringe is very difficult without sacrificing
quality somewhere else on the image. The Hybrid Render mode internally creates two keying
operations: Body and Edge. The optimized Edge operation gets the best edge around the foreground
subject without any fringe effect. The Body operation deals with transparency within the foreground
subject. The resultant matte is created by combining these two mattes, and then blurring and eroding
the foreground subject in the Body matte and combining it with the edge matte.
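The Body/Edge combination described above can be sketched in 1D: erode the Body matte so it pulls away from the boundary, soften it, then take the per-pixel maximum with the Edge matte. This is an illustrative assumption about the combination step only; Primatte's internal Hybrid Render implementation is proprietary.

```python
def erode_1d(matte, radius):
    """Min filter: shrinks the white (foreground) region away from the edge."""
    n = len(matte)
    return [min(matte[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def box_blur_1d(matte, radius):
    """Simple box average used to soften the eroded Body matte."""
    n = len(matte)
    out = []
    for i in range(n):
        window = matte[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def hybrid_matte(body, edge, radius=1):
    """Combine: softened Body core fills the interior, Edge matte keeps the rim."""
    core = box_blur_1d(erode_1d(body, radius), radius)
    return [max(c, e) for c, e in zip(core, edge)]

body = [0, 0, 1, 1, 1, 1, 0, 0]  # solid interior, possibly fringed at the rim
edge = [0, 0, 1, 0, 0, 1, 0, 0]  # optimized edge-only matte
result = hybrid_matte(body, edge, radius=1)
```

Because the eroded-and-blurred core never reaches the boundary, any fringe picked up while cleaning foreground noise stays confined to the interior, and the rim comes entirely from the Edge matte.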
To use Hybrid Rendering, start by keying the main foreground area using the Select Background Color
mode (or any of the other Primatte backing screen detection methods). Activate the Hybrid Rendering
checkbox. Lastly, select the Clean FG Noise button and scrub over the transparent area. The Hybrid
Render mode performs the “Body/Edge” operation. The result is a final composite with perfect edges
around the foreground subject with a solid foreground subject.
Hybrid Blur
Blurs the Body matte that has been automatically generated when Hybrid Rendering is activated.
Hybrid Erode
This slider dilates or erodes the Hybrid matte. You can view the results by selecting Hybrid matte in
the View Mode menu.
Adjust Lighting
Before applying the Adjust Lighting operation, it is necessary to determine the backing screen color
using Auto Compute or Select Background Color. After performing one of those operations, click on
the Adjust Lighting button. Primatte generates an artificial clean plate and uses it to generate an
evenly lit backing screen behind the foreground object. The default setting should detect all the areas
that contain foreground pixels and deliver a smooth backing screen for the keying.
Lighting Threshold
Should Adjust Lighting fail to produce a smoother backing screen, adjust the Lighting Threshold slider
while viewing the Lighting Background setting in the View Mode menu. This displays the optimized
artificial backing screen that the Adjust Lighting mode creates.
Crop
This button reveals the Crop sliders to create a rectangular garbage matte with the Primatte node. As
opposed to Fusion’s Crop tool, this does not change the actual image size.
Reset
Resets all the Primatte key control data back to a blue- or green-screen.
Selected Color
This shows the color selected (or registered) by the scrubbing in the viewer while the Fine Tuning tab
is selected.
Spill
The Spill slider can be used to remove spill from the selected color region. The more to the right the
slider moves, the more spill is removed. The more to the left the slider moves, the closer the color
component of the selected region is to the color in the original foreground image. If moving the slider
to the right does not remove the spill, resample the color region and move the slider again.
These slider operations are additive. The result achieved by moving the slider to the right can also be
achieved by clicking on the color region using the Spill(-) operational mode.
Transparency
The Transparency slider makes the matte more translucent in the selected color region. Moving
this slider to the right makes the selected color region more transparent. Moving the slider to the
left makes the matte more opaque. If moving the slider to the right does not make the color region
transparent enough, resample the color region and move the slider to the right again.
Detail
The Detail slider can be used to restore lost detail. After selecting a color region, moving this slider to
the left makes the selected color region more visible. Moving the slider to the right makes the color
region less visible. If moving the slider to the left does not make the color region visible enough,
resample the color region and again move the slider to the left.
These slider operations are additive. The result achieved by moving the slider to the left can also be
achieved by clicking on the color region using the Detail(-) operational mode.
Replace Tab
The Replace tab allows you to choose between the three methods of color spill replacement as
covered in detail in the Spill Sponge section above. There are three options for the replacement color
when removing the spill. These options are selected from the Replace mode menu.
Replace Mode
— Complement: Replaces the spill color with the complement of the screen color. This mode
maintains fine foreground detail and delivers the best-quality results. If foreground spill is not a
significant problem, this mode is the one that should be used. However, if the spill intensity on
the foreground image is rather significant, this mode may often introduce serious noise in the
resultant composite.
— Image: Replaces the spill color with colors from a defocused version of the background image or
the Replace image, if one is connected to the Replace input (magenta) on the node. This mode
results in a good color tone on the foreground subject even with a high-contrast background.
On the negative side, the Image mode occasionally loses the fine edge detail of the foreground
subjects. Another problem can occur if you later change the size of the foreground image against
the background. Since the background/foreground alignment would change, the applied color
tone from the defocused image might not match the new alignment.
— Color: Replaces the spill color with a solid color. When this option is selected, a color swatch
and R,G,B sliders are displayed for selecting the color. By changing the palette color for the solid
replacement, you can select a spill replacement that matches the composite background. Its
strength is that it works fine with even severe spill conditions. On the negative side, when using
the Solid Color Replacement mode, fine detail on the foreground edge tends to be lost. The single
palette color sometimes cannot make a good color tone if the background image has some
high‑contrast color areas.
Degrain Tab
The Degrain tab is used when a foreground image is highly compromised by film grain. As a result of
the grain, when backing screen noise is completely removed, the edges of the foreground object often
become harsh and jagged, leading to a poor key.
Grain Size
The Grain Size selector provides a range of grain removal from Small to Large. If the foreground image
has a large amount of film grain-induced pixel noise, you may lose a good edge to the foreground
object when trying to clean all the grain noise with the Clean Background Noise Operation Mode.
These tools clean up the grain noise without affecting the quality of the key.
Grain Tolerance
Adjusting this slider increases the effect of the Clean Background Noise tool without changing the
edge of the foreground object.
Matte Tab
The Matte tab refines the alpha of the key, combined with any solid and garbage masks connected
to the node. When using the Matte tab, set the viewer to display the alpha channel of Primatte’s
final output.
Filter
This control selects the filtering algorithm used when applying blur to the matte.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-Box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Blur
Matte Blur blurs the edge of the matte based on the Filter menu setting. A value of zero results in a
sharp, cutout-like hard edge. The higher the value, the more blur applied to the matte.
Blur Inward
Activating the Blur Inward checkbox generates the blur toward the center of the foreground subject.
Conventional blurring or defocus affects the matte edges in both directions (inward and outward) and
sometimes introduces a halo artifact around the edge in the composite view. Blur Inward functions
only in the inward direction of the foreground subject (toward the center of the white area). The final
result removes small and dark noise in the screen area without picking them up again in the Clean
Background Noise mode. It can sometimes result in softer, cleaner edges on the foreground objects.
This control is usually used in conjunction with the Matte Blur to take the hard edge of a matte and
reduce fringing. Since this control affects only semitransparent areas, it will have no effect on a
matte’s hard edge.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to become more opaque, and lower values cause the gray areas to become more
transparent. Completely black or white regions of the matte remain unaffected.
Since this control affects only semitransparent areas, it will have no effect on a matte’s hard edge.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte. All values within
the range maintain their relative transparency values.
This control is often used to reject salt and pepper noise in the matte.
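One plausible model of this remapping is sketched below in Python; the linear rescale between the two handles is an assumption used to illustrate "maintain their relative transparency values," and Fusion’s exact math may differ:

```python
def matte_threshold(value, low, high):
    """Illustrative threshold remap (a sketch, not Fusion's exact math).

    Values below the lower handle become transparent (0.0), values above
    the upper handle become opaque (1.0), and values in between are
    remapped linearly so they keep their relative transparency.
    """
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

# With handles at 0.1 and 0.9, faint salt-and-pepper noise at 0.05
# is rejected, while a mid-gray value stays mid-gray.
print(matte_threshold(0.05, 0.1, 0.9))  # 0.0
print(matte_threshold(0.5, 0.1, 0.9))
```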
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel created by the keyer is inverted, causing all
transparent areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold
out keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The
garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are
used to remove unwanted elements that cannot be keyed, such as microphones and booms. They are
also used to fill in areas that contain the color being keyed but that you wish to maintain.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is
often used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to
the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha channel it creates for the image. This option is usually enabled and is on by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes of merging it with other images. Use the Subtractive option of the Merge node instead of the Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes,” in the Fusion
Reference Manual.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Matte nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
5 Repeat this procedure as often as necessary to clear the noise from the background areas.
Selecting Gain/Gamma from the viewer’s Options menu to increase the brightness or gamma
allows you to see noise that would otherwise be invisible.
You do not need to remove every single white pixel to get good results. Most pixels displayed as a dark color close to black in a key image are considered transparent and effectively let the background through as the final output in that area. Consequently, there is no need to eliminate all noise in the screen portions of the image. In fact, meticulously removing every trace of noise around the foreground subject often makes it harder to generate a smooth composite.
TIP: When clearing noise from around loose, flying hair or any background/
foreground transitional area, be careful not to select any of the areas near the edge of
the hair. Leave a little noise around the hair as this can be cleaned up later using the
Fine Tuning tools.
1 Keep the View Mode menu set to Black and the viewer set to the Alpha Channel.
2 Click the Clean Foreground Noise button.
3 Drag the mouse pointer through the dark pixels in the foreground that should be pure white.
Primatte processes the selection and eliminates the noise.
4 Repeat this procedure as often as necessary to clear the noise from the foreground areas.
5 If Gain/Gamma was enabled in the viewer’s Options menu, disable it to return to the regular viewer.
Removing Spill
The first three sections created a clean matte. At this point, the foreground can be composited onto
any background image. However, if there is color spill on the foreground subject, a final operation is
necessary to remove that screen spill for a more natural-looking composite.
Spill Sponge
The quickest method is to select the Spill Sponge button and then sample the spill areas away.
Additional spill removal can be done using the tools under the Fine Tuning tab or by using the
Spill(-) button.
NOTE: When using the slider in the Fine Tuning tab to remove spill, the spill color is replaced based on the setting of the Spill Replacement options.
You can use the other two sliders in the same way for different key adjustments. The Detail slider
controls the matte softness for the color that is closest to the background color. For example, you
can recover lost rarefied smoke in the foreground by selecting the Fine Tuning mode, clicking on
the area of the image where the smoke starts to disappear and moving the Detail slider to the left.
The Transparency slider controls the matte softness for the color that is closest to the foreground
color. For example, if you have thick and opaque smoke in the foreground, you can make it
semitransparent by moving the Transparency slider to the right after selecting the pixels in the
Fine Tuning mode.
Inputs
The Ultra Keyer node includes four inputs in the Node Editor.
— Input: The orange input accepts a 2D image that contains the color you want to be keyed for
transparency.
— Garbage Matte: The gray garbage matte input accepts a mask shape created by polylines, basic primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be made transparent. The garbage matte is applied directly
to the alpha channel of the image.
— Solid Matte: The white solid matte input accepts a mask shape created by polylines, basic primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input causes areas of
the image that fall within the matte to be fully opaque.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive shapes, paint strokes, or bitmap masks. Connecting a mask to this input limits the pixels where
the keying occurs. An effects mask is applied to the tool after the tool is processed.
Inspector
Pre-Matte Tab
The Pre-Matte tab is where most keying begins. It is used to select the screen color and smooth out
the color of the screen.
Background Correction
Depending on the background color selected above, the keyer iteratively merges the pre-keyed image
over either a blue or green background before processing it further.
Matte Separation
Matte Separation performs a pre-process on the image to help separate the foreground from the
background before color selection. Generally, increase this control while viewing the alpha to eliminate
the bulk of the background, but stop just before it starts cutting holes in the subject or eroding fine
detail on the edges of the matte.
Pre-Matte Range
These R, G, B, and Luminance range controls update automatically to represent the current color
selection. Colors are selected by selecting the Ultra Keyer node’s tile in the node tree and dragging
the Eyedropper into the viewer to select the colors to be used to create the matte. These range
controls can be used to tweak the selection slightly, although selecting colors in the viewer is all that
is required.
Image Tab
The Image tab handles the majority of spill suppression in the Ultra Keyer. Spill suppression is a form
of color correction that attempts to remove the screen color from the fringe of the matte.
Spill is the transmission of the screen color through the semitransparent areas of the alpha channel.
In the case of blue- or green-screen keying, this usually causes the color of the background to become
apparent in the edges of the foreground subject.
Spill Suppression
When this slider is set to 0, no spill suppression is applied to the image. Increasing the slider increases
the strength of the spill method.
Spill Method
This selects the strength of the algorithm used to apply spill suppression to the image.
Fringe Gamma
This control can be used to adjust the brightness of the fringe or halo that surrounds the keyed image.
Fringe Size
This expands and contracts the size of the fringe or halo surrounding the keyed image.
Fringe Shape
Fringe Shape presses the fringe toward the external edge of the image or pulls it toward the inner
edge of the fringe. Its effect is most noticeable while the Fringe Size value is large.
This is useful for correcting semitransparent pixels that still contain color from the original
background to match the new background.
Matte Tab
The Matte tab refines the alpha of the key, combined with any solid and garbage masks connected to the node. When using the Matte tab, set the viewer to display the alpha channel of the keyer’s final output.
Filter
This control selects the filtering algorithm used when applying blur to the matte.
— Box: This is the fastest method but at reduced quality. Box is best suited for minimal
amounts of blur.
— Bartlett: Otherwise known as a Pyramid filter, Bartlett makes a good compromise between speed
and quality.
— Multi-Box: When selecting this filter, the Num Passes slider appears and lets you control the
quality. At 1 and 2 passes, results are identical to Box and Bartlett, respectively. At 4 passes and
above, results are usually as good as Gaussian, in less time and with no edge “ringing.”
— Gaussian: The Gaussian filter uses a true Gaussian approximation and gives excellent results, but
it is a little slower than the other filters. In some cases, it can produce an extremely slight edge
“ringing” on floating-point pixels.
Blur
Matte Blur blurs the edge of the matte based on the Filter menu setting. A value of zero results in a
sharp, cutout-like hard edge. The higher the value, the more blur applied to the matte.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering. This
is profoundly important when blurring the matte, which may require samples from portions of the
image outside the current domain.
Contract/Expand
This slider shrinks or grows the semitransparent areas of the matte. Values above 0.0 expand the
matte, while values below 0.0 contract it.
This control is usually used in conjunction with the Matte Blur to take the hard edge of a matte and
reduce fringing. Since this control affects only semitransparent areas, it has no effect on a matte’s
hard edge.
Gamma
Matte Gamma raises or lowers the values of the matte in the semitransparent areas. Higher values
cause the gray areas to become more opaque, and lower values cause the gray areas to become more
transparent. Completely black or white regions of the matte remain unaffected.
Since this control affects only semitransparent areas, it will have no effect on a matte’s hard edge.
Threshold
This range slider sets the lower threshold using the handle on the left and sets the upper threshold
using the handle on the right.
Any value below the lower threshold setting becomes black or transparent in the matte.
Any value above the upper threshold setting becomes white or opaque in the matte. All values within
the range maintain their relative transparency values.
This control is often used to reject salt and pepper noise in the matte.
Restore Fringe
This restores the edge of the matte around the keyed subject. Often when keying, the edge of the
subject where you have hair is clipped out. Restore Fringe brings back that edge while keeping the
matte solid.
Invert Matte
When this checkbox is selected, the alpha channel created by the keyer is inverted, causing all
transparent areas to be opaque and all opaque areas to be transparent.
Solid Matte
Solid mattes are mask nodes or images connected to the solid matte input on the node. The solid
matte is applied directly to the alpha channel of the image. Generally, solid mattes are used to hold
out keying in areas you want to remain opaque, such as someone with blue eyes against a blue screen.
Enabling Invert inverts the solid matte before it is combined with the source alpha.
Garbage Matte
Garbage mattes are mask nodes or images connected to the garbage matte input on the node. The garbage matte is applied directly to the alpha channel of the image. Generally, garbage mattes are used to remove unwanted elements that cannot be keyed, such as microphones and booms.
Garbage mattes of different modes cannot be mixed within a single tool. A Matte Control node is often used after a Keyer node to add a garbage matte with the opposite effect of the matte applied to the keyer.
Enabling Invert inverts the garbage matte before it is combined with the source alpha.
Post-Multiply Image
Select this option to cause the keyer to multiply the color channels of the image against the alpha
channel it creates for the image. This option is usually enabled and is on by default.
Deselect this checkbox and the image can no longer be considered premultiplied for purposes
of merging it with other images. Use the Subtractive option of the Merge node instead of the
Additive option.
For more information on these Merge node settings, see Chapter 35, “Composite Nodes,” in the Fusion
Reference Manual.
Subtract Background
This option color corrects the edges when the screen color is removed and anti-aliased to a black
background. By enabling this option, the edges potentially become darker. Disabling this option allows
you to pass on the color of the screen to use in other processes down the line.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other matte nodes. These common controls are
described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Matte category. The controls are
consistent and work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Usually, this causes the tool to skip processing entirely, copying the input straight to the output.
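The Blend behavior can be modeled as a simple per-pixel linear interpolation between the input and the processed result. This sketch is illustrative only; the function name and formula are assumptions, not Fusion’s documented internals:

```python
def blend(original, processed, amount):
    """Linear blend between a tool's input value and its processed output.

    amount=0.0 returns the input unchanged (matching the behavior
    described above); amount=1.0 returns the fully processed result.
    Illustrative sketch only.
    """
    return original * (1.0 - amount) + processed * amount

print(blend(0.2, 0.8, 0.0))  # 0.2 -- identical to the incoming image
print(blend(0.2, 0.8, 1.0))  # 0.8 -- the fully processed output
```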
Red/Green/Blue/Alpha Channel Selector
These four buttons are used to limit the effect of the tool to the selected color channels. Deselecting a channel causes that channel of the original input to be copied back over the result.
For example, if the Red button on a Blur tool is deselected, the blur is first applied to the image, and then the red channel from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools where deselecting these channels causes the tool to skip
processing that channel entirely. In that case, there are a set of RGBA buttons on the Controls tab in
the tool. The buttons in the Settings and the Controls tabs are identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels,” in the Fusion Reference Manual.
Clipping Mode
This option determines how edges are handled when performing domain of definition rendering.
This is profoundly important when blurring the matte, which may require samples from portions of
the image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition.
If the upstream DoD is smaller than the frame, the remaining area in the frame is treated as
black/transparent.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Metadata Nodes
This chapter details the Metadata nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Copy Metadata [Meta] .............................................................................. 1289
Inputs
The two inputs on the Copy Metadata node are used to connect two 2D images.
— Background Input: The orange background input is used for the primary 2D image that is output
from the node.
— Foreground Input: The green foreground input is used for the secondary 2D image that contains metadata you want to merge or overwrite onto the background image.
Inspector
Operation
The Operation menu determines how the metadata of the foreground and background inputs
are treated.
— Merge (Replace Duplicates): All values are merged, but values with duplicate names are taken
from the foreground input.
— Merge (Preserve Duplicates): All values are merged, but values with duplicate names are taken
from the background input.
— Replace: The metadata in the foreground replaces the entire metadata in the background.
— Clear: All metadata is discarded.
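The four Operation modes can be modeled with ordinary dictionaries. This Python sketch is a hypothetical illustration of the described behavior, not Fusion’s implementation; the function name and sample metadata fields are invented for the example:

```python
def combine_metadata(background, foreground, operation):
    """Illustrative model of the Copy Metadata Operation modes using
    plain dicts (hypothetical -- not Fusion's internal implementation)."""
    if operation == "Merge (Replace Duplicates)":
        return {**background, **foreground}   # foreground wins on duplicates
    if operation == "Merge (Preserve Duplicates)":
        return {**foreground, **background}   # background wins on duplicates
    if operation == "Replace":
        return dict(foreground)               # foreground replaces everything
    if operation == "Clear":
        return {}                             # all metadata discarded
    raise ValueError(operation)

bg = {"Shot": "010", "Artist": "Ana"}
fg = {"Shot": "020", "Lens": "35mm"}
print(combine_metadata(bg, fg, "Merge (Replace Duplicates)"))
# {'Shot': '020', 'Artist': 'Ana', 'Lens': '35mm'}
```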
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Metadata nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single input on the Set Metadata node is used to connect a 2D image that gets metadata added.
— Background Input: The orange background input is used for the primary 2D image that is output
from the node with the new metadata.
Inspector
Controls Tab
The Controls tab is where you set up the name of the metadata field and the value or information
regarding the metadata.
Field Name
The name of the metadata value. Do not use spaces.
Field Value
The value assigned to the name above.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Metadata nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The single input on the Set Timecode node is used to connect a 2D image.
— Background Input: The orange background input is used for the primary 2D image that is output from the node with the new timecode.
A Set Timecode node inserts new timecode metadata into the background clip.
Inspector
Controls Tab
The Controls tab sets the clip’s starting timecode metadata based on FPS, hours, minutes, seconds,
and frames.
FPS
You can choose from a variety of settings for frames per second.
Since this is a Fuse, you can easily adapt the settings to your needs by editing the appropriate piece of
code for the buttons:
MBTNC_StretchToFit = true,
{ MBTNC_AddButton = "24" },
{ MBTNC_AddButton = "25" },
{ MBTNC_AddButton = "30" },
{ MBTNC_AddButton = "48" },
{ MBTNC_AddButton = "50" },
Hours/Minutes/Seconds/Frames Sliders
Define an offset from the starting frame of the current comp.
Print to Console
Verbose output of the Timecode/Frame value in the Console.
The Timecode/Frames conversion is done according to the FPS settings. The result might look like this:
TimeCode: 00:00:08:15
Frames: 207
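The conversion shown above is straightforward to model. The following Python sketch assumes simple non-drop-frame timecode and reproduces the example (00:00:08:15 at 24 fps):

```python
def timecode_to_frames(tc, fps):
    """Convert an HH:MM:SS:FF timecode string to a frame count.

    Non-drop-frame sketch: total seconds times fps, plus frames.
    """
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

# The example above: 8 seconds * 24 fps + 15 frames = 207.
print(timecode_to_frames("00:00:08:15", 24))  # 207
```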
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Metadata nodes. These common controls
are described in detail in the following “The Common Controls” section.
Inspector
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels,” in the Fusion Reference Manual.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Miscellaneous Nodes
This chapter details miscellaneous nodes within Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Auto Domain [ADoD] ................................................................................ 1296
For example, a CG character rarely takes up the entire frame of an image. With this type of image, the
Auto Domain node sets the DoD to a rectangular region by comparing image pixels with the Canvas
color. The Canvas color indicates what color the pixels are outside the DoD. By default, unless a Canvas
color is set using the Set Canvas Color node, the color is set to black. This default works well when an
image has a premultiplied alpha channel. The result is a DoD that encompasses the portion of the clip
that contains only the character. The DoD is updated on each frame to accommodate changes, such as
a character walking closer to the camera. However, if a clip does not contain an alpha channel, the Set
Canvas Color node can be used to define the Canvas color as solid alpha with a color that matches the
solid background.
For more detail on the Set Canvas Color node, see Chapter 34, “Color Nodes,” in the Fusion
Reference Manual.
NOTE: The Domain of Definition is a bounding box that encompasses pixels that
have a nonzero value. The DoD is used to limit image-processing calculations and
speeds up rendering.
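The idea can be modeled as a bounding-box search over pixels that differ from the Canvas color. The sketch below is purely illustrative (Fusion computes and tracks the DoD internally); the function name and the list-of-lists image model are assumptions:

```python
def auto_domain(pixels, canvas=0):
    """Sketch of a DoD calculation: find the smallest rectangle that
    encloses every pixel differing from the canvas value.

    `pixels` is a row-major 2D list. Returns (min_x, min_y, max_x, max_y)
    in pixel indices, or None if the image is entirely canvas-colored.
    Illustrative only -- not Fusion's internal algorithm.
    """
    coords = [(x, y) for y, row in enumerate(pixels)
                     for x, v in enumerate(row) if v != canvas]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))

# A small "character" occupying the middle of a 4x4 black frame:
image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
print(auto_domain(image))  # (1, 1, 2, 2)
```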
Inputs
The single input on the Auto Domain node is used to connect a 2D image and an effect mask, which can be used to limit the area that is analyzed.
— Input: The orange input is used for the primary 2D image whose domain of definition is calculated.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the analysis to only those pixels within the mask.
Inspector
Controls Tab
In most cases, the Auto Domain node automatically calculates the DoD bounding box; however,
the rectangular shape can be modified using the Controls tab in the Inspector.
Left
Defines the left border of the search area of the ADoD. Higher values on this slider move the left
border toward the right, excluding more data from the left margin.
1 represents the right border of the image; 0 represents the left border. The slider defaults
to 0 (left border).
Bottom
Defines the bottom border of the search area of the ADoD. Higher values on this slider move the
bottom border toward the top, excluding more data from the bottom margin.
1 represents the top border of the image; 0 represents the bottom border. The slider defaults
to 0 (bottom border).
Right
Defines the right border of the search area of the ADoD. Lower values on this slider move the right border toward the left, excluding more data from the right margin.
1 represents the right border of the image; 0 represents the left border. The slider defaults to 1 (right border).
Top
Defines the top border of the search area of the ADoD. Lower values on this slider move the top border toward the bottom, excluding more data from the top margin.
1 represents the top border of the image; 0 represents the bottom border. The slider defaults
to 1 (top border).
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
It can also be useful if, from a certain point in your node tree, you feel the need to process your
images in a higher bit depth than their original one or to reduce the bit depth to save memory.
Inputs
The single input on the Change Depth node is used to connect a 2D image and an effect mask, which can be used to limit the converted area.
— Input: The orange input is used for the primary 2D image to be converted.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the conversion to only those pixels within the mask.
A Change Depth node placed after color correction is done on a floating-point image.
Inspector
Controls Tab
The two controls for this node are the Depth menu and the Dither menu. These two menus are used
to convert and adjust the color depth of the image.
Depth
The Keep setting doesn’t change the image; it keeps the input bit depth. The other options convert the image to the respective bit depth.
Dither
When down converting from a higher bit depth, it can be useful to add Error Diffusion or Additive
Noise to camouflage artifacts that result from problematic (high-contrast) areas.
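The idea behind additive-noise dithering can be sketched as follows. This is an illustrative model only, not Fusion’s actual dithering algorithm, and the function and parameter names are invented for the example:

```python
import random

def quantize_with_noise(value, bits, noise_amount=1.0, rng=None):
    """Reduce a 0.0-1.0 float to an integer code at a given bit depth,
    optionally adding a little random noise (scaled to one code step)
    before rounding. The noise breaks up the banding that plain
    rounding produces in smooth gradients. Illustrative sketch only.
    """
    rng = rng or random.Random(0)
    levels = (1 << bits) - 1                   # e.g. 255 for 8-bit
    noise = (rng.random() - 0.5) * noise_amount / levels
    code = round(min(max(value + noise, 0.0), 1.0) * levels)
    return code

# Quantizing a smooth 10-step gradient to 3 bits with no noise:
# adjacent steps collapse into identical codes (visible banding).
plain = [quantize_with_noise(i / 9, 3, noise_amount=0.0) for i in range(10)]
print(plain)
```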
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Per-pixel calculations can be performed on the Red, Green, Blue, Alpha, Z, Z-Coverage, UV texture
coords, XYZ Normals, RGBA background color, and XY motion vector channels of the images.
You should be moderately experienced with scripting, or C++ programming, to understand the
structure and terminology used by the Custom Tool node.
Inputs
The Custom Tool node has three image inputs, a matte input, and an effect mask input.
— Input: The orange, green, and magenta inputs combine 2D images to make your composite. When entering them into the Custom Tool fields, they are referred to as c1, c2, and c3 (c stands for all three R, G, B channels).
— Matte Input: The white input is for a matte created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a matte to this input allows a matte to be
combined into any equation. When entering the matte into the Custom Tool fields, it is referred to
as m1.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the Custom Tool effect
to only those pixels within the mask.
Inspector
Controls Tab
Point in 1-4, X and Y
These four controls are 2D X and Y center controls that are available to expressions entered in the Setup, Intermediate, and Channels tabs as variables p1x, p1y, ..., p4x, p4y. They are normal positional controls and can be animated or connected to modifiers as any other node might.
Number in 1-8
The values of these controls are available to expressions entered in the Setup, Intermediate, and
Channels tabs as variables n1, n2, n3, ..., n8. They are normal slider controls and can be animated or
connected to modifiers exactly as any other node might.
These controls can be renamed using the options in the Config tab to make their meanings more
apparent, but expressions still see the values as n1, n2, ..., n8.
Setup 1-4
Up to four separate expressions can be calculated in the Setup tab of the Custom Tool node. The
Setup expressions are evaluated once per frame before any other calculations are performed. The
results are then made available to the other expressions in the Custom Tool node as variables s1, s2,
s3, and s4.
NOTE: Because these expressions are evaluated once per frame only and not for each
pixel, it makes no sense to use per-pixel variables like X and Y or channel variables like r1,
g1, b1. Allowable values include constants, variables such as n1..n8, time, W and H, and
functions like sin() or getr1d().
Intermediate 1-4
An additional four expressions can be calculated in the Inter tab. The Inter expressions are evaluated
once per pixel after the Setup expressions are evaluated but before the Channel expressions are
evaluated. Per-pixel channel variables like r1, g1, b1, and a1 are allowable. Results are available as
variables i1, i2, i3, and i4.
Number Controls
There are eight sets of Number controls, corresponding to the eight Number In sliders in the Controls
tab. Uncheck the Show Number checkbox to hide the corresponding Number In slider, or edit the
Name for Number text field to change its name.
Point Controls
There are four sets of Point controls, corresponding to the four Point In controls in the Controls tab.
Uncheck the Show Point checkbox to hide the corresponding Point In control and its crosshair in the
viewer. Similarly, edit the Name for Point text field to change the control’s name.
Color Channel expressions (RGBA) should generally return floating-point values between 0.0 and 1.0.
Values beyond this are clipped if the destination image is an integer. Other expression fields should
produce values appropriate to their channel (e.g., between -1.0 and 1.0 for Vector and Normal fields,
0.0 to 1.0 for Coverage, or any value for Depth). The Channel expressions may use the results from
both the Setup expressions (as variables s1–s4) and Inter expressions (as variables i1–i4).
Value Variables
NOTE: Use w and h and ax and ay without a following number to get the dimensions and
aspect of the primary image.
NOTE: Use c1, c2, c3 to refer to the value of a pixel in the current channel. This makes copying and pasting expressions easier. For example, if c1/2 is typed as the red expression, the result would be half the value of the red pixel from image 1, but if the expression is copied to the blue channel, it would be half the value of the blue pixel.
To refer to the red value of the current pixel in input 1, type r1. For the image in input 2, it would be r2.
NOTE: There are a variety of methods used to refer to pixels from locations other than the
current one in an image.
In the above description, [ch] is a letter representing the channel to access, and [#] is a number representing the input image. So to get the red component of the current pixel (equivalent to r1), you would use getr1b(x,y). To get the alpha component of the pixel at the center of image 2, you would use geta2b(0.5, 0.5).
— getr1b(x,y) Output the red value of the pixel at position x, y, if there were a valid pixel present. It
would output 0.0 if the position were beyond the boundaries of the image (all channels).
— getr1d(x,y) Output the red value of the pixel at position x, y. If the position specified were
outside of the boundaries of the image, the result would be from the outer edge of the image
(RGBA only).
— getr1w(x,y) Output the red value of the pixel at position x, y. If the position specified were
outside of the boundaries of the image, the x and y coordinates would wrap around to the other
side of the image and continue from there (RGBA only).
To access other channel values with these functions, substitute the r in the above examples with
the correct channel variable (r, g, b, a and, for the getr1b() functions only, z, and so on), as shown
above. Substitute the 1 with either 2 or 3 in the above examples to access the images from the other
image inputs.
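The three out-of-bounds behaviors (the b, d, and w suffixes) can be sketched in Python. Here get_b, get_d, and get_w are illustrative stand-ins for the Fusion functions, using integer pixel indices rather than Fusion's normalized 0.0–1.0 coordinates:

```python
def get_b(img, x, y):
    """'b' behavior: return 0.0 for positions beyond the image boundaries."""
    h, w = len(img), len(img[0])
    if 0 <= x < w and 0 <= y < h:
        return img[y][x]
    return 0.0

def get_d(img, x, y):
    """'d' behavior: clamp out-of-bounds positions to the outer edge."""
    h, w = len(img), len(img[0])
    x = min(max(x, 0), w - 1)
    y = min(max(y, 0), h - 1)
    return img[y][x]

def get_w(img, x, y):
    """'w' behavior: wrap coordinates around to the other side of the image."""
    h, w = len(img), len(img[0])
    return img[y % h][x % w]

# A 2 x 2 single-channel image:
img = [[0.1, 0.2],
       [0.3, 0.4]]

print(get_b(img, 5, 0))  # outside the image -> 0.0
print(get_d(img, 5, 0))  # clamped to the right edge -> 0.2
print(get_w(img, 3, 0))  # wraps around: x = 3 % 2 = 1 -> 0.2
```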
pi The value of pi
e The value of e
Mathematical Operators
-x (0.0 - x)
x * y x multiplied by y
x / y x divided by y
x + y x plus y
x - y x minus y
x & y 1.0 if both x and y are not 0.0, otherwise 0.0
x && y 1.0 if both x and y are not 0.0, otherwise 0.0, i.e., identical to above
x|y 1.0 if either x or y (or both) are not 0.0, otherwise 0.0
x||y 1.0 if either x or y (or both) are not 0.0, otherwise 0.0
EXAMPLE The following examples are intended to help you understand the various
components of the Custom Tool node.
ROTATION
Using the n1 slider for the angle theta, and a sample function, we get (for the red channel):
getr1b(x*cos(n1) - y*sin(n1), x*sin(n1) + y*cos(n1))
This calculates the current pixel’s (x,y) position rotated around the origin at (0,0) (the
bottom-left corner), and then fetches the red component from the source pixel at
this rotated position. For centered rotation, we need to subtract 0.5 from our x and y
coordinates before we rotate them, and add 0.5 back to them afterward:
getr1b((x-.5)*cos(n1) - (y-.5)*sin(n1) + .5, (x-.5)*sin(n1) + (y-.5)*cos(n1) + .5)
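The centered coordinate transform described above can be sketched in Python (theta stands in for the n1 angle; this is not Fusion expression syntax):

```python
import math

def rotate_centered(x, y, theta):
    """Rotate a normalized (x, y) coordinate around the image
    center (0.5, 0.5) by angle theta."""
    # Shift the center to the origin, rotate, then shift back.
    xr = (x - 0.5) * math.cos(theta) - (y - 0.5) * math.sin(theta) + 0.5
    yr = (x - 0.5) * math.sin(theta) + (y - 0.5) * math.cos(theta) + 0.5
    return xr, yr

# The center is a fixed point of the rotation:
print(rotate_centered(0.5, 0.5, 1.23))  # -> (0.5, 0.5)
# A quarter turn moves the right edge midpoint to the top edge midpoint:
x, y = rotate_centered(1.0, 0.5, math.pi / 2)
print(round(x, 6), round(y, 6))         # -> 0.5 1.0
```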
Which brings us to the next lesson: Setup and Intermediate Expressions. These are useful
for speeding things up by minimizing the work that gets done in the Channel expressions.
The Setup expressions are executed only once, and their results don’t change for any pixel,
so you can use them as s1 and s2, respectively:
S1
cos(n1)
S2
sin(n1)
(x-.5) * s1 - (y-.5) * s2 + .5
(x-.5) * s2 + (y-.5) * s1 + .5
These are the x and y parameters for the getr1b() function from above, but with the Setup
results, s1 and s2, substituted so that the trig functions are executed only once per frame,
not every pixel. Now you can use these intermediate results in your Channel expressions:
getr1b(i1, i2)
getg1b(i1, i2)
getb1b(i1, i2)
geta1b(i1, i2)
With the Intermediate expressions substituted in, we only have to do all the additions,
subtractions, and multiplications once per pixel, instead of four times. As a rule of thumb, if
it doesn’t change, do it only once.
This is a simple rotation that doesn’t take into account the image aspect at all. It is left as
an exercise for you to include this (sorry). Another improvement could be to allow rotation
around points other than the center.
FILTERING
Our second example duplicates the functionality of a 3 x 3 Custom Filter node set to
average the current pixel together with the eight pixels surrounding it. To duplicate it with
a Custom Tool node, add a Custom Tool node to the node tree, and enter the following
expressions into the Setup tab.
(Leave the node disconnected to prevent it from updating until we are ready.)
S1
1.0/w1
S2
1.0/h1
These two expressions are evaluated at the beginning of each frame. S1 divides 1.0 by the
current width of the frame, and S2 divides 1.0 by the height. This provides a floating-point
value between 0.0 and 1.0 that represents the distance from the current pixel to the next
pixel along each axis.
Now enter the following expression into the first text control of the Channel tab (r):
(getr1w(x-s1, y-s2) + getr1w(x, y-s2) + getr1w(x+s1, y-s2) + getr1w(x-s1, y) + r1 +
getr1w(x+s1, y) + getr1w(x-s1, y+s2) + getr1w(x, y+s2) + getr1w(x+s1, y+s2)) / 9
Fusion refers to pixels as floating-point values between 0.0 and 1.0, which is why we created
the expressions we used in the Setup tab. If we had used x+1, y+1 instead, the expression
would have sampled the same pixel over and over again. (The function we used wraps the
pixel position around the image if the offset values are out of range.)
That took care of the red channel; now use the same expression for the green, blue, and
alpha channels, substituting g, b, or a for each r (for example, getg1w and g1 for green).
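The 3 x 3 wrap-around averaging just described can be sketched in Python; average3x3_wrap is an illustrative single-channel helper, not a Fusion function:

```python
def average3x3_wrap(img):
    """Sketch of the 3 x 3 average filter described above: each output
    pixel is the mean of itself and its eight neighbors, with
    coordinates wrapping around the image edges (the 'w' behavior)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    total += img[(y + dy) % h][(x + dx) % w]
            out[y][x] = total / 9.0
    return out

# A constant image stays constant under averaging:
flat = [[0.25] * 4 for _ in range(4)]
print(average3x3_wrap(flat)[0][0])  # -> 0.25
```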
It’s time to view the results. Add a Background node set to a solid color and change the
color to a pure red. Add a hard-edged Rectangular effects mask and connect it to the
expression just created.
For comparison, add a Custom Filter node and duplicate the 3 x 3 averaging settings
described above. Connect the Background node’s output to the Custom Filter node and view the results.
Alternate between viewing the Custom Tool node and the Custom Filter while zoomed in
close to the top corners of the effects mask.
Of course, the Custom Filter node renders a lot faster than the Custom Tool node we
created, but the flexibility of the Custom Tool node is its primary advantage. For example,
you could use an image connected to input 2 to control the averaging applied to input 1 by
changing all instances of getr1w, getg1w, and getb1w in the expression to getr2w, getg2w,
and getb2w, but leaving r1, g1, and b1 as they are.
This is just one example; the possibilities of the Custom Tool node are limitless.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
This node can also interlace two separate images together into a single interlaced image.
The background input supplies the dominant field (field 1), and the foreground supplies field 2.
Inputs
The two inputs on the Fields node are used to connect 2D images for field processing.
— Stream1 Input: The orange background input is used for the primary 2D image that is
interpolated or converted.
— Stream2 Input: The optional green foreground input is only used when merging two interlaced
images together.
Controls Tab
The Controls tab includes two menus. The Operation menu is used to select the type of field
conversion performed. The Process Mode menu is used to select the field’s format for the
output image.
Operation Mode
Operation Menu
— Do Nothing: This causes the images to be affected by the Process Mode selection exclusively.
— Strip Field 2: This removes field 2 from the input image stream, which shortens the image to half
of the original height.
— Strip Field 1: This removes field 1 from the input image stream, which shortens the image to half
of the original height.
— Strip Field 2 and Interpolate: This removes field 2 from the input image stream and inserts a
field interpolated from field 1 so that image height is maintained. Should be supplied with frames,
not fields.
— Strip Field 1 and Interpolate: This removes field 1 from the input image stream and inserts a
field interpolated from field 2 so that image height is maintained. Should be supplied with frames,
not fields.
— Interlace: This combines fields from the input image stream(s). If supplied with one image stream,
each pair of frames is combined, producing half the number of frames at double height. If supplied
with two image streams, single frames from each stream are combined to form double-height images.
— De-Interlace: This separates fields from one input image stream, producing twice the number of
frames at half height.
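The two-stream Interlace and the De-Interlace operations can be sketched in Python. The assumption that field 1 supplies the first line of each pair follows the field-dominance note above; frames here are simple lists of scan lines:

```python
def interlace(bg_frame, fg_frame):
    """Combine two half-height frames into one double-height frame by
    alternating their scan lines. Assumes the background (field 1) is
    dominant and contributes the first line of each pair."""
    out = []
    for field1_line, field2_line in zip(bg_frame, fg_frame):
        out.append(field1_line)
        out.append(field2_line)
    return out

def deinterlace(frame):
    """Split one frame back into its two half-height fields."""
    return frame[0::2], frame[1::2]

fields1 = ["a0", "a1"]   # half-height frame from stream 1
fields2 = ["b0", "b1"]   # half-height frame from stream 2
full = interlace(fields1, fields2)
print(full)               # -> ['a0', 'b0', 'a1', 'b1']
print(deinterlace(full))  # -> (['a0', 'a1'], ['b0', 'b1'])
```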
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
— Input: The orange input is used for the primary 2D image that will be averaged.
Inspector
Controls Tab
The Controls tab contains the parameters for setting the duration and guidance of the
averaged frames.
Sample Direction
The Sample Direction menu determines if the averaged frames are taken before the current frame,
after, or a mix of the two.
— Forward: Averages the number of frames set by the Frames slider after the current frame.
— Both: Averages the number of frames set by the Frames slider, taking frames before and after the
current frame.
— Backward: Averages the number of frames set by the Frames slider before the current frame.
Missing Frames
This control determines the behavior if a frame is missing from the clip.
— Duplicate Original: Uses the last original frame until a new frame is available.
— Blank Frame: Leaves missing frames blank.
Frames
This slider sets the number of frames that are averaged.
TIP: The Keyframe Stretcher can be used on a single parameter by applying the
Keystretcher modifier.
Inputs
The single input on the Keyframe Stretcher node is used to connect a 2D image that contains
keyframe animation.
— Input: The orange input is used for any node with keyframed animation. The input can be
a Merge node that is not animated but contains foreground and background nodes that
are animated.
In the below example, the duration of the clip is extended to 75 frames. The first 10 frames and the
last 10 frames of the animation run at the same speed as the original animation, while any animation
in the middle is stretched to fill the difference.
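The stretching just described can be sketched as a piecewise-linear remap from output frames to source frames. The 50-frame source, 75-frame output, and the linear middle section are illustrative assumptions, not values from an actual comp:

```python
def remap_frame(out_frame, src_len=50, out_len=75, guard=10):
    """Map an output frame to a source frame: the first and last
    `guard` frames play 1:1, and the middle is stretched linearly
    to fill the difference. Frame counts are illustrative only."""
    if out_frame < guard:                  # head plays at original speed
        return float(out_frame)
    tail_out = out_len - guard             # first output frame of the tail
    tail_src = src_len - guard             # matching source frame
    if out_frame >= tail_out:              # tail plays at original speed
        return float(tail_src + (out_frame - tail_out))
    # Middle: stretch source [guard, tail_src] over output [guard, tail_out].
    t = (out_frame - guard) / (tail_out - guard)
    return guard + t * (tail_src - guard)

print(remap_frame(0))    # -> 0.0  (head, original speed)
print(remap_frame(74))   # -> 49.0 (tail, original speed)
```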
NOTE: The actual Spline Editor will show only the original keyframe positions. The splines
are not changed by the Keyframe Stretcher; only the animation is changed.
Animation modified to 75 frames but stretching only the middle of the animation
Inspector
Any keyframe adjustments to the original control will be correspondingly scaled back to the source
curve and will match the original timing as expected.
The Run Command can be used to net render other command line applications using the Fusion
Render Manager, as well as a host of other useful functions.
Inputs
The single input on the Run Command node is used to pass through a 2D image.
— Input: The optional orange image input is not required for this node to operate. However, if it
is connected to a node‘s output, the Run Command will only launch after the connected node
has finished rendering. This is often useful when connected to a Saver, to ensure that the output
frame has been fully saved to disk first. If the application launched returns a non-zero result, the
node will also fail.
Inspector
Frame Tab
The Frame tab is where the command to execute is selected and modified.
Hide
Enable the Hide checkbox to prevent the application or script from displaying a window when it
is executed.
Wait
Enable this checkbox to cause the node to wait for a remote application or tool to exit before
continuing. If this checkbox is disabled, Fusion continues rendering without waiting for the
external application.
Frame Command
This field is used to specify the path for the command to be run after each frame is rendered. The
Browse button can be used to identify the path.
Interactive
This checkbox determines whether the launched application should run interactively, allowing
user input.
If you want to add zero paddings to the numbers generated by %t, refer to the wildcard with %0x,
where x is the number of characters with which to pad the value. This also works for %a and %b.
For example, test%04t.tga would return the following values at render time:
test0000.tga
test0001.tga
test0009.tga
test0010.tga
You may also pad a value with spaces by calling the wildcard as %x, where x is the number of spaces
with which you would like to pad the value.
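This padding behaves like printf-style formatting; the Python sketch below (with a hypothetical expand_t helper) mirrors the test%04t.tga example:

```python
def expand_t(width, frame):
    """Mimic the %0xt wildcard: pad `frame` with zeros to `width`
    characters, as test%04t.tga does. Hypothetical helper, not a
    Fusion function."""
    return f"test{frame:0{width}d}.tga"

print(expand_t(4, 0))    # -> test0000.tga
print(expand_t(4, 9))    # -> test0009.tga
print(expand_t(4, 10))   # -> test0010.tga
# Space padding (%x) uses the same idea with spaces instead of zeros:
print(f"{7:4d}")         # -> '   7'
```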
The Run Command Start tab
The Run Command End tab
EXAMPLE To copy the saved files from a render to another directory as each frame is
rendered, save the following text in a file called copytest.bat to your C:\ directory (the
root folder).
@echo off
set parm=%1 %2
copy %1 %2
set parm=
Create or load any node tree that contains a Saver. The following example assumes a
Saver is set to output D:\test0000.exr, test0001.exr, and so on. You may have to modify the
example to match.
Add a Run Command node after the Saver to ensure the Saver has finished saving first.
Now enter the following text into the Run Command node’s Frame Command text box:
C:\copytest.bat D:\test%04f.exr C:\
When this node tree is rendered, each file will be immediately copied to the C:\ directory as
it is rendered.
The Run Command node could be used to transfer the files via FTP to a remote drive on
the network, to print out each frame as it is rendered, or to execute a custom image-
processing tool.
The Run Command node is not restricted to executing simple batch files. FusionScript,
VBScript, Jscript, CGI, and Perl files could also be used, as just a few examples.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Set Domain does not change the image‘s physical dimensions. Downstream nodes will not process
anything outside the Domain of Definition (DoD), thus speeding up rendering of computation-intensive nodes.
This node provides an absolute mode, for setting the domain of definition manually, and a relative
mode for adjusting the existing domain of definition.
Inputs
The two inputs on the Set Domain node are used to connect 2D images.
— Input: The orange background input must be connected. It accepts a 2D image with the DoD you
want to replace or adjust.
— Foreground: The green image input is optional but also accepts a 2D image as its input. When the
foreground input is connected, the Set Domain node will replace the Background input’s domain
of definition with the foreground’s DoD.
A Set Domain node manually sets the area to limit image processing.
Inspector
Controls Tab
Mode
The Mode menu has two choices depending on whether you want to adjust or offset the existing
domain or set precise values for it.
The same operations can be performed in Set or in Adjust mode. In Adjust mode, the sliders default
to 0, marking their respective full extent of the image. Positive values shrink the DoD while negative
values expand the DoD to include more data.
Set mode defaults to the full extent of the visible image. Sliders default to a scale of 0-1 from left to
right and bottom to top.
Left
Defines the left border of the DoD. Higher values on this slider move the left border toward the right,
excluding more data from the left margin.
1 represents the right border of the image; 0 represents the left border. The slider defaults to 0
(left border).
Bottom
Defines the bottom border of the DoD. Higher values on this slider move the bottom border toward
the top, excluding more data from the bottom margin.
1 represents the top border of the image; 0 represents the bottom border. The slider defaults to 0
(bottom border).
Right
Defines the right border of the DoD. Lower values on this slider move the right border toward
the left, excluding more data from the right margin.
1 represents the right border of the image; 0 represents the left border. In Set mode, the slider
defaults to 1 (right border).
Top
Defines the top border of the DoD. Higher values on this slider move the top border toward the
bottom, excluding more data from the top margin.
1 represents the top border of the image; 0 represents the bottom border. In Set mode, the slider
defaults to 1 (top border).
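Mapping the normalized Set-mode slider values to pixel coordinates can be sketched as follows; the simple linear mapping and the rounding are assumptions for illustration:

```python
def dod_pixels(left, bottom, right, top, width, height):
    """Convert normalized Set-mode slider values (0.0-1.0, measured
    from the left/bottom of the image) into a pixel-space DoD
    rectangle (left, bottom, right, top)."""
    return (round(left * width), round(bottom * height),
            round(right * width), round(top * height))

# Full-frame defaults (0, 0, 1, 1) cover the whole 1920 x 1080 image:
print(dod_pixels(0.0, 0.0, 1.0, 1.0, 1920, 1080))  # -> (0, 0, 1920, 1080)
# Trimming 10% from each side excludes those margins from processing:
print(dod_pixels(0.1, 0.1, 0.9, 0.9, 1920, 1080))  # -> (192, 108, 1728, 972)
```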
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Time Speed does not interpolate the aux channels but instead destroys them. In particular, the Vector/
BackVector channels are consumed and destroyed after computation.
Add an Optical Flow after the Time Speed node if you want to generate flow vectors for the
retimed footage.
Inputs
The single input on the Time Speed node is used to connect a 2D image that will be retimed.
— Input: The orange input is used for the primary 2D image that will be retimed.
A MediaIn node having its speed changed in the Time Speed node.
Inspector
Speed
This control is used to adjust the Speed, in percentage values, of the outgoing image sequence.
Negative values reverse the image sequence. 200% Speed is represented by a value of 2.0, 100% by
1.0, 50% by 0.5, and 10% by 0.1.
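Conceptually, the Speed value scales source time linearly; this sketch assumes the retime is anchored at frame 0:

```python
def source_frame(output_frame, speed):
    """Map an output frame to the source frame sampled at that time,
    for a constant Speed value (2.0 = 200%, 0.5 = 50%). Assumes the
    retime is anchored at frame 0."""
    return output_frame * speed

print(source_frame(10, 2.0))   # 200%: output frame 10 samples source frame 20
print(source_frame(10, 0.5))   # 50%: output frame 10 samples source frame 5
print(source_frame(10, -1.0))  # negative speed plays the sequence in reverse
```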
Interpolate Mode
This menu determines how the time speed is processed in order to improve its visual playback
quality, especially in the case of clips that are slowed down. There are three choices in the menu.
— Nearest: The most processor efficient and least sophisticated method of processing; frames are
either dropped for fast motion or duplicated for slow motion.
— Blend: Also processor efficient, but can produce smoother results; adjacent duplicated frames are
dissolved together to smooth out slow or fast motion effects.
— Flow: The most processor intensive but highest quality method of speed effect processing.
Using vector channels pre-generated from an Optical Flow node, new frames are generated to
create slow or fast motion effects. The result can be exceptionally smooth when motion in a clip
is linear. However, two moving elements crossing in different directions or unpredictable camera
movement can cause unwanted artifacts.
Sample Spread
This slider is displayed only when Interpolation is set to Blend. The slider controls the strength of the
interpolated frames on the current frame. A value of 0.5 blends 50% of the frame before and 50% of
the frame ahead and 0% of the current frame.
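The Blend-mode mixing can be sketched as a weighted sum of adjacent frames. The linear weighting between the endpoints is an assumption for illustration, anchored to the 0.5 behavior described above:

```python
def blend_frames(prev_px, cur_px, next_px, spread):
    """Blend a pixel across adjacent frames per the Sample Spread
    slider. At spread = 0.5 the result is 50% previous + 50% next
    and 0% current, matching the description above; the linear
    falloff in between is an assumed model."""
    w_cur = 1.0 - 2.0 * spread
    return w_cur * cur_px + spread * prev_px + spread * next_px

print(blend_frames(0.0, 1.0, 0.0, 0.0))  # spread 0: current frame only -> 1.0
print(blend_frames(0.0, 1.0, 0.0, 0.5))  # spread 0.5: current frame ignored -> 0.0
print(blend_frames(0.2, 1.0, 0.6, 0.5))  # mean of previous and next frames
```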
Depth Ordering
This menu is displayed only when Interpolation is set to Flow. The Depth Ordering is used to
determine which parts of the image should be rendered on top. This is best explained by example.
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors. The car produces larger, or faster, vectors.
The Depth Ordering, in this case, is Fastest on Top, since the car draws over the background.
In a shot where the camera pans to follow the car, the background has faster vectors, and the car has
slower vectors, so the Depth ordering method would be Slowest on Top.
Clamp Edges
This checkbox is displayed only when Interpolation is set to Flow. Under certain circumstances, this
option can remove the transparent gaps that may appear on the edges of interpolated frames. Clamp
Edges can cause a stretching artifact near the edges of the frame that is especially visible with objects
moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use clamp edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
This slider is only displayed when Interpolation is set to Flow and Clamp Edges is enabled. It helps to
reduce the stretchy artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this
can lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
— Prev Forward: Takes the previous frame and uses the Forward vector to interpolate
the new frame.
— Next Forward: Takes the next frame in the sequence and uses the Forward vector to interpolate
the new frame.
— Prev Backward: Takes the previous frame and uses the Back vector to interpolate
the new frame.
— Next Backward: Takes the next frame in the sequence and uses the Back vector to interpolate
the new frame.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Image interpolation offers smooth, high-quality results, using a spline curve to adjust time
nonlinearly. To apply constant time changes such as frame rate changes, use a Time Speed node instead.
When operating in Flow mode, Optical Flow data is required. This node does not generate optical flow
directly; you must create it manually upstream using an Optical Flow node or by loading the forward/
reverse vector channels from disk.
Time Stretcher does not interpolate the aux channels but instead destroys them. In particular, the
Vector/BackVector channels are consumed/destroyed. Add an Optical Flow after the Time Stretcher if
you want to generate flow vectors for the retimed footage.
— Input: The orange input is used for the primary 2D image that will be time stretched.
Inspector
Source Time
This control designates from which frame in the original sequence to begin sampling.
When a Time Stretcher node is added to the node tree, the Source Time control already contains a
Bézier spline with a single keyframe set to 0.0. The keyframe position is determined by the current
time when the node is added to the node tree.
NOTE: The Source Time spline may not be immediately visible until Edit is selected from the
Source Time’s contextual menu, or Display all Splines is selected from the Spline Window’s
contextual menu.
Interpolate Mode
This menu determines how the time speed is processed in order to improve its visual playback
quality, especially in the case of clips that are slowed down.
— Nearest: The most processor efficient and least sophisticated method of processing; frames are
either dropped for fast motion or duplicated for slow motion.
— Blend: Also processor efficient but can produce smoother results; adjacent duplicated frames are
dissolved together to smooth out slow or fast motion effects.
— Flow: The most processor intensive but highest quality method of speed effect processing.
Using vector channels pre-generated from an Optical Flow node, new frames are generated to
create slow or fast motion effects. The result can be exceptionally smooth when motion in a clip
is linear. However, two moving elements crossing in different directions or unpredictable camera
movement can cause unwanted artifacts.
Sample Spread
This slider is displayed only when Interpolation is set to Blend. The slider controls the strength of the
interpolated frames on the current frame. A value of 0.5 blends 50% of the frame before and 50% of
the frame ahead and 0% of the current frame.
Depth Ordering
This menu is displayed only when Interpolation is set to Flow. The Depth Ordering is used to
determine which parts of the image should be rendered on top. This is best explained by example.
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors. The car produces larger, or faster, vectors.
The Depth Ordering in this case is Fastest on Top, since the car draws over the background.
In a shot where the camera pans to follow the car, the background has faster vectors, and the car has
slower vectors, so the Depth ordering method would be Slowest on Top.
Clamp Edges
This checkbox is displayed only when Interpolation is set to Flow. Under certain circumstances, this
option can remove the transparent gaps that may appear on the edges of interpolated frames. Clamp
Edges can cause a stretching artifact near the edges of the frame that is especially visible with objects
moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use clamp edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
This slider is displayed only when Interpolation is set to Flow and Clamp Edges is enabled. It helps to
reduce the stretchy artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this
can lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
— Prev Forward: Takes the previous frame and uses the Forward vector to
interpolate the new frame.
— Next Forward: Takes the next frame in the sequence and uses the Forward vector to
interpolate the new frame.
— Prev Backward: Takes the previous frame and uses the Back vector to
interpolate the new frame.
— Next Backward: Takes the next frame in the sequence and uses the Back vector to
interpolate the new frame.
EXAMPLE Make sure that the current time is either the first or last frame of the clip to
be affected in the project. Add the Time Stretcher node to the node tree. This will create a
single point on the Source Time spline at the current frame. The value of the Source Time
will be set to zero for the entire Global Range.
Set the value of the Source Time to the frame number to be displayed from the original
source, at the frame in time it will be displayed in during the project.
6. Fusion will render 25 frames by interpolating down the 100 frames to a length of 25.
7. Hold the last frame for 30 frames, and then play the clip backward at regular speed.
Continue the example from above and follow the steps below.
9. Right-click on the Source Time control and select Set Key from the menu.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Although Wireless Links can be helpful, try to keep as much of a node tree visible as possible;
otherwise, you lose one of the main benefits of a node tree.
Inputs
There are no inputs on this node.
Inspector
Controls Tab
The Controls tab in the Wireless Link node contains a single Input field for the linked node.
Input
To use the Wireless Link node, in the Node Editor, drag the 2D node into the Input field of the Wireless
Link node. Any change you make to the original node is wirelessly replicated in the Wireless Link node.
You can use the output from the Wireless Link node to connect to a nearby node.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other miscellaneous nodes. These common
controls are described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the miscellaneous nodes. The controls
are consistent and work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
For example, if the Red button on a Blur tool is deselected, the blur will first be applied to the image,
and then the red channel from the original input will be copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls
tabs are identical.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
“Understanding Image Channels,” in the Fusion Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Optical Flow
This chapter details the Optical Flow nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Optical Flow [OF]����������������������������������������������������������������������������������������������������� 1334
The computed optical flow is stored within the Vector and Back Vector aux channels of the output.
These channels can be used in other nodes like the Vector Motion Blur or Vector Distort. However,
Optical Flow must render twice when connecting it to a Time Stretcher or Time Speed node. These
nodes require the channels A. FwdVec and B. BackVec in that order, but Optical Flow generates A.
BackVec and A. FwdVec when it processes.
If you find that optical flow is too slow, consider rendering it out into OpenEXR files using a
Saver node.
TIP: If the footage input flickers on a frame-by-frame basis, it is a good idea to deflicker the
footage beforehand.
Inputs
The Optical Flow node includes a single orange image input.
— Input: The orange background input accepts a 2D image. This is the sequence of frames for which
you want to compute optical flow. The output of the Optical Flow node includes the image and
vector channels. The vector channels can be displayed by right-clicking in the viewer and choosing
Channel > Vectors and then Options > Normalize Color Range.
TIP: When analyzing Optical Flow vectors, consider adding a Smooth Motion node
afterward with smoothing for forward/backward vectors enabled.
Alternatively, if you find the Optical Flow node too slow to analyze the frames, consider rendering it
out to an OpenEXR format using a Saver node. Then import the rendered EXR file as your new image
with embedded vector channels.
Inspector
Warp Count
Decreasing this slider makes the optical flow computations faster. To understand what this option
does, you must understand that the optical flow algorithm progressively warps one image until
it matches with the other image. After some point, convergence is reached, and additional warps
become a waste of computational time. You can tweak this value to speed up the computations, but it
is good to watch what the optical flow is doing at the same time.
Smoothness
This controls the smoothness of the optical flow. Higher smoothness helps deal with noise, while lower
smoothness brings out more detail.
Half Resolution
The Half Resolution checkbox is used purely to speed up the calculation of the optical flow. The input
images are resized down and tracked to produce the optical flow.
When using the Classic method, a single slider at the top of the Inspector improves performance
by generating proxies. The remaining Advanced section parameters tune the Optical Flow vector
calculations. The default settings serve as a good standard. In most cases, tweaking of the advanced
settings is not needed. Many deliver small or diminishing returns. However, depending on the
settings, rendering time can easily vary by 10x. If you’re interested in reducing process time, it is best
to start by experimenting with the Proxy, Number of Iterations, and Number of Warps sliders and
changing the filtering to Bilinear.
Smoothness
This controls the smoothness of the optical flow. Higher smoothness helps deal with noise, while lower
smoothness brings out more detail.
Edges
This slider is another control for smoothness but applies it based on the color channel. It tends to have
the effect of determining how edges in the flow follow edges in the color images. When it is set to a
low value, the optical flow becomes smoother and tends to overshoot edges. When it is set to a high
value, details from the color images start to slip into the optical flow, which is not desirable. Edges in
the flow end up more tightly aligning with the edges in the color images. This can result in streaked-out edges when the optical flow is used for interpolation. As a rough guideline, if you are using the flow vectors to produce a Z-channel for post effects like Depth of Field, set it lower in value. If you are using the vectors to perform interpolation, you might want it to be higher in value.
Mismatch Penalty
This option controls how the penalty for mismatched regions grows as they become more dissimilar.
The slider provides a choice between a balance of Quadratic and Linear penalties. Quadratic strongly
penalizes large dissimilarities, while Linear is more robust to dissimilar matches. Moving this slider
toward Quadratic tends to give a flow with more small random variations in it, while Linear produces smoother, more visually pleasing results.
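The trade-off between the two penalty models can be sketched in a few lines of Python. This is a conceptual illustration only; the function names and the blending scheme are assumptions, not Fusion's internal code:

```python
# Conceptual sketch only: how a quadratic vs. linear penalty weighs a
# pixel mismatch "d". Function names and the blend are assumptions.
def quadratic_penalty(d):
    return d * d      # grows steeply; strongly punishes large dissimilarities

def linear_penalty(d):
    return abs(d)     # grows gently; more robust to dissimilar matches

# A slider position t in [0, 1] could blend between the two models:
def blended_penalty(d, t):
    return (1 - t) * quadratic_penalty(d) + t * linear_penalty(d)

print(quadratic_penalty(3))   # 9
print(linear_penalty(3))      # 3
```

For a mismatch of 3, the quadratic model charges 9 while the linear model charges only 3, which is why moving the slider toward Linear tolerates outliers and tends to yield smoother results.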
Warp Count
Decreasing this slider makes the optical flow computations faster. In particular, the computational
time depends linearly upon this option. To understand what this option does, you must understand
that the optical flow algorithm progressively warps one image until it matches with the other image.
After some point, convergence is reached, and additional warps become a waste of computational
time. The default value in Fusion is set high enough that convergence should always be reached. You
can tweak this value to speed up the computations, but it is good to watch what the optical flow is
doing at the same time.
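The idea of convergence behind the Warp Count can be sketched as a toy Python model. The reduction factor, threshold, and names are invented for illustration and are not Fusion's algorithm:

```python
# Toy model of progressive warping: each warp pass shrinks the remaining
# mismatch between the two images; once convergence is reached, further
# warps are wasted computation.
def progressive_warp(mismatch, warp_count, reduction=0.5, eps=1e-3):
    warps_used = 0
    for _ in range(warp_count):
        if mismatch < eps:        # converged: extra warps add nothing
            break
        mismatch *= reduction     # one warp pass reduces the mismatch
        warps_used += 1
    return mismatch, warps_used

# With a generous warp count, convergence arrives well before the cap:
print(progressive_warp(1.0, 50))  # (0.0009765625, 10)
```

In this toy model, only 10 of the 50 allowed warps are performed; lowering the Warp Count below the convergence point is what begins to degrade quality.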
Iteration Count
Decreasing this slider makes the computations faster. In particular, the computational time depends
linearly upon this option. Just like adjusting the Warp Count, adjusting this option higher will eventually
yield diminishing returns and not produce significantly better results. By default, this value is set to
something that should converge for all possible shots and can often be tweaked lower without reducing the quality of the flow.
Filtering
This option controls filtering operations used during flow generation. Catmull-Rom filtering will
produce better results, but at the same time, turning on Catmull-Rom will increase the computation
time steeply.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Repair Frame does not pass through aux channels; it destroys them once the computation is done.
See the Optical Flow node for controls and settings information.
TIP: If your footage varies in color from frame to frame, sometimes the repair can be
noticeable because, to fill in the hole, Repair Frame must pull color values from adjacent
frames. Consider using deflickering, color correction, or using a soft-edged mask to help
reduce these kinds of artifacts.
Inputs
There are two inputs on the Repair Frame node. One is used to connect a 2D image that will be
repaired and the other is for an effect mask.
— Input: The orange input is used for the primary 2D image that will be repaired.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the repairs to
certain areas.
Controls Tab
The Controls tab includes options for how to repair the frames. It also includes controls for adjusting
the optical flow analysis, identical to those controls in the Optical Flow node.
Depth Ordering
The Depth Ordering determines which parts of the image should be rendered on top by selecting
either Fastest On Top or Slowest On Top. The examples below best explain these options.
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors, while the car produces larger, or faster, vectors.
The depth ordering in this case is Fastest On Top since the car draws over the background.
In a shot where the camera pans to follow the car, the background has faster vectors, and the car has
slower vectors, so the Depth Ordering method is Slowest On Top.
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. Clamp Edges causes a stretching artifact near the edges of the frame
that is especially visible with objects moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use clamp edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
This slider is displayed only when Clamp Edges is enabled. The slider helps to reduce the stretchy
artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this
can lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
— Prev Forward: Takes the previous frame and uses the Forward vector to interpolate
the new frame.
— Next Forward: Takes the next frame in the sequence and uses the Forward vector to interpolate
the new frame.
— Prev Backward: Takes the previous frame and uses the Back vector to interpolate
the new frame.
— Next Backward: Takes the next frame in the sequence and uses the Back vector to interpolate
the new frame.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
The image connected to the node's input must have precomputed Vector and Back Vector channels; otherwise, this tool prints error messages in the Console window.
Check on the channels you want to temporally smooth. Be aware that if a channel selected for
smoothing is not present, Smooth Motion will not fail, nor will it print any error messages.
It can also be used to smooth the Vector and Back Vector channels; however, sometimes, this can
make the interpolated results worse if there are conflicting motions or objects in the shot that move
around erratically, jitter, or bounce rapidly.
Another technique is to chain two Smooth Motion nodes. Use the first Smooth Motion node to smooth the Vector and Back Vector channels, and the second to smooth the target channels (e.g., Disparity). This way, the smoothed vector channels are used to smooth Disparity. You can even use the smoothed motion channels to smooth the motion channels themselves.
Inputs
The Smooth Motion node includes a single orange image input.
— Input: The orange image input accepts a 2D image. This is the sequence of images for which you
want to compute smooth motion. This image must have precomputed Vector and Back Vector
channels either generated from an Optical Flow node or saved in EXR format with vector channels.
A Smooth Motion node using Vector and Back Vector channels from the Optical Flow node.
Inspector
Channel
Smooth Motion can be applied to more than just the RGBA channels. It can also be applied to the
other AOV channels.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Tween [Tw]
Since optical flow is based on color matching, it is a good idea to color correct your images to match
ahead of time. Also, if you are having trouble with noisy images, it may also help to remove some of
the noise ahead of time.
Tween destroys any input aux channels. See the Optical Flow node for controls and settings information.
Inputs
There are two image inputs on the Tween node and an effects mask input.
— Input 0: The orange input, labeled input 0, is the previous frame to the one you are generating.
— Input 1: The green input, labeled input 1, is the next frame after the one you are generating.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the Tween to
certain areas.
Inspector
Controls Tab
The Controls tab includes options for how to tween frames. It also includes controls for adjusting the
optical flow analysis, identical to those controls in the Optical Flow node.
Interpolation Parameter
This option determines where the frame you are interpolating is, relative to the two source frames A
and B. An Interpolation Parameter of 0.0 will result in frame A, a parameter of 1.0 will result in frame B,
and a parameter of 0.5 will yield a result halfway between A and B.
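As a simple illustration, the Interpolation Parameter behaves like a linear blend position between the two source frames. The helper function is hypothetical; Fusion performs this mapping internally:

```python
# Hypothetical helper: the Interpolation Parameter as a blend position
# between source frames A and B.
def interpolated_time(frame_a, frame_b, t):
    """t = 0.0 -> frame A, t = 1.0 -> frame B, t = 0.5 -> halfway."""
    return frame_a + t * (frame_b - frame_a)

print(interpolated_time(10, 11, 0.0))  # 10.0 (frame A)
print(interpolated_time(10, 11, 1.0))  # 11.0 (frame B)
print(interpolated_time(10, 11, 0.5))  # 10.5 (halfway between A and B)
```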
In a locked-off camera shot where a car is moving through the frame, the background does not move,
so it produces small, or slow, vectors, while the car produces larger, or faster, vectors.
The Depth Ordering in this case is Fastest On Top since the car draws over the background.
In a shot where the camera pans to follow the car, the background has faster vectors, and the car has
slower vectors, so the Depth Ordering method is Slowest On Top.
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. Clamp Edges causes a stretching artifact near the edges of the frame
that is especially visible with objects moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use Clamp Edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
This slider is displayed only when Clamp Edges is enabled. The slider helps to reduce the stretchy
artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this
can lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
— Prev Forward: Takes the previous frame and uses the Forward vector to interpolate
the new frame.
— Next Forward: Takes the next frame in the sequence and uses the Forward vector to interpolate
the new frame.
— Prev Backward: Takes the previous frame and uses the Back vector to interpolate
the new frame.
— Next Backward: Takes the next frame in the sequence and uses the Back vector to interpolate
the new frame.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Optical Flow nodes. These common
controls are described in detail in the following “The Common Controls” section.
Inspector
The Common Optical Flow Settings tab
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Optical Flow category. The controls
are consistent and work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this causes the tool to skip processing entirely, copying the input straight to the output.
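Conceptually, the Blend control acts as a per-pixel mix between the incoming image and the processed result. The helper below is a hypothetical sketch, not Fusion's implementation:

```python
# Hypothetical per-pixel sketch of the Blend control: 0.0 returns the
# untouched input, 1.0 returns the fully processed result.
def blend_pixel(original, processed, blend):
    return (1.0 - blend) * original + blend * processed

print(blend_pixel(0.2, 0.8, 0.0))  # 0.2 (identical to the incoming image)
print(blend_pixel(0.2, 0.8, 1.0))  # 0.8 (fully processed output)
```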
Red/Green/Blue/Alpha
These buttons limit the tool’s processing to the selected color channels. For example, if the Red button on a Blur tool is deselected, the blur is first applied to the image, and then the red channel from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
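The multiplication described above can be sketched for a single pixel (an illustrative helper, not Fusion's code):

```python
# Sketch of Multiply by Mask: RGB values are multiplied by the mask
# value, so pixels outside the mask (mask = 0) become black/transparent.
def multiply_by_mask(rgb, mask):
    return tuple(c * mask for c in rgb)

print(multiply_by_mask((0.5, 0.25, 1.0), 0.0))  # (0.0, 0.0, 0.0)
print(multiply_by_mask((0.5, 0.25, 1.0), 1.0))  # (0.5, 0.25, 1.0)
```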
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information see Chapter 18, “Understanding Image Channels,” in the Fusion Reference Manual.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, see the Fusion scripting documentation.
Paint Node
This chapter details the Paint node available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Paint ........................................................................................................... 1349
Inputs ......................................................................................................... 1349
Inspector .................................................................................................... 1353
Each Paint node is made up of a series of brush strokes. These strokes are vector shapes with editable
brush, size, and effect. A wide range of apply modes and brush types are available.
Most brushstroke styles are editable polylines for fine control. They can be animated to change
shape, length, and size over time. The opacity and size of a stroke can be affected by velocity and
pressure when used with a supported tablet.
Unlimited undo and redo of paint provides the ability to experiment before committing changes to an
image sequence. Paint strokes can be reordered, deleted, and modified with virtually infinite flexibility.
Inputs
The two inputs on the Paint node are used to connect a 2D image and an effect mask which can be
used to limit the painted area.
— Input: The orange input must be connected to a 2D image. This image sets the size of the
“canvas” on which you paint.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the Paint to only those
pixels within the mask.
A Paint node merged over the top of a MediaIn for more flexibility
To begin working with the Paint tool, first select the paint stroke type from the Paint toolbar above the
viewer. There are ten stroke types to choose from as well as two additional tools for selecting and grouping
paint strokes. The stroke types and tools are described below in the order they appear in the toolbar.
— Multistroke: Although this is the default selection and the first actual brush type in the toolbar,
Multistroke is not the stroke type you will use most often. However, it’s perfect for those
100-strokes-per-frame retouching paint jobs like removing tracking markers. Multistroke is much
faster than the Stroke type but is not editable after it is created. By default, Multistroke lasts for
one frame and cannot be modified after it has been painted. Use the Duration setting in the
Stroke controls to set the number of frames before painting. A shaded area of the Multistroke
duration is visible but not editable in the Keyframes Editor. While Multistrokes aren’t directly
editable, they can be grouped with the PaintGroup modifier, then tracked, moved, and rotated by
animating the PaintGroup instead.
— Clone Multistroke: Similar to Multistroke but specifically meant to clone elements from one
area or image to the other. Perfect for those 100-strokes-per-frame retouching paint jobs like
removing tracking markers. Clone Multistroke is faster than the Stroke type but is not editable
after it is created. By default, Clone Multistroke lasts for one frame and cannot be modified after
it has been painted. Use the Duration setting in the Stroke controls to set the number of frames
before painting. A shaded area of the Clone Multistroke duration is visible but not editable in the
Keyframes Editor.
Paint edit options are displayed in the viewer after a Polyline stroke is created.
— Click Append: This is the default option when creating a polyline stroke. It works more like a
Bézier pen drawing tool than a paintbrush tool. Clicking sets a control point and appends the next
control point when you click again in a different location.
— Draw Append: This is a freehand drawing tool. It paints a stroke similar to drawing with a pencil
on paper. You can create a new Polyline Stroke or Copy Polyline Stroke using the Draw tool, or you
can extend a Stroke style after clicking the Make Editable button in the Inspector.
— Insert: Insert adds a new control point along the paint stroke spline.
— Modify: Modify allows you to safely move or smooth any existing point along a spline without
worrying about accidentally adding a new point.
— Done: Prevents any point along the spline from being moved or modified. Also, new points cannot
be added. You can, however, move and rotate the entire spline.
— Closed: Closes an open polyline.
— Smooth: Changes the selected stroke or control point from a linear to a smooth curve.
— Linear: Changes the selected stroke or control point from a smooth curve to linear.
— Select All: Selects all the control points on the polyline.
— Keys: Shows or hides the control points along the polyline.
— Handles: Shows or hides the Bézier handles along the polyline.
— Shape: Places a reshape rectangle around the selected polyline control points. Using the reshape
rectangle, you can deform groups of polyline control points or entire shapes much easier than
modifying each point.
— Delete: Deletes the selected control point(s).
— Reduce: Opens a Freehand precision window that can be used to reduce the number of control
points on a polyline. This can make the paint stroke easier to modify, especially if it has been
created using the Draw tool.
— Publish: You can use the Publish menu to either publish control points or the path. Publishing is
a form of parameter linking. It makes the selected item available for use by other controls, or to
attach a control point to a tracker.
— Follow Points: Allows a selected point to follow the path of a published point. The point follows
the published point using an offset position.
— Roto Assist: Enable the Roto Assist button when you begin painting with the Polyline Stroke tool.
The polyline points snap to the closest edge as you click to add points to the shape. A cyan outline
indicates the points that have snapped to an edge. There are three main Roto Assist options
selectable through the drop-down menu:
— Multiple Points: When enabled, a single click on a high-contrast edge adds multiple points
to define the entire edge, instead of having to add each point individually. This applies to a
single click only; the second click reverts to single-point edge detection.
— Distance: Opens a dialog where you can set the pixel range within which searching for an
edge will take place.
— Reset: Used for resetting the snap attribute of all snapped points. After resetting, the points
will become unavailable for tracking.
Controls Tab
Not all of the controls described here appear in all modes. Some controls are useful only in a specific
Paint mode and do not appear when they are not applicable. The Controls tab is used to configure
your paint settings before painting. Once a paint stroke is created, except for the Multistroke and
Clone Multistroke, you can select the stroke in the viewer and update the controls.
Brush Controls
Brush Shape
The brush shape buttons select the brush tip shape. Except for the single pixel shape, you can modify
the size of the brush shape in the viewer by holding down the Command or Ctrl key while dragging
the mouse.
— Soft Brush: The Soft Brush type is a circular brush tip with soft edges.
— Circular Brush: A Circular Brush is a brush tip shape with hard edges.
Vary Size
Vary size settings change the stroke size based on speed or a pressure-sensitive pen and tablet.
— Constant: The brush tip remains a constant size over the stroke.
— With Pressure: The stroke size varies with the actual applied pressure.
— With Velocity: The stroke size varies with the speed of painting. The faster the stroke,
the thinner it is.
Vary Opacity
Vary opacity settings change the stroke opacity based on speed or a pressure-sensitive pen
and tablet.
— Constant: The brush tip remains at a constant transparency setting over the entire stroke.
— With Pressure: The stroke transparency varies with the applied pressure.
— With Velocity: The stroke transparency varies with the speed of painting. The faster the stroke,
the more transparent it is.
Softness
Use this control to increase or decrease the Softness of a soft brush.
Image Source
When using the Image Source brush type, select between three possible source brush images.
— Node: The image source is derived from the output of a node in the node tree. Drag the node into
the Inspector’s Source node input field to set the source.
— Clip: The image source is derived from an image or sequence on disk. Any file supported by
Fusion’s Loader or MediaIn node can be used.
— Brush: Select an image to use as a brush from the menu. Images located in the Fusion > Brushes
directory are used to populate the menu.
Color Space
When the Fill tool is selected, a Color Space menu selects the color space when sampling colors
around the Fill tool center for inclusion in the fill range.
Channel
When the Fill tool is selected, a Channel menu selects which color channel is used in the fill paint.
For example, with alpha selected, the fill occurs on contiguous pixels of the alpha channel.
— Color: The Color Apply Mode paints simple colored strokes. When used in conjunction with an
image brush, it can also be used to tint the image.
— Clone: The Clone Apply Mode copies an area from the same image using adjustable positions and
time offsets. This mode can also copy portions of one image into another image. Any image from
the node tree can be used as the source image.
— Emboss: The Emboss Apply Mode embosses the portions of the image covered by the
brush stroke.
— Erase: Erase reveals the underlying image through all other strokes, effectively erasing portions
of the strokes beneath it without actually destroying the strokes.
— Merge: This Apply Mode effectively merges the brush onto the image. This mode behaves in
much the same way as the Color Apply Mode but has no color controls. It is best suited for use
with the image brush type.
— Smear: Smear the image using the direction and strength of the brushstroke as a guide.
— Stamp: Stamps the brush onto the image, completely ignoring any alpha channel or transparency
information. This mode is best suited for applying decals to the target image.
— Wire: This Wire Removal Mode is used to remove wires, rigging, and other small elements in the
frame by sampling adjacent pixels and drawing them in toward the stroke.
Stroke Controls
The stroke controls contain parameters that adjust the entire stroke of paint as well as
control it over time.
— Size: This control adjusts the size of the brush when the brush type is set to either Soft Brush or
Circle. The diameter of the brush is drawn in the viewer as a small circle surrounding the mouse
pointer. The size can also be adjusted interactively in the viewer by holding the Command or Ctrl
key while dragging the mouse pointer.
— Spacing: The Spacing slider determines the distance between dabs (samples used to draw a
continuous stroke along the underlying vector shape). Increasing this value increases the density
of the stroke, whereas decreasing this value causes the stroke to assume the appearance of a
dotted line.
— Stroke Animation: The Stroke Animation menu provides several pre-built animation effects
that can be applied to a paint stroke. This menu appears only for vector strokes like Stroke
and Polyline Stroke.
— All Frames: This default displays the stroke for all frames of the image connected to the
orange background input of the Paint node.
— Limited Duration: This displays the stroke for the number of frames specified by the Duration slider.
— Write On: When Write On is selected, an animation spline is added to the paint stroke that
precisely duplicates the timing of the paint stroke’s creation. The stroke is written on the
image exactly as it was drawn. To adjust the timing of the Write On effect, switch to the Spline
Editor and use the Time Stretcher node to adjust the overall length of the animation spline.
To smooth or manually adjust the motion, try reducing the points in the animation spline.
— Duration: Duration sets the duration of each stroke in frames. This control is present only for
Multistroke and Clone Multistroke, or when the stroke animation mode is set to Limited Duration.
It is most commonly employed for frame-by-frame rotoscoping through a scene.
Each Vector stroke applied to a scene has a duration in the Keyframes Editor that can be trimmed
independently from one stroke to the next. The duration can be set to 0.5, which allows each
stroke to last for a single field only when the node tree is processing in Fields mode.
— Write On and Write Off: This range slider appears when the Stroke Animation is set to one of
the Write On and Write Off methods. The range represents the beginning and end points of the
stroke. Increase the Start value from 0.0 to 1.0 to erase the stroke, or increase the End value from
0.0 to 1.0 to draw the stroke on the screen. This control can be animated to good effect. It works
most effectively when automatically animated through the use of the Write On and Write Off
modes of the Stroke Animation menu.
— Make Editable: This button appears only for Vector strokes. Clicking on Make Editable turns the
current stroke into a polyline spline so that the shape can be adjusted or animated.
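The relationship between the Spacing control described above and stroke density can be sketched in Python. The helper and its sampling scheme are assumptions for illustration, not Fusion's implementation; note that since the manual's Spacing slider increases density as it increases, the slider would map inversely to the raw inter-dab distance used here:

```python
# Illustrative sketch: dab positions sampled along a normalized stroke.
# A smaller inter-dab distance yields more dabs and a denser, more
# continuous stroke; a larger distance yields a dotted appearance.
def dab_positions(stroke_length, dab_distance):
    positions, t = [], 0.0
    while t <= stroke_length:
        positions.append(round(t, 6))
        t += dab_distance
    return positions

print(len(dab_positions(1.0, 0.25)))  # 5 dabs
print(len(dab_positions(1.0, 0.1)))   # 11 dabs (denser stroke)
```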
NOTE: The MultiStroke tools are built for speed and can contain many strokes internally
without creating a huge list stack in the modifiers.
Each Paint modifier stroke contains Brush controls, Apply controls, and Stroke controls identical to
those found in the main Controls tab of the Inspector.
While painting:
Hold Command or Ctrl while left-dragging to change brush size.
While cloning:
Option-click or Alt-click to set the clone source position. Strokes start cloning
from the selected location.
Option + Left/Right or Alt + Left/Right Arrow keys change the clone source angle.
Option + Up/Down or Alt + Up/Down Arrow keys change the clone source size.
Shift + Command or Shift + Ctrl can be used with the above for greater or lesser
adjustments. Left and right square brackets, [ and ], change the clone source Time Offset
(this requires a specific Clone Source node to be set in the Source Node field).
Copy Rect/Ellipse:
Shift + drag out the source to constrain the shape.
Paint Groups:
Command + drag or Ctrl + drag to change the position of a group’s crosshair, without
changing the position of the group.
Particle Nodes
This chapter details the Particle nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Particle Nodes ........................................................................................ 1359
pMerge [pMg] ......................................................................................... 1391
To begin, every particle system you create must contain two fundamental nodes:
— pEmitter: Used to generate the particles and control their basic look, motion, and behavior.
— pRender: Used to render the output of the pEmitter into a 2D or 3D scene. When creating
particles, you only ever view the pRender node.
Particle Nodes
The remaining particle nodes modify the pEmitter results to simulate natural phenomena like gravity,
flocking, and bounce. The names of particle nodes all begin with a lowercase p to differentiate them
from non-particle nodes. They can be found in the particles category in the Effects Library.
pAvoid [pAv]
A pAvoid node creates a “desire” in a particle to move away from a specific region. If the velocity
of the particle is stronger than the combined distance and strength of the pAvoid region, the
particle’s desire to avoid the region does not overcome its momentum, and the particle crosses that
region regardless.
It has two primary controls. The first determines the distance from the region a particle should be
before it begins to move away from the region. The second determines how strongly the particle
moves away from the region.
Inputs
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area particles avoid.
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Distance
Determines the distance from the region a particle should be before it begins to move away from
the region.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pBounce [pBn]
Inputs
The pBounce node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area particles bounce off.
A pBounce node using a Shape 3D node as the region on which particles bounce off
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result.
Two nodes with the same seed values will produce the same random results. Click the Randomize
button to randomly select a new seed value, or adjust the slider to manually select a new seed value.
Elasticity
Elasticity affects the strength of a bounce, or how much velocity the particle will have remaining after
impacting upon the Bounce region. A value of 1.0 will cause the particle to possess the same velocity
after the bounce as it had entering the bounce. A value of 0.1 will cause the particle to lose 90% of its
velocity upon bouncing off of the region.
The range of this control is 0.0 to 1.0 by default, but greater values can be entered manually. Values
above 1.0 cause the particles to gain momentum after an impact, rather than lose it. Negative values
are accepted but do not produce a useful result.
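The effect of Elasticity on a single bounce can be sketched as follows. This is an illustrative model of the behavior described above, not Fusion’s internal code:

```python
# Illustrative sketch (not Fusion's internal code): Elasticity scales the
# velocity that remains after a particle reflects off the Bounce region.
def bounce_1d(velocity, elasticity):
    # Reflect the velocity and keep only the fraction given by Elasticity.
    return -velocity * elasticity

print(bounce_1d(10.0, 1.0))  # full rebound at the same speed: -10.0
print(bounce_1d(10.0, 0.1))  # loses 90% of its speed: -1.0
```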
Variance
By default, particles that strike the Bounce region will reflect evenly off the edge of the Bounce
region, according to the vector or angle of the region. Increasing the Variance above 0.0 will
introduce a degree of variation to that angle of reflection. This can be used to simulate the effect of a
rougher surface.
Spin
By default, particles that strike the region will not have their angle or orientation affected in any way.
Increasing or decreasing the Spin value will cause the Bounce region to impart a spin to the particle
based on the angle of collision, or to modify any existing spin on the particle. Positive values will
impart a forward spin, and negative values impart a backward spin. The larger the value, the faster the
spin applied to the particle will be.
Roughness
This slider varies the bounce off the surface to slightly randomize particle direction.
Surface Motion
This slider makes the bounce surface behave as if it had motion, thus affecting the particles.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pChangeStyle [pCS]
Except for the pCustom node, this is the only node that modifies the particles’ appearance rather than
their motion. It is often used to trigger a change in appearance in response to some event, such as
striking a barrier.
Inputs
The pChange Style node has a single orange input by default. Like most particle nodes, this orange
input accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the
node when you set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the custom particle node takes effect.
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click
the Randomize button to randomly select a new seed value, or adjust the slider to manually select a
new seed value.
Style
This option allows the user to change the particle’s Style and thus the look. See
“The Common Controls” in this chapter to learn more about Styles.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pCustom [pCu]
Inputs
The pCustom node has three inputs. Like most particle nodes, the orange input accepts only other
particle nodes. The green and magenta inputs are 2D image inputs for custom image calculations.
Optionally, there are teal or white bitmap or mesh inputs, which appear on the node when you set the
Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Image 1 and 2: The green and magenta image inputs accept 2D images that are used for per
pixel calculations and compositing functions.
— Region: The teal or white region input takes a 2D image or a 3D mesh depending on whether
you set the Region menu to Bitmap or Mesh. The color of the input is determined by whichever is
selected first in the menu. The 3D mesh or a selectable channel from the bitmap defines the area
where the custom particle node takes effect.
A pCustom node using a Shape 3D node as the region where the custom event occurs
Inspector
All the same operators, functions, and conditional statements described for the Custom node apply
to the pCustom node as well, including Pixel-read functions for the two image inputs (e.g., getr1w(x,y),
getz2b(x,y), and so on).
Number 1-8
Numbers are variables with a dial control that can be animated or connected to modifiers exactly as
any other control might. The numbers can be used in equations on particles at current time: n1, n2,
n3, n4, … or at any time: n1_at(float t), n2_at(float t), n3_at(float t), n4_at(float t), where t is the time you
want. The values of these controls are available to expressions in the Setup and Intermediate tabs.
Position 1-8
These eight point controls include 3D X,Y,Z position controls. They are normal positional controls and
can be animated or connected to modifiers as any other node might. They are available to expressions
entered in the Setup, Intermediate, and Channels tabs.
Setup 1-8
Up to eight separate expressions can be calculated in the Setup tab of the pCustom node. The Setup
expressions are evaluated once per frame, before any other calculations are performed. The results
are then made available to the other expressions in the node as variables s1 through s8.
Think of them as global setup scripts that can be referenced by the intermediate and channel scripts.
Particle
Particle position, velocity, rotation, and other controls are available in the Particle tab.
— pxi1, pyi1: The 2D position of a particle, corrected for image 1’s aspect.
— pxi2, pyi2: The 2D position of a particle, corrected for image 2’s aspect.
— rgnhit: This value is 1 if the particle hit the pCustom node’s defined region.
— rgndist: This variable contains the particle’s distance from the region.
— rgnix, rgniy, rgniz: Values representing where on the region the particle hit.
— rgnnx, rgnny, rgnnz: The region’s surface normal at the point where the particle hit.
— p1x, p1y, p1z … p4x, p4y, p4z: The values of position inputs 1 through 4.
pCustomForce [pCF]
Particles within a system can have their positions and rotations affected by forces. The
position in XYZ and the Torque, which is the spin of the particle, are controlled by independent custom
equations. The pCustom Force node is used to create custom expressions and filters to modify the
behavior. In addition to providing three image inputs, this node will allow for the connection of up to
eight numeric inputs and as many as four XY position values from other controls and parameters in
the node tree.
Inputs
The pCustom Force node has three inputs. Like most particle nodes, the orange input accepts only
other particle nodes. The green and magenta inputs are 2D image inputs for custom image calculations.
Optionally there are teal or white bitmap or mesh inputs, which appear on the node when you set the
Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Image 1 and 2: The green and magenta image inputs accept 2D images that are used for per-
pixel calculations and compositing functions.
— Region: The teal or white region input takes a 2D image or a 3D mesh depending on whether
you set the Region menu to Bitmap or Mesh. The color of the input is determined by whichever is
selected first in the menu. The 3D mesh or a selectable channel from the bitmap defines the area
where the pCustom Force takes effect.
Inspector
The tabs and controls located in the Inspector are similar to the controls found in the pCustom node.
Refer to the pCustom node in this chapter for more information.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pDirectionalForce [pDF]
Since the most common use of this node is to simulate gravity, the default direction of the pull is down
along the Y axis (-90 degrees), and the default behavior is to ignore regions and affect all particles.
Inputs
The pDirectional Force node has a single orange input by default. Like most particle nodes, this
orange input accepts only other particle nodes. A green or magenta bitmap or mesh input appears on
the node when you set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the directional force takes effect.
A pDirectional Force node placed between the pEmitter and pRender nodes
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click
the Randomize button to select a new seed value randomly, or adjust the slider to select a new seed
value manually.
Strength
Determines the power of the force. Positive values will move the particles in the direction set by the
controls; negative values will move the particles in the opposite direction.
Direction
Determines the direction in X/Y space.
Direction Z
Determines the direction in Z space.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pEmitter [pEm]
Like all other Particle nodes (with the exception of the pRender node), the pEmitter produces a particle
set, not a visible image, and therefore cannot be displayed directly in a viewer. To view the output of a
particle system, add a pRender node after the pEmitter.
Inputs
By default, the pEmitter node has no inputs at all. You can enable an image input by selecting Bitmap
from the Style menu in the Style tab. Also, two region inputs, one for bitmap and one for mesh, appear
on the node when you set the Region menu in the Region tab to either Bitmap or Mesh. The colors of
these inputs change depending on the order in which they are enabled.
— Style Bitmap Input: This image input accepts a 2D image to use as the particles’ image. Since this
image duplicates into potentially thousands of particles, it is best to keep these images small and
square—for instance, 256 x 256 pixels.
— Region: The region input takes a 2D image or a 3D mesh depending on whether you set the
Region menu to Bitmap or Mesh. The color of the input is determined by whichever is selected
first in the menu. The 3D mesh or a selectable channel from the bitmap defines the area where
the particles are emitted.
A pEmitter node connected to a pRender node is a typical setup for most particle systems.
Number
This control determines how many new particles are generated on each frame. Animate this parameter
to specify the number of particles generated in total. For example, if only 25 particles in total are
desired, animate the control to produce five particles on frames 0–4, then set a key on frame five to
generate zero particles for the remainder of the project.
Number Variance
This modifies the amount of particles generated for each frame, as specified by the Number control.
For example, if Number is set to 10.0 and Number Variance is set to 2.0, the emitter will produce
anywhere from 9 to 11 particles per frame. If the value of Number Variance is more than twice as large as
the value of Number, it is possible that no particles will be generated for a given frame.
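The per-frame count can be sketched as below. The use of a uniform distribution is an assumption for illustration; the manual specifies only the resulting range:

```python
import random

# Illustrative sketch: with Number = 10 and Number Variance = 2, the per-frame
# particle count falls anywhere in 10 +/- 2/2, i.e., 9 to 11. Whether Fusion
# draws from a uniform distribution is an assumption made here.
def particles_this_frame(number, variance, rng):
    count = number + rng.uniform(-variance / 2, variance / 2)
    return max(0, round(count))  # a particle count can never be negative

rng = random.Random(1)  # a fixed seed: same seed, same random results
counts = [particles_this_frame(10.0, 2.0, rng) for _ in range(100)]
print(min(counts), max(counts))  # stays within the 9 to 11 range
```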
Lifespan
This control determines how long a particle will exist before it disappears or ‘dies.’ The default value
of this control is 100 frames, although this can be set to any value. The timing of many other particle
controls is relative to the Lifespan of the particle. For example, the size of a particle can be set to
increase over the last 80% of its life, using the Size Over Life graph in the Style tab of the pEmitter.
Lifespan Variance
Like Number Variance, the Lifespan Variance control allows the Lifespan of particles produced to be
modified. If Particle Lifespan was set to 100 frames and the Lifespan Variance to 20 frames, particles
generated by the emitter would have a lifespan of 90–110 frames.
Color
This provides the ability to specify from where the color of each particle is derived. The default setting
is Use Style Color, which will provide the color from each particle according to the settings in the Style
tab of the pEmitter node.
The alternate setting is Use Color From Region, which overrides the color settings from the Style tab
and uses the color of the underlying bitmap region.
The Use Color From Region option only makes sense when the pEmitter region is set to use a bitmap
produced by another node in the composition. Particles generated in a region other than a bitmap
region will be rendered as white when the Use Color From Region option is selected.
Position Variance
This control determines whether or not particles can be ‘born’ outside the boundaries of the pEmitter
region. By default, the value is set to zero, which will restrict the creation area for new particles to the
exact boundaries of the defined region. Increasing this control’s value above 0.0 will allow the particle
to be born slightly outside the boundaries of that region. The higher the value, the ‘softer’ the region’s
edge will become.
Temporal Distribution
In general, an effect is processed per frame, based on the comp frame rate. However, processing
some particles only at the exact frame boundaries can cause pulsing. To make the behavior subtly
more realistic, the particles can be birthed in subframe increments.
These settings are influenced by the Sub Frame Accuracy setting in the pRender node. The Sub Frame
Accuracy slider controls how many in-between frames are calculated between each frame. The higher
the value the more accurate the particle calculation but the longer the render times.
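Spreading births across in-between times can be sketched as below. The even spacing of substeps is an assumption for illustration:

```python
# Illustrative sketch: with sub-frame accuracy, particle births can be spread
# across in-between times instead of landing only on frame boundaries.
# Evenly spaced substeps are an assumption made for this illustration.
def subframe_times(frame, substeps):
    return [frame + k / substeps for k in range(substeps)]

print(subframe_times(10, 4))  # [10.0, 10.25, 10.5, 10.75]
```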
Velocity
The controls in the Velocity section determine the speed and direction of the particle cells as they are
generated from the emitter region.
Velocity Variance modifies the velocity of each particle at birth, in the same manner described in
Lifespan Variance and Number Variance above.
Inherit
Inherit Velocity passes the emitter region’s velocity on to the particles. This slider has a wide range
that includes negative and positive values. A negative value causes the particles to move in the
opposite direction, a value of 1 will cause the particles to move with a velocity that matches the
emitter region’s velocity, and a value of 2 causes the particles to move ahead of the emitter region.
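The Inherit Velocity behavior described above amounts to a simple scaling of the emitter region’s velocity, sketched here for illustration:

```python
# Illustrative sketch: Inherit Velocity scales the emitter region's velocity
# before passing it on to each newborn particle.
def inherited_velocity(emitter_velocity, inherit):
    return emitter_velocity * inherit

print(inherited_velocity(2.0, 1.0))   # matches the emitter: 2.0
print(inherited_velocity(2.0, -1.0))  # moves in the opposite direction: -2.0
print(inherited_velocity(2.0, 2.0))   # moves ahead of the emitter: 4.0
```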
Rotation
Rotation controls are used to set the orientation of particle cells and to animate that orientation
over time.
Rotation Mode
This menu control provides two options to help determine the orientation of the particles emitted.
When the particles are spherical, the effect of this control will be unnoticeable.
— Absolute Rotation: The particles will be oriented as specified by the Rotation controls, regardless
of velocity and heading.
— Rotation Relative To Motion: The particles will be oriented in the same direction as the
particle is moving. The Rotation controls can now be used to rotate the particle’s orientation away
from its heading.
Rotation XYZ Variance can be used to randomly vary the rotation by a specified amount around the
center of the Rotation XYZ value to avoid having every particle oriented in the exact same direction.
Spin
Spin controls are auto animated controls that change the orientation of particle cells over time.
The Spin XYZ variances will vary the amount of rotation applied to each frame in the manner described
by Number Variance and Lifespan Variance documented above.
Sets Tab
This tab contains settings that affect the physics of the particles emitted by the node. These settings
do not directly affect the appearance of the particles. Instead, they modify behavior like velocity, spin,
quantity, and lifespan.
Set 1-32
To assign the particles created by a pEmitter to a given set, simply select the checkbox of the set
number you want to assign. A single pEmitter node can be assigned to one or multiple sets. Once they
are assigned in the pEmitter, you can enable sets in other particle nodes so they only affect particles
from specific pEmitters.
Style Tab
The Style tab provides controls that affect the appearance of the particles. For detailed information
about the Style tab, see the “The Common Controls” section at the end of this chapter.
Region Tab
The Region tab controls the shape, size, and location of the area that emits the particle cells. This is
often called the Emitter. Only one emitter region can be set for a single pEmitter node. If the pRender
is set to 2D, then the emitter region will produce particles along a flat plane in Z Space. 3D emitter
regions possess depth and can produce particles inside a user-defined, three-dimensional region.
For more detail on the Region tab, see “The Common Controls” section at the end of this chapter.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pFlock [pFl]
The strength of these “desires” produces the seemingly motivated behavior perceived by the viewer.
Inputs
The pFlock node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the flocking takes effect.
Inspector
Flock Number
The value of this control represents the number of other particles that the affected particle will
attempt to follow. The higher the value, the more visible “clumping” will appear in the particle system
and the larger the groups of particles will appear.
Follow Strength
This value represents the strength of each particle’s desire to follow other particles. Higher values will
cause the particle to appear to expend more energy and effort to follow other particles. Lower values
increase the likelihood that a given particle will break away from the pack.
Attract Strength
This value represents the strength of attraction between particles. When a particle moves farther
from other particles than the Maximum Space defined in the pFlock node, it will attempt to move
closer to other particles. Higher values cause the particle to maintain its spacing energetically,
resolving conflicts in spacing more rapidly.
Repel Strength
This value represents the force applied to particles that get closer together than the distance defined
by the Minimum Space control of the pFlock node. Higher values will cause particles to move away
from neighboring particles more rapidly, shooting away from the pack.
Minimum/Maximum Space
This range control represents the distance each particle attempts to maintain between it and other
particles. Particles will attempt to get no closer or farther than the space defined by the Minimum/
Maximum values of this range control. Smaller ranges will give the appearance of more organized
motion. Larger ranges will be perceived as disorganized and chaotic.
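The spacing rule described above resembles the separation and cohesion rules of classic flocking simulations, and its core decision can be sketched as below. This is a simplification of the actual simulation:

```python
# Illustrative sketch (a simplification of the actual flocking simulation):
# particles closer than Minimum Space are pushed apart, and particles farther
# than Maximum Space are pulled back toward the pack.
def spacing_force(distance, min_space, max_space, attract, repel):
    if distance < min_space:
        return repel       # push away from too-close neighbors
    if distance > max_space:
        return -attract    # pull back toward other particles
    return 0.0             # inside the comfortable band: no force

print(spacing_force(0.5, 1.0, 3.0, attract=1.0, repel=2.0))  # repelled: 2.0
print(spacing_force(5.0, 1.0, 3.0, attract=1.0, repel=2.0))  # attracted: -1.0
print(spacing_force(2.0, 1.0, 3.0, attract=1.0, repel=2.0))  # no force: 0.0
```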
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pFollow node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where particles will follow the position point.
A pFollow node introduces a follow object that influences the particles’ motion.
Random Seed
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Position XYZ
The position controls are used to create the new path by positioning the follow object. Moving the XYZ
parameters displays the onscreen position of the follow object. Animating these parameters creates
the new path the particles will be influenced by.
Spring
The Spring setting causes the particles to move back and forth along the path. The spread of
the spring motion increases over the life of the particles depending on the distance between the
particles and the follow object. Higher spring settings increase the elasticity, while lower settings
decrease elasticity.
Dampen
This value attenuates the spring action. A lower setting offers less resistance to the back and forth
spring action. A higher setting applies more resistance.
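The interaction of Spring and Dampen can be sketched as a damped spring pulling a particle toward the follow object. The dynamics below are an assumed illustration, not Fusion’s exact solver:

```python
# Illustrative sketch (assumed dynamics, not Fusion's exact solver): a damped
# spring pulls a particle toward the follow object. Spring sets the strength
# of the pull; Dampen resists the back-and-forth motion.
def spring_step(pos, vel, target, spring, dampen, dt=1.0):
    accel = spring * (target - pos) - dampen * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

pos, vel = 0.0, 0.0
for _ in range(50):
    pos, vel = spring_step(pos, vel, target=1.0, spring=0.2, dampen=0.5)
print(round(pos, 3))  # the particle settles at the follow object: 1.0
```

Lowering the dampen value lets the particle overshoot and oscillate around the target for longer before settling, matching the description of reduced resistance above.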
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pFriction node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green or magenta bitmap or mesh input appears on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the friction occurs.
Random Seed
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Velocity Friction
This value represents the Friction force applied to the particle’s Velocity. The larger the value, the
greater the friction, thus slowing down the particle.
Spin Friction
This value represents the Friction force applied to the particle’s Rotation or Spin. The larger the value,
the greater the friction, thus slowing down the rotation of the particle.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
pGradientForce [pGF]
This node can be used to give particles the appearance of moving downhill or following the contour of
a provided shape.
Inputs
— Input: The orange input takes the output of other particle nodes.
— Input: The green input takes the 2D image that contains the alpha channel gradient.
— Region: The magenta or teal region input takes a 2D image or a 3D mesh depending on whether
you set the Region menu to Bitmap or Mesh. The color of the input is determined by whichever is
selected first in the menu. The 3D mesh or a selectable channel from the bitmap defines the area
where the gradient force occurs.
A pGradient Force node using a Fast Noise node as the gradient to modify the particles’ motion
Inspector
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result.
Two nodes with the same seed values will produce the same random results. Click the Randomize
button to randomly select a new seed value, or adjust the slider to manually select a new seed value.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pImage Emitter node has three inputs. Like most particle nodes, the orange input accepts only
other particle nodes. Green and magenta inputs are 2D image inputs for custom image calculations.
Optionally, there are teal or white bitmap or mesh inputs, which appear on the node when you set the
Region menu in the Region tab to either Bitmap or Mesh.
— Input: Unlike most other particle nodes, the orange input on the pImage Emitter accepts a 2D
image used as the emitter of the particles. If a region is defined for the emitter, this input is used
to define the color of the particles.
— Style Bitmap Input: This image input accepts a 2D image to use as the particles’ image. Since this
image duplicates into potentially thousands of particles, it is best to keep these images small and
square—for instance, 256 x 256 pixels.
— Region: The teal or white region input takes a 2D image or a 3D mesh depending on whether
you set the Region menu to Bitmap or Mesh. The color of the input is determined by whichever is
selected first in the menu. The 3D mesh or a selectable channel from the bitmap defines the area
where the particles are emitted.
A pImage Emitter node emits particles based on an image connected to the orange input.
Inspector
X and Y Density
The X and Y Density sliders are used to set the mapping of particles to pixels for each axis. They
control the density of the sampling grid. A value of 1.0 for either slider indicates 1 sample per
pixel. Smaller values will produce a looser, more pointillistic distribution of particles, while values
above 1.0 will create multiple particles per pixel in the image.
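The sampling-grid arithmetic can be sketched as below. Rounding the per-axis sample counts is an assumption made for this illustration:

```python
# Illustrative sketch: density maps pixels to particle samples per axis.
# Rounding the per-axis counts is an assumption made for illustration.
def sample_count(width, height, density_x, density_y):
    return round(width * density_x) * round(height * density_y)

print(sample_count(256, 256, 1.0, 1.0))  # one particle per pixel: 65536
print(sample_count(256, 256, 0.5, 0.5))  # a looser grid: 16384
```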
Alpha Threshold
The Alpha Threshold is used for limiting particle generation so that pixels with semitransparent alpha
values will not produce particles. This can be used to harden the edges of an otherwise soft alpha
channel. The higher the threshold value, the more opaque a pixel must be before it will generate a
particle. Note that the default threshold of 0.0 will create particles for every pixel, regardless of alpha,
although many may be transparent and invisible.
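The threshold test can be sketched as below. Whether Fusion compares with “greater than or equal” is an assumption, chosen so that a threshold of 0.0 creates particles for every pixel, as described above:

```python
# Illustrative sketch: only pixels whose alpha meets the threshold spawn
# particles. The >= comparison is an assumption, chosen so a threshold of
# 0.0 creates a particle for every pixel, as the manual describes.
def spawns_particle(alpha, threshold):
    return alpha >= threshold

alphas = [0.0, 0.002, 0.5, 1.0]
print([spawns_particle(a, 0.0) for a in alphas])    # every pixel spawns
print([spawns_particle(a, 0.004) for a in alphas])  # transparent pixels skipped
```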
X/Y/Z Pivot
These controls allow you to position the grid of emitted particles.
NOTE: Pixels with a black (transparent) alpha channel will still generate invisible particles,
unless you raise the Alpha Threshold above 0.0. This can slow down rendering significantly.
An Alpha Threshold value of 1/255 ≈ 0.004 is good for eliminating all fully transparent pixels.
The pixels are emitted in a fixed-size 2D grid on the XY plane, centered on the Pivot
position. Changing the Region from the default of All allows you to restrict particle creation
to more limited areas. If you need to change the size of this grid, use a Transform 3D node
after the pRender.
Remember that the various emitter controls apply only to particles when they are emitted.
That is, they set the initial state of the particle and do not affect it for the rest of its lifespan.
Since pImageEmitter (by default) emits particles only on the first frame, animating these
controls will have no effect. However, if the Create Particles Every Frame checkbox is turned
on, new particles will be emitted each frame and will use the specified initial settings for
that frame.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pKill node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green bitmap or mesh input appears on the node when you set
the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where particles are killed.
A pKill node using a Shape 3D node as the region where particles die
Inspector
This node only contains common controls in the Conditions and Regions tabs. The Conditions and
Regions controls are used to define the location, age, and set of particles that are killed.
pMerge [pMg]
The combined particles will preserve any sets assigned to them when they were created, making it
possible for nodes downstream of the pMerge to isolate specific particles when necessary.
Inputs
The pMerge node has two identical inputs, one orange and one green. These two inputs accept only
other particle nodes.
— Particle 1 and 2 Input: The two inputs accept two streams of particles and merge them.
Inputs
The pPoint Force node has a single orange input by default. Like most particle nodes, this orange
input accepts only other particle nodes. A green bitmap or mesh input appears on the node when you
set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the point force affects the particles.
The pPoint Force node positions a tangent force that particles are attracted to or repelled from.
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Strength
This parameter sets the Strength of the force emitted by the node. Positive values represent attractive
forces; negative values represent repellent forces.
Power
This determines the degree to which the Strength of the force falls off over distance. A value of zero
causes no falloff of strength. Higher values will impose an ever-sharper falloff in strength of the force
with distance.
Limit Force
The Limit Force control is used to counterbalance potential problems with temporal sub-sampling.
Because the position of a particle is sampled only once a frame (unless sub-sampling is increased in
the pRender node), it is possible that a particle can overshoot the Point Force’s position and end up
getting thrown off in the opposite direction. Increasing the value of this control reduces the likelihood
that this will happen.
X, Y, Z Center Position
These controls are used to represent the X, Y, and Z coordinates of the point force in 3D space.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pRender node has one orange input, a green camera input, and a blue effects mask input. Like
most particle nodes, this orange input accepts only other particle nodes. A green bitmap or mesh
input appears on the node when you set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Camera Input: The optional green camera input accepts a camera node directly or a 3D scene
with a camera connected that is used to frame the particles during rendering.
— Effect Mask: The optional blue input expects a mask shape created by polylines, basic primitive
shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input for 2D
particles crops the output of the particles so they are seen only within the mask.
In 3D mode, the only controls in the pRender node that have any effect at all are Restart, Pre-Roll
and Automatic Pre-Roll, Sub-Frame Calculation Accuracy, and Pre-Generate frames. The remaining
controls affect 2D particle renders only. The pRender node also has a Camera input on the node tree
that allows the connection of a Camera 3D node. This can be used in both 2D and 3D modes to allow
control of the viewpoint used to render an output image.
Pre-Roll Options
Particle nodes generally need to know the position of each particle on the last frame before they can
calculate the effect of the forces applied to them on the current frame. As a result, manually changing
the current time by anything other than single-frame intervals is likely to produce an inaccurate image.
Restart
This control also works in 3D. Clicking on the Restart button will restart the particle system at the
current frame, removing any particles created up to that point and starting the particle system from
scratch at the current frame.
Pre-Roll
This control also works in 3D. Clicking on this button causes the particle system to recalculate,
starting from the beginning of the render range up to the current frame. It does not render the image
produced. It only calculates the position of each particle. This provides a relatively quick mechanism to
ensure that the particles displayed in the views are correctly positioned.
If the pRender node is displayed when the Pre-Roll button is selected, the progress of the pre-roll is
shown in the viewer, with each particle shown as point style only.
Automatic Pre-Roll
Selecting the Automatic Pre-Roll checkbox causes the particle system to automatically pre-roll
the particles to the current frame whenever the current frame changes. This prevents the need to
manually select the Pre-Roll button whenever advancing through time in jumps larger than a single
frame. The progress of the particle system during an Automatic Pre-Roll is not displayed to the
viewers to prevent distracting visual disruptions.
About Pre-Roll
Pre-Roll is necessary because the state of a particle system is entirely dependent on the last known
position of the particles. If the current time is changed to a frame where the previous frame's particle
state is unknown, the particles are drawn based on their last known positions, producing
inaccurate results.
To demonstrate:
Notice how the particle system only adds to the particles it has already created and does not try to
create the particles that would have been emitted in the intervening frames. Try selecting the Pre-Roll
button in the pRender node. Now the particle system state is represented correctly.
— View
This drop-down list provides options to determine the position of the camera view in a 3D
particle system. The default option of Scene (Perspective) will render the particle system from
the perspective of a virtual camera, the position of which can be modified using the controls in
the Scene tab. The other options provide orthographic views of the front, top, and side of the
particle system.
It is important to realize that the position of the onscreen controls for Particle nodes is unaffected
by this control. In 2D mode, the onscreen controls are always drawn as if the viewer were showing
the front orthographic view. (3D mode gets the position of controls right at all times.)
The View setting is ignored if a Camera 3D node is connected to the pRender node’s Camera input
on the node tree, or if the pRender is in 3D mode.
Conditions
— Blur, Glow, and Blur Blend
When generating 2D particles, these sliders apply Gaussian blur, glow, and blur blending to the
image as it is rendered, which can be used to soften and blend the particles. The result is no
different from adding a Blur node after the pRender node in the node tree.
— Pre-Generate Frames
This control is used to cause the particle system to pre-generate a set number of frames before
its first valid frame. This is used to give a particle system an initial state from which to start.
A good example of when this might be useful is in a shot where particles are used to create the
smoke rising from a chimney. Set Pre-Generate Frames to a number high enough to ensure that
the smoke is already present in the scene before the render begins, rather than having it just
begin to emerge from the emitter during the first few frames.
Enabling this option is likely to increase the render times for the particle system dramatically.
Scene Tab
Z Clip
The Z Clip control is used to set a clipping plane in front of the camera. Particles that cross this plane
are clipped, preventing them from impacting on the virtual lens of the camera and dominating
the scene.
Grid Tab
These controls do not apply to 3D particles.
The grid is a helpful, non-rendering display guide used to orient the 2D particles in 3D space. The
grid is never seen in renders, just like a center crosshair is never seen in a render. The width, depth,
number of lines, and grid color can be set using the controls found in this tab.
Image Tab
The controls in this tab are used to set the resolution, color depth, and pixel aspect of the rendered
image produced by the node.
Width/Height
This pair of controls is used to set the Width and Height dimensions of the image to be rendered
by the node.
Pixel Aspect
This control is used to specify the Pixel Aspect ratio of the rendered particles. An aspect ratio of
1:1 would generate a square pixel with the same dimensions on either side (like a computer display
monitor), and an aspect of 0.9:1 would create a slightly rectangular pixel (like an NTSC monitor).
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing
the file formats defined in the preferences Frame Format tab. Selecting any of the listed
options will set the width, height, and pixel aspect to the values for that format, accordingly.
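As a worked example of the Pixel Aspect values mentioned above, the displayed width of an image is its pixel width multiplied by the pixel aspect (a hypothetical helper for illustration):

```python
# Illustration of how pixel aspect affects displayed width.
def display_width(pixels_wide, pixel_aspect):
    # 0.9:1 pixels (NTSC-style) are slightly narrower than square 1:1 pixels.
    return pixels_wide * pixel_aspect

assert display_width(720, 1.0) == 720.0                    # square pixels
assert abs(display_width(720, 0.9) - 648.0) < 1e-9         # NTSC-style pixels
```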
Depth
The Depth menu is used to set the pixel color depth of the particles. 32-bit pixels require 4X the
memory of 8-bit pixels but have far greater color accuracy. Float pixels allow high dynamic range
values outside the normal 0…1 range, for representing colors that are brighter than white or darker
than black.
— Auto: Automatically reads and passes on the metadata that may be in the image.
— Space: Displays a Color Space Type menu where you can choose the correct color
space of the image.
Remove Curve
Depending on the selected Gamma Space or on the Gamma Space found in Auto mode, the Gamma
Curve is removed from, or a log-lin conversion is performed on, the material, effectively converting it
to a linear output space.
Motion Blur
As with other 2D nodes in Fusion, Motion Blur is enabled from within the Settings tab. You may set
Quality, Shutter Angle, Sample Center, and Bias, and Blur will be applied to all moving particles.
NOTE: Motion Blur on 3D mode particles (rendered with a Renderer 3D) also requires that
identical motion blur settings are applied to the Renderer 3D node.
pSpawn [pSp]
As long as a particle falls under the effect of the pSpawn node, it will continue to generate particles.
It is important to restrict the effect of the node with limiters like Start and End Age, Probability, Sets
and Regions, and by animating the parameters of the emitter so that the node is operative only
when required.
Inputs
By default, the pSpawn node has a single orange input. Like most particle nodes, this orange input
accepts only other particle nodes. You can enable an image input by selecting Bitmap from the Style
menu in the Style tab. Also, two region inputs, one for bitmap and one for mesh, appear on the node
when you set the Region menu in the Region tab to either Bitmap or Mesh. The colors of these inputs
change depending on the order they are enabled.
A pSpawn node used to generate new particles at specific points in the old particles’ life
Inspector
The pSpawn node has a large number of controls, most of which exactly duplicate those found within
the pEmitter node. There are a few controls that are unique to the pSpawn node, and their effects are
described below.
Velocity Transfer
This control determines how much velocity of the source particle is transferred to the particles
it spawns. The default value of 1.0 causes each new particle to adopt 100 percent of the velocity
and direction from its source particle. Lower values will transfer less of the original motion to the
new particle.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pTangent Force node has a single orange input by default. Like most particle nodes, this orange
input accepts only other particle nodes. A green bitmap or mesh input appears on the node when you
set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area where the tangent force affects the particles.
The pTangent Force node positions a tangent force that particles maneuver around.
Inspector
The controls for this node are used to position the offset in 3D space and to determine the strength of
the tangential force along each axis independently.
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result.
Two nodes with the same seed values will produce the same random results. Click the Randomize
button to randomly select a new seed value, or adjust the slider to manually select a new seed value.
X, Y, Z Center Position
These controls are used to represent the X, Y, and Z coordinates of the Tangent force in 3D space.
X, Y, Z Center Strength
These controls are used to determine the Strength of the Tangent force in 3D space.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pTurbulence node has a single orange input by default. Like most particle nodes, this orange
input accepts only other particle nodes. A green bitmap or mesh input appears on the node when you
set the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area of turbulence.
The pTurbulence node disturbs the rigid flow of particles for a more natural motion.
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
X, Y, and Z Strength
The Strength control affects the amount of chaotic motion imparted to particles.
Density
Use this control to adjust the density of the turbulence field. Lower values cause more particle cells
to be affected similarly, almost as if “waves” of the turbulence field run through the particles, affecting
groups of cells at the same time. Higher values add finer variations to individual particle cells,
causing more of a spread in the turbulence field.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Style, Region, and Settings tabs are common to all Particle nodes, so their
descriptions can be found in “The Common Controls” section at the end of this chapter.
Inputs
The pVortex node has a single orange input by default. Like most particle nodes, this orange input
accepts only other particle nodes. A green bitmap or mesh input appears on the node when you set
the Region menu in the Region tab to either Bitmap or Mesh.
— Input: The orange input takes the output of other particle nodes.
— Region: The green or magenta region input takes a 2D image or a 3D mesh depending on
whether you set the Region menu to Bitmap or Mesh. The color of the input is determined by
whichever is selected first in the menu. The 3D mesh or a selectable channel from the bitmap
defines the area of the vortex.
A pVortex node creates a spiraling motion for particles that fall within its pull.
Randomize
The Random Seed slider and Randomize button are presented whenever a Fusion node relies on a
random result. Two nodes with the same seed values will produce the same random results. Click the
Randomize button to randomly select a new seed value, or adjust the slider to manually select a new
seed value.
Strength
This control determines the Strength of the Vortex Force applied to each particle.
Power
This control determines the degree to which the Strength of the Vortex Force falls off with distance.
X, Y, and Z Offset
Use these sliders to set the amount by which the vortex Offsets the affected particles.
Size
This is used to set the Size of the Vortex Force.
Angle X and Y
These sliders control the amount of rotational force applied by the Vortex along the X and Y axes.
Common Controls
Conditions, Style, Region, and Settings Tabs
The Conditions, Region, and Settings tabs are common to all Particle nodes, so their descriptions can
be found in the following “The Common Controls” section.
Inspector
Style Tab
The Style Tab is common to the pEmitter, pSpawn, pChangeStyle, and pImage Emitter. It controls the
appearance of the particles using general controls like type, size, and color.
Style
The Style menu provides access to the various types of particles supported by the Particle Suite.
Each style has its own specific controls, as well as controls it shares with other styles.
— Point: This option produces particles precisely one pixel in size. Controls that are specific to Point
Style are Apply Mode and Sub Pixel Rendered.
— Apply Mode: This menu applies only to 2D particles. 3D particle systems are not affected.
It includes Apply modes for Add and Merge. Add combines overlapping particles by adding
together the color values of each particle. Merge uses a simple over operation to combine
overlapping particles.
— Sub Pixel Rendered: This checkbox determines whether the point particles are rendered with
Sub Pixel precision, which provides smoother-looking motion but blurrier particles that take
slightly longer to render.
— Bitmap: This style produces particle cells based on an image file or another node in the Node
editor. When this option is selected an orange image input appears on the node in the node
editor. There are several controls for affecting the appearance and animation. In addition to the
controls in the Style section, a Merge section is displayed at the bottom of the inspector when
Bitmap is selected as the Style. The Merge section includes controls for additive or subtractive
merges when the particle cells overlap.
— Animate Over Time: This menu includes three options for determining how movie files
play when they are used as particle cell bitmaps. The Over Time setting plays the movie
file sequentially. For instance, when the comp is on frame 2, frame 2 of the movie file is
displayed; when the comp is on frame 3, frame 3 of the movie file is displayed, and so on. If
a particle cell is not generated until frame 50, it begins with frame 50 of the movie file.
— Blob: This option produces large, soft spherical particles, with controls for Color, Size, Fade timing,
Merge method, and Noise.
— Noise: This slider only applies to 2D Blob particles. The noise slider is used to introduce a
computer generated Perlin noise pattern into the blob particles in order to give the blobs
more texture. A setting of 0 introduces no noise to the Blob particles and a setting of 1
introduces the maximum amount of noise.
— Brush: This style produces particle cells based on any image file located in the brushes directory.
There are numerous controls for affecting the appearance and animation.
— Gain: The Gain slider is a multiplier of the pixel value, used to apply a correction to the
overall Gain of the image that is used as the Brush. For example, if a brush particle cell
contains a pixel value of R0.5 G0.5 B0.4 and you apply a Gain of 1.2, you end up with a pixel
value of R0.6 G0.6 B0.48 (i.e., 0.4 * 1.2 = 0.48), while black pixels remain unaffected. Higher
values produce a brighter image, whereas lower values reduce both the brightness and the
transparency of the image.
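The per-channel math behind the Gain example above is a simple multiply (this sketch reproduces the manual's numbers; any clamping Fusion applies afterward is not shown):

```python
# Gain multiplies every channel; black (0.0) pixels stay black.
def apply_gain(rgb, gain):
    return tuple(channel * gain for channel in rgb)

out = apply_gain((0.5, 0.5, 0.4), 1.2)
expected = (0.6, 0.6, 0.48)
assert all(abs(a - b) < 1e-9 for a, b in zip(out, expected))
```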
— Brush: This menu shows the names of any image files stored in the Brushes directory. The
location of the Brushes directory is defined in the Preferences dialog, under Path Maps. The
default is the Brushes subdirectory within Fusion’s install folder.
— Use Aspect From: The Use Aspect From menu includes three settings for the aspect ratio of
the brush image. Choose Image Format to use the brush image’s native aspect ratio, choose
Frame Format to use the aspect ratio set in the Frame Format settings in the Fusion
Preferences, or choose Custom to enter your own Pixel X and Y dimensions.
Color Controls
The Color Controls select the color and Alpha values of the particles generated by the emitter.
Color Variance
These range controls provide a means of expanding the colors produced by the pEmitter. Setting the
Red variance range at -0.2 to +0.2 will produce colors that vary 20% on either side of the red channel,
for a total variance of 40%. If the pEmitter is set to produce R0.5, G0.5, B0.5 (pure gray), the variance
shown above will produce points with a color range between R0.3, G0.5, B0.5, and R0.7, G0.5, B0.5.
To visualize color space as values between 0-256 or as 0-65535, change the values used by Fusion
using the Show Color As option provided in the General tab within the Preferences dialog.
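The variance arithmetic described above can be modeled as adding a random offset within the range to the base channel value. This is an illustrative sketch with hypothetical names, not Fusion's internal code:

```python
import random

# Model of channel variance: base value plus a random offset in the range.
def vary_channel(base, low, high, rng):
    # A variance range of -0.2..+0.2 around a base of 0.5 yields 0.3..0.7.
    return base + rng.uniform(low, high)

rng = random.Random(1)  # identical seeds produce identical random results
samples = [vary_channel(0.5, -0.2, 0.2, rng) for _ in range(100)]
assert all(0.3 - 1e-9 <= s <= 0.7 + 1e-9 for s in samples)
```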
Additional points can be added to the gradient control to cause the particle color to shift
throughout its life.
This type of control can be useful for fire-type effects (for example, the flame may start blue, turn
orange, and end a darker red). The gradient itself can be animated over time by right-clicking on the
control and selecting Animate from the contextual menu. All points on the gradient will be controlled
by a single Color Over Life spline, which controls the speed at which the gradient itself changes. You
may also use the From Image modifier, which produces a gradient from the range of colors in an
image along a line between two points.
Size Controls
The majority of the Size Controls are self-explanatory. The Size and Size Variance controls are used to
determine the size and degree of size variation for each particle. It is worth noting that the Point style
does not have size controls (each point is a single pixel in size, and there is no additional control).
When a Bitmap Particle style is used, a value of 1.0 indicates that each particle should be the same
size as the input bitmap. A value of 2.0 will scale the particle up in size by 200%. For the best quality
particles, always try to make the input bitmap as big as, or bigger than, the largest particle produced by
the system.
For the Point Cluster style, the size control adjusts the density of the cluster, or how close together
each particle will get.
There are additional size controls that can be used to adjust further the size of particles based on
velocity and depth.
Size to Velocity
This increases the size of each particle relative to the velocity or speed of the particle. The velocity of
the particle is added to the size, scaled by the value of this control.
A value of 1.0 on this control, for a particle traveling at a speed of 0.1, will add another 0.1 to the size
(velocity * size to velocity + size = new size). This is most useful for Line styles, but the control can be
used to adjust the size of any style.
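The formula quoted above can be written directly as a small helper (a literal transcription of the stated formula, using hypothetical names):

```python
# velocity * size to velocity + size = new size (formula from the text above)
def sized_by_velocity(size, velocity, size_to_velocity):
    return velocity * size_to_velocity + size

# With Size to Velocity at 1.0, a particle traveling at 0.1 gains 0.1 in size.
assert sized_by_velocity(0.5, 0.1, 1.0) == 0.6
```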
Size Z Scale
This control determines the degree to which the size of each particle changes based on its position
along the Z axis. Objects on the focal plane (Z = 0.0) will be actual-sized. Objects farther along Z will
become smaller. Objects closer along Z will get larger.
A value of 2.0 will exaggerate the effect dramatically, whereas a value of 0.0 will cancel the effects of
perspective entirely.
This graph supports all the features available to a standard spline editor. These features can be
accessed by right-clicking on the graph. It is also possible to view and edit the graph spline in the
larger Spline Editor.
Fade Controls
This simple range slider provides a mechanism for fading a particle at the start and end of its lifetime.
Increasing the Fade In value will cause the particle to fade in at the start of its life. Decreasing the Fade
Out value will cause the particle to fade out at the end of its life.
This control’s values represent a percentage of the particle’s overall life; therefore, setting the Fade In
to 0.1 would cause the particle to fade in over the first 10% of its total lifespan. For example, a particle
with a life of 100 frames would fade in from frame 0…10.
Merge Controls
This set of particle controls affects the way individual particles are merged together. The Subtractive/
Additive slider works as documented in the standard Merge node. The Burn-In control will cause the
particles to overexpose, or “blow out,” when they are combined.
None of the Merge controls will have any effect on a 3D particle system.
None of the Blur controls will have any effect on a 3D particle system.
This graph supports all of the features available to a standard Spline Editor. These features can be
accessed by right-clicking on the graph. It is also possible to view and edit the spline in the larger
Spline editor.
The DoF Focus range control is used to determine what area of the image remains in focus. Lower
values along Z are closer to the camera. Higher values are farther away. Particles within the range
will remain in focus. Particles outside that range will have the blur defined by the Z Blur control
applied to them.
Conditions
Conditions Tab
The Conditions tab limits the particles that are affected by the node’s behavior. You can limit the
particles using probability or, more specifically, using sets.
Probability
The Probability slider determines the percentage of chance that the node affects any given particle.
The default value of 1.0 affects all particles. A setting of 0.6 would mean that each particle has a 60
percent chance of being affected by the control.
Start/End Age
This range control can be used to restrict the effect of the node to a specified percentage of the
particle lifespan.
For example, to restrict the effect of a node to the last 20 percent of a particle’s life, set the Start value
to 0.8 and leave the End value at 1.0. For a particle with a lifespan of 100 frames, the node would then
have an effect only on frames 80 through 100.
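The age test reduces to comparing the particle's life fraction against the range (an illustrative helper with hypothetical names):

```python
# start and end are fractions of the particle's total lifespan (0..1).
def in_age_range(age, lifespan, start, end):
    return start <= age / lifespan <= end

# Start 0.8, End 1.0: a 100-frame particle is affected only on frames 80-100.
assert not in_age_range(79, 100, 0.8, 1.0)
assert in_age_range(80, 100, 0.8, 1.0)
assert in_age_range(100, 100, 0.8, 1.0)
```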
— Ignore Sets: The particle node disregards the state of the Set checkboxes and applies its behavior to all particles.
— Affect Specified Sets: The particle node applies its behavior to the active Set checkboxes only.
— Ignore Specified Sets: The particle node applies its behavior to the inactive Set checkboxes only.
Set #
The state of a Set # checkbox determines if the Particle node’s effect will be applied to the particles in
the set. It allows you to limit the effects of some nodes to a subset of particles.
Sets are assigned by the nodes that create particles. These include the pEmitter, pImage Emitter,
pChangeStyle, and the pSpawn nodes.
Region Tab
The Region tab is used to restrict the node’s effect to a geometric region or plane, and to determine
the area where particles are created if it’s a pEmitter node or where the behavior of a node has
influence.
The Region tab is common to almost all particle nodes. In the pEmitter node Emitter Regions are used
to determine the area where particles are created. In most other tools it is used to restrict the tool’s
effect to a geometric region or plane. There are seven types of regions, each with its own controls.
Only one emitter region can be set for a single pEmitter node. If the pRender is set to 2D, then the
emitter region will produce particles along a flat plane in Z Space. 3D emitter regions possess depth
and can produce particles inside a user-defined, three-dimensional region.
— All: In 2D, the particles will be created anywhere within the boundaries of the image. In 3D, this
region describes a cube 1.0 x 1.0 x 1.0 units in size.
— Bézier: Bézier mode uses a user-created polyline to determine the region where particles are
created. The Bézier mode works in both 2D and 3D modes; however, the Bézier polyline region
can only be created in 2D.
To animate the shape of the polyline over time or to connect it to another polyline, right-click the
Shape animation label at the bottom of the inspector and select the appropriate option from the
drop-down menu.
— Bitmap: A Bitmap source from one of the other nodes in the composition will be used as the
region where particles are born.
— Cube: A full 3D Cube is used to determine the region within which particles are created. The height,
width, depth, and XYZ positions can all be determined by the user and be animated over time.
— Line: A simple line control determines where particles are created. The line is composed of two
end-points, which can be connected to Paths or Trackers, as necessary. This type of emitter region
includes X, Y, and Z position controls for the start and end of the line.
— Mesh: Any 3D Mesh can be used as a region. In Mesh mode, the region can also be restricted
by the Object ID using the ObjectID slider. See below for a more in-depth explanation of how
mesh regions work.
— Rectangle: The Rectangle region type is like the Cube type, except that this region has no depth
in Z space. Unlike other 2D emitter regions, this region can be positioned and rotated in Z space.
— Sphere: This is a spherical 3D emitter region with Size and Center Z controls. Sphere (3D) is the
default region type for a new pEmitter node.
Mesh Regions
Region Type
The Region Type drop-down menu allows you to choose whether the region will include the inner
volume or just the surface. For example, with a pEmitter mesh region, this determines if the particles
emit from the surface or the full volume.
To determine if a particle is in the interior of an object, a ray is cast from infinity through that particle
and then out to -infinity. The Winding Ray Direction determines which direction this ray is cast in.
Each time a surface is pierced by the ray, it is recorded and added onto a total to generate a winding
number. Going against a surface’s normal counts as +1, and going with the normal counts as -1.
The Winding Rule is then used to determine what is inside/outside. For example, setting the Winding
Rule to Odd means that only particles with odd values for the winding number are kept when creating
the particles. The exact same approach is used to ensure that polylines that intersect themselves are
closed properly.
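The crossing-count bookkeeping can be sketched in 2D against a polygon. Fusion casts the ray in 3D against mesh surfaces; this simplified 2D version (hypothetical code, not Fusion's implementation) shows the same signed-crossing and Odd-rule logic:

```python
# 2D sketch of the winding-number test: cast a horizontal ray from the
# point and sum signed edge crossings; the Odd rule keeps odd totals.
def winding_number(point, polygon):
    px, py = point
    total = 0
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        # Does this edge cross the ray's height?
        if (y1 <= py < y2) or (y2 <= py < y1):
            # X coordinate where the edge meets the ray's height.
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                # Crossing direction determines the sign (+1 or -1).
                total += 1 if y2 > y1 else -1
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert winding_number((0.5, 0.5), square) % 2 == 1   # odd: inside
assert winding_number((2.0, 0.5), square) % 2 == 0   # even: outside
```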
By setting the region’s Winding Ray Direction to the Z (blue) axis, this mesh can then be treated
as a closed volume for purposes of particle creation, as pictured below.
Limit By ObjectID
Selecting this checkbox allows the Object ID slider to select the ObjectID used as part of the region.
Style
The Style menu provides access to the various types of particles supported by the Particle Suite. Each
style has its own specific controls, as well as controls it shares with other styles.
— Point Style: This option produces particles precisely one pixel in size. Controls that are specific to
Point style are Apply Mode and Sub Pixel Rendered.
— Bitmap Style and Brush Style: Both the Bitmap and Brush styles produce particles based on
an image file. The Bitmap style relies on the image from another node in the node tree, and the
Brush style uses image files in the Brushes directory. They both have numerous controls for
affecting their appearance and animation, described below.
— Blob Style: This option produces large, soft spherical particles, with controls for Color, Size, Fade
timing, Merge method, and Noise.
— Line Style: This style produces straight line-type particles with optional “falloff.” The Size to
Velocity control described below (under Size Controls) is often useful with this Line type. The Fade
control adjusts the amount of falloff over the length of the line.
— Point Cluster Style: This style produces small clusters of single-pixel particles. Point Clusters
are similar to the Point style; however, they are more efficient when a large quantity of particles
is required. This style shares parameters with the Point style. Additional controls specific to Point
Cluster style are Number of Points and Number Variance.
Style Options
The following options appear only on some of the styles, as indicated below.
Apply Mode
This control applies only to 2D particles; 3D particle systems are not affected.
— Add: Overlapping particles are combined by adding together the color values of each particle.
— Merge: Overlapping particles are combined using a simple over operation.
Fade
This control adjusts the amount of falloff over the length of Line style particles. The default value of
1.0 causes the line to fade out completely by the end of the length.
Color Variance
These range controls provide a means of expanding the colors produced by the pEmitter. Setting the
Red variance range at -0.2 to +0.2 will produce colors that vary 20% on either side of the red channel,
for a total variance of 40%. If the pEmitter is set to produce R0.5, G0.5, B0.5 (pure gray), the variance
shown above will produce points with a color range between R0.3, G0.5, B0.5, and R0.7, G0.5, B0.5.
To visualize color space as values between 0-256 or as 0-65535, change the values used by Fusion
using the Show Color As option provided in the General tab within the Preferences dialog.
The left point of the gradient represents the particle color at birth. The right point shows the color of
the particle at the end of its lifespan.
Additional points can be added to the gradient control to cause the particle color to shift
throughout its life.
This type of control can be useful for fire-type effects (for example, the flame may start blue, turn
orange and end a darker red). The gradient itself can be animated over time by right-clicking on the
control and selecting Animate from the contextual menu. All points on the gradient will be controlled
by a single Color Over Life spline, which controls the speed at which the gradient itself changes. You
may also use the From Image modifier, which produces a gradient from the range of colors in an
image along a line between two points.
Position Nodes
This chapter details the Position nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Volume Fog [VlF]����������������������������������������������������������������������������������������������������� 1422
Volume Fog [VlF]
As opposed to 3D-rendered volumetric fog, the Volume Fog node works on 2D images and delivers much faster results
and interactive feedback when setting up the fog. See the “WPP Concept” section at the end of this
chapter for further explanation of how this technology works and to learn about the required imagery.
Inputs
The following inputs appear on the Volume Fog node in the Node Editor.
— Image: The orange input accepts the primary image where the fog will be applied. This image
contains a World Position Pass in the XYZ Position channels.
— Fog Image: The green Fog image input is for creating volumetric fog with varying depth and
extent; a 2D image can be connected here. A good starting point is to use a Fast Noise at a small
resolution of 256 x 256 pixels.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the fog to
certain areas.
— Scene Input: The magenta scene input accepts a 3D scene containing a 3D Camera.
Shape Tab
The Shape tab defines the size and location of the fog volume. You can either use the Pick buttons to
select the location and orientation in the viewer or use the Translation, Rotation, and Scale controls.
Shape
This menu switches between a basic spherical or rectangular volume to be placed in your image.
These volumes can then be further refined using the Fog image and effect mask.
Pick
Drag the Pick button into the viewer to select the XYZ coordinates from any 3D scene or 2D image
containing XYZ values, such as a rendered World Pass, to position the center of the Volume object.
When picking from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
X, Y, Z Offset
These controls can be used to position the center of the fog volume manually or can be animated or
connected to other controls in Fusion.
Rotation Pick
Drag the Pick button into the viewer to select the rotational values from any 3D Scene or 2D image
containing those values, like an XYZ-Normal-Pass, to reorient the fog volume.
When picking from a 2D image, like an XYZ Normal pass, make sure it’s rendered in 32-bit float to get
full precision and accurate rotational values.
X, Y, Z Rotation
Use these controls to rotate the fog volume around its center.
Size
The overall size of the fog volume created.
Soft Edge
Controls how much the fog volume is faded toward the center from its perimeter to achieve a
softer look.
Color Tab
The Color tab controls the detail and color of the fog.
Adaptive Samples
Volume images consist of multiple layers; there may be 64 layers or more in a single volume. This checkbox
adjusts the rendering algorithm used to blend those layers.
— Dither: Applies a form of noise to improve the blending and hide visible layer differences.
Samples
Determines how many times a “ray” shot into the volume will be evaluated before the final image is
created. Not unlike raytracing, higher values lead to more detail inside the volume but also increase
render times.
You can, for example, use a Fast Noise with a high Seethe Rate to create such a sequence of images.
Be careful with the resolution of the images. Higher resolutions can require a large amount of
memory. As a rule of thumb, a resolution of 256 x 256 pixels with 256 Z Slices (i.e., forming a 256 x 256
x 256 cubic volume, which will use up to 256 MB for full color 32-bit float data) should give you a good
starting point.
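The 256 MB figure quoted above follows directly from the voxel count and channel size, as this small calculation shows:

```python
# Memory for a 256 x 256 x 256 volume in full color 32-bit float:
# RGBA = 4 channels x 4 bytes = 16 bytes per voxel.
width, height, z_slices = 256, 256, 256
bytes_per_voxel = 4 * 4

total_bytes = width * height * z_slices * bytes_per_voxel  # 268,435,456 bytes
total_mb = total_bytes / (1024 ** 2)                       # 256.0 MB
```

Doubling the resolution in all three dimensions multiplies the memory requirement by eight, which is why high-resolution volumes quickly become impractical.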
Make sure that both Global In and Global Out, as well as the valid range of your source node, fall
within the range of First Slice Time + Z Slices.
Color
Allows you to modify the color of the fog generated. This will multiply over any color provided by the
connected Fog image.
Gain
Increases or decreases the intensity of the fog. More Gain will lead to a stronger glow and less
transparency in the fog. Lower values let the fog appear less dense.
Subtractive/Additive Slider
Similar to the Merge node, this value controls whether the fog is composed onto the image in Additive
or Subtractive mode, leading to a brighter or dimmer appearance of the fog.
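The manual does not give the exact blend formula, but a common way to model an Additive/Subtractive slider is to interpolate whether the foreground (here, the fog) is treated as premultiplied. The following sketch assumes that model; the function and its parameters are illustrative, not Fusion's documented internals:

```python
def merge_over(fg, fg_alpha, bg, slider):
    """slider = 1.0 (Additive): fg is treated as already premultiplied and is
    added as-is, reading brighter. slider = 0.0 (Subtractive): fg is multiplied
    by its alpha first, reading dimmer. A sketch of the usual model only."""
    fg_weight = slider + (1.0 - slider) * fg_alpha
    return fg * fg_weight + bg * (1.0 - fg_alpha)

# A half-transparent fog value composited over a gray background:
additive = merge_over(0.8, 0.5, 0.4, slider=1.0)     # brighter result
subtractive = merge_over(0.8, 0.5, 0.4, slider=0.0)  # dimmer result
```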
Fog Only
This option outputs the generated fog on a black background, which then can be composited
manually or used as a mask on a Color Corrector for further refinement.
Noise Tab
The Noise tab controls the shape and pattern of the noise added to the fog.
Gain
This control increases or decreases the brightest parts of the noise map.
Brightness
This control adjusts the overall brightness of the noise map, before any gradient color mapping is
applied. In Gradient mode, this produces a similar effect to the Offset control.
Translation
Use the Translation coordinate control to pan and move the noise pattern.
Noise Rotation
Use the Rotation controls to orient the noise pattern in 3D.
Seethe
Adjust this thumbwheel control to interpolate the noise map against a different noise map. This will
cause a crawling shift in the noise, like it was drifting or flowing. This control must be animated to
affect the noise over time.
Discontinuous
Normally, the Noise function interpolates between values to create a smooth, continuous gradient of
results. Enable this checkbox to create hard discontinuity lines along some of the noise contours. The
result will be a dramatically different effect.
Inverted
Select this checkbox to invert the noise, creating a negative image of the original pattern. This is most
effective when Discontinuous is also enabled.
Camera Tab
For a perfect evaluation of a fog volume, a camera or 3D scene can be connected to the Scene input
of the node.
Translation Pick
Drag the Pick button into the viewer to select XYZ coordinates from any 3D scene or 2D image
containing XYZ values, like a rendered World Pass, to define the center of the camera. When picking
from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
X, Y, Z Offset
These controls can be used to define the center of the camera manually or can be animated or
connected to other controls in Fusion.
Light Tab
To utilize the controls in the Light tab, you must have actual lights in your 3D scene. Connect that
scene, including Camera and Lights, to the 3D input of the node.
Do Lighting
Enables or disables lighting calculations. Keep in mind that when not using OpenCL (i.e., rendering on
the CPU), these calculations may become a bit slow.
Do In-Scattering
Enables or disables light-scattering calculations. The volume will still be lit according to the state of the
Do Lighting checkbox, but scattering will not be performed.
Density
This is similar to scattering in that it makes the fog appear thicker. With a high amount of scattering,
though, the light will be scattered out of the volume before it has had much chance to travel through
the fog, meaning it won’t pick up a lot of the transmission color. With a high density instead, the fog
still appears thicker, but the light gets a chance to be transmitted, thus picking up the transmission
color before it gets scattered out. Scattering is affected by the light direction when Asymmetry is not
0.0. Density is not affected by light direction at all.
Scattering
Determines how much of the light bouncing around in the volume is scattered out of the fog. The more
the light scatters, the higher the probability that it is scattered out of the volume, so less light remains
to continue through the fog. This option can make the fog seem denser.
Asymmetry
Determines in what direction the light is scattered. A value of 0 produces uniform, or isotropic,
scattering, meaning all directions have equal probability. A value greater than 0 causes “forward
scattering,” meaning the light is scattered more into the direction of the light rays. This is similar to
what happens with water droplets in clouds. A value smaller than 0 produces “back scattering,” where
the light is more scattered back toward the original light source.
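The manual does not specify which phase function Fusion uses, but the behavior described matches the standard Henyey-Greenstein model, sketched here for intuition (illustrative only, not Fusion's documented internals):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Standard Henyey-Greenstein phase function. The parameter g plays the
    role of the Asymmetry control: g = 0 scatters isotropically, g > 0
    favors forward scattering, g < 0 favors back scattering."""
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    )

# g = 0: every direction has equal probability (isotropic scattering).
iso = henyey_greenstein(1.0, 0.0)
# g = 0.5: the forward direction (cos_theta = 1) is strongly favored
# over the backward direction (cos_theta = -1).
fwd = henyey_greenstein(1.0, 0.5)
back = henyey_greenstein(-1.0, 0.5)
```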
Transmission
Defines the color that is transmitted through the fog. The light that doesn’t get scattered out will tend
toward this color. It is a multiplier, though, so if you have a red light, but blue transmission, you won’t
see any blue.
Reflection
Changes the intensity of the light that is scattered out. Reflection can be used to modify the overall
color before Emission is added. This will be combined with the color channels of the volume texture
and then used to scale the values. The color options and the color channels of the volume texture
are multiplied together, so if the volume texture were red, setting the Reflection color options to blue
would not make the result blue. In such a case, they will multiply together to produce black.
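The red-times-blue example above is a per-channel product, which can be sketched as:

```python
# Per-channel multiply: a blue Reflection color applied to a red volume
# texture yields black, because no channel is nonzero in both colors.
def multiply_rgb(a, b):
    return tuple(x * y for x, y in zip(a, b))

red_texture = (1.0, 0.0, 0.0)
blue_reflection = (0.0, 0.0, 1.0)
result = multiply_rgb(red_texture, blue_reflection)  # (0.0, 0.0, 0.0) -> black
```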
Emission
This adds a bit of “glow” to the fog by adding energy/light back into the calculation. If there are no
lights in the scene and the fog Emission is set to 1.0, the result is similar to having no lighting at all,
as if the Do Lighting option were turned off. A glow with a different kind of look can also be achieved
by setting Transmission to a value greater than 1; this, however, would never happen in the real world.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Position nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
On the right, you see the same volume with lighting/scattering turned on, and a single
point light.
On the left with straight accumulation; in the middle with lighting, scattering, and a single
point light; and on the right, the light in the scene has been moved, which also influences
the look of the volume.
Volume Mask [VlM]
Inputs
The following three inputs appear on the Volume Mask node in the Node Editor:
— Image: The orange image input accepts a 2D image containing a World Position Pass in the XYZ
Position channels.
— Mask Image: An image can be connected to the green mask image input for refining the mask.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the volume mask to
certain areas.
A Volume Mask tool takes advantage of World Position Pass for color correction in a 3D scene
Shape Tab
The Shape tab defines the size and location of the Volume Mask. You can either use the Pick buttons
to select the location and orientation in the viewer or use the Translation, Rotation, and Scale controls.
Shape
This menu switches between a spherical or rectangular mask to be placed in your image. The mask
can be further refined using the mask image input.
Translation Pick
Drag the Pick button into the viewer to select XYZ coordinates from any 3D scene or 2D image
containing XYZ values, like a rendered World Pass, to position the center of the Volume Mask. When
picking from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
X, Y, Z Offset
These controls can be used to position the center of the mask manually or can be animated or
connected to other controls in Fusion.
Rotation Pick
Drag the Pick button into the viewer to select rotational values from any 3D scene or 2D image
containing those values, like an XYZ Normal pass, to reorient the mask.
When picking from a 2D image, like an XYZ Normal pass, make sure it’s rendered in 32-bit float, and
use World Space coordinates to get full precision and the correct rotational values.
X, Y, Z Rotation
Use these controls to rotate the mask around its center.
Size
The overall size, in X, Y, and Z, of the mask created.
Soft Edge
Controls how much the Volume is faded toward the center from its perimeter to achieve a softer look.
Color Tab
The Color tab controls the color and blending of the mask image.
Color
Allows you to modify the color of the generated Volume Mask. This will add to any color provided by
the connected mask image.
Subtractive/Additive Slider
Similar to the Merge node, this value controls whether the mask is composed onto the image in
Additive or Subtractive mode, leading to a brighter or dimmer appearance of the mask.
Mask Only
Outputs the generated mask on a black background, which then can be used as a mask on a Color
Corrector for further refinement.
Camera Tab
For a perfect evaluation of a Volume, a camera or 3D scene can be connected to the Scene input
of the node.
Camera
If multiple cameras are available in the connected Scene input, this drop-down menu allows you to
choose the correct camera needed to evaluate the Volume.
Instead of connecting a camera, position values can also be provided manually or by connecting the
XYZ values to other controls.
Translation Pick
Drag the Pick button into the viewer to select XYZ coordinates from any 3D scene or 2D image
containing XYZ values, like a rendered World Pass, to define the center of the camera.
When picking from a 2D image, make sure it’s rendered in 32-bit float to get full precision.
X, Y, Z Offset
These controls can be used to define the center of the camera manually or can be animated or
connected to other controls in Fusion.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Position nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Z to World [ZtW]
Creating a World Position Pass from Z-depth can be useful when your 3D application is not capable of
creating a WPP.
It can also be used when a 3D-tracking software outputs a per-pixel Z-depth together with the 3D
Camera. Thus, the Volume Mask and Volume Fog could be applied to real-world scenes. The quality of
the resulting WPP depends mainly on the quality of the incoming Z channel.
See the “WPP Concept” section for further explanation on how this technology works and to learn
about the required imagery.
Inputs
The following inputs appear on the node tile in the Node Editor:
— Image: The orange image input accepts an image containing a World Position Pass or a Z-depth
pass, depending on the desired operation.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the World Position
Pass to certain areas.
— Scene Input: The magenta scene input accepts a 3D scene input containing a 3D Camera.
A Z to World Position node creates a World Position Pass from a Z-depth pass
Controls Tab
The Controls tab determines whether you are creating a World Position Pass or a Z channel. If there
is more than one camera in the connected scene, this tab also selects the camera to use for the
calculation.
Mode
This menu switches between creating a Z channel from a World Position Pass or vice versa.
Camera
If multiple cameras are available in the connected Scene input, this drop-down menu allows you to
choose the correct camera needed to evaluate the image.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Position nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
WPP Concept
The Position nodes in Fusion offer an entirely new way of working with masks and Volumetrics for
footage containing XYZ Position channels. Z to World offers the option to create those channels out of
a Z channel and 3D Camera information. For this overview, we refer to the World Position Pass as WPP.
What Is a WPP?
The WPP interprets each pixel’s XYZ position in 3D space as an RGB color value.
For instance, if a pixel sits at 0/0/0, the resulting pixel has an RGB value of 0/0/0 and thus will be black.
If the pixel sits at 1/0/0 in the 3D scene, the resulting pixel is entirely red. Of course, if the coordinates
of the pixel are something like -60/75/123, WPP interprets those values as RGB color values as well.
Due to the potentially enormous size of a 3D scene, the WPP channel should always be rendered in
32-bit floating point to provide the accuracy needed. The image below shows a 3D rendering of a
scene with its center sitting at 0/0/0 in 3D Space and the related WPP. For better visibility, the WPP is
normalized in this example.
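The encoding is direct: each pixel stores its world-space XYZ as RGB, which is why values such as -60/75/123 require 32-bit float storage. The display normalization mentioned above can be sketched like this (an illustrative helper, not a Fusion API):

```python
# Scale raw world-position values into the 0-1 range for display.
def normalize_wpp(pixels):
    lo = min(c for p in pixels for c in p)
    hi = max(c for p in pixels for c in p)
    return [tuple((c - lo) / (hi - lo) for c in p) for p in pixels]

# A pixel at the scene origin plus an off-center pixel with large coordinates:
wpp = [(0.0, 0.0, 0.0), (-60.0, 75.0, 123.0)]
display = normalize_wpp(wpp)  # all channels now fall between 0.0 and 1.0
```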
However, connecting a camera that lines up with the original camera the WPP has been rendered
from, or setting the camera’s position manually, dramatically improves the accuracy and look of the
resulting fog or mask.
If fog is applied to a scene like that, and the fog volume extends beyond the ground plane, the result
will look similar to the “w/o Sphere” example shown below: because there is no WPP information outside
the ground plane, the position value there is 0/0/0, and the fog fills that area as well.
Inspector
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
For example, if the Red button on a Blur tool is deselected, the blur will first be applied to the image,
and then the red channel from the original input will be copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
"Understanding Image Channels," in the Fusion Reference Manual.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of 2
will cause Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
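The Shutter Angle relationship described above is simple arithmetic, sketched here (the function name is illustrative, not Fusion's API):

```python
# Shutter Angle to exposure fraction: 360 degrees keeps the virtual
# shutter open for one full frame, as described above.
def exposure_fraction(shutter_angle):
    return shutter_angle / 360.0

half_frame = exposure_fraction(180)  # 0.5 -> half a frame of motion blurred
full_frame = exposure_fraction(360)  # 1.0 -> a full frame exposure
long_blur = exposure_fraction(720)   # 2.0 -> exaggerated blur for effect
```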
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Resolve Connect
This chapter details the single node found in the Resolve Connect
category, available only in standalone Fusion Studio.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
External Matte Saver [EMS]�������������������������������������������������������������������������������� 1441
Inputs��������������������������������������������������������������������������������������������������������������������������� 1441
Inspector�������������������������������������������������������������������������������������������������������������������� 1442
NOTE: The Resolve Connect category and External Matte Saver node are available only in
Fusion Studio.
Inputs
By default, the node provides a single input for a 2D image you want to save as a matte.
— Input: Although initially there is only a single orange input for a matte to connect, the Inspector
provides an Add button for adding additional inputs. Each input uses a new color, but all accept
2D RGBA images.
Inspector
Controls Tab
The Controls tab is used to name the saved file and determine where on your hard drive the file
is stored.
Filename
Enter the name you want to use for the EXR file in the Filename field. At the end of the name, append
the .exr extension to ensure that the file is saved as an EXR file.
Browse
Clicking the Browse button opens a standard file browser window where you can select the location to
save the file.
Channels Menu
The Channels menu allows you to select which channels are saved in the matte. You can choose the
alpha channel, the RGB channels, or the RGBA channels.
Channels Name
The Channels Name field allows you to customize the name of the matte channel you are saving. This
name is displayed in DaVinci Resolve’s Color page.
Node Name
The Node Name field displays the source of the matte. This is automatically populated when you
connect a node to the input.
Add
Clicking the Add button adds an input on the node and another set of fields for you to configure and
name the new matte channel.
Settings Tab
The Settings Tab in the Inspector is similar to settings found in the Saver tool. The controls are
consistent and work the same way as the Settings in other tools.
For example, if the Red button on a Blur tool is deselected, the blur is first applied to the image, and
then the red channel from the original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to skip
processing that channel entirely. Tools that do this generally possess a set of identical RGBA buttons
on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs are
identical.
Multiply by Mask
Selecting this option causes the RGB values of the masked image to be multiplied by the mask
channel’s values. This causes all pixels of the image not included in the mask (i.e., set to 0) to become
black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
"Understanding Image Channels," in the Fusion Reference Manual.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of
2 causes Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Shape Nodes
This chapter details the Shape nodes available in Fusion.
Contents
sBoolean��������������������������������������������������������������������������������������������������������������������� 1447
sDuplicate������������������������������������������������������������������������������������������������������������������ 1450
sEllipse������������������������������������������������������������������������������������������������������������������������� 1452
sExpand����������������������������������������������������������������������������������������������������������������������� 1455
sGrid����������������������������������������������������������������������������������������������������������������������������� 1457
sJitter���������������������������������������������������������������������������������������������������������������������������� 1458
sMerge������������������������������������������������������������������������������������������������������������������������ 1460
sNGon�������������������������������������������������������������������������������������������������������������������������� 1461
sOutline���������������������������������������������������������������������������������������������������������������������� 1464
sRectangle����������������������������������������������������������������������������������������������������������������� 1470
sRender����������������������������������������������������������������������������������������������������������������������� 1473
sStar������������������������������������������������������������������������������������������������������������������������������ 1476
sTransform����������������������������������������������������������������������������������������������������������������� 1479
sBoolean
The sBoolean node combines or excludes overlapping areas of two shapes based on a menu of
boolean operations.
Like almost all shape nodes, you can only view the sBoolean node’s results through a sRender node.
External Inputs
The following inputs appear on the node’s tile in the Node Editor. Except when using the subtract
boolean operation, which shape you connect into which input does not matter.
— Input1: [orange, required] This input accepts the output of another shape node. This input is used
as the base shape when the subtract boolean operation is chosen.
— Input2: [green, optional] This input accepts the output of another shape node. This input is used
to cut the base shape hole when the subtract boolean operation is chosen.
Inspector
Operation
The Operation menu includes four boolean operations:
— Intersection: Sometimes called an AND operation, this setting shows only the areas where the two
shapes overlap. The result is only where input 1 AND input 2 overlap.
— Union: Sometimes called an OR operation, this setting shows the areas where either of the two
shapes exists. The result is where either input 1 OR input 2 exists. The Union setting is similar
to the result of the sMerge node.
— Subtract: Sometimes called a NOT operation, this setting outputs the shape of input 1 but
eliminates the areas where input 2 overlaps. The result is input 1 minus input 2.
— Xor: Sometimes called an exclusive OR operation, this setting shows the areas where either shape
exists but excludes the areas where the two shapes overlap.
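On per-pixel coverage values in the 0-1 range, these boolean operations behave like the classic min/max formulation, sketched below as an illustration of the behavior (not Fusion's internal implementation):

```python
def intersection(a, b):
    return min(a, b)        # input 1 AND input 2

def union(a, b):
    return max(a, b)        # input 1 OR input 2

def subtract(a, b):
    return min(a, 1.0 - b)  # input 1 minus input 2

def xor(a, b):
    return abs(a - b)       # either input, excluding the overlap

# A pixel covered by shape 1 but not by shape 2:
a, b = 1.0, 0.0
results = (intersection(a, b), union(a, b), subtract(a, b), xor(a, b))
```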
Style Mode
The Style mode menu only includes one option. The Replace setting replaces the color and alpha level
of the incoming shapes with the color set in the Style tab.
Style Tab
Style
Any color assigned to the individual shape nodes is replaced by the color set using the Style
tab controls.
Color
The color controls determine the color of the output shape from the sBoolean node. To choose a
shape color, you can click the color disclosure arrow, use the color swatch, or drag the eyedropper into
the viewer to select a color from an image. The RGBA sliders or number fields can be used to enter
each color channel’s value or the strength of the alpha channel.
Allow Combining
For instance, if an ellipse’s alpha channel is set to .5, enabling the Allow Combining checkbox maintains
that value even if the shape passes through a duplicate or grid node that causes the shape to overlap.
Disabling the checkbox causes the alpha channel values to be compounded at each overlapping area.
When using the sBoolean node, the individual shape node checkboxes are ignored, and the sBoolean
node’s checkbox determines the alpha channel’s behavior.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sDuplicate
The sDuplicate node creates copies of the input shape, offsetting each copy’s position, size, and
rotation. Like almost all shape nodes, you can only view the sDuplicate node’s results through a
sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor:
— Input1: [orange, required] This input accepts the output of another Shape node. The shape
connected to this input is copied and offset based on the controls in the Inspector.
Inspector
Controls
The Controls tab is used to determine the number of copies and set their position, size, and
rotation offset.
Copies
This slider determines the number of copies created by the node. The number does not include the
original shape, so entering a value of five will produce five copies plus the original.
X and Y Offset
These sliders set the X and Y distance between each of the copies. Each copy is offset from the
previous copy by the value entered in the X and Y number fields. The copies all start at 0, the center of
the original shape, and are offset from there. Using Fusion’s normalized coordinate system, entering X
Offset at 0.5 would move each copy half the frame’s width to the right. Entering -1.0 would move each
copy to the left by the width of the frame.
X and Y Size
Sets the X and Y size offset based on the previous shape’s size. For instance, an X and Y value of 1.0
creates copies identical in size to the original. Entering an X and Y value of 0.5 causes each copy to be
half the size of the copy before it.
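Because the size offset is relative to the previous copy, the scale compounds geometrically. A minimal sketch of that compounding (illustrative only, not Fusion's code):

```python
# Illustrative sketch (not Fusion's API): sDuplicate's X/Y Size acts as a
# per-copy multiplier, so each copy is scaled relative to the previous one.
def copy_scales(copies, size_factor):
    """Scale of the original (1.0) followed by each copy's scale."""
    scales, scale = [1.0], 1.0
    for _ in range(copies):
        scale *= size_factor
        scales.append(scale)
    return scales

# A Size of 0.5 halves every copy relative to the one before it.
print(copy_scales(3, 0.5))  # [1.0, 0.5, 0.25, 0.125]
```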
Axis Mode
— Absolute: Allows you to set an X and Y position for the axis of rotation based on the original
shape’s location. The axis of rotation is then copied and offset with each duplicated shape.
— Origin Relative: Each copy uses its center point as its axis of rotation.
— Origin Absolute: Each copy uses the center of the original shape as its axis of rotation.
— Progressive: Compounds each shape copy by progressively transforming each copy based on the
previous shape’s position, rotation, and scale.
X and Y Pivot
The X and Y pivot controls are displayed when the Axis mode is set to Absolute. You can use these
position controls to place the axis of rotation.
Rotation
Determines an offset rotation applied to each copy. The rotation is calculated from the offset rotation
of the previous copy. To rotate all copies identically, use the Angle parameter on the original shape or
use a sTransform node.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sEllipse
The sEllipse node is used to create circular shapes. Like almost all shape nodes, you can only view the
sEllipse node’s results through a sRender node.
External Inputs
This node generates shapes and does not have any inputs.
An sEllipse node connecting to an sGrid node, and then viewed using an sRender node
Inspector
Controls
The Controls tab is used to define the elliptical shape characteristics, including fill, border, size,
and position.
Solid
When enabled, the Solid checkbox fills the elliptical shape with the color defined in the Style tab.
When disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Cap style
When the Solid checkbox is disabled, three cap style options are displayed. The cap styles can create
lines with flat, rounded, or squared ends. Flat caps have flat, squared ends, while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the length is below 1.0.
Length
The length parameter is only displayed when the Solid checkbox is disabled. A length of 1.0 is a closed
shape. Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the length
parameters allows you to create write-on style animations.
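The write-on animation described above amounts to ramping Length from 0.0 to 1.0 over a frame range. A minimal sketch of that keyframe ramp, assuming simple linear interpolation (Fusion's spline interpolation can of course be eased):

```python
# Illustrative sketch (not Fusion's API): a linear keyframe ramp for the
# Length parameter, producing a write-on style reveal of the outline.
def write_on_length(frame, start_frame, end_frame):
    """Length keyframed from 0.0 (nothing drawn) to 1.0 (closed shape)."""
    if frame <= start_frame:
        return 0.0
    if frame >= end_frame:
        return 1.0
    return (frame - start_frame) / (end_frame - start_frame)

# Halfway through a 24-frame ramp, half the outline is drawn.
print(write_on_length(12, 0, 24))  # 0.5
```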
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. An X offset of 0.0 is centered, and a value
of 0.5 places the center of the shape directly on the right edge of the frame.
Width/Height
The width and height determine the vertical and horizontal size of the ellipse. If the values are
identical, then you have a perfect circle.
Angle
The Angle parameter rotates the shape around its center axis. Rotation has little visible effect on a
perfect circle, but if you create an oval or an outline with a short length, the rotation is apparent.
Style Tab
Style
The Style tab is used to assign a color to the shape and control its transparency.
Color
The Color controls determine the color of the fill and border. To choose a shape color, you can click the
color disclosure arrow, use the color swatch, or drag the eyedropper into the viewer to select a color
from an image. The RGBA sliders or number fields can be used to enter each color channel’s value or
the strength of the alpha channel.
Allow Combining
When this checkbox is enabled, the alpha channel value is maintained even when passing through
other nodes downstream that may cause the shape to overlap with copies of itself. When disabled, the
alpha channel value may increase when the shape overlaps itself.
For instance, if an ellipse’s alpha channel is set to 0.5, enabling the Allow Combining checkbox maintains
that value even if the shape passes through a Duplicate or Grid node that causes the shape to overlap.
Disabling the checkbox causes the alpha channel values to be compounded at each overlapping area.
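One plausible model of this "compounding" is the standard over-operator accumulation, where each overlap composites the alpha over the result so far. The sketch below is illustrative math under that assumption, not Fusion's actual implementation:

```python
# Illustrative sketch (not Fusion's math): modeling alpha "compounding"
# as over-operator accumulation, versus Allow Combining, which holds
# the alpha at its set value no matter how often the shape overlaps.
def overlap_alpha(alpha, overlaps, allow_combining):
    if allow_combining:
        return alpha                   # value is maintained
    a = 0.0
    for _ in range(overlaps):
        a = a + alpha * (1.0 - a)      # over-style accumulation
    return a

print(overlap_alpha(0.5, 2, True))   # 0.5
print(overlap_alpha(0.5, 2, False))  # 0.75
```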
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sExpand
The sExpand node is used to dilate or erode shapes. Like almost all Shape nodes, you can only view
the sExpand node’s results through a sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor.
— Input1: [orange, required] This input accepts the output of another shape node. This shape or
compound shape connected to this input is either eroded or dilated.
Star and ellipse shapes combined in an sBoolean node, then output to an sExpand for dilating or eroding
Inspector
Controls
The Controls tab includes all of the parameters for the sExpand node.
Amount
A positive value dilates the shape while a negative value erodes it.
Border Style
The border style controls how the expanded or contracted shapes join at the corners. There are four
styles provided as options. Bevel squares off the corners. Round creates rounded corners. Miter
and Miter Clip maintain pointed edges until a certain threshold, which is set by the Miter
Limit slider.
Miter Limit
The Miter parameter is only displayed when the Miter or Miter Clip border style is selected. The miter
limit determines when the pointed edges become beveled based on the shape’s thickness.
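The conventional miter test used by stroke renderers (the exact rule Fusion applies is not documented here, so treat this as an assumption) compares the ratio of miter length to stroke width against the limit; sharper corners produce longer miters and get beveled first:

```python
import math

# Illustrative sketch (not Fusion's exact rule): the conventional
# miter-limit test. The miter ratio grows as the corner angle gets
# sharper; when it exceeds the limit, the join is beveled instead.
def is_beveled(corner_angle_deg, miter_limit):
    """corner_angle_deg is the interior angle between the two segments."""
    half = math.radians(corner_angle_deg) / 2.0
    miter_ratio = 1.0 / math.sin(half)   # miter length / stroke width
    return miter_ratio > miter_limit

print(is_beveled(90.0, 4.0))  # False: ratio ~1.41 stays pointed
print(is_beveled(10.0, 4.0))  # True:  ratio ~11.5 gets beveled
```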
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sGrid
The sGrid node replicates the shape on an X and Y grid and adds the ability to offset the rows
and columns. Like almost all Shape nodes, you can only view the sGrid node’s results through a
sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor.
— Input1: [orange, required] This input accepts the output of another Shape node. The shape
connected to this input is replicated on a custom grid.
Inspector
Controls
The Controls tab is used to determine the number of grid cells and their offset position.
X and Y Offset
Sets the X and Y distance between the rows and columns. An offset value of 0.0 places all the rows
and columns on top of each other. Entering an X Offset of 1.0 spreads the columns across the full
width of the frame.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sJitter
The sJitter node is most often used to randomly position an array of shapes generated from a sGrid
or sDuplicate node. However, it includes an auto-animating random mode that can be used to distort
and randomly jitter single shapes.
Like almost all Shape nodes, you can only view the sJitter node’s results through a sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor.
— Input1: [orange, required] This input accepts the output of another Shape node. The shape
connected to this input is offset, distorted, and animated based on the sJitter node settings.
Controls
The Controls tab offers range sliders that determine the variation amount for offset, size, and rotation.
The Point Jitter parameters are used to offset the invisible points that create the vector shapes.
Jitter Mode
The Jitter Mode menu allows you to choose between static position and size offsets or enabling
an auto-animation mode. Leaving the default Fixed selection allows you to offset a grid of shapes,
animating with keyframes or modifiers if needed. The Random menu selection auto-animates the
parameters based on the range you define using the range sliders. If all the range sliders are left in
the default position, no random animation is created. Increasing the range on any given parameter
will randomly animate that parameter between the range slider values.
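The range-slider behavior can be sketched as drawing a random value between the slider's low and high bounds for each shape. This is illustrative only, not Fusion's code; the seed is shown just to make the sketch repeatable:

```python
import random

# Illustrative sketch (not Fusion's API): a range slider pair defines
# low/high bounds, and each shape draws a random value in that range.
# A zero-width range (the default) produces no jitter at all.
def jitter_values(count, low, high, seed=0):
    rng = random.Random(seed)           # seeded for repeatability
    return [rng.uniform(low, high) for _ in range(count)]

offsets = jitter_values(5, -0.1, 0.1)
print(all(-0.1 <= v <= 0.1 for v in offsets))  # True
print(jitter_values(3, 0.0, 0.0))              # [0.0, 0.0, 0.0]
```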
Shape Rotate
This parameter rotates each shape in an array.
Point Jitter
The X and Y Point Jitter parameters use the vector control points to distort the shape. This can be used
to give a distressed appearance to ellipses or wobbly animation to other shapes.
sMerge
The sMerge node combines shapes similar to a standard Merge node, except the sMerge node can
accept more than two shape inputs.
Like almost all Shape nodes, you can only view the sMerge node’s results through a sRender node.
External Inputs
The node displays only two inputs at first, but as each Shape node is connected, a new input appears on
the node, ensuring there is always one free to add a new shape into the composite.
— Input[#]: These multi-colored inputs are used to connect multiple Shape nodes. There is no limit
to the number of inputs this node can accept. The node dynamically adds more inputs as needed,
ensuring that there is always at least one input available.
Controls
The only control for the sMerge node is the Override Axis checkbox, which overrides the shape’s axis.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sNGon
The sNGon node is used to create multi-sided shapes like triangles, pentagons, and octagons. Like
almost all Shape nodes, you can only view the sNGon node’s results through a sRender node.
External Inputs
This node generates shapes and does not have any inputs.
An sNGon node connecting to an sDuplicate node, and then viewed using an sRender node.
Controls
The Controls tab is used to define multi-sided shape characteristics, including fill, border, size,
and position.
Solid
When enabled, the Solid checkbox fills the NGon shape with the color defined in the Style tab.
When disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Border Style
The Border Style parameter controls how the sides of the NGon join at the corners. There are three
styles provided as options. Bevel squares off the corners. Round creates rounded corners. Miter
maintains pointed corners.
Cap style
When the Solid checkbox is disabled, three cap style options are displayed. The cap styles can create
lines with flat, rounded, or squared ends. Flat caps have flat, squared ends, while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the length is below 1.0.
Position
The Position parameter is only displayed when the Solid checkbox is disabled. It allows you to position
the starting point of the shape. When used in conjunction with the Length parameter, it positions the
gap in the outline.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So, an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
Width/Height
The Width and Height parameters determine the vertical and horizontal size of the NGon. If the
values are identical, then all sides are of equal length.
Angle
The Angle parameter rotates the shape based on the center axis.
Style Tab
Style
The Style tab is used to assign a color to the shape and control its transparency.
Color
The Color controls determine the color of the fill and border. To choose a shape color, you can click the
color disclosure arrow, use the color swatch, or drag the eyedropper into the viewer to select a color
from an image. The RGBA sliders or number fields can be used to enter each color channel’s value or
the strength of the alpha channel.
Allow Combining
When this checkbox is enabled, the alpha channel value is maintained even when passing through
other nodes downstream that may cause the shape to overlap with copies of itself. When disabled, the
alpha channel value may increase when the shape overlaps itself.
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sOutline
The sOutline node is used to create outlines from merged or boolean compound shapes. The
individual shapes retain their own style, color, size, position, and other characteristics. The only
difference is the border thickness, border style, position, and length are applied to all incoming
shapes uniformly in the sOutline node.
Like almost all shape nodes, you can only view the sOutline node’s results through a sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor:
— Input1: [orange, required] This input accepts another Shape node’s output, but more likely
a compound shape from an sMerge or sBoolean node. An outline is created from the compound shape
connected to this input.
Inspector
Controls
The Controls tab is used to define the outline thickness, border and cap style, position, and length that
is applied to the compound shape connected to the input.
Thickness
This parameter controls the width of the outline.
Border Style
The Border Style parameter controls how the outline joins at the corners. There are three styles
provided as options. Bevel squares off the corners. Round creates rounded corners. Miter maintains
pointed corners.
Cap style
Three Cap Style options are used to create lines with flat, rounded, or squared ends. Flat caps have
flat, squared ends, while rounded caps have semi-circular ends. Squared caps have projecting ends
that extend half the line width beyond the end of the line.
The caps are not visible unless the length is below 1.0.
Length
The Length parameter controls the end position of the outline. A length of 1.0 is a closed shape.
Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the Length
parameters allows you to create write-on style animations.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sPolygon [sPly]
The sPolygon tool lets you draw custom shapes. By placing and manipulating control points, you can
generate intricate forms, curves, and lines that tie in with the existing Shape tools in Fusion. It
provides the freedom to design custom elements, making it a valuable tool in motion graphics design.
Inputs
This node generates shapes and does not have any inputs.
An sPolygon node connecting to an sDuplicate node, and then viewed using an sRender node
Controls
The Controls tab is used to define the polygon characteristics, including fill, border, size, and position.
Solid
When enabled, the Solid checkbox fills the rectangle shape with the color defined in the Style tab.
When disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Border Style
The Border Style parameter controls how the sides of the rectangle join at the corners. There are
three styles provided as options. Bevel squares off the corners. Round creates rounded corners. Miter
maintains pointed corners.
Cap style
The cap styles can create lines with flat, rounded, or squared ends. Flat caps have flat, squared ends,
while rounded caps have semi-circular ends. Squared caps have projecting ends that extend half the
line width beyond the end of the line.
The caps are not visible unless the length is below 1.0.
Position
The Position parameter is only displayed when the Solid checkbox is disabled. It allows you to position
the starting point of the shape. When used in conjunction with the Length parameter, it positions the
gap in the outline.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
Z Offset [3D]
If used with the 3D toolset, you can offset the polygon back and forth along the Z-axis by adjusting
this control.
Size
Use the Size control to adjust the scale of the polygon shape, without affecting the relative behavior of
the points that compose the shape or setting a keyframe in the shape animation.
X,Y,Z Rotation
Use these three controls to adjust the rotation angle of the shape along any axis.
Fill Method
The Fill Method menu offers two different techniques for dealing with overlapping regions of a
polyline. If overlapping segments in a shape are causing undesirable holes to appear, try switching the
setting of this control from Alternate to Non Zero Winding.
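The difference between the two fill rules can be sketched with the classic point-in-polygon tests: Alternate corresponds to the even-odd rule, while Non Zero Winding counts signed edge crossings. This is illustrative code, not Fusion's implementation:

```python
# Illustrative sketch (not Fusion's code): the two fill rules decide
# whether a point is "inside" a self-overlapping polygon. Alternate is
# the even-odd rule; Non Zero Winding counts signed edge crossings.
def winding_and_crossings(point, polygon):
    px, py = point
    winding = crossings = 0
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 <= py) != (y2 <= py):     # edge spans the ray through point
            # x where the edge crosses the horizontal ray at py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                crossings += 1
                winding += 1 if y2 > y1 else -1
    return winding, crossings

def inside(point, polygon, rule):
    winding, crossings = winding_and_crossings(point, polygon)
    return winding != 0 if rule == "nonzero" else crossings % 2 == 1

# An outer square plus an inner square wound the same way: the center
# is a hole under Alternate but stays filled under Non Zero Winding.
poly = [(0, 0), (10, 0), (10, 10), (0, 10), (2, 2), (8, 2), (8, 8), (2, 8)]
print(inside((5, 5), poly, "alternate"))  # False (a hole appears)
print(inside((5, 5), poly, "nonzero"))    # True  (solid fill)
```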
Style Tab
Style
The Style tab is used to assign color to the shape and control its transparency.
Allow Combining
When this checkbox is enabled, the Alpha channel value is maintained, even when passing through
other nodes downstream that may cause the shape to overlap with copies of itself. When disabled,
the Alpha channel value may increase when the shape overlaps itself. For instance, if a rectangle’s
Alpha channel is set to 0.5, enabling the Allow Combining checkbox maintains that value even if the
shape passes through a Duplicate or Grid node that causes the shape and Alpha channel to overlap.
Disabling the checkbox causes the Alpha channel values to be compounded at each overlapping area.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Adding Points
Adding Points to a polygonal shape is relatively simple. Immediately after adding the node to the Node
Editor, there are no points, but the tool will be in Click Append mode. Click once in the viewer wherever
a point is required for the shape. Continue clicking to draw the shape. When the shape is complete,
click on the initial point again to close the shape.
When the shape is closed, the mode of the polyline will change to Insert and Modify. This allows for
the adjusting and adding of additional points to the shape by clicking on segments of the polyline.
To lock down the shape and prevent accidental changes, switch the Polyline mode to Done using the
Polyline toolbar or contextual menu.
When a Polygon (or B-Spline) shape is added to a node, a toolbar appears above the viewer, offering
easy access to modes. Hold the pointer over any button in the toolbar to display a tooltip that
describes that button’s function.
— Click: Click is the default option when creating a polyline (or B-Spline) shape. It is a Bézier style
drawing tool. Clicking sets a control point and appends the next control point when you click again
in a different location.
— Draw: Draw is a freehand drawing tool. It creates a shape similar to drawing with a pencil on
paper. You can create a new shape using the Draw tool, or you can extend an existing open spline
by clicking the Draw tool and starting to draw from the last control point.
— Insert and Modify: Insert adds a new control point along the spline and lets you modify it.
— Modify Only: Modify allows you to safely move or smooth any existing point along a spline without
worrying about adding new points accidentally.
sRectangle
The sRectangle node is used to create rectangular shapes. Like almost all shape nodes, you can only
view the sRectangle node’s results through a sRender node.
External Inputs
This node generates shapes and does not have any inputs.
Controls
The Controls tab is used to define the rectangle characteristics, including fill, border, size,
and position.
Solid
When enabled, the Solid checkbox fills the rectangle shape with the color defined in the Style tab.
When disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Border Style
The Border Style parameter controls how the sides of the rectangle join at the corners. There are
three styles provided as options. Bevel squares off the corners. Round creates rounded corners. Miter
maintains pointed corners.
Cap style
When the Solid checkbox is disabled, three Cap Style options are displayed. The cap styles can create
lines with flat, rounded or squared ends. Flat caps have flat, squared ends, while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the length is below 1.0.
Position
The Position parameter is only displayed when the Solid checkbox is disabled. It allows you to position
the starting point of the shape. When used in conjunction with the Length parameter, it positions the
gap in the outline.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
Width/Height
The Width and Height parameters determine the vertical and horizontal size of the rectangle. If the
values are identical, then you have a square.
Corner Radius
This parameter determines if the corners of the rectangle are sharp or rounded. A value of 0.0
produces sharp corners, while a value of 1.0 creates a circle from a starting square shape or a pill
shape from a rectangle.
Angle
The Angle parameter rotates the shape based on the center axis.
Style Tab
Style
The Style tab is used to assign color to the shape and control its transparency.
Color
The Color controls determine the color of the fill and border from the sRectangle node.
To choose a shape color, you can click the color disclosure arrow and use the color swatch, or drag the
eye dropper into the viewer to select a color from an image. The RGBA sliders or number fields can be
used to enter the value of each color channel or the strength of the alpha channel.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sRender
The sRender node converts the vector shapes to an image. The output of the sRender allows the
vector shapes to be integrated with other elements in a composite.
Inputs
There are two inputs on the sRender node: one for the Shape node output and one for an effect mask.
— Input1: [orange, required] This input accepts the output of your final Shape node. A rendered
bitmap image is created from the sRender node for compositing into the rest of your comp.
— Effect Mask: The optional blue effect mask input accepts a mask shape created by polylines, basic
primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input limits
the displayed area to only those pixels within the mask.
Multiple Shape nodes connected to the sRender node and then processed and composited with a title
Inspector
Image Tab
The controls in this tab are used to set the resolution, color depth, and pixel aspect of the image
produced by the sRender node.
Process Mode
Use this menu control to select the Fields Processing mode used by Fusion to render the resulting
image. The default Full Frames option is appropriate for progressive formats.
Width/Height
This pair of controls are used to set the Width and Height dimensions of the image to be created by
the sRender node.
NOTE: Right-click on the Width, Height, or Pixel Aspect controls to display a menu listing
the file formats defined in the preferences Frame Format tab. Selecting any of the listed
options will set the width, height, and pixel aspect to the values for that format, accordingly.
Auto Resolution
When this checkbox is selected, the width, height, and pixel aspect of the image created by the node
will be locked to values defined in the composition’s Frame Format preferences. If the Frame Format
preferences change, the resolution of the image produced by the node will change to match. Disabling
this option can be useful to build a composition at a different resolution than the eventual target
resolution for the final render.
Depth
The Depth button array is used to set the pixel color depth of the image created by the sRender node.
32-bit pixels require 4x the memory of 8-bit pixels but have far greater color accuracy. Float pixels
allow high dynamic range values outside the normal 0..1 range, for representing colors that are
brighter than white or darker than black.
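The memory cost is simple arithmetic: four channels (RGBA) times bits per channel times pixel count. A quick sketch, using a hypothetical HD frame:

```python
# Illustrative arithmetic (the frame size is hypothetical): per-frame
# memory for an RGBA image at 8 bits versus 32-bit float per channel.
def frame_bytes(width, height, bits_per_channel, channels=4):
    return width * height * channels * bits_per_channel // 8

int8 = frame_bytes(1920, 1080, 8)      # 1 byte per channel
float32 = frame_bytes(1920, 1080, 32)  # 4 bytes per channel
print(float32 // int8)  # 4 -> 32-bit float needs 4x the memory
```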
Source Color Space
— Auto: Automatically reads and passes on the metadata that may be in the image.
— Space: Displays a Color Space Type menu where you can choose the correct color
space of the image.
Source Gamma Space
— Auto: Automatically reads and passes on the metadata that may be in the image.
— Space: Displays a Gamma Space Type menu where you can choose the correct gamma
curve of the image.
— Log: Brings up the Log/Lin settings, similar to the Cineon tool. For more information,
see Chapter 38, "Film Nodes," in the Fusion Reference Manual.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sStar
The sStar node is used to create multi-point star shapes. Like almost all Shape nodes, you can only
view the sStar node’s results through a sRender node.
External Inputs
This node generates shapes and does not have any inputs.
An sStar node connecting to an sDuplicate node, and then viewed using an sRender node
Controls
The Controls tab is used to define the star shape’s characteristics, including number of points, depth,
fill, border, size, and position.
Points
This slider determines the number of points or arms on the star.
Depth
The depth slider controls the inner radius or width of the arms. A depth of 0.001 makes hair-thin arms,
while a depth of 1.0 makes a faceted circle.
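A star can be modeled as alternating outer and inner vertices; here the inner radius is assumed to be the outer radius scaled by Depth, which matches the behavior described (Depth 1.0 gives a faceted circle). Illustrative geometry only, not Fusion's code:

```python
import math

# Illustrative sketch (not Fusion's code): a star built from alternating
# outer and inner vertices. Assumption: inner radius = Depth * outer.
def star_vertices(points, depth, radius=1.0):
    inner = radius * depth
    verts = []
    for i in range(points * 2):
        r = radius if i % 2 == 0 else inner
        a = math.pi * i / points - math.pi / 2.0   # start at the top
        verts.append((r * math.cos(a), r * math.sin(a)))
    return verts

# A Depth of 1.0 makes inner and outer radii equal: a faceted circle.
verts = star_vertices(5, 1.0)
print(all(abs(x * x + y * y - 1.0) < 1e-9 for x, y in verts))  # True
```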
Solid
When enabled, the Solid checkbox fills the star shape with the color defined in the Style tab. When
disabled, an outline created by the Border Width control is displayed, and the center is made
transparent.
Border Width
This parameter expands or contracts the border around the shape. Although it can be used when the
Solid checkbox is enabled, it is primarily used to determine the outline thickness when the checkbox
is disabled.
Border Style
The Border Style parameter controls how the sides of the star join at the corners. There are three
styles provided as options. Bevel squares off the corners. Round creates rounded corners. Miter
maintains pointed corners.
Cap style
When the Solid checkbox is disabled, three cap style options are displayed. The cap styles can create
lines with flat, rounded or squared ends. Flat caps have flat, squared ends while rounded caps have
semi-circular ends. Squared caps have projecting ends that extend half the line width beyond the end
of the line.
The caps are not visible unless the length is below 1.0.
Length
The Length parameter is only displayed when the Solid checkbox is disabled. A length of 1.0 is a closed
shape. Setting the length below 1.0 creates an opening or gap in the outline. Keyframing the Length
parameters allows you to create write-on style animations.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
Width/Height
The Width and Height parameters determine the vertical and horizontal size of the star. If the values
are identical, then all arms of the star are of equal length.
Angle
The Angle parameter rotates the shape based on the center axis.
Style Tab
Style
The Style tab is used to assign color to the shape and control its transparency.
Color
The Color controls determine the color of the fill and border from the sStar node. To choose a shape
color, you can click the color disclosure arrow and use the color swatch, or drag the eye dropper into
the viewer to select a color from an image. The RGBA sliders or number fields can be used to enter the
value of each color channel or the strength of the alpha channel.
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
sTransform
The sTransform node is used to add an additional set of transform controls to the existing controls
that are contained in Shape nodes. These additional transforms can be used to create hierarchical
animations. For instance, you can use a sStar’s built-in Angle control to spin the star around. The star
can then be output to an sTransform node. The rotation control in the sTransform can be used to orbit
the star around the frame.
Like almost all Shape nodes, you can only view the sTransform node’s results through a sRender node.
External Inputs
The following input appears on the node’s tile in the Node Editor:
— Input1: [orange, required] This input accepts the output of another Shape node. The shape
connected to this input is moved, scaled, and rotated based on the sTransform settings.
Inspector
Controls
The Controls tab is used to add a set of transform controls to the incoming shape.
X and Y Offset
These parameters are used to position the shape left, right, up, and down in the frame. The shape
starts in the center of the frame, and the parameters are used to offset the position. The offset
coordinates are normalized based on the width of the frame. So an X offset of 0.0 is centered and a
value of 0.5 places the center of the shape directly on the right edge of the frame.
X and Y Size
The X and Y Size parameters determine the vertical and horizontal scaling of the incoming shape. If the
values are different, the shape is stretched disproportionately from its original design.
Rotation
The dial rotates the shape based on the pivot controls.
X and Y Pivot
These parameters position the axis of rotation for the incoming shape. The pivot point is visible in the
viewer as a red X. The X can be dragged in the viewer for positioning.
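The rotation-about-a-pivot behavior is standard 2D math: translate the pivot to the origin, rotate, and translate back. A minimal sketch (illustrative only, not Fusion's code):

```python
import math

# Illustrative math (not Fusion's code): rotating a point about a pivot,
# which is what the Rotation dial does relative to the X/Y Pivot.
def rotate_about_pivot(point, pivot, angle_deg):
    px, py = pivot
    x, y = point[0] - px, point[1] - py   # translate pivot to the origin
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return rx + px, ry + py               # translate back

x, y = rotate_about_pivot((1.0, 0.0), (0.0, 0.0), 90.0)
print(round(x, 6), round(y, 6))  # 0.0 1.0
```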
Common Controls
Settings tab
The Settings tab in the Inspector is common to all Shape nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Common Controls
Nodes that handle Shape operations share a number of identical controls in the Inspector. This
section describes controls that are common amongst Shape nodes.
Settings Tab
The Settings tab in the Inspector can be found on every Shape node. Most of the controls listed here
are only found in the sRender node but a few are common to all Shape nodes.
The Red, Green, Blue, and Alpha buttons restrict the tool to processing only the selected channels;
deselected channels are copied unchanged from the input. For example, if the red button on a Blur
tool is deselected, the blur is first applied to the image, then the red channel from the original input is
copied back over the red channel of the result.
There are some exceptions, such as tools where deselecting these channels causes the tool to skip
processing that channel entirely. Tools that do this generally possess a set of identical RGBA buttons on
the Controls tab in the tool. In this case, the buttons in the Settings and the Control tabs are identical.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower left corner of the node when the full
tile is displayed or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more information on
scripting nodes, see the Fusion scripting documentation.
Stereo Nodes
This chapter details the Stereo nodes available in Fusion.
Stereoscopic nodes are available only in Fusion Studio and
DaVinci Resolve Studio.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Anaglyph [Ana]������������������������������������������������������������������������������������������������������� 1484
Anaglyph [Ana]
NOTE: The Anaglyph node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The three inputs on the Anaglyph node are the left eye input, right eye input, and effect mask.
— Left Eye Input: The orange input is used to connect the 2D image representing the left eye in the
stereo comp.
— Right Eye Input: The green input is used to connect the 2D image representing the right eye in
the stereo comp.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the stereoscopic
creation to only those pixels within the mask.
When using separate images for the left and right eye, the left eye image is connected to the orange
input, and the right eye image is connected to the green input of the node. When using a horizontally
or vertically stacked image that contains both left and right eye information, only the orange input
is connected.
Controls Tab
Using the parameters in the Controls tab, the separate images are combined to create a
stereoscopic output.
Method
In addition to the color used for encoding the image, you can also choose five different methods
from the Method menu: Monochrome, Half-color, Color, Optimized, and Dubois. These methods are
described below.
Monochrome and Half-color method examples
— Color: The left eye contains the color channels from the left image that match the glasses’ color
for that eye. The right eye contains the color channels from the right image that match the
glasses’ color for that eye.
— Optimized: Used with red/cyan glasses, for example, the resulting brightness of what shows
through the left eye is substantially less than the brightness of the right eye. Using typical ITU-R
601 ratios for luminance as a guide, the red eye would give 0.299 brightness, while the cyan eye
would give 0.587+0.114=0.701 brightness—over twice as bright. The difference in brightness
between the eyes can produce what are referred to as retinal rivalry or binocular rivalry, which
can destroy the stereo effect. The Optimized method generates the right eye in the same fashion
as the Color method. The left eye also uses the green and blue channels but in combination with
increased brightness that reduces retinal rivalry. Since it uses the same two channels from each
of the source images, it doesn’t reproduce the remaining one. For example, 1.05× the green and
0.45× the blue channels of the left image are placed in the red output channel, and the green and
blue channels of the right image are placed in the output green and blue channels. Red from both
the left and right images is not used.
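As a rough per-pixel sketch of the mixing described above (the helper name is ours; values are assumed to be 0-1 floats, without the clamping a real implementation would need):

```python
def optimized_anaglyph_pixel(left_rgb, right_rgb):
    """Combine one left/right pixel pair using the Optimized method
    for red/cyan glasses.

    Per the ratios above, the output red channel is built from the
    left image's green and blue channels (red from both inputs is
    discarded); green and blue come straight from the right image.
    """
    lr, lg, lb = left_rgb
    rr, rg, rb = right_rgb
    out_r = 1.05 * lg + 0.45 * lb  # brightened left green + blue
    return (out_r, rg, rb)

# ITU-R 601 luma weights: the cyan eye sees 0.587 + 0.114 = 0.701
# of the brightness, versus 0.299 for an unweighted red eye.
```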
Color, Optimized, and Dubois method examples
Swap Eyes
Allows you to swap the left and right eye inputs easily.
Horiz Stack
Takes an image that contains both left and right eye information stacked horizontally. These images
are often referred to as “crosseyed” or “straight stereo” images. You only need to connect that one
image to the orange input of the node. It then creates an image half the width of the original input,
using the left half of the original image for the left eye and the right half of the original image for the
right eye. Color encoding takes place using the specified color type and method.
Vert Stack
Takes an image that contains both left and right eye information stacked vertically. You only need to
connect that one image to the orange input of the node. It then creates an image half the height of
the original input, using the bottom half of the original image for the left eye and the top half of the
original image for the right eye. Color encoding takes place using the specified color type and method.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Combiner
NOTE: The Combiner node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The two inputs on the Combiner node are used to connect the two images that get combined in a
stacked stereo image.
— Image 1 Input: The orange input is used to connect the 2D image representing the left eye in the
stereo comp.
— Image 2 Input: The green input is used to connect the 2D image representing the right eye in the
stereo comp.
Left and right eye images are connected into a Combiner node to generate a stacked stereo image.
Controls Tab
To stack the images, the left eye image is connected to the orange input, and the right eye image is
connected to the green input of the node.
Combine
The Combine menu provides three options for how the two images are made into a stacked
stereo image.
— None: No operation will take place. The output image is identical to the left eye input.
— Horiz: Both images will be stacked horizontally, or side-by-side, with the image connected to the
left eye input on the left. This will result in an output image double the width of the input image.
— Vert: Both images will be stacked vertically, or on top of each other, with the image connected
to the left eye input on the bottom. This will result in an output image double the height of the
input image.
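The stacking itself is plain image concatenation, sketched here with NumPy arrays standing in for the eye images (an illustration of the geometry, not Fusion’s internals):

```python
import numpy as np

def combine(left, right, mode):
    """Stack two equal-sized H x W x C eye images.

    'Horiz' doubles the width (left eye on the left); 'Vert' doubles
    the height (left eye on the bottom); 'None' passes the left eye
    through unchanged.
    """
    if mode == "Horiz":
        return np.hstack([left, right])
    if mode == "Vert":
        # Row 0 is the top of the image, so the right eye goes first.
        return np.vstack([right, left])
    return left

left = np.zeros((1080, 1920, 3))
right = np.ones((1080, 1920, 3))
print(combine(left, right, "Horiz").shape)  # (1080, 3840, 3)
print(combine(left, right, "Vert").shape)   # (2160, 1920, 3)
```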
Swap Eyes
Allows you to easily swap the left and right eye input.
Add Metadata
Metadata is carried along with the images and can be added to the existing metadata using this
checkbox. To view Metadata, use the viewer’s SubView menu set to Metadata.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Disparity
NOTE: The Disparity node is available only in Fusion Studio and DaVinci Resolve Studio.
The generated disparity is stored in the output image’s Disparity aux channel, where the left image
contains the left > right disparity, and the right image contains the right > left disparity. Because
disparity works based on matching regions in the left eye to regions in the right eye by comparing
colors and gradients of colors, colors in the two eyes must be as similar as possible. Thus, it is a good
idea to color correct ahead of time. It is also a good idea to crop away any black borders around the
frames, as this confuses the disparity tracking (and also causes problems if you are using the Color
Corrector’s histogram match ability to do the color matching).
In Stack mode, left and right outputs deliver the same image. If the left and right images have a global
vertical offset larger than a few pixels, it can help the disparity tracking algorithm if you vertically align
features in the left/right eyes ahead of time using a Transform node. Small details tend to get lost in
the tracking process when you have a large vertical offset between left/right eyes.
Consider using a SmoothMotion node to smooth your disparity channel. This can help reduce
time-dependent flickering when warping an eye. Also, think about whether you want to remove
lens distortion before computing disparity. If you do not, your Disparity map becomes a combined
Disparity and Lens Distortion map. This can have advantages and disadvantages.
One disadvantage is that if you then do a vertical alignment, you are also removing lens distortion
effects. When trying to reduce computation time, start by adjusting the Proxy and Number of
Iterations sliders.
Inputs
— Left Input: The orange input is used to connect either the left eye image or the stacked image.
— Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
Outputs
Unlike most nodes in Fusion, Disparity has two outputs for the left and right eye.
Left Output: This holds the left eye image with a new disparity channel, or a Stacked Mode image with
a new disparity channel.
Right Output: This holds the right eye image with a new disparity channel. This output is visible only if
Stack Mode is set to Separate.
Left and right eye images are connected into a Disparity node to generate and render out a stereo image
Inspector
Advanced
The Advanced settings section has parameter controls to tune the Disparity map calculations.
The default settings have been chosen to be the best default values from experimentation with
many different shots and should serve as a good standard. In most cases, tweaking of the Advanced
settings is not needed.
Smoothness
This controls the smoothness of the disparity. Higher values help deal with noise, while lower values
bring out more detail.
Edges
This slider is another control for smoothness but applies it based on the color channel. It tends to have
the effect of determining how edges in the disparity follow edges in the color images. When it is set
to a lower value, the disparity becomes smoother and tends to overshoot edges. When it is set to a
higher value, edges in the disparity align more tightly with the edges in the color images, and details
from the color channels start to slip into the disparity, which is not usually desirable.
As a rough guideline, if you are using the disparity to produce a Z channel for post effects like depth
of field, experiment with higher values, but if you are using the disparity to do interpolation, you might
want to keep the values lower.
In general, if the Edges slider is set too high, there can be problems with streaked-out edges when
the disparity is used for interpolation.
Match Weight
This controls how matching is done between neighboring pixels in the left image and neighboring
pixels in the right image. When a lower value is used, large structural color features are matched.
When higher values are used, small sharp variations in the color are matched. Typically, a good value
for this slider is in the [0.7, 0.9] range. Setting this option higher tends to improve the matching results
in the presence of differences due to smoothly varying shadows or local lighting variations between
the left and right images. You should still color match the initial images so they are as similar as
possible; this option tends to help with local variations (e.g., lighting differences due to light passing
through a mirror rig).
Warp Count
Turning down the Warp Count makes the disparity computations faster. In particular, the
computational time depends linearly upon this option. To understand what this option does, you need
to understand that the Disparity algorithm progressively warps the left image until it matches with
the right image. After some point, convergence is reached, and additional warps are just a waste of
computational time. The default value in Fusion is set high enough that convergence should always
be reached. You can tweak this value to speed up the computations, but it is good to watch how the
disparity is degrading in quality at the same time.
Iteration Count
Turning down the Iteration Count makes the disparity computations faster. In particular, the
computational time depends linearly upon this option. Just like adjusting Warp Count, at some point
adjusting this option higher will yield diminishing returns and will not produce significantly better
results. By default, this value is set to something that should converge for all possible shots and can
be tweaked lower fairly often without reducing the disparity’s quality.
Filtering
This menu determines the filtering operations used during flow generation. Catmull-Rom filtering will
produce better results, but at the same time, it increases the computation time steeply.
Stack Mode
This menu determines how the input images are stacked.
When set to Separate, the Right Input and Output will appear, and separate left and right images must
be connected.
Swap Eyes
Enabling this checkbox causes the left and right images to swap.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Disparity To Z
NOTE: The Disparity To Z node is available only in Fusion Studio and DaVinci Resolve Studio.
Optionally, this node can output Z into the RGB channels. Ideally, either a stereo Camera 3D or a
tracked stereo camera is connected into Disparity To Z. However, if no camera is connected, the node
provides artistic controls for determining a Z channel. The depth created by this node can be used for
post effects like fogging or depth of field (DoF).
The Z values produced become less accurate the larger (more negative) they get. The reason is that
disparity approaches a constant value as Z approaches -infinity. So Z = -1000, Z = -10000, and
Z = -100000 may map to D = 142.4563, D = 142.4712, and D = 142.4713. As you can see, there is only
0.0001 in D to distinguish between 10,000 and 100,000 in Z. The maps produced by disparity are not
accurate enough to make distinctions like this.
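This flattening can be checked with a toy hyperbolic model (the constants here are made up for illustration; they are not Fusion’s internal values):

```python
def disparity_from_z(z, convergence=142.47, scale=10.0):
    """Toy model: disparity approaches a constant as z -> -infinity.

    d(z) = convergence + scale / z, so the scale / z term vanishes
    for distant objects, and nearby depths dominate the usable
    disparity range.
    """
    return convergence + scale / z

for z in (-1000, -10000, -100000):
    print(z, round(disparity_from_z(z), 4))
# The three disparities differ only in the 2nd-4th decimal place,
# which is below the accuracy of a computed disparity map.
```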
Inputs
The three inputs on the Disparity To Z node are used to connect the left and right images and a
camera node.
— Left Input: The orange input is used to connect either the left eye image or the stack image.
— Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
— Stereo Camera: The magenta input is used to connect a stereo camera node.
Outputs
Unlike most nodes in Fusion, Disparity To Z has two outputs for the left and right eye.
— Left Output: This holds the left eye image with a new Z channel, or a Stacked Mode image with a
new disparity channel.
— Right Output: This holds the right eye image with a new Z channel. This output is visible only if
Stack Mode is set to Separate.
Inspector
Controls Tab
In addition to outputting Z values in the Z channel, this node can promote the color channels to
float32 and output the Z values into the color channels as {z, z, z, 1}. This option is useful for getting a
quick look at the Z channel.
NOTE: Z values are negative, becoming more negative the further you are from the
camera. The viewers only show 0.0 to 1.0 color, so to visualize other data it has to be
converted via a normalization method to fit in a display 0-1 range. To do this, right-click in
the viewer and choose Options > Show Full Color Range.
Output Z to RGB
Rather than keeping the Z values within the associated aux channel only, they will be copied into the
RGB channels for further modification with any of Fusion’s nodes.
Refine Z
The Enable checkbox refines the depth map based upon the RGB channels. The refinement causes
edges in the flow to align more closely with edges in the color channels. The downside is that
unwanted details in the color channels start to show up in the flow. You may want to experiment with
using this option to soften out harsh edges for Z-channel post effects like depth of field or fogging.
Strength
Increasing this slider does two things. It smooths out the depth in constant color regions and moves
edges in the Z channel to correlate with edges in the RGB channels.
Increasing the refinement has the undesirable effect of causing texture in the color channel to show
up in the Z channel. You will want to find a balance between the two.
Radius
This is the radius of the smoothing algorithm.
Stack Mode
This menu determines how the input images are stacked.
When set to Separate, the Right Input and Output will appear, and separate left and right images must
be connected.
Swap Eyes
Enabling this checkbox causes left and right images to be swapped.
Camera Tab
If you need correct real-world Z values because you are trying to match some effect to an existing
scene, you should use the External Camera options to get precise Z values back. If any Z-buffer will
suffice and you are not that particular about the exact details of how it is offset and scaled, or if there
is no camera available, the Artistic option might be helpful.
— External Mode: An input is available on the node to connect an existing stereo Camera 3D. This
can either be a single stereo Camera 3D (i.e., its eye separation is set to non-zero), or a pair of
(tracked) Camera 3Ds connected via the Camera 3D > Stereo > Right Camera input.
— Artistic Mode: If you do not have a camera, you can adjust these controls to produce an “artistic”
Z channel whose values will not be physically correct but will still be useful. To reconstruct the
Disparity > Z Curve, pick (D, Z) values for a point in the foreground and a point in the background.
Foreground Depth
This is the depth to which Foreground Disparity will be mapped. Think of this as the depth of the
nearest object. Note that values here are positive depth.
Background Depth
This is the depth to which Background Disparity will be mapped. Think of this as the depth of the most
distant object.
Falloff
Falloff controls the shape of the depth curve between the requested foreground and background
depths. When set to Hyperbolic, the disparity-depth curve behaves roughly like depth = constant/
disparity. When set to Linear, the curve behaves like depth = constant * disparity. Hyperbolic tends
to emphasize Z features in the foreground, while linear gives foreground/background features in the
Z channel equal weighting.
Unless there’s a specific reason, choose Hyperbolic, as it is more physically accurate, while Linear does
not correspond to nature and is purely for artistic effect.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Global Align
NOTE: The Global Align node is available only in Fusion Studio and DaVinci Resolve Studio.
Global Align comes in handy at the beginning of the node chain to visually correct major differences
between the left and right eye before calculating Disparity.
Manual correction of large discrepancies between left and right, as well as applying an initial color
matching, helps Disparity generate more accurate results.
Inputs
The two inputs on the Global Align node are used to connect the left and right images.
— Left Input: The orange input is used to connect either the left eye image or the stack image.
— Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
Outputs
Unlike most nodes in Fusion, Global Align has two outputs for the left and right eye.
— Left Output: This outputs the newly aligned left eye image.
— Right Output: This outputs the newly aligned right eye image.
Inspector
Controls Tab
The Controls tab includes translation and rotation controls to align the stereo images manually.
Translation X and Y
— Balance: Determines how the global offset is applied to the stereo footage.
— None: No translation is applied.
— Left Only: The left eye is shifted, while the right eye remains unaltered.
— Right Only: The right eye is shifted, while the left eye remains unaltered.
— Split Both: Left and right eyes are shifted in opposite directions.
Angle
This dial adjusts the angle of the rotation. Keep in mind that the result depends on the Balance
settings. If only one eye is rotated by, for example, 10 degrees, the full 10-degree rotation is applied
to that eye.
When applying rotation in Split mode, one eye receives a -5 degree rotation and the other eye a
+5 degree rotation.
Visualization
This control allows for different color encodings of the left and right eye to conveniently examine the
results of the above controls without needing to add an extra Anaglyph or Combiner node.
Stack Mode
Determines how the input images are stacked.
When set to Separate, the right input and output will appear, and separate left and right images
must be connected.
Swap Eyes
In Stack Mode, the left and right images of the stereo pair can be swapped.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
New Eye
NOTE: The New Eye node is available only in Fusion Studio and DaVinci Resolve Studio.
You can map the left eye onto the right eye and replace it. This can be helpful when removing errors
from certain areas of the frame.
New Eye does not interpolate the aux channels but instead destroys them. In particular, the disparity
channels are consumed/destroyed. Add another Disparity node after the New Eye if you want to
generate Disparity for the realigned footage.
Inputs
The two inputs on the New Eye node are used to connect the left and right images.
— Left Input: The orange input is used to connect either the left eye image or the stack image.
— Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
Outputs
Unlike most nodes in Fusion, New Eye has two outputs for the left and right eye.
— Left Output: This outputs the left eye image with a new disparity channel, or a Stacked Mode
image with a new disparity channel.
— Right Output: This outputs the right eye image with a new disparity channel. This output is visible
only if Stack Mode is set to Separate.
A New Eye node creates a new stereo image using embedded disparity
Inspector
Controls Tab
The Controls tab is divided into identical parameters for the left eye and right eye. The parameters are
used to select which eye to recreate and the methods used for the interpolation.
Enable
The Enable checkbox allows you to activate the left or right eye independently. New Eye replaces the
enabled eye with an interpolated eye. For example, if the left eye is your “master” eye and you are
recreating the right eye, you would disable the left eye and enable the right eye.
XY Interpolation Factor
Interpolation determines where the interpolated frame is positioned, relative to the two source
frames: A slider position of -1.0 outputs the frame Left and a slider position of 1.0 outputs the frame
Right. A slider position of 0.0 outputs a result that is halfway between Left and Right.
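The slider-to-blend mapping can be sketched as follows (an illustration of the parameterization only, not the disparity warp New Eye performs):

```python
def interpolation_weight(factor):
    """Map the -1..1 slider to a 0..1 blend weight toward the Right
    frame: -1.0 -> pure Left, 0.0 -> halfway, 1.0 -> pure Right."""
    return (factor + 1.0) / 2.0

print(interpolation_weight(0.0))  # 0.5
```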
Depth Ordering
The Depth Ordering is used to determine which parts of the image should be rendered on top. When
warping images, there is often overlap. When the image overlaps itself, there are two options for
which values should be drawn on top.
— Largest Disparity On Top: The larger disparity values will be drawn on top in the
overlapping image sections.
— Smallest Disparity On Top: The smaller disparity values will be drawn on top in the
overlapping image sections.
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. Clamp Edges will cause a stretching artifact near the edges of the frame
that is especially visible with objects moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use Clamp Edges only to correct small gaps around the
edges of an interpolated frame.
Softness
Helps to reduce the stretchy artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this
can lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
It’s good to experiment with various options to see which gives the best effect. Using both the left
and right eyes can help fill in gaps on the left/right side of images. Using both the Forward/Backward
Disparity can give a doubling-up effect in places where the disparities disagree with each other.
— Left Forward: Takes the Left frame and uses the Forward Disparity to interpolate the new frame.
— Right Forward: Takes the Right frame and uses the Forward Disparity to interpolate the new frame.
— Left Backward: Takes the Left frame and uses the Back Disparity to interpolate the new frame.
— Right Backward: Takes the Right frame and uses the Back Disparity to interpolate the new frame.
Splitter [Spl]
NOTE: The Splitter node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The Splitter node has a single image input.
— Left Input: The orange input is used to connect a stacked stereo image.
Outputs
Unlike most nodes in Fusion, the Splitter node has two outputs for the left and right eye.
A Splitter node creates a left and right image from a stacked stereo image
The Splitter Controls tab
Controls Tab
The Controls tab is used to define the type of stacked image connected to the node’s input.
Split
The Split menu contains three options for determining the orientation of the stacked input image.
— None: No operation takes place. The output image on both outputs is identical to the input image.
— Horiz: The node expects a horizontally stacked image. This will result in two output images, each
being half the width of the input image.
— Vert: The node expects a vertically stacked image. This will result in two output images, each
being half the height of the input image.
Swap Eyes
Allows you to easily swap the left and right eye outputs.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Stereo Align
NOTE: The Stereo Align node is available only in Fusion Studio and DaVinci Resolve Studio.
By combining these operations in one node, you can execute them using only a single image
resampling. In essence, this node can be thought of as applying scales and translation to the
disparities and then using the modified disparities to interpolate between the views.
NOTE: Changing the eye separation can cause holes to appear, and it may not be
possible to fill them since the information needed may not be in either image. Even if the
information is there, the disparity may have mismatched the holes. You may need to fill the
holes manually. This node modifies only the RGBA channels.
TIP: Stereo Align does not interpolate the aux channels but instead destroys them.
In particular, the disparity channels are consumed/destroyed. Add another Disparity node
after the StereoAlign if you want to generate Disparity for the realigned footage.
Inputs
The two inputs on the Stereo Align node are used to connect the left and right images.
— Left Input: The orange input is used to connect either the left eye image or the stack image.
— Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
Outputs
Unlike most nodes in Fusion, Stereo Align has two outputs for the left and right eye.
— Left Output: This outputs the left eye image with a new disparity channel, or a Stacked Mode
image with a new disparity channel.
— Right Output: This outputs the right eye image with a new disparity channel. This output is visible
only if Stack Mode is set to Separate.
Inspector
Controls Tab
Vertical Alignment
This option determines how the vertical alignment is split between two eyes. Usually, the left eye is
declared inviolate, and the right eye is aligned to it to avoid resampling artifacts.
When doing per pixel vertical alignment, it may be helpful to roughly pre-align the images by a global
Y-shift before disparity computation because the disparity generation algorithm can have problems
resolving small objects that move large distances.
Apply to
— Right: Only the right eye is adjusted.
— Left: Only the left eye is adjusted.
— Both: The vertical alignment is split evenly between the left and right eyes.
Mode
— Global: The eyes are simply translated up or down by the Y-shift to match up.
— Per Pixel: The eyes are warped pixel-by-pixel using the disparity to vertically align.
Keep in mind that this can introduce sampling artifacts and edge artifacts.
Y-shift
Y-shift is available only when the Mode menu is set to Global. You can either adjust the Y-shift
manually to get a match or drag the Sample button into the viewer, which picks from the disparity
channel of the left eye. Also remember, if you use this node to modify disparity, you can’t use the
Sample button while viewing the node’s output.
Convergence Point
The Convergence Point section is used as a global X-translation of L/R images.
Apply to
This menu determines which eyes are affected by convergence. You can choose to apply the
convergence to the left eye, right eye, or split between the two. In most cases, this will be set to Split.
If you set the eyes to Split, then the convergence will be shared 50-50 between both eyes. Sharing the
convergence between both eyes means you get half the shift in each eye, which in turn means smaller
holes and artifacts that need to be fixed later. The tradeoff is that you’ve resampled both eyes rather
than keeping one eye as a pure reference master.
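The 50-50 sharing can be sketched as follows (sign convention and helper are assumptions of ours for illustration; only the halving behavior is taken from the text above):

```python
def convergence_shifts(x_shift, apply_to="Split"):
    """Distribute a convergence X-shift between the eyes.

    Split shares the shift 50-50: each eye moves half the distance,
    in opposite directions, so each eye's holes and artifacts are
    smaller, at the cost of resampling both eyes.
    """
    if apply_to == "Left":
        return (-x_shift, 0.0)
    if apply_to == "Right":
        return (0.0, x_shift)
    return (-x_shift / 2.0, x_shift / 2.0)

print(convergence_shifts(10.0))  # (-5.0, 5.0)
```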
X-shift
You can either use the slider to adjust the X-shift manually to get a match or use the Sample button to
pick from the disparity channels for easy point-to-feature alignment.
Snap
You can snap the global shift to whole pixels using this option. In this mode, there is no resampling of
the image, but rather a simple shift is done so there will be no softening or image degradation.
Separation
This is a scale factor for eye separation. It has the same effect as the Eye Separation option in the
Camera 3D node.
Unlike the Split option for vertical alignment, which splits the alignment effect 50-50 between both
eyes, the Both option applies the full eye-separation adjustment to each eye. If you are changing eye
separation, it can be a good idea to enable per-pixel vertical alignment, or the results of interpolating
from both frames can double up.
Depth Ordering
The Depth Ordering is used to determine which parts of the image should be rendered on top. When
warping images, there is often overlap. When the image overlaps itself, there are two options for
which values should be drawn on top.
— Largest Disparity On Top: The larger disparity values will be drawn on top in the overlapping
image sections.
— Smallest Disparity On Top: The smaller disparity values will be drawn on top in the overlapping
image sections.
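The overlap rule can be illustrated with a simplified one-dimensional forward warp. This is a sketch only: Fusion's actual warp is subpixel and resamples the image, whereas this version rounds each pixel to a whole destination position:

```python
def warp_scanline(pixels, disparity, largest_on_top=True):
    """Forward-warp one scanline by per-pixel disparity (simplified sketch).

    Each source pixel lands at round(x + disparity[x]). Where the warped
    image overlaps itself, the Depth Ordering rule decides which value is
    drawn on top: the largest disparity or the smallest.
    """
    width = len(pixels)
    out = [None] * width       # warped colors (None = transparent gap)
    out_disp = [None] * width  # disparity that produced each output pixel
    for x, (color, d) in enumerate(zip(pixels, disparity)):
        tx = int(round(x + d))
        if not (0 <= tx < width):
            continue  # pixel warps outside the frame
        wins = (
            out_disp[tx] is None
            or (largest_on_top and d > out_disp[tx])
            or (not largest_on_top and d < out_disp[tx])
        )
        if wins:
            out[tx] = color
            out_disp[tx] = d
    return out
```

Note that positions no source pixel lands on remain transparent gaps, which is exactly what the Clamp Edges option described next tries to fill near the frame edges.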
Clamp Edges
Under certain circumstances, this option can remove the transparent gaps that may appear on the
edges of interpolated frames. Clamp Edges will cause a stretching artifact near the edges of the frame
that is especially visible with objects moving through it or when the camera is moving.
Because of these artifacts, it is a good idea to use Clamp Edges only to correct small gaps around the
edges of an interpolated frame.
Edge Softness
Helps to reduce the stretchy artifacts that might be introduced by Clamp Edges.
If you have more than one of the Source Frame and Warp Direction checkboxes turned on, this
can lead to doubling up of the stretching effect near the edges. In this case, you’ll want to keep the
softness rather small at around 0.01. If you have only one checkbox enabled, you can use a larger
softness at around 0.03.
Source Frame and Warp Direction
It’s good to experiment with various options to see which gives the best effect. Using both the left
and right eyes can help fill in gaps on the left/right side of images. Using both the Forward/Backward
Disparity can give a doubling-up effect in places where the disparities disagree with each other.
— Left Forward: Takes the Left frame and uses the Forward Disparity to interpolate the new frame.
— Right Forward: Takes the Right frame and uses the Forward Disparity to
interpolate the new frame.
— Left Backward: Takes the Left frame and uses the Back Disparity to interpolate the new frame.
— Right Backward: Takes the Right frame and uses the Back Disparity to interpolate the new frame.
Stack Mode
In Stack Mode, L and R outputs will output the same image.
If High Quality is off, the interpolations are done using nearest-neighbor sampling, leading to a more
“noisy” result. To ensure High Quality is enabled, right-click under the viewers, near the transport
controls, and choose High Quality from the pop-up menu.
Swap Eyes
Allows you to easily swap the left and right eye outputs.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Z To Disparity
NOTE: The Z To Disparity node is available only in Fusion Studio and DaVinci Resolve Studio.
Inputs
The three inputs on the Z To Disparity node are used to connect the left and right images and a
camera node.
— Left Input: The orange input is used to connect either the left eye image or the stack image.
— Right Input: The green input is used to connect the right eye image. This input is available only
when the Stack Mode menu is set to Separate.
— Stereo Camera: The magenta input is used to connect a stereo perspective camera, which may
be either a Camera 3D with eye separation, or a tracked L/R Camera 3D.
Outputs
Unlike most nodes in Fusion, Z To Disparity has two outputs for the left and right eye.
— Left Output: This outputs the left eye image containing a new disparity channel, or a Stacked
Mode image with a new disparity channel.
— Right Output: This outputs the right eye image with a new disparity channel. This output is visible
only if Stack Mode is set to Separate.
Inspector
Controls Tab
The Controls tab includes settings that refine the conversion algorithm.
When activated, this option will automatically promote the RGBA color channels to float32. This option
is useful for a quick look to see what the disparity channel looks like.
Refine Disparity
This refines the Disparity map based on the RGB channels.
Strength
Increasing this slider does two things. It smooths out the depth in constant color regions and moves
edges in the Z channel to correlate with edges in the RGB channels. Increasing the refinement has the
undesirable effect of causing texture in the color channel to show up in the Z channel. You will want to
find a balance between the two.
Radius
This is the pixel-radius of the smoothing algorithm.
If High Quality is off, the interpolations are done using nearest-neighbor sampling, leading to a more
“noisy” result.
Swap Eyes
This allows you to easily swap the left and right eye outputs.
Camera Tab
The Camera tab includes settings for selecting a camera and setting its conversion point if necessary.
Camera Mode
If you need correct real-world disparity values because you are trying to match some effect to an
existing scene, you should use the External setting to get precise disparity values back. When External
is selected, a magenta camera input is available on the node to connect an existing stereo Camera 3D
node, whose settings are used to determine the disparity values.
If you just want any disparity and do not particularly care about the exact details of how it is offset and
scaled, or if there is no camera available, then the Artistic setting might be helpful.
Camera
If you connect a Merge 3D node that contains multiple cameras to the camera input, the Camera
menu allows you to select the camera to use.
If you do not have a camera, you can adjust the artistic controls to produce a custom disparity channel
whose values will not be physically correct but will be good enough for compositing hacks. There are
two controls to adjust:
Objects that are closer appear to pop out of the screen, and objects that are further appear behind
the screen.
Settings Tab
The Settings tab in the Inspector is also duplicated in other Stereo nodes. These common controls are
described in detail in the following “The Common Controls” section.
Settings Tab
For example, if the Red button on a Blur tool is deselected, the blur will first be applied to the image,
and then the red channel from the original input will be copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
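The operation amounts to a per-pixel multiply. A minimal sketch, with illustrative function and parameter names:

```python
def multiply_by_mask(rgb, mask):
    """Multiply each pixel's RGB by the mask value (sketch of Multiply by Mask).

    Pixels not included in the mask (mask = 0) become black; values in
    between scale the color linearly.
    """
    return [tuple(c * m for c in px) for px, m in zip(rgb, mask)]
```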
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
"Understanding Image Channels," in the Fusion Reference Manual.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Tracker Nodes
This chapter details the Tracker nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Tracker [Tra]�������������������������������������������������������������������������������������������������������������� 1518
For more information, see Chapter 22, "Using the Tracker Node," in the Fusion Reference Manual.
Inputs
The Tracker has three inputs:
— Background: The orange image input accepts the main 2D image to be tracked.
— Foreground: The optional green foreground accepts a 2D image to be merged on top of the
background as a corner pin or match move.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the tracking
to certain areas.
The Tracker can also work as a replacement for a Merge tool in match-moving setups. Below, the
Tracker tracks the image connected to the orange background input and applies the tracking data
to the image connected to the foreground input. The same foreground-over-background merge
capabilities are available in the Tracker node.
Search Rectangle
Whenever the mouse moves over the pattern rectangle, a second rectangle with a dashed outline
appears. The dashed outline represents the search area, which determines how far away from the
current pattern the Tracker looks in the next frame. The search area should always be larger than the
pattern, and it should be large enough to encompass the largest frame-to-frame movement in the
scene. Faster moving objects require larger search areas, and slower moving objects can get away
with smaller search areas. The larger the search area, the longer it takes to track, so try not to make
the search area larger than necessary. If the selected Tracker has a custom name, the name of that
Tracker is displayed as a label at the bottom right of the search area rectangle.
The Tracker can be employed in two forms: as a node in the Node Editor or as a modifier
attached to a parameter. When used as a node in the Node Editor, the image tracked
comes from the input to the Tracker node. When used as a modifier, controls appear in the
Modifiers tab for the node with the connected control. Tracker Modifiers can track only
one pattern, but the image source can come from anywhere in the composition. Use this
technique when tracking a quick position for an element.
Inspector
Trackers Tab
The Trackers tab contains controls for creating, positioning, and initiating tracking operations. After
tracking, offset controls are used to improve the alignment of the image following the track.
— Track Reverse: Clicking this button causes all active trackers to begin tracking, starting at the end
of the render range and moving backward through time until the beginning of the render range.
— Track Reverse From Current Time: Clicking this button causes all active trackers to begin
tracking, starting at the current frame and moving backward through time until the beginning of
the render range.
— Stop Tracking: Clicking this button or pressing ESC stops the tracking process immediately.
This button is active only when tracking is in process.
— Track Forward From Current Time: Clicking this button causes all active trackers to begin
tracking, starting at the current frame and moving forward through time until the end of
the render range.
— Track Forward: Clicking this button causes all active trackers to begin tracking, starting at
the first frame in the render range and moving forward through time until the end of the
render range.
Frames Per Path Point
This slider determines how often a keyframe is set on the tracked path. Increasing the value causes
the tracked path to be less accurate. This may be desirable if the track is returning fluctuating results,
but under normal circumstances, leave this control at its default value.
TIP: If the project is field rendered, a value of 1 sets a keyframe on every field. Since the
Tracker is extremely accurate, this will result in a slight up-and-down jittering due to the
position of the fields. For better results when tracking interlaced footage in Field mode, set
the Frames Per Path Point slider to a value of 2, which results in one keyframe per frame of
your footage.
Adaptive Mode
Fusion is capable of reacquiring the tracked pattern, as needed, to help with complex tracks. This
menu determines the Adaptive tracking method.
— None: When set to None, the tracker searches for the original pattern in each frame.
— Every Frame: When set to Every Frame, the tracker reacquires the pattern every frame. This helps
the Tracker compensate for gradual changes in profile and lighting over time.
Path Center
This menu determines how the Tracker behaves when repositioning a pattern. This menu is
particularly useful when a pattern leaves the frame or changes so significantly that it can no longer
be tracked.
— Pattern Center: When Pattern Center is selected in the menu, the tracked path continues from
the center of the new pattern. This is appropriate when replacing an existing path entirely.
— Track Center (append): When Track Center (append) is selected in the menu, the path tracked by
a new pattern will be appended to the existing path. The path created is automatically offset by
the required amount. This setting is used to set a new tracking pattern when the original pattern
moves out of the frame or gets obscured by other objects. This technique works best if the new
pattern is located close to the position of the original pattern to avoid any problems with parallax
or lens distortion.
Tracker List
A Tracker node can track multiple patterns. Each tracker pattern created in the current Tracker node is
managed in the Tracker List.
Tracker List
The Tracker List shows the names of all trackers created.
— Each tracker pattern appears in the list by name, next to a small checkbox. Clicking the name of
the tracker pattern will select that tracker pattern.
— The controls below the list will change to affect that tracker pattern only. Click a selected tracker
pattern once to rename the tracker pattern to something more descriptive.
— Clicking the checkbox changes the state of the tracker.
Tracker States
— Enabled (black checkbox): An enabled pattern will re-track each time the track is initiated.
Its path data is available for use by other nodes, and the data is available for Stabilization and
Corner Positioning.
— Suspended (white circle): A Suspended pattern does not re-track when the track is initiated.
The data is locked to prevent additional changes. The data from the path is still available for
other nodes, and the data is available for advanced Tracking modes like Stabilization and
Corner Positioning.
— Disabled (clear): A Disabled pattern does not create a path when tracking is initialized, and its
data is not available to other nodes or for advanced Tracking operations like Stabilization and
Corner Positioning.
Add/Delete Tracker
Use these buttons to add or delete trackers from your Tracker List.
Show
This menu selects which controls are displayed below the Tracker List. It does not affect the
operation of the tracker; it only affects the lower half of the Inspector interface.
— Selected Tracker Details: When Selected Tracker Details is chosen, the controls displayed
pertain only to the currently selected tracker. You will have access to the Pattern window and the
Offset sliders.
— All Trackers: When All Trackers is selected, the pattern window for each of the added tracking
patterns is displayed simultaneously below the Tracker List.
As the onscreen controls move while tracking, the display in the leftmost window updates to show
the pattern. As the pattern moves, the vertical bars immediately to the right of the image indicate the
clarity and contrast of the image channels.
The channel or channels with the best clarity are automatically selected for tracking. These channels
have a gray background in the vertical bar representing that channel. You can rely on this automatic
selection or override it by clicking the button beneath the channel you want to track.
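The idea of picking the highest-contrast channel can be sketched as follows. Fusion's actual clarity measure is an internal detail, so the range-based contrast below is only an assumption for illustration:

```python
def pick_tracking_channel(channels):
    """Choose the channel to track by contrast (illustrative stand-in).

    `channels` maps channel names to lists of pixel values. Contrast is
    approximated here by the value range; Fusion's real measure is not
    documented, so this only sketches the idea.
    """
    return max(channels, key=lambda name: max(channels[name]) - min(channels[name]))
```

This also illustrates the TIP below: a noisy channel can have a large value range and still win the contrast comparison, which is why checking the channels visually first is worthwhile.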
Under normal circumstances, the channel selected shows in the pattern display. If the selected
channel is blue, then a grayscale representation of the blue channel for the pattern appears.
The image is represented in color only when you activate the Full Color button.
TIP: Because Fusion looks for the channel with the highest contrast automatically, you
might end up tracking a noisy but high-contrast channel. Before tracking, it’s always a good
idea to zoom in to your footage and check the RGB channels individually.
As the tracking occurs, the pattern from each frame accumulates into a Flipbook, which can be played
back in the pattern window after tracking by using the transport controls at the bottom of the window.
While the track is progressing, the vertical bar immediately to the right of the pattern shows how
confident Fusion is that the current pattern matches the initially selected pattern. A green bar
indicates a high degree of confidence that the current pattern matches the original, a yellow bar
indicates less certainty, and a red bar indicates extreme uncertainty.
After tracking, the pattern display shows a small Flipbook of the track for that pattern to help identify
problem frames for the track.
Tracker Sizes
In addition to onscreen controls, each tracker has a set of sizing parameters that let you adjust the
pattern and search box.
— Pattern Width and Height: Use these controls to adjust the width and height of the selected
tracker pattern manually. The size of the tracker pattern can also be adjusted in the viewer, which
is the normal method, but small adjustments are often easier to accomplish with the precision of
manual controls.
— Search Width and Height: The search area defines how far Fusion will look in the image from
frame to frame to reacquire the pattern during tracking. As with the Pattern Width and Height, the
search area can be adjusted in the viewer, but you may want to make small adjustments manually
using these controls.
Tracked Center
This positional control indicates the position of the tracker’s center. To remove a previously
tracked path from a tracker pattern, right-click this parameter and select Remove Path from the
contextual menu.
X and Y Offset
The Offset controls help to create a track for objects that may not provide well-defined or reliable
patterns. They permit the tracking of something close to the intended object instead.
Use these Offsets to adjust the desired position of the path, while the tracker pattern rectangle is
positioned over the actual tracking location.
The Offset can also be adjusted directly in the viewer by activating the Offsets button in the
viewer toolbar.
Operation Tab
While the Trackers tab controls let you customize how the Tracker node analyzes motion to create
motion paths, the Operation tab puts the analyzed motion data to use, performing image transforms
of various kinds.
The Tracker node is capable of performing a wide variety of functions, from match moving an object
into a moving scene to smoothing out a shaky camera movement or replacing the content of a sign.
Use the options and buttons in the Operation tab to select the function performed by the Tracker node.
Operation Menu
The Operation menu contains four functions performed by the Tracker. The remaining controls in this
tab fine-tune the result of this selection.
— None: The Tracker performs no additional operation on the image beyond merely locating and
tracking the chosen pattern. This is the default mode, used to create a path that will then drive
another parameter on another node.
— Match Move: When only the orange background input is connected, this mode stabilizes the
image. When a foreground image is connected to the green foreground input, the foreground
is transformed to match the position, rotation, and scale of the tracking patterns. Stabilizing and
match move require a minimum of one tracking pattern to determine position, and two or more to
determine scaling and rotation.
Merge
The Merge control determines what is done (if anything) with the image provided to the green
Foreground input of the Tracker. This menu appears when the operation is set to anything other
than None.
— BG Only: The foreground input is ignored; only the background is affected. This is used primarily
when stabilizing the background image.
— FG Only: The foreground input is transformed to match the movement in the background, and
this transformed image is passed through the Tracker’s output. This Merge technique is used
when match moving one layer’s motion to another layer’s motion.
— FG Over BG: The foreground image is merged over the background image, using the Merge
method described by the Apply Mode control that appears.
— BG Over FG: The background is merged over the foreground. This technique is often used when
tracking a layer with an Alpha channel so that a more static background can be applied behind it.
— Apply Modes: The Apply Mode setting determines the math used when blending or combining
the foreground and background pixels.
— Normal: The default merge mode uses the foreground’s alpha channel as a mask to
determine which pixels are transparent and which are not. When this is active, another menu
shows possible operations, including Over, In, Held Out, Atop, and XOr.
— Screen: Screen merges the images based on a multiplication of their color values. The alpha
channel is ignored, and layer order becomes irrelevant. The resulting color is always lighter.
Screening with black leaves the color unchanged, whereas screening with white will always
produce white. This effect creates a similar look to projecting several film frames onto the
same surface. When this is active, another menu shows possible operations, including Over,
In, Held Out, Atop, and XOr.
— Dissolve: Dissolve mixes two image sequences together. It uses a calculated average of the
two images to perform the mixture.
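The Screen and Dissolve math can be written per channel as follows. This is a sketch assuming channel values are normalized 0–1 floats:

```python
def screen(fg, bg):
    """Screen: multiply the complements; the result is never darker
    than either input, so screening with black leaves bg unchanged
    and screening with white always yields white."""
    return 1.0 - (1.0 - fg) * (1.0 - bg)

def dissolve(fg, bg, mix=0.5):
    """Dissolve: a plain weighted average of the two inputs."""
    return fg * mix + bg * (1.0 - mix)
```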
NOTE: For an excellent description of the math underlying the Operation modes, read
“Compositing Digital Images,” Porter, T., and T. Duff, SIGGRAPH 84 proceedings, pages
253-259. Essentially, the math is as described below.
TIP: Some modes not listed in the Operator drop-down menu (Under, In, Held In,
Below) are easily obtained by swapping the foreground and background inputs and
choosing a corresponding mode.
The formula used to combine pixels in the merge is always fg * x + bg * y. The different operations
determine exactly what x and y are, as shown in the description for each mode.
— Over: The Over mode adds the foreground layer to the background layer by replacing the
pixels in the background with the pixels from the foreground wherever the foreground’s alpha
channel is greater than 0.
x = 1, y = 1-[foreground Alpha]
— In: The In mode multiplies the alpha channel of the background input against the pixels in
the foreground. The color channels of the foreground input are ignored. Only pixels from the
foreground are seen in the final output. This essentially clips the foreground using the mask
from the background.
x = [background Alpha], y = 0
— Held Out: Held Out is essentially the opposite of the In operation. The pixels in the
foreground image are multiplied against the inverted alpha channel of the background image.
You can accomplish exactly the same result using the In operation and a Matte Control node
to invert the matte channel of the background image.
x = 1-[background Alpha], y = 0
— ATop: ATop places the foreground over the background only where the background has
a matte.
x = [background Alpha], y = 1-[foreground Alpha]
— XOr: XOr combines the foreground with the background wherever either the foreground or
the background have a matte, but never where both have a matte.
x = 1-[background Alpha], y = 1-[foreground Alpha]
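The per-operator x and y weights can be collected into a single per-pixel function. This is a sketch assuming premultiplied color values in the 0–1 range; the ATop and XOr weights follow the standard Porter-Duff definitions:

```python
def merge(fg, fg_a, bg, bg_a, operator="Over"):
    """Combine two premultiplied pixels as fg * x + bg * y (sketch).

    The operator only selects the x and y weights; the combining
    formula itself never changes.
    """
    weights = {
        "Over": (1.0, 1.0 - fg_a),
        "In": (bg_a, 0.0),
        "Held Out": (1.0 - bg_a, 0.0),
        "ATop": (bg_a, 1.0 - fg_a),
        "XOr": (1.0 - bg_a, 1.0 - fg_a),
    }
    x, y = weights[operator]
    return fg * x + bg * y
```

For example, with both inputs fully opaque, Over simply returns the foreground value, while XOr returns zero because neither layer is visible where both have a matte.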
Additive/Subtractive
In most software applications, you will find the Additive/Subtractive option displayed as a simple
checkbox. Fusion lets you blend between the Additive and Subtractive versions of the merge
operation, which is occasionally useful for dealing with problem composites with edges that are
calling attention to themselves as too bright or too dark.
For example, using a Subtractive setting on a premultiplied image may result in darker
edges. Using an Additive setting with a non-premultiplied image may result in lightening
the edges. By blending between Additive and Subtractive, you can tweak the edge
brightness to be just right for your situation.
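The blend can be sketched for a single Over composite. The slider orientation, with 1.0 as fully Additive, is an assumption for this sketch:

```python
def over_blend(fg, fg_a, bg, additive=1.0):
    """Over composite blending between Additive and Subtractive (sketch).

    additive=1.0 treats the foreground as premultiplied:   fg + bg * (1 - a)
    additive=0.0 treats it as straight (non-premultiplied): fg * a + bg * (1 - a)
    Values in between interpolate the foreground weight, which is what
    lets you fine-tune edge brightness.
    """
    fg_weight = additive + (1.0 - additive) * fg_a
    return fg * fg_weight + bg * (1.0 - fg_a)
```

With a premultiplied edge pixel (fg = 0.5, alpha = 0.5) over a white background, the Subtractive end yields 0.75 (a darker edge) while the Additive end yields 1.0, matching the edge behavior described above.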
Mapping Type
The Mapping Type control appears only in the Corner Positioning mode. There are two options
in the menu:
— Bi-Linear: The first method is Bi-Linear, where the foreground image is mapped into the
background without any attempt to correct for perspective distortion. This is identical to how
previous versions of Fusion operated.
— Perspective: The foreground image is mapped into the background taking perspective distortion
into account. This is the preferred setting since it maps better to the real world than the older
Bi‑Linear setting.
Stabilize Settings
The Tracker node automatically outputs several steady and unsteady position outputs to which other
controls in the Node Editor can be connected. The Stable Position output provides X and Y coordinates
to match or reverse motion in a sequence. These controls are available even when the operation is not
set to Match Move, since the Stable Position output is always available for connection to other nodes.
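The way a tracked position cancels motion can be sketched as follows. This is illustrative only; the function and data layout are not part of Fusion's API:

```python
def stabilize_offsets(tracked_path, reference_index=0):
    """Per-frame translation that returns each frame to the reference (sketch).

    Subtracting each tracked (x, y) position from its position on the
    reference frame gives the offset a transform would need to cancel
    the motion, which is the role of the Stable Position output.
    """
    rx, ry = tracked_path[reference_index]
    return [(rx - x, ry - y) for (x, y) in tracked_path]
```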
Pivot Type
The Pivot type menu determines how the anchor point for rotation is selected.
Reference
The Reference mode determines the “snapshot frame” based on the frame where the pattern is first
selected. All Stabilization is intended to return the image back to that reference.
— End: The Snapshot Frame is determined to be the last frame in the tracked path. All Stabilization is
intended to return the image back to that reference.
TIP: By default, the Tracker displays a single displacement path of the tracked data in
the Spline Editor. To view X and Y paths of the tracked points in the Spline Editor, go to
Preferences > Globals > Splines.
Enlargement Scale
The zoom factor that is used when positioning the pattern rectangle when the above
option is activated.
TIP: The outputs of a tracker (seen in the Connect to… menu) can also be used by scripts.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Tracking nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
For more information on using the Planar Tracker, see Chapter 23, "Planar Tracking," in the Fusion
Reference Manual.
TIP: Part of using a Planar Tracker is also knowing when to give up and fall back to using
Fusion’s Tracker node or to manual keyframing. Some shots are simply not trackable,
or the resulting track suffers from too much jitter or drift. The Planar Tracker is a useful
time-saving node in the artist’s toolbox, but while it may track most shots, it is not a
100% solution.
— The point trackers no longer appear in the viewer when a comp containing a Planar Tracker node
is saved and reloaded.
— Tracking may not be resumed after a comp containing a Planar Tracker node has been saved and
reloaded. In particular, this also applies to auto saves. For this reason, it is good to complete all
planar tracking within one session.
— The size of composition files is kept reasonable (in some situations, a Planar Tracker can produce
hundreds of megabytes of temporary tracking data).
— Saving and loading of compositions is faster and more interactive.
1 Remove lens distortion: The more lens distortion in the footage, the more the resulting track will
slide and wobble.
2 Connect footage: Connect a Loader or MediaIn node that contains a planar surface to the orange
background input and view the Planar Tracker node in a viewer.
3 Select a reference frame: Move to a frame where the planar surface to be tracked is not
occluded and click the Set button to set this as a reference frame.
4 Choose the pattern: In the viewer, make sure the onscreen controls are visible, and draw
a polygon around the planar surface you want to track. This is called the “pattern.” In most
cases, this will probably be a rectangle, but an arbitrary closed polygon can be used. The pixels
enclosed by this region will serve as the pattern that will be searched for on other frames. Note
that it is important that the pattern is drawn on the reference frame. Do not confuse the pattern
with the region to corner pin (which always has four corners and is separately specified in
Corner Pin mode).
Inspector
Controls Tab
The Controls tab contains controls for determining how the Planar Tracker will be used, setting the
reference frame and initiating the track.
Operation Mode
The Operation Mode menu selects the purpose of the Planar Tracker node. The Planar Tracker has
four modes of operation:
— Track: Used to isolate a planar surface and track its movement over time. Then, you can create a
Planar Transform node that uses this data to match move another clip in various ways.
The last three modes (Steady, Corner Pin, and Stabilize) use the tracking data produced in Track mode.
NOTE: The operations cannot be combined. For example, Corner Pin and Stabilize cannot
be done at the same time, nor can a track be performed while in Corner Pin mode.
Reference Time
The Reference Time determines the frame where the pattern is outlined. It is also the time from which
tracking begins. The reference frame cannot be changed once it has been set without destroying all
pre-existing tracking information, so scrub through the footage to be tracked and choose a frame
carefully to give the best possible quality track.
You choose a reference frame by moving the playhead to an appropriate frame and then clicking the
Set button to choose that frame.
Pattern Polygon
You specify which region of the image you want to track by drawing a polygon on the reference frame.
Typically, when you first add a Planar Tracker node, you are immediately ready to start drawing a
polygon in the viewer, so it’s best to do this right away. When choosing where to draw a polygon, make
sure the region selected belongs to a physically planar surface in the shot. In a pinch, a region that is
only approximately planar can be used, but the less planar the surface, the poorer the quality of the
resulting track.
As a rule of thumb, the more pixels in the pattern, the better the quality of the track. In particular, this
means the reference frame pattern should be:
— As large as possible.
— As much in frame as possible.
— As unoccluded as possible by any moving foreground objects.
— At its maximal size (e.g., when tracking an approaching road sign, it is good to pick a later frame
where it is 400 x 200 pixels big rather than 80 x 40 pixels).
— Relatively undistorted (e.g., when the camera orbits around a flat stop sign, it is better to pick a
frame where the sign is face-on parallel to the camera rather than a frame where it is at a highly
oblique angle).
After you’ve drawn a pattern, a set of Pattern parameters lets you transform and invert the resulting
polygon, if necessary.
Track Mode
Track mode is unlike the other three options in the Operation menu in that it is the only option that
initiates the planar tracking. The other modes use the tracking data generated by the Track mode.
Tracker
There are two available trackers to pick from:
— Point: Tracks points from frame to frame. Internally, this tracker does not actually track points
per se but rather small patterns, like Fusion’s Tracker node. The point tracker possesses the ability
to automatically create its internal occlusion mask to detect and reject outlier tracks that do not
belong to the dominant motion. Tracks are colored green or red in the viewer, depending on
whether the point tracker thinks they belong to the dominant motion or they have been rejected.
The user can optionally supply an external occlusion mask to further guide the Point tracker.
— Hybrid Point/Area: Uses an Area tracker to track all the pixels in the pattern. Unlike the Point
tracker, the Area tracker does not possess the ability to automatically reject parts of the pattern
that do not belong to the dominant motion, so you must manually provide it with an occlusion
mask. Note that for performance reasons, the Hybrid tracker internally first runs the Point tracker,
which is why the point tracks can still be seen in the viewer.
There is no best tracker. They each have their advantages and disadvantages:
— Artist Effort (occlusion masks): The Point tracker will automatically create its internal
occlusion mask. However, with the Hybrid tracker, you need to spend more time manually
creating occlusion masks.
— Accuracy: The Hybrid tracker is more accurate and less prone to wobble, jitter, and drift since
it tracks all the pixels in the pattern rather than a few salient feature points.
— Speed: The Hybrid tracker is slower than the Point tracker.
In general, it is recommended to first quickly track the shot with the Point tracker and examine the
results. If the results are not good enough, then try the Hybrid tracker.
Motion Type
Determines how the Planar Tracker internally models the distortion of the planar surface being
tracked. The five distortion models are:
— Translation
— Translation, Rotation (rigid motions)
— Translation, Rotation, Scale (maps squares to squares; scale is uniform in X and Y)
— Affine: includes translation, rotation, scale, and skew (maps squares to parallelograms)
— Perspective (maps squares to generic quadrilaterals)
Each successive model is more general and includes all previous models as a special case.
Sometimes with troublesome shots, it can help to drop down to a simpler motion model—for
example, when many track points are clustered on one side of the tracked region or when tracking a
small region where there are not many trackable pixels.
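The relationship between the models can be sketched with a 3x3 homography: the Affine model has a trivial bottom row and keeps parallel edges parallel, while the Perspective model does not. The matrices below are illustrative values, not Fusion's internal representation.

```python
# Sketch: apply illustrative distortion models to a unit square.
def apply_homography(h, pt):
    """Apply a 3x3 homography (row-major nested lists) to a 2D point."""
    x, y = pt
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Affine: bottom row is (0, 0, 1), so parallel edges stay parallel
# and the square maps to a parallelogram.
affine = [[1.0, 0.5, 0.2],
          [0.0, 1.0, 0.1],
          [0.0, 0.0, 1.0]]

# Perspective: non-trivial bottom row, so the square maps to a
# generic quadrilateral whose opposite edges may converge.
persp = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.3, 0.0, 1.0]]

affine_quad = [apply_homography(affine, p) for p in square]
persp_quad = [apply_homography(persp, p) for p in square]
```

Each simpler model is a special case of this matrix form with more entries fixed, which is why dropping to a simpler model removes degrees of freedom the tracker must estimate.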
Output
Controls what is output from the Planar Tracker node while in the Track operation mode.
Track Channel
Determines which image channel in the background image is tracked. It is good to pick a channel
with high contrast, lots of trackable features, and low noise. Allowed values are red, green, blue,
and luminance.
Tracking Controls
These controls are used to control the Tracker. Note that you can only track to a new frame if the
current frame is already tracked or is the reference frame.
— Track to start: Tracks from the current frame backward in time to the start (as determined by the
current render range).
— Step tracker to previous frame: Tracks from the current frame to the previous frame.
— Stop tracking: Stops any ongoing tracking operations.
— Step tracker to next frame: Tracks from the current frame to the next frame.
— Track to end: Tracks from the current frame forward in time to the end (as determined by the
current render range).
— Trim to start: Removes all tracking data before the current frame.
— Delete: Deletes all tracking data at all times. Use this to destroy all current results and start
tracking from scratch.
— Trim to end: Removes all tracking data after the current frame. This can be useful, for example, to
trim the end of a track that has become inaccurate when the pattern starts to move off frame.
Show Splines
This button to the right of the “Trim to end” button opens the Spline Editor and shows the splines
associated with the Planar Tracker node. This can be useful for manually deleting points from the Track
and Stable Track splines.
Steady Mode
In Steady mode, the Planar Tracker transforms the background plate to keep the pattern as motionless
as possible. Any leftover motion is because the Planar Tracker failed to follow the pattern accurately or
because the pattern did not belong to a physically planar surface.
Steady mode is not very useful for actual stabilization, but is useful for checking the quality of a
track. If the track is good, during playback the pattern should not move at all while the rest of the
background plate distorts around it. It can be helpful to zoom in on parts of the pattern and place
the mouse cursor over a feature and see how far that feature drifts away from the mouse cursor
over time.
Steady Time
This is the time where the pattern’s position is snapshotted and frozen in place. It is most common to
set this to the reference frame.
Clipping Mode
Determines what happens to the parts of the background image that get moved off frame by
the steady transform.
Domain mode is useful when Steady mode is being used to “lock” an effect to the pattern.
As an example, consider painting on the license plate of a moving car. One way to do this is to use a
Planar Tracker node to steady the license plate, then a Paint node to paint on the license plate, and
then a second Planar Tracker to undo the steady transform. If the Clipping mode is set to Domain, the
off frame parts generated by the first Planar Tracker are preserved so that the second Planar Tracker
can, in turn, map them back into the frame.
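The steady/unsteady round trip in the license-plate example can be sketched with simple 2D offsets standing in for the full per-frame transforms (the real node stores 4x4 matrices per keyframe; the offset values here are made up):

```python
# Illustrative per-frame pattern offsets produced by tracking.
track = {0: (0.0, 0.0), 1: (3.0, 1.0), 2: (5.0, 2.5)}

def steady(pt, frame):
    dx, dy = track[frame]
    return (pt[0] - dx, pt[1] - dy)     # first tracker: cancel the motion

def unsteady(pt, frame):
    dx, dy = track[frame]
    return (pt[0] + dx, pt[1] + dy)     # second tracker: restore the motion

p = (100.0, 50.0)
assert unsteady(steady(p, 2), 2) == p   # round trip leaves pixels in place
```

Because the second tracker exactly inverts the first, anything painted in steadied space (such as on the license plate) ends up following the pattern in the original footage.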
1 Track: Select a planar surface in the shot to which you wish to attach a texture, or whose
texture you wish to replace. Track the shot (see the tracking workflow in the Track section).
2 Switch the Operation Mode to Corner Pin: When Corner Pin mode is entered from Track mode,
the pattern polygon is hidden and a corner pin control is shown in the viewer.
3 Connect in the texture: In the Node Editor, connect the output of the MediaIn node containing
the texture to the Corner Pin 1 input on the Planar Tracker node.
4 Adjust corner pin: Drag the corners of the corner pin in the viewer until the texture is positioned
correctly. Sometimes the Show Grid option is useful when positioning the texture. Additionally, if it
helps to position it more accurately, scrub to other times and make adjustments to the corner pin.
5 Review: Play back the footage and make sure the texture “sticks” to the planar surface.
Merge Mode
Controls how the foreground (the corner pinned texture) is merged over the background (the tracked
footage). If there are multiple corner pins, this option is shared by all of them. There are four options
to pick from:
— BG only
— FG only
— FG over BG
— BG over FG
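The “FG over BG” option corresponds to the standard “over” operator on premultiplied RGBA values; “BG over FG” swaps the operands. This one-pixel sketch shows the conventional compositing formula, not Fusion source code:

```python
def over(fg, bg):
    """Composite fg over bg; both are premultiplied (r, g, b, a) tuples."""
    fr, fgc, fb, fa = fg
    br, bgc, bb, ba = bg
    inv = 1.0 - fa                      # how much of bg shows through
    return (fr + br * inv, fgc + bgc * inv, fb + bb * inv, fa + ba * inv)

# A half-opaque red foreground over an opaque green background:
result = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.8, 0.0, 1.0))
```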
Stabilize Mode
Stabilize mode is used to smooth out shakiness in the camera by applying a transform that partially
counteracts the camera shake. This stabilizing transform (contained in the Stable Track spline) is
computed by comparing neighboring frames.
Be aware that the Planar Tracker stabilizes based on the motion of the pattern, so it is essential
to choose the pattern carefully. If the motion of the pattern does not represent the motion of the
camera, then there may be unexpected results. For example, if tracking the side of a moving truck and
the camera is moving alongside it, the Planar Tracker smooths the combined motion of both the truck
and the mounted camera. In some cases, this is not the desired effect. It may be better to choose
the pattern to be on some fixed object like the road or the side of a building, which would result in
smoothing only the motion of the camera.
One unavoidable side effect of the stabilization process is that transparent edges appear along the
edges of the image. These edges appear because the stabilizer does not have any information about
what lies off frame, so it cannot fill in the missing bits. The Planar Tracker node offers the option to
either crop or zoom away these edges. When filming, if the need for post-production stabilization is
anticipated, it can sometimes be useful to film at a higher resolution (or lower zoom).
Parameters to Smooth
Specify which of the following parameters to smooth:
— X Translation
— Y Translation
— Rotation
— Scale
Smoothing Window
When stabilizing a particular frame, this determines how the contributions of neighboring frames are
weighted. Available choices are Box and Gaussian.
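The weighting idea can be sketched as follows; the window radius and Gaussian sigma are assumed values for illustration, not Fusion's internals:

```python
import math

def smooth(values, frame, radius, mode="box"):
    """Weighted average of values[frame - radius .. frame + radius]."""
    sigma = max(radius / 2.0, 1e-6)
    total = weight_sum = 0.0
    for offset in range(-radius, radius + 1):
        i = frame + offset
        if i < 0 or i >= len(values):
            continue                     # off the ends of the clip
        if mode == "box":
            w = 1.0                      # every neighbor counts equally
        else:                            # "gaussian": nearer frames count more
            w = math.exp(-(offset ** 2) / (2.0 * sigma ** 2))
        total += w * values[i]
        weight_sum += w
    return total / weight_sum

# A shaky X-translation curve and its box-smoothed version:
shaky_x = [0.0, 2.0, -1.0, 3.0, 0.5, 2.5, -0.5]
steadied = [smooth(shaky_x, f, radius=2) for f in range(len(shaky_x))]
```

The smoothed curve varies less from frame to frame than the original, which is exactly the effect the stabilizer is after.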
Compute Stabilization
Clicking this button runs the stabilizer, overwriting the results of any previous stabilization. As soon as
the stabilization is finished, the output of the Planar Tracker node will be immediately updated with the
stabilization applied.
NOTE: The stabilizer uses the Track spline (created by the tracker) to produce the Stable
Track spline. Both of these splines’ keyframes contain 4 x 4 matrices, and the keyframes are
editable in the Spline Editor.
Clipping Mode
Determines what happens to the parts of the background image that get moved off frame by the
stabilization.
Frame Mode
This controls how transparent edges are handled. The available options include:
— Zoom: Scales the image bigger until the transparent edges are off frame. Choosing this option
causes an image resampling to occur. The downside of this approach is that it reduces the quality
(slightly softens) of the output image. In Zoom mode, use the Auto Zoom button or manually
adjust the zoom window by changing the X Offset, Y Offset, and Scale sliders.
— Auto Zoom: When this button is clicked, the Planar Tracker will examine all the frames and
pick the smallest possible zoom factor that removes all the transparent edges. The computed
zoom window will always be centered in frame. When clicked, Auto Zoom updates the X/Y
Offset and Scale sliders.
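The Auto Zoom idea reduces to simple arithmetic: find the worst transparent margin left on any frame, then pick the smallest centered scale that hides it. The margin values here are illustrative.

```python
# Worst transparent edge gap per frame, as a fraction of the frame
# dimension (made-up values for illustration):
per_frame_margins = [0.02, 0.035, 0.01, 0.025]

worst = max(per_frame_margins)
# A centered zoom crops a margin from both sides of the frame, so the
# smallest factor that removes all transparent edges is:
zoom = 1.0 / (1.0 - 2.0 * worst)
```

With a worst-case margin of 3.5%, the frame must be scaled up by roughly 7.5%, which is why the zoomed result is slightly softened by resampling.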
Darken Image
Darkens the image while in Track mode in order to better see the controls and tracks in the viewer.
The Shift+D keyboard shortcut toggles this.
Show Trails
Toggles the display of the trails following the location of trackers.
Trail Length
Allows changing the length of tracker trails. If the pattern is moving very slowly, increasing the length
can sometimes make the trails easier to follow in the viewer. If the pattern is moving very fast, the
tracks can look like spaghetti in the viewer. Decreasing the length can help.
Inlier/Outlier Colors
When tracking, the tracker analyzes the frame and detects which of the multitudinous tracks belong
to the dominant motion and which ones represent anomalous, unexplainable motion. By default,
tracks belonging to the dominant motion are colored green (and are called inliers) and those that do
not belong are colored red (and are called outliers). Only the inlier tracks are used when computing
the final resulting track.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Tracking nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
The Planar Transform node applies perspective distortions generated by a Planar Tracker node onto
any input mask or masked image. The Planar Transform node can be used to reduce the amount of
time spent on rotoscoping objects. The workflow here centers around the notion that the Planar
Tracker node can be used to track objects that are only roughly planar. After an object is tracked, a
Planar Transform node can then be used to warp a rotospline, making it approximately follow the
object over time. Fine-level cleanup work on the rotospline then must be done.
Depending on how well the Planar Tracker followed the object, this can result in substantial time
savings in the amount of tedious rotoscoping. The key to using this technique is recognizing situations
where the Planar Tracker performs well on an object that needs to be rotoscoped.
1 Track: Using a Planar Tracker node, select a pattern that represents the object to be rotoscoped.
Track the shot (see the tracking workflow in the Track section for the Planar Tracker node).
2 Create a Planar Transform node: Press the Create Planar Transform button on the Planar Tracker
node to do this. The newly created Planar Transform node can be freely cut and pasted into
another composition as desired.
3 Rotoscope the object: Move to any frame that was tracked by the Planar Tracker. When unsure if a
frame was tracked, look in the Spline Editor for a tracking keyframe on the Planar Transform node.
Connect a Polygon node into the Planar Transform node. While viewing the Planar Transform
node, rotoscope the object.
4 Refine: Scrub the timeline to see how well the polygon follows the object. Adjust the polyline on
frames where it is off. It is possible to add new points to further refine the polygon.
Inputs
The Planar Transform has only two inputs:
— Image Input: The orange image input accepts a 2D image on which the transform will be applied.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the output of the
Planar Transform to certain areas.
Inspector
Controls Tab
The Planar Transform node has very few controls, and they are all located in the Controls tab. It’s
designed to apply the analyzed planar tracking data as a match move.
Reference Time
This is the reference time that the pattern was taken from in the Planar Tracker node used to produce
the Planar Transform.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Tracking nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
For more information about how to use the Camera Tracker, see Chapter 26, "3D Camera Tracking,"
in the Fusion Reference Manual.
3D objects composited onto the video clip use the Camera Tracker to
remain aligned with the objects in the frame as the image moves.
— Background: The orange image input accepts a 2D image you want tracked.
— Occlusion Mask: The white occlusion mask input is used to mask out regions that do not need to
be tracked. Regions where this mask is white will not be tracked. For example, a person moving
in front of and occluding bits of the scene may be confusing to the tracker, and a quickly-created
rough mask around the person can be used to tell the tracker to ignore the masked-out bits.
Inspector
Track Tab
The Track tab contains the controls you need to set up an initial analysis of the scene.
Reset
Deletes all the data internal to the Camera Tracker node, including the tracking data and the solve data
(camera motion path and point cloud). To delete only the solve data, use the Delete button on the Solve tab.
Detection Threshold
Determines the sensitivity of feature detection. Tracks are generated automatically across the shot,
and the Detection Threshold controls whether they are restricted to locations of high contrast or may
also appear in locations of lower contrast.
Track Channel
Used to nominate a color channel to track: red, green, blue, or luminance. When nominating a
channel, choose one that has a high level of contrast and detail.
Track Range
Used to determine which frames are tracked:
— Global: The global range, which is the full duration of the Timeline.
— Render: The render duration set on the Timeline.
— Valid: The valid range is the duration of the source media.
— Custom: A user determined range. When this is selected, a separate range slider appears to set
the start and end of the track range.
Bidirectional Tracking
Enabling this will force the tracker to track backward after the initial forward tracking. When tracking
backward, new tracks are not started but rather existing tracks are extended backward in time.
It is recommended to leave this option on, as long tracks help give better solved cameras and
point clouds.
Gutter Size
Trackers can become unstable when they get close to the edge of the image and either drift or jitter
or completely lose their pattern. The Camera Tracker will automatically terminate any tracks that enter
the gutter region. Gutter size is given as a percentage of pattern size. By default, it’s 100% of pattern
size, so a 0.04 pattern means a 0.04 gutter.
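The gutter rule can be sketched as a bounds check in normalized image coordinates; the coordinate convention and values are illustrative:

```python
def in_gutter(x, y, pattern_size, gutter_fraction=1.0):
    """x, y, pattern_size in normalized image coordinates (0.0-1.0)."""
    margin = gutter_fraction * pattern_size
    return (x < margin or y < margin or
            x > 1.0 - margin or y > 1.0 - margin)

# With the default 100% gutter, a 0.04 pattern means tracks are
# terminated within 0.04 of any image edge:
assert in_gutter(0.02, 0.5, pattern_size=0.04)
assert not in_gutter(0.5, 0.5, pattern_size=0.04)
```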
— Tracker: Internally, all the Trackers use the Optical Flow Tracker to follow features over time and
then further refine the tracks with the trusted Fusion Tracker or Planar Tracker. The Planar Tracker
method allows the pattern to warp over time by various types of transforms to find the best fit.
These transforms are:
— Translation
— Translation and Rotation
— Translation, Rotation, and Scale
— Affine
— Perspective
It is recommended to use the default TRS setting when using the Planar Tracker. The Affine and
Perspective settings need large patterns in order to track accurately.
— Close Tracks When Track Error Exceeds: Tracks will be automatically terminated when the
tracking error gets too high. When tracking a feature, a snapshot of the pixels around a feature
are taken at the reference time of the track. This is called a pattern, and that same pattern of
pixels is searched for at future times. The difference between the current time pattern and the
reference time pattern is called the track error. Setting this option higher produces longer but
increasingly less accurate tracks.
— Solve Weight: By default, each track is weighted evenly in the solve process. Increasing a track’s
weight means it has a stronger effect on the solved camera path. This is an advanced option that
should be rarely changed.
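The track-error comparison described above can be sketched as a root-mean-square difference between the reference-time pattern and the same-sized patch at a later frame. The metric and threshold here are plausible illustrations, not Fusion's documented internals:

```python
def track_error(reference_patch, current_patch):
    """Root-mean-square difference between two equal-sized pixel patches."""
    n = len(reference_patch)
    sq = sum((a - b) ** 2 for a, b in zip(reference_patch, current_patch))
    return (sq / n) ** 0.5

MAX_TRACK_ERROR = 0.2   # illustrative threshold; a higher value keeps
                        # tracks alive longer but tolerates more drift

reference = [0.1, 0.5, 0.9, 0.4]       # pattern sampled at reference time
drifted   = [0.1, 0.6, 0.7, 0.4]       # same patch at a later frame

err = track_error(reference, drifted)
if err > MAX_TRACK_ERROR:
    print("terminate track")           # close the track, as described above
```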
Camera Tab
The controls of the Camera tab let you specify the physical aspects of the live-action camera,
which will be used as a starting point when searching for solve parameters that match the
real-world camera. The Camera tab includes controls relating to the lens and gate aspects of the
camera being solved for.
Focal Length
Specify the known constant focal length used to shoot the scene or provide a guess if the Refine Focal
Length option is activated in the Solve tab.
Film Gate
Choose a film gate preset from the drop-down menu or manually enter the film back size in the
Aperture Width and Aperture Height inputs. Note that these values are in inches.
Aperture Width
In the event that the camera used to shoot the scene is not in the preset drop-down menu, manually
enter the aperture width (inches).
Aperture Height
In the event that the camera used to shoot the scene is not in the preset drop-down menu, manually
enter the aperture height (inches).
Typically, fit to Width or Height is the best setting. The other fit modes are Inside, Outside,
or Stretched.
Center Point
This is where the camera lens is aligned to the camera. The default is (0.5, 0.5), which is the middle of
the sensor.
Solve Tab
The Solve tab is where the tracking data is used to reconstruct the camera’s motion path along with
the point cloud. It is also where cleanup of bad or false tracks is done, and other operations on the
tracks can be performed, such as defining which marks are exported in the Point Cloud 3D. The
markers can also have their weight set to affect the solve calculations.
For example, a good camera solve may have already been generated, but there are not enough
locators in the point cloud in an area where an object needs to be placed, so adding more tracks and
setting their Solve Weight to zero will not affect the solved camera but will give more points in the
point cloud.
Solve
Pressing Solve will launch the solver, which uses the tracking information and the camera
specifications to generate a virtual camera path and point cloud, approximating the motion of
the physical camera in the live-action footage. The console will automatically open, displaying the
progress of the solver.
Delete
Delete will remove any solved information, such as the camera and the point cloud, but will keep all
the tracking data.
Foreground Threshold
This slider sets the detection threshold for finding the tracks on fast-moving objects. The higher the
value, the more forgiving.
Auto Detect Seed Frames
Disabling this will allow the user to select their own two frames. Manual choice of seed frames is an
option for advanced users. When choosing seed frames, it is important to satisfy two conflicting
desires: the seed frames should have many tracks in common yet be far apart in perspective (i.e.,
the baseline distance between the two associated cameras is long).
— Refine Center Point: Normally disabled. Camera lenses are usually centered in the middle
of the film gate, but this may differ on some cameras. For example, a cine camera may be set
up for Academy 1.85, which has a sound stripe on the left; when shooting Super 35, the lens is
offset to the right.
— Refine Lens Parameters: This will refine the lens distortion, or curvature, of the lens. There tends
to be larger distortion on wide-angle lenses.
NOTE: When solving for the camera’s motion path, a simulated lens is internally created to
model lens distortion in the source footage. This simulated lens model is much simpler than
real-world lenses but captures the lens distortion characteristics important for getting an
accurate camera solve. Two types of distortion are modeled by Camera Tracker:
Radial Distortion: The strength of this type of distortion varies depending on the distance
from the center of the lens. Examples of this include pincushion, barrel, and mustache
distortion. Larger values correspond to larger lens curvatures. Modeling radial distortion
is especially important for wide angle lenses and fisheye lenses (which will have a lot of
distortion because they capture 180 degrees of an environment and then optically squeeze
it onto a flat rectangular sensor).
Tangential Distortion: This kind of distortion is produced when the camera’s imaging
sensor and physical lens are not parallel to each other. It tends to produce skew distortions
in the footage similar to distortions that can be produced by dragging the corners of a
corner pin within Fusion. This kind of distortion occurs in very cheap consumer cameras
and is practically non-existent in film cameras, DSLRs, and pretty much any kind of camera
used in film or broadcast. It is recommended that it be left disabled.
— Radial Quadratic: Model only Quadratic radial lens curvature, which is either barrel or pincushion
distortion. This is the most common type of distortion. Selecting this option causes the low and
high order distortion values to be solved for.
— Radial Quartic: Model only Quartic radial lens curvature, which combines barrel and pincushion
distortion. This causes the low and high order distortion values to be solved for.
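The radial model discussed in the note above can be sketched as a polynomial in the squared distance from the lens center. Coordinates are normalized so (0, 0) is the center; the coefficient names k2 and k4 are illustrative stand-ins for the low- and high-order terms the solver refines:

```python
def distort(x, y, k2, k4=0.0):
    """Apply quadratic (k2) and quartic (k4) radial distortion."""
    r2 = x * x + y * y
    scale = 1.0 + k2 * r2 + k4 * r2 * r2
    return x * scale, y * scale

# Negative k2 pulls points toward the center (barrel distortion);
# positive k2 pushes them outward (pincushion). Adding the r^4 term
# allows mixed ("mustache") profiles.
barrel = distort(0.5, 0.5, k2=-0.1)
pincushion = distort(0.5, 0.5, k2=0.1)
```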
Track Filtering
The Camera Tracker can produce a large number of automatically generated tracks. Rather than
spending a lot of time individually examining the quality of each track, it is useful to have some less
time-intensive ways to filter out large swaths of potentially bad tracks. The following input sliders
are useful for selecting large amounts of tracks based on certain quality metrics, and then a number
of different operations can be performed on them. For example, weaker tracks can be selected
and deleted, yielding a stronger set of tracks to solve from. Each filter can be individually enabled
or disabled.
— Delete: Removes the selected tracks from the set. When there are bad tracks, the simplest and
easiest option is to simply delete them.
— Trim Previous: Cuts the tracked frames from the current frame to the start of the track.
Sometimes it can be more useful to trim a track than to delete it. For example, high-quality long
tracks may become inaccurate when the feature they are tracking starts to become occluded or
moves too close to the edge of the image.
— Trim Next: Cuts the tracked frames from the current frame to the end of the track.
— Rename: Replaces the current auto-generated name with a new name.
— Set Color: Allows for user-assigned colors of the tracking points.
— Export Flag: Controls whether the locators corresponding to the selected tracks will be
exported in the point cloud. By default, all locators are flagged as exportable.
— Solve Weight: By default, all the tracks are used and equally weighted when solving for the
camera’s motion path. The most common use of this option is to set a track’s weight to zero so
it does not influence the camera’s motion path but still has a reconstructed 3D locator. Setting a
track’s weight to values other than 1.0 or 0.0 should only be done by advanced users.
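Filtering by quality metrics before re-solving can be sketched as below. The track records, field names, and thresholds are illustrative; the real node exposes these as filter sliders in the Inspector:

```python
# Hypothetical per-track quality records:
tracks = [
    {"name": "Track1", "length": 42, "track_error": 0.05},
    {"name": "Track2", "length": 5,  "track_error": 0.02},  # too short
    {"name": "Track3", "length": 60, "track_error": 0.40},  # too noisy
]

MIN_LENGTH = 8        # e.g., drop tracks shorter than eight frames
MAX_TRACK_ERROR = 0.3

kept = [t for t in tracks
        if t["length"] >= MIN_LENGTH and t["track_error"] <= MAX_TRACK_ERROR]
```

Deleting the weaker tracks in bulk this way leaves a stronger set for the next solve, which is the intent of the filter sliders.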
TIP: You can also select tracks directly in the 2D viewer using the mouse, or in the 3D viewer
by selecting their corresponding locators in the point cloud.
Export Tab
The Export tab lets you turn the tracked and solved data this node has generated into a form that can
be used for compositing.
— A Camera 3D with animated translation and rotation that matches the motion of the live-action
camera and an attached image plane.
— A Point Cloud 3D containing the reconstructed 3D positions of the tracks.
— A Shape 3D set to generate a ground plane.
— A Merge 3D merging together the camera, point cloud, and ground plane. When the Merge 3D is
viewed through the camera in a 3D viewer, the 3D locators should follow the tracked footage.
— A Renderer 3D set to match the input footage.
The export of individual nodes can be enabled/disabled in the Export Options tab.
3D Scene Transform
Although the camera is solved, it has no idea where the ground plane or center of the scene is located.
By default, the solver will always place the camera in Fusion’s 3D virtual environment so that on the
first frame it is located at the origin (0, 0, 0) and is looking down the -Z axis. You have the choice to
export this raw scene without giving the Camera Tracker any more information, or you can set the
ground plane and origin to simplify your job when you begin working in the 3D scene. The 3D Scene
Transform controls provide a mechanism to correctly orient the physical ground plane in the footage
with the virtual ground plane in the 3D viewer. Adjusting the 3D Scene Transform does not modify
the camera solve but simply repositions the 3D scene to best represent the position of the live-
action camera.
NOTE: If you export the scene and then make changes in the 3D Scene Transform,
it is important to manually click Update Previous Export to see the results in the
exported nodes.
Once alignment of the ground plane and origin has been completed, the section is locked by switching
the menu to Aligned.
To get the best result when setting the ground plane, try to select as many points as possible
belonging to the ground and having a wide separation.
TIP: When selecting points for the ground plane, it is helpful to have the Camera Tracker
node viewed in side-by-side 2D and 3D views. It may be easier to select tracks belonging to
the ground by selecting tracks from multiple frames in the 2D viewer rather than trying to
box select locators in the 3D viewer.
Setting the origin can help you place 3D objects in the scene with more precision. To set the origin,
you can follow similar steps, but only one locator is required for the origin to be set. When selecting a
locator for the origin, select one that has a very low solve error.
— Subdivision Level: Shows how many polygons are in the ground plane.
— Wireframe: Sets whether the ground plane is displayed in 3D as a wireframe or a solid surface.
— Offset: By default, the center of the ground plane is placed at the origin (0, 0, 0). This can be
used to shift the ground plane up and down along the Y axis.
Export Options
Provides a checkbox list of what will be exported as nodes when the Export button is pressed. These
options are Camera, Point Cloud, Ground Plane, Renderer, Lens Distortion, and Enable Image Plane in
the camera.
The Animation menu allows you to choose between animating the camera and animating the point
cloud. Animating the camera leaves the point cloud in a locked position while the camera is keyframed
to match the live-action shot. Animating the point cloud does the opposite. The camera is locked in
position while the entire point cloud is keyframed to match the live-action shot.
Previous Export
When the Update Previous Export button is clicked, the previously exported nodes listed here are
updated with any new data generated (this includes the camera path and attributes, the point cloud,
and the renderer).
Options Tab
The Options tab lets you customize the Camera Tracker’s onscreen controls so you can work most
effectively with the scene material you have.
Trail Length
Displays trail lines of the tracks overlaid on the viewer. The number of frames forward and back from
the current frame is set by the length.
Track Colors, Locator Colors, and Export Colors each have options for setting their color to
one of the following:
— User Assigned
— Solve Error
— Take From Image
— White
Export Colors sets the colors of the locators that get exported within the Point Cloud node.
Darken Image
Dims the brightness of the image in viewers to better see the overlaid tracks. This affects both the 2D
and 3D viewers.
Visibility
Toggles which overlays will be displayed in the 2D and 3D viewers. The options are Tracker Markers,
Trails, Tooltips in the 2D Viewer, Tooltips in the 3D viewer, Reprojected Locators, and Tracker Patterns.
Colors
Sets the color of the overlays.
Reporting
Outputs various parameters and information to the Console.
The Camera Tracker must solve for hundreds of thousands of unknown variables, which is a complex
task. For the process to work, it is essential to get good tracking data that exists in the shot for a long
time. False or bad tracks will skew the result. This section explains how to clean up false tracks and
other techniques to get a good solve.
Initially, there are numerous tracks, and not all are good, so a process of filtering and cleaning up
unwanted tracks to get to the best set is required. At the end of each cleanup stage, pressing Solve
ideally gives you a progressively lower solve error. This needs to be below 1.0 for it to be good for use
with full HD content, and even lower for higher resolutions. Refining the tracks often but not always
results in a better solve.
False Tracks
False tracks are caused by a number of conditions, such as moving objects in a shot, or reflections and
highlights from a car. There are other types of false tracks like parallax errors where two objects are at
different depths, and the intersection gets tracked. These moiré effects can cause the track to creep.
Recognizing these False tracks and eliminating them is the most important step in the solve process.
Track Lengths
Getting a good set of long tracks is essential; the longer the tracks are, the better the solve. The
Bidirectional Tracking option in the Track tab is used to extend tracks backward in time. The longer a
track exists and the more tracks that overlap in time within a shot, the more consistent and accurate
the solve.
Seed Frames
Two seed frames are used in the solve process. The algorithm chooses two frames that are as far
apart in time as possible yet still share the same tracks. That is why longer tracks make a more significant difference
in the selection of seed frames.
The two Seed frames are used as the reference frames, which should be from different angles of the
same scene. The solve process will use these as a master starting point to fit the rest of the tracks in
the sequence.
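The seed-frame heuristic can be sketched by scoring frame pairs on shared tracks and time separation. The scoring function and track data are illustrative; the real solver also weighs the perspective baseline between the two candidate cameras:

```python
# Hypothetical mapping of frame number -> set of track names visible there:
frame_tracks = {
    0:  {"A", "B", "C", "D"},
    10: {"A", "B", "C"},
    40: {"A", "B"},
    60: {"E", "F"},
}

def score(f1, f2):
    """More shared tracks and wider time separation both raise the score."""
    shared = len(frame_tracks[f1] & frame_tracks[f2])
    return shared * abs(f2 - f1)

frames = sorted(frame_tracks)
best = max(((f1, f2) for i, f1 in enumerate(frames) for f2 in frames[i + 1:]),
           key=lambda pair: score(*pair))
```

Here frames 0 and 40 win: they are far apart yet still share tracks, while frame 60 shares nothing with the others and cannot seed the solve, which is why long overlapping tracks matter.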
There is an option in the Solve tab to Auto Detect Seed Frames, which is the default setting and most
often a good idea. However, auto detecting seed frames can make for a longer solve. When refining
the Trackers and re-solving, disable the checkbox and use the Seed 1 and Seed 2 sliders to enter the
previous solve’s seed frames. These seed frames can be found in the Solve Summary at the top of the
Inspector after the initial solve.
Refine Filters
After the first solve, all the Trackers will have extra data generated. These are solve errors and
tracking errors.
Use the refine filters to reduce unwanted tracks, like setting minimum tracker length to eight frames.
As the value for each filter is adjusted, the Solve dialog will indicate how many tracks are affected by
the filter. Then Solve again.
You can view the exported scene in a 3D perspective viewer, where the point cloud will be visible.
Move and pan around the point cloud, and select and delete points that do not align with the image
and the scene space. Then Solve again.
Repeat the process until the solve error is below 1.0 before exporting.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Tracking nodes. These common controls
are described in detail in the following “The Common Controls” section.
Inspector
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
The Red, Green, Blue, and Alpha buttons on the Settings tab determine which channels the tool
modifies. For example, if the Red button on a Blur tool is deselected, the blur will first be applied to the
image, and then the red channel from the original input will be copied back over the red channel of
the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
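The effect of the Blend slider on a single channel value can be sketched as a simple linear mix (a simplified model with a hypothetical helper name; the node also skips processing entirely at 0.0):

```python
def blend(original, processed, amount):
    """Mix a channel value from the tool's unprocessed input with its
    processed output. At 0.0 the input passes through untouched; at 1.0
    only the processed result is seen."""
    return original * (1.0 - amount) + processed * amount
```

Intermediate values produce a straightforward crossfade between the two images.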
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
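As a sketch, the per-pixel operation amounts to the following (hypothetical helper name; mask values are assumed normalized to the 0–1 range):

```python
def multiply_by_mask(rgb, mask_value):
    """Multiply a pixel's RGB values by the mask channel's value.
    Pixels outside the mask (mask value 0) become black."""
    r, g, b = rgb
    return (r * mask_value, g * mask_value, b * mask_value)
```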
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
"Understanding Image Channels," in the Fusion Reference Manual.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is toggled
on, the tool’s predicted motion is used to produce the motion blur caused by the virtual camera’s
shutter. When the control is toggled off, no motion blur is created.
— Quality: Quality determines the number of samples used to create the blur. A quality setting of 2
will cause Fusion to create two samples to either side of an object’s actual motion. Larger values
produce smoother results but increase the render time.
— Shutter Angle: Shutter Angle controls the angle of the virtual shutter used to produce the motion
blur effect. Larger angles create more blur but increase the render times. A value of 360 is the
equivalent of having the shutter open for one full frame exposure. Higher values are possible and
can be used to create interesting effects.
— Center Bias: Center Bias modifies the position of the center of the motion blur. This allows for the
creation of motion trail effects.
— Sample Spread: Adjusting this control modifies the weighting given to each sample. This affects
the brightness of the samples.
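How these parameters interact can be sketched numerically (hypothetical helper names; Fusion's actual sampling and weighting are more sophisticated):

```python
def shutter_fraction(shutter_angle):
    """Fraction of the frame interval the virtual shutter is open:
    360 degrees exposes for a full frame, 180 for half a frame."""
    return shutter_angle / 360.0

def sample_offsets(quality, shutter_angle, center_bias=0.0):
    """Time offsets (in frames) at which motion is sampled. A Quality
    of N places N samples on either side of the object's actual
    position; Center Bias shifts the whole set to create motion trails."""
    half = shutter_fraction(shutter_angle) / 2.0
    offsets = []
    for i in range(1, quality + 1):
        t = half * i / quality
        offsets += [-t + center_bias * half, t + center_bias * half]
    return sorted(offsets)
```

For example, a Quality of 2 with a 360-degree shutter samples the motion at four offsets spread across the full frame interval.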
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Transform Nodes
This chapter details the Transform nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Camera Shake [CSh] 1567
Camera Shake [CSh]
For more information on the Shake modifier, see Chapter 62, "Modifiers," in the Fusion
Reference Manual.
The Camera Shake node concatenates its result with adjacent transformation nodes for
higher-quality processing.
Inputs
The two inputs on the Camera Shake node are used to connect a 2D image and an effect mask, which
can be used to limit the camera shake area.
— Input: The orange input is used for the primary 2D image that shakes.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the camera
shake area to only those pixels within the mask. An effects mask is applied to the tool after the
tool is processed.
Controls Tab
The Controls tab includes parameters for adjusting the offsets, strength, speed, and frequency of the
simulated camera shake movement.
Deviation X and Y
These controls determine the amount of shake applied to the image along the horizontal (X) and
vertical (Y) axes. Values between 0.0 and 1.0 are permitted. A value of 1.0 generates shake positions
anywhere within the boundaries of the image.
Rotation Deviation
This determines the amount of shake that is applied to the rotational axis. Values between 0.0 and 1.0
are permitted.
Randomness
Higher values in this control cause the movement of the shake to be more irregular or random.
Smaller values cause the movement to be more predictable.
Overall Strength
This adjusts the general amplitude of all the parameters and can be used to blend the effect in and
out. A value of 1.0 applies the effect as described by the remainder of the controls.
Speed
Speed controls the frequency, or rate, of the shake.
Frequency Method
This selects the overall shape of the shake. Available frequencies are Sine, Rectified Sine, and Square
Wave. A Square Wave generates a much more mechanical-looking motion than a Sine.
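A simplified sketch of how the three frequency methods shape the motion over time (hypothetical helper; the actual node also modulates this movement with the Randomness and Deviation controls):

```python
import math

def shake_offset(frame, speed, deviation, method):
    """Per-frame shake offset for the three frequency methods. Sine
    oscillates smoothly, Rectified Sine is the absolute value of the
    sine (always positive), and Square Wave snaps between extremes for
    a mechanical look. `speed` is in cycles per frame; `deviation`
    scales the amplitude."""
    phase = math.sin(2.0 * math.pi * speed * frame)
    if method == "sine":
        wave = phase
    elif method == "rectified sine":
        wave = abs(phase)
    elif method == "square":
        wave = 1.0 if phase >= 0.0 else -1.0
    else:
        raise ValueError(method)
    return deviation * wave
```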
The best filter for the job often depends on the amount of scaling and on the contents of the
image itself.
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
EXAMPLE
Resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic, Catmull-
Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
— Canvas: This causes the edges that are revealed by the shake to be the canvas color—usually
transparent or black.
— Wrap: This causes the edges to wrap around (the top is wrapped to the bottom, the left is
wrapped to the right, and so on).
— Duplicate: This causes the edges to be duplicated, causing a slight smearing effect at the edges.
— Mirror: Image pixels are mirrored to fill to the edge of the frame.
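The four modes can be sketched as index-mapping rules along one axis (hypothetical helper; `None` stands in for "use the canvas color"):

```python
def resolve_edge(x, width, mode):
    """Map an out-of-range horizontal pixel index to an in-range one,
    mimicking the Wrap, Duplicate, and Mirror edge modes. Canvas mode
    returns None, meaning the canvas color is used instead."""
    if 0 <= x < width:
        return x
    if mode == "canvas":
        return None
    if mode == "wrap":
        return x % width          # top wraps to bottom, left to right
    if mode == "duplicate":
        return min(max(x, 0), width - 1)  # clamp to the edge pixel
    if mode == "mirror":
        period = 2 * width        # reflect indices back into range
        x %= period
        return x if x < width else period - 1 - x
    raise ValueError(mode)
```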
Invert Transform
Select this control to invert any position, rotation, or scaling transformation. This option might be
useful for exactly removing the motion produced in an upstream Camera Shake.
Flatten Transform
The Flatten Transform option prevents this node from concatenating its transformation with adjacent
nodes. The node may still concatenate transforms from its input, but it will not concatenate its
transformation with the node at its output.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Crop [Crp]
TIP: You can crop an image in the viewer by activating the Allow Box Selection button in the
upper-left corner of the viewer while the Crop node is selected and viewed. Then, drag a crop
rectangle around the area of interest to perform the operation.
NOTE: Because this node changes the physical resolution of the image, animating the
parameters is not advised.
— Input: The orange input is used for the primary 2D image you want to crop.
Inspector
Controls Tab
The Controls tab provides XY Offset and XY Size methods for cropping the image.
Offset X and Y
These controls position the image off the screen by pushing it left/right or up/down. The cropped
image disappears off the edges of the output image. The values of these controls are measured
in pixels.
Keep Aspect
When toggled on, the Crop node maintains the aspect of the input image.
Keep Centered
When toggled on, the Crop node automatically adjusts the X and Y Offset controls to keep the image
centered. The XY Offset sliders are automatically adjusted, and control over the cropping is done with
the Size sliders or the Allow Box Selection button in the viewer.
Reset Size
This resets the image dimensions to the size of the input image.
Reset Offset
This resets the X and Y Offsets to their defaults.
Clipping Mode
This option sets the mode used to handle the edges of the image when performing domain of
definition (DoD) rendering. This is profoundly important for nodes like Blur, which may require
samples from portions of the image outside the current domain.
— Frame: The default option is Frame, which automatically sets the node’s domain of definition
to use the full frame of the image, effectively ignoring the current domain of definition. If the
upstream DoD is smaller than the frame, the remaining area in the frame will be treated as black/
transparent.
— Domain: Setting this option to Domain will respect the upstream DoD when applying the node’s
effect. This can have adverse clipping effects in situations where the node employs a large filter.
— None: Setting this option to None does not perform any source image clipping at all. This means
that any data required to process the node’s effect that would normally be outside the upstream
DoD is treated as black/transparent.
Auto Crop
This evaluates the image and attempts to determine the background color. It then crops each side of
the image to the first pixel that is not that color.
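The cropping logic can be sketched on a tiny single-channel image (hypothetical helper, assuming the background color has already been determined):

```python
def auto_crop(image, bg):
    """Find the bounding box of all pixels that differ from the
    background color; rows and columns that are pure background are
    cropped away. `image` is a list of rows; returns
    (left, top, right, bottom)."""
    rows = [y for y, row in enumerate(image) if any(p != bg for p in row)]
    cols = [x for x in range(len(image[0]))
            if any(row[x] != bg for row in image)]
    if not rows:
        return None  # the image is entirely background
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)
```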
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
DVE [DVE]
Inputs
The three inputs on the DVE node are used to connect a 2D image, DVE mask, and an effect mask,
which can be used to limit the DVE area.
— Input: The orange input is used for the primary 2D image that is transformed by the DVE.
— DVE Mask: The white DVE mask input is used to mask the image prior to the DVE transform being
applied. This has the effect of modifying both the image and the mask.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input causes the DVE to modify
only the image within the mask. An effects mask is applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab includes all the transform parameters for the DVE.
Pivot X, Y, and Z
Positions the axis of rotation and scaling. The default is 0.5, 0.5 for X and Y, which is in the center of the
image, and 0 for Z, which is at the center of Z space.
Rotation Order
Use these buttons to determine in what order rotations are applied to the image.
XYZ Rotation
These controls are used to rotate the image around the pivot along the X-, Y- and Z-axis.
Center X and Y
This positions the center of the DVE image onscreen. The default is 0.5, 0.5, which positions the DVE in
the center of the image.
Z Move
This zooms the image in and out along the Z-axis. Visually, when this control is animated, the effect is
similar to watching an object approach from a distance.
Perspective
This adds additional perspective to an image rotated along the X- or Y-axis, similar to changing the
Field of View and zoom of a camera.
Unlike regular effect masks, the masking process occurs before the transformation. All the usual mask
types can be applied to the DVE mask.
Black Background
Toggle this on to erase the area outside the mask from the transformed image.
Fill Black
Toggle this on to erase the area within the mask (before transformation) from the DVE’s input,
effectively cutting the masked area out of the image. Enabling both Black Background and Fill Black
will show only the masked, transformed area.
Alpha Mode
This determines how the DVE will handle the alpha channel of the image when merging the
transformed image areas over the untransformed image.
— Ignore Alpha: This causes the input image’s alpha channel to be ignored, so all masked
areas will be opaque.
— Subtractive/Additive: These cause the internal merge of the pre-masked DVE image over the
input image to be either Subtractive or Additive.
— An Additive setting is necessary when the foreground DVE image is premultiplied, meaning
that the pixels in the color channels have been multiplied by the pixels in the alpha channel.
The result is that transparent pixels are always black, since any number multiplied by 0 always
equals 0. The merge then obscures the background (by multiplying it with the inverse of the
foreground alpha) and simply adds the pixels from the foreground.
— A Subtractive setting is necessary if the foreground DVE image is not premultiplied. The
compositing method is similar to an Additive merge, but the foreground DVE image is first
multiplied by its own alpha, to eliminate any background pixels outside the alpha area.
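The difference between the two settings can be sketched per channel (hypothetical helper; the real merge operates on whole images):

```python
def merge_over(fg, fg_alpha, bg, additive):
    """Composite one foreground channel value over a background value.
    Additive assumes the foreground is already premultiplied by its
    alpha; Subtractive premultiplies it first. Both then scale the
    background by the inverse of the foreground alpha and add the
    foreground."""
    if not additive:
        fg = fg * fg_alpha  # premultiply non-premultiplied input
    return fg + bg * (1.0 - fg_alpha)
```

Using the wrong setting either doubles up or cuts out the foreground's contribution, which is why the choice must match whether the input is premultiplied.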
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Transform nodes. These common controls
are described in detail at the end of this chapter in “The Common Controls” section.
Letterbox [Lbx]
NOTE: Because this node changes the physical resolution of the image, animating the
controls is not recommended.
Inputs
The single input on the Letterbox node is used to connect a 2D image for letterbox/cropping.
— Input: The orange input is used for the primary 2D image you want to letterbox/crop.
The Letterbox node converts the Merge output resolution, adding letterbox masking where needed.
Controls Tab
The Controls tab includes parameters for adjusting the resolution and pixel aspect of the image. It
also has the option of letterboxing or pan-and-scan formatting.
TIP: You can use the formatting contextual menu to quickly select a resolution from a
list. Place the pointer over the Width or Height controls, and then right-click to display the
contextual menu. The bottom of the menu displays a Select Frame Format submenu with
available frame formats. Select any one of the choices from the menu to set the Height,
Width, and Aspect controls automatically.
Auto Resolution
Activating this checkbox automatically sets the Width and Height sliders to the Frame Format settings
found in the Preferences window for Fusion Studio or to the resolution of the DaVinci Resolve Timeline.
Center X and Y
This Center control repositions the image window when used in conjunction with Pan-and-Scan mode.
It has no effect on the image when the node is set to Letterbox mode.
Mode
This control is used to determine the Letterbox node’s mode of operation.
— Letterbox/Envelope: This corrects the aspect of the input image and resizes it to match the
specified width.
— Pan-and-Scan: This corrects the aspect of the input image and resizes it to match the specified
height. If the resized input image is wider than the specified width, the Center control can be used
to animate the visible portion of the resized input.
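Following the descriptions above, the scaled image size for each mode can be sketched as follows (hypothetical helper; the real node also accounts for pixel aspect ratios):

```python
def fit_image(src_w, src_h, out_w, out_h, mode):
    """Scaled source dimensions inside the output frame. Letterbox
    matches the specified width (masking bars may appear top and
    bottom); Pan-and-Scan matches the specified height (the sides may
    be cropped, with the visible region chosen by the Center control)."""
    scale = out_w / src_w if mode == "letterbox" else out_h / src_h
    return (round(src_w * scale), round(src_h * scale))
```

For a 1920 x 1080 source in a 1440 x 1080 frame, Letterbox scales the image to 1440 x 810 and masks the remainder, while Pan-and-Scan keeps it at 1920 x 1080 and crops the sides.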
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
EXAMPLE
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Resize [Rsz]
NOTE: Because this node changes the physical resolution of the image, animating the
controls is not advised.
Inputs
The single input on the Resize node is used to connect a 2D image for resizing.
— Input: The orange input is used for the primary 2D image you want to resize.
The Resize node can be used to scale an image and change its resolution.
Controls Tab
The Controls tab includes parameters for changing the resolution of the image. It uses pixel values in
the Width and Height controls.
Width
This controls the new resolution for the image along the X-axis.
Height
This controls the new resolution for the image along the Y-axis.
TIP: You can use the formatting contextual menu to quickly select a resolution from a list.
Place the mouse pointer over the Width or Height controls, and then right-click to display
the contextual menu. The bottom of the menu displays a Select Frame Format submenu
with available frame formats. Select any one of the choices from the menu to set the Height
and Width controls automatically.
Auto Resolution
Activating this checkbox automatically sets the Width and Height sliders to the Frame Format settings
found in the Preferences window for Fusion Studio or the resolution in the DaVinci Resolve Timeline.
Reset Size
Resets the image dimensions to the original size of the image.
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
EXAMPLE
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Scale [Scl]
NOTE: Because this node changes the physical resolution of the image, animating the
controls is not advised.
Inputs
The single input on the Scale node is used to connect a 2D image for scaling.
— Input: The orange input is used for the primary 2D image you want to scale.
The Scale node can be used to scale an image and change its resolution.
Controls Tab
The Controls tab includes parameters for changing the resolution of the image. It uses a multiplier of
size to set the new resolution. An Edges menu allows you to determine how the edges of the frame
are handled if the scaling decreases.
Lock X/Y
When selected, only a Size control is shown, and changes to the image’s scale are applied to both axes
equally. If the checkbox is cleared, individual Size controls appear for both X and Y Size.
Size
The Size control is used to set the scale used to adjust the resolution of the source image. A value of
1.0 would have no effect on the image, while 2.0 would scale the image to twice its current resolution.
A value of 0.5 would halve the image’s resolution.
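For instance (hypothetical helper reflecting the multiplier described above):

```python
def scaled_resolution(width, height, size_x, size_y=None):
    """New pixel resolution produced by the Scale node's Size controls.
    With Lock X/Y on, a single multiplier applies to both axes; a Size
    of 1.0 leaves the resolution unchanged, 2.0 doubles it, and 0.5
    halves it."""
    if size_y is None:  # Lock X/Y enabled: one control for both axes
        size_y = size_x
    return (round(width * size_x), round(height * size_y))
```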
Filter Method
When rescaling a pixel, surrounding pixels are often used to give a more realistic result. There are
various algorithms for combining these pixels, called filters. More complex filters can give better
results but are usually slower to calculate. The best filter for the job often depends on the amount of
scaling and on the contents of the image itself.
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
EXAMPLE
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Transform [Xf]
The Transform node concatenates its result with adjacent transformation nodes. The Transform node
does not change the image’s resolution.
Inputs
The two inputs on the Transform node are used to connect a 2D image and an effect mask, which can
be used to limit the transformed area.
— Input: The orange input is used for the primary 2D image that gets transformed.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the transform area to
only those pixels within the mask. An effects mask is applied to the tool after the tool is processed.
The Transform node can be used to scale an image without changing its resolution.
Controls Tab
The Controls tab presents multiple ways to transform, flip (vertical), flop (horizontal), scale, and rotate
an image. It also includes reference size controls that can reinterpret the coordinates used for width
and height from relative values of 0-1 into pixel values based on the image’s resolution.
Center X and Y
This sets the position of the image on the screen. The default is 0.5, 0.5, which places the image in the
center of the screen. The value shown is always the actual position multiplied by the reference size.
See below for a description of the reference size.
Pivot X and Y
This positions the axis of rotation and scaling. The default is 0.5, 0.5, which is the center of the image.
Size
This modifies the scale of the image. Values range from 0 to 5, but any value greater than zero can
be entered into the edit box. If the Use Size and Aspect checkbox is selected, this control will scale
the image equally along both axes. If the Use Size and Aspect option is off, independent control is
provided for X and Y.
Aspect
This control changes the aspect ratio of an image. Setting the value above 1.0 stretches the image
along the X-axis. Values between 0.0 and 1.0 stretch the image along the Y-axis. This control is
available only when the Use Size and Aspect checkbox is enabled.
Edges
This menu determines how the edges of the image are treated when the edge of the raster
is exposed.
— Canvas: This causes the edges of the image that are revealed to show the current Canvas Color.
This defaults to black with no Alpha and can be set using the Set Canvas Color node.
— Wrap: This wraps the edges of the image around the borders of the image. This is useful for
seamless images to be panned, creating an endless moving background image.
— Duplicate: This causes the edges of the image to be duplicated as best as possible, continuing the
image beyond its original size.
— Mirror: Image pixels are mirrored to fill to the edge of the frame.
Filter Method
When rescaling a pixel, surrounding pixels are often used to give a more realistic result. There are
various algorithms for combining these pixels, called filters. More complex filters can give better
results but are usually slower to calculate. The best filter for the job often depends on the amount of
scaling and on the contents of the image itself.
Most of these filters are useful only when making an image larger. When shrinking images, it is
common to use the Bi-Linear filter; however, the Catmull-Rom filter will apply some sharpening to the
results and may be useful for preserving detail when scaling down an image.
EXAMPLE
Different resize filters. From left to right: Nearest Neighbor, Box, Linear, Quadratic, Cubic,
Catmull-Rom, Gaussian, Mitchell, Lanczos, Sinc, and Bessel.
Invert Transform
Select this control to invert any position, rotation, or scaling transformation. This option is useful when
connecting the Transform to the position of a tracker for the purpose of reintroducing motion back
into a stabilized image.
Flatten Transform
The Flatten Transform option prevents this node from concatenating its transformation with adjacent
nodes. The node may still concatenate transforms from its input, but it will not concatenate its
transformation with the node at its output.
Reference Size
The controls under the Reference Size menu do not directly affect the image. Instead, they allow you
to control how Fusion represents the position of the Transform node’s center.
Normally, coordinates are represented as values between 0 and 1, where 1 is a distance equal to the
full width or height of the image. This allows for resolution independence, because you can change
the size of the image without having to change the value of the center.
The Reference Size controls allow you to specify the dimensions of the image. This changes the way
the control values are displayed, so that the Center shows the actual pixel positions in the X and
Y number fields of the Center control. For example, if you set the Width and Height to 100 each, the
Center would be shown as 50, 50, and you could move it 5 pixels toward the right by entering 55, 50.
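The conversion between the stored relative values and the displayed pixel values can be sketched as follows (hypothetical helper names):

```python
def center_to_pixels(center, ref_width, ref_height):
    """Display a relative 0-1 Center value as pixel coordinates under a
    given Reference Size, as the Transform node's Inspector does."""
    x, y = center
    return (x * ref_width, y * ref_height)

def pixels_to_center(pixels, ref_width, ref_height):
    """Convert displayed pixel coordinates back to the
    resolution-independent 0-1 values that are actually stored."""
    px, py = pixels
    return (px / ref_width, py / ref_height)
```

Because only the relative values are stored, changing the Reference Size later re-displays the same position in the new pixel terms without moving the image.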
Auto Resolution
Enable this checkbox to use the current frame format settings in Fusion Studio or the timeline
resolution in DaVinci Resolve to set the Reference Width and Reference Height values.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Transform category. The Settings
controls are even found on third-party Transform-type plugin tools. The controls are consistent and
work the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
The Red, Green, Blue, and Alpha buttons on the Settings tab determine which channels the tool
modifies. For example, if the Red button on a Blur tool is deselected, the blur will first be applied to the
image, and then the red channel from the original input will be copied back over the red channel of
the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
"Understanding Image Channels," in the Fusion Reference Manual.
Motion Blur
— Motion Blur: This toggles the rendering of Motion Blur on the tool. When this control is
toggled on, the tool’s predicted motion is used to produce the motion blur caused by the virtual
camera’s shutter. When the control is toggled off, no motion blur is created.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
USD Nodes
This chapter details the Universal Scene Descriptor (USD) nodes
available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve
are interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
uCamera [uCa] 1593
USD Lights
uCamera [uCa]
Inputs
— None.
The uCamera node is connected to a uMerge node to give standard camera controls to a USD scene.
Displaying a uCamera node directly in the viewer shows only an empty scene; there is nothing for the
camera to see. To view the scene through the camera, view the uMerge node where the uCamera
is connected, or any node downstream of that uMerge. Then right-click on the viewer and select
uCamera > [Camera name] from the contextual menu. Right-clicking on the axis label found in the
lower corner of each USD viewer also displays the Camera submenu.
The aspect of the viewer may be different from the aspect of the camera, so the camera view may not
match the actual boundaries of the image rendered by the uRenderer node. Guides can be enabled to
represent the portion of the view that the camera sees and assist you in framing the shot. Right-click
on the viewer and select an option from the Guides > Frame Aspect submenu. The default option uses
Inspector
Controls Tab
The Controls tab contains some of the most fundamental camera settings, including the camera’s
clipping planes, focal length, and film back properties.
Projection Type
The Projection Type menu is used to select between Perspective and Orthographic cameras.
Generally, real-world cameras are perspective cameras. An orthographic camera uses parallel
orthographic projection, a technique where the view plane is perpendicular to the viewing direction.
This produces a parallel camera output that is undistorted by perspective.
Orthographic cameras present controls only for the near and far clipping planes and a control to set
the viewing scale.
Near/Far Clip
The clipping planes are used to limit what geometry in a scene is rendered based on an object’s
distance from the camera’s focal point. Clipping planes ensure objects that are extremely close to the
camera, as well as objects that are too far away to be useful, are excluded from the final rendering.
The clip values use units, so a far clipping plane of 20 means that any object more than 20 units from
the camera is invisible to the camera. A near clipping plane of 0.1 means that any object closer than 0.1
units is also invisible.
NOTE: A smaller range between the near and far clipping planes allows greater accuracy
in all depth calculations. If a scene begins to render strange artifacts on distant objects, try
increasing the distance for the Near Clip plane.
Adaptively Adjust Near/Far Clip
When selected, the renderer automatically adjusts the camera’s Near/Far Clipping plane to match
the extents of the scene. This setting overrides the values of the Near and Far clip range controls
described above. This option is not available for orthographic cameras.
Focal Length
In the real world, a lens’ focal length is the distance from the center of the lens to the film plane.
The shorter the focal length, the closer the focal plane is to the back of the lens. The focal length
is measured in millimeters. The angle of view and focal length controls are directly related. Smaller
focal lengths produce a wider angle of view, so changing one control automatically changes the
other to match.
The relationship between focal length and angle of view is: angle = 2 × arctan(aperture / (2 × focal length)). Use the vertical aperture size to get the vertical angle of view and the horizontal aperture size to get the horizontal angle of view.
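The relationship can be verified with a short calculation. The function below is an illustrative sketch, not part of Fusion:

```python
import math

def angle_of_view(aperture_mm: float, focal_length_mm: float) -> float:
    """Angle of view in degrees: angle = 2 * arctan(aperture / (2 * focal length))."""
    return math.degrees(2 * math.atan(aperture_mm / (2 * focal_length_mm)))

# A 36 mm horizontal aperture with a 50 mm lens gives roughly a 39.6-degree
# horizontal angle of view; a shorter focal length widens the angle.
print(round(angle_of_view(36.0, 50.0), 1))  # → 39.6
```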
Focal Distance
Like a focal point on a real-world camera, this setting defines the distance from the camera to an
object and is used to calculate depth of field.
F Stop
This is used to define the aperture size of the synthetic lens; it will affect exposure and is used to
calculate depth of field.
Film Back
This section allows you to control the technical parameters of the non-lens part of the camera.
Horizontal/Vertical Aperture
The Horizontal Aperture Width and Vertical Aperture Height sliders control the dimensions of the camera’s aperture: the area of the film or sensor exposed to light on a real-world camera. In video and film cameras, the aperture is the size of the opening that defines the exposed area of each frame, also known as the sensor size.
Stereo Role
If the camera is stereo, this defines the role of the camera in the stereo pair: left, center, or right.
Control Visibility
This section allows you to selectively activate the onscreen controls that are displayed along with
the camera.
— Show View Controls: Displays or hides all camera onscreen controls in the viewers.
— Frustum: Displays the actual viewing cone of the camera.
— View Vector: Displays a white line inside the viewing cone, which can be used to determine the
center of the frame.
— Near Clip: The Near Clipping plane. This plane can be subdivided for better visibility.
— Far Clip: The Far Clipping plane. This plane can be subdivided for better visibility.
— Focal Plane: The plane set by the Focus Distance. This plane can be subdivided for better visibility.
— Convergence Distance: The point of convergence when using Stereo mode. This plane can be
subdivided for better visibility.
Common Controls
Transform and Settings Tab
The Transform and Settings tabs in the Inspector are also duplicated in other USD nodes. These
common controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
— Input: The yellow input is used for the connection of a MediaIn image.
The uImagePlane node accepts a 2D image and places it into a 3D USD scene.
Inspector
Controls Tab
Most of the Controls tab is taken up by common controls. The Image Plane-specific controls at the top
of the Inspector allow minor adjustments.
Filename
The uImagePlane has an input so an image can be piped directly into the node. The node can also load an image directly by browsing; the image’s name and path then appear in the Filename field. Use the Browse button to open the file browser and select an image.
Size
Sets the size of the image plane in the USD 3D scene.
Common Controls
Transform and Settings Tab
The Transform and Settings tabs in the Inspector are also duplicated in other USD nodes. These
common controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
— None.
Inspector
Trim
Lets you select an In and Out point of a USD file to bring only that range into the Fusion project.
Time Scale
Animated .usd scenes can have their playback speed made faster or slower using the Time Scale value.
Frame Offset
Defines when the animation starts, so its timing can be shifted along the timeline.
Reverse
When checked, this will reverse the animation of the USD file.
Loop
When checked, this will loop the animation of the USD file.
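Taken together, the Trim, Time Scale, Frame Offset, Reverse, and Loop controls define how a composition frame maps to a frame in the USD file. The sketch below shows one plausible way these controls combine; the function name and exact order of operations are illustrative assumptions, not Fusion’s actual implementation:

```python
def usd_frame(comp_frame, trim_in, trim_out, time_scale=1.0,
              frame_offset=0, reverse=False, loop=False):
    """Map a composition frame to a frame within the trimmed USD range.
    Illustrative only -- not Fusion's internal code."""
    length = trim_out - trim_in          # frames in the trimmed range
    t = (comp_frame - frame_offset) * time_scale
    if loop and length > 0:
        t = t % length                   # wrap around for looping playback
    t = max(0.0, min(float(length), t))  # otherwise hold the first/last frame
    if reverse:
        t = length - t
    return trim_in + t

# Frames 10-58 of a file, played at half speed, sampled at comp frame 24:
print(usd_frame(24, 10, 58, time_scale=0.5))  # → 22.0
```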
Reload
This button will reload the USD file set in the Filename field above. This lets you make changes to the
USD file in another application and see the results in Fusion by reloading it.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other USD nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
uMerge [uMg]
The uMerge provides the standard transformation controls found on most nodes in Fusion’s USD
suite. Unlike those nodes, changes made to the translation, rotation, or scale of the uMerge affect all
the objects connected to the uMerge. This behavior forms the basis for all parenting in Fusion’s 3D
environment.
Inputs
The uMerge node displays three inputs initially, but as each input is connected a new input appears on the node, ensuring there is always one free to add a new element into the scene.
— SceneInput[#]: These inputs are used to connect USD image planes, 3D cameras, lights, entire
USD scenes, as well as other uMerge nodes. There is no limit to the number of inputs this node
can accept. The node dynamically adds more inputs as needed, ensuring that there is always at
least one input available for connection.
The uMerge node combines the 3D USD scene from the uLoader, two light sources, and
the camera, then passes that view out to the uRenderer node to flatten into a 2D image.
Inspector
Common Controls
Transform and Settings Tab
The Transform and Settings tabs in the Inspector are also duplicated in other USD nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The uRenderer node has two inputs. The main scene input takes in the uMerge or other USD nodes
that need to be converted to 2D. The effect mask limits the uRenderer output.
— SceneInput: The orange scene input is a required input that accepts a USD scene that you want
to convert to 2D.
— EffectMask: The blue effects mask input uses a 2D image to mask the output of the node.
Controls Tab
Camera
The Camera menu is used to select which camera from the scene is used when rendering. The Default
setting uses the first camera found in the scene. If no camera is located, the default perspective view
is used instead.
Renderer Type
This menu lists the available render engines. Storm is currently the only renderer available.
AOV
AOVs, or Arbitrary Output Variables, are additional passes that the renderer can output; these are also known as Deep Channels or Auxiliary Channels. Color is one; others include Depth and Camera Depth.
— Color: This outputs the main full-color render of the USD scene. This includes textures and lighting.
— Depth: This outputs a black-and-white depth map. This particular depth map is normalized for
every frame, showing dynamic black to white values.
— PrimID: This outputs a numeric value for each different prim in the USD scene.
— Camera Depth: This outputs an accurate camera depth map, which records each pixel’s distance from the camera in the USD scene. The image is rendered as a 32-bit float image; at first glance it will appear black.
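Because the Camera Depth AOV stores raw scene distances as 32-bit float values, most pixels fall far outside the 0–1 display range, which is why the pass looks black at first glance. A quick way to inspect such a pass outside Fusion is to normalize it per frame, as this illustrative NumPy sketch does:

```python
import numpy as np

def normalize_depth(camera_depth: np.ndarray) -> np.ndarray:
    """Remap a float camera-depth AOV into a viewable 0-1 range per frame.
    Illustrative sketch for inspecting a depth pass, not Fusion code."""
    finite = camera_depth[np.isfinite(camera_depth)]
    lo, hi = float(finite.min()), float(finite.max())
    if hi == lo:
        return np.zeros_like(camera_depth)
    return np.clip((camera_depth - lo) / (hi - lo), 0.0, 1.0)

# Raw distances 2..10 become 0.0 (nearest) through 1.0 (farthest):
depth = np.array([[2.0, 4.0], [6.0, 10.0]], dtype=np.float32)
viewable = normalize_depth(depth)
```

This per-frame remapping is also why the regular Depth AOV described above shows dynamic black-to-white values from frame to frame.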
Lighting
Controls how the USD scene is to be lit.
Aux Channel Z
This option enables rendering of the Z channel, the depth channel that represents the distance of
each pixel from the camera. When enabled, the camera depth map is rendered to the Z channel.
Max Iterations
This defines the number of iterations of the rendering algorithm that are processed.
Image Tab
Process Mode: Lets you choose whether the output of the node will be processed as Full Frames or via
one of the specified interlaced methods.
— Image: Lets you control the output image size and aspect ratio.
— Width/Height: Lets you set the pixel dimensions for the output of the node.
— Pixel Aspect XY: Lets you set a custom pixel aspect ratio for non-square video formats.
— Auto Resolution: Sets the resolution to the Timeline resolution. Uncheck to manually set the
width/height above.
— Depth: Lets you manually set the bit depth of the output from the uRenderer.
— Auto: Passes along any metadata that might be in the incoming image.
— Space: Allows the user to set the color space from a variety of options.
— Auto: Passes along any metadata that might be in the incoming image.
— Space: Allows you to choose a specific setting from a Gamma Space drop-down menu, while a
visual graph lets you see a representation of the gamma setting you’ve selected.
— Log: Similar to the Log-Lin node, this option reveals specific log-encoded gamma profiles so that
you can select the one that matches your content. A visual graph shows a representation of the
log setting you’ve selected. When Cineon is selected from the Log Type menu, additional Lock
RGB, Level, Soft Clip, Film Stock Gamma, Conversion Gamma, and Conversion table options are
presented to finesse the gamma output.
— Remove Curve: Depending on the selected gamma space or on the gamma space found in
Auto mode, the associated gamma curve is removed from the material, effectively converting it to
output in a linear color space.
— Pre-Divide/Post-Multiply: Lets you convert “straight” Alpha channels into pre-multiplied Alpha channels, when necessary.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other USD nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
— None.
Inspector
The Shape primitives are Capsule, Cone, Cube, Cylinder, Ico sphere, Plane, Sphere and Torus.
Double Sided
This will make the polygons have two faces and receive lighting on both sides.
Lock Width/Height/Depth
Only for Plane and Cube shapes. If this checkbox is selected, the width, height, and depth controls are
locked together as a single size slider. Otherwise, individual controls over the size of the shape along
each axis are provided.
Size Width/Height/Depth
Only for Plane and Cube shapes. Used to control the size of the shape.
Radius
When a Capsule, Sphere, Cylinder, Cone, or Torus is selected in the shape menu, this control sets the
radius of the selected shape.
Height
When a Capsule, Cone, or Cylinder is selected in the shape menu, this control sets the height of the
selected shape.
Top Radius
When a cone is selected in the Shape menu, this control is used to define a radius for the top of a
cone, making it possible to create truncated cones.
Subdivision Level/Base/Height/Cap/Cylinder
The Subdivision controls are used to determine the tessellation of the mesh on all shapes. The higher
the subdivision, the more vertices and polygons in each shape.
Subdivision Scheme
The Subdivision Scheme defines which algorithm is used to tessellate the polygons in the shapes.
The methods are None, Bilinear, Loop and Catmull-Clark.
Angle
When the Capsule, Cone, Cylinder, Sphere, or Torus shape is selected in the Shape menu, this range
control determines how much of the shape is drawn. A start angle of 180° and end angle of 360°
would only draw half of the shape.
Latitude
When a Sphere or Torus is selected in the Shape menu, this range control is used to crop or slice the
object by defining a latitudinal subsection of the object.
Cap Bottom/Top
When Cylinder or Cone is selected in the Shape menu, the Bottom Cap and Top Cap checkboxes are
used to determine if the end caps of these shapes are created or if the shape is left open.
Material Tab
Diffuse Mode
Diffuse describes the base surface characteristics without any additional effects like reflections or
specular highlights.
Emissive
Emissive adds a global lighting effect to a shape, creating a layer of color over the existing texture.
It can be adjusted to different levels of intensity and can be used to simulate the emission of light.
Workflow Mode
Lets you set the mode between Metallic and Specular.
— Metallic: When enabled, this control shows the Metallic slider. The Metallic slider adds a
reflective quality to objects, creating the appearance of metal. It enhances the reflective
properties of a shape.
— Specular: When enabled, this shows the Specular Color controls. Specular Color determines
the color of light that reflects from a shiny surface. The more specular a material is, the glossier
it appears.
Clearcoat
Affects the appearance of a surface by adding a glossy layer on top of it, mimicking the effect of a
protective coating. It is commonly used in creating materials like car paint or polished metal.
Clearcoat Roughness
Adjusts the level of imperfection or smoothness of the glossy layer added on top of a surface. This can
affect the realism and reflection of materials like car paint or polished metal.
Opacity
Reduces the material and object opacity, impacting the color and Alpha values of the shape.
Common Controls
Transform and Settings Tab
The Transform and Settings tabs in the Inspector are also duplicated in other USD nodes. These common
controls are described in detail at the end of this chapter in “The Common Controls” section.
uTransform [uXf]
Inputs
The uTransform node has a single required input for a USD scene or USD object.
— Scene Input: The orange scene input is connected to a USD scene or USD object to apply a
second set of transformation controls.
The uTransform node gives you another set of 3D transform controls that can
be used to modify those in the Transform tab on the uShape node.
Inspector
Controls Tab
Translation
Transform controls are used to position the USD shape in 3D space.
Rotation Order
Use these buttons to select the order used to apply the rotation along each axis of the object.
For example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis, and then
the Z-axis.
Pivot
A pivot point is the point around which an object rotates. Normally, an object rotates around its own
center, which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from
the center.
Scale
If the Lock X/Y/Z checkbox is checked, a single scale slider is shown. This adjusts the overall size of the
object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are displayed to allow scaling in
any dimension.
Use Target
Selecting the Use Target checkbox enables a set of controls for positioning an XYZ target. When the
target is enabled, the object always rotates to face the target. The rotation of the object becomes
relative to the target.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other USD nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
uVariant [uVa]
Using the uVisibility tool, you can show or hide individual 3D assets. In the illustration above, the cityscape model (left) is marked visible; below it, the model is hidden with the Visible checkbox unchecked.
Inspector
Controls Tab
The Controls tab is used to set the color and brightness of the uCylinder light. The position, direction, and scale of the light source are controlled in the Transform tab.
Color
Use this standard Color control to set the color of the light.
Exposure
Changes how much the light exposes the scene; similar in effect to Intensity.
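In the USD light schema these nodes are based on, Exposure scales the light’s Intensity in photographic stops, i.e. powers of two. The sketch below states that relationship; it is an assumption based on the UsdLux convention, not a quote from Fusion’s source:

```python
def effective_intensity(intensity: float, exposure: float) -> float:
    """Exposure scales intensity in stops (powers of two), per the UsdLux
    convention -- an assumption for illustration."""
    return intensity * 2.0 ** exposure

# Each +1 stop of exposure doubles the light's contribution:
print(effective_intensity(1.0, 2.0))  # → 4.0
```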
Diffuse Response
Controls the amount the light will contribute to the Diffuse color of a material.
Specular Response
Controls the amount the light will contribute to the Specular color of a material.
Normalize
This will normalize the light’s contribution in the scene.
Treat As Line
When checked, the light is simplified to a line light source.
Length
Defines the length of the light.
Radius
Defines the radius of the light.
Controls Tab
The Controls tab is used to set the color and brightness of the uDisk light. The position, direction, and scale of the light source are controlled in the Transform tab.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the intensity of the light. A value of 0.2 indicates 20% light.
Exposure
Changes how much the light exposes the scene; similar in effect to Intensity.
Diffuse Response
Controls the amount the light will contribute to the Diffuse color of a material.
Specular Response
Controls the amount the light will contribute to the Specular color of a material.
Normalize
This will normalize the light’s contribution in the scene.
Shaping Focus
This defines the focus point of the light’s lens shaping effect.
Radius
Defines the radius of the light.
Inspector
Controls Tab
The Controls tab is used to set the color and brightness of the uDistant light. The position, direction, and scale of the light source are controlled in the Transform tab.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the intensity of the light. A value of 0.2 indicates 20% light.
Exposure
Changes how much the light exposes the scene; similar in effect to Intensity.
Specular Response
Controls the amount the light will contribute to the Specular color of a material.
Normalize
This will normalize the light’s contribution in the scene.
Angular Size
Defines the spread of the light.
Inspector
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the intensity of the light. A value of 0.2 indicates 20% light.
Exposure
Changes how much the light exposes the scene; similar in effect to Intensity.
Diffuse Response
Controls the amount the light will contribute to the Diffuse color of a material.
Specular Response
Controls the amount the light will contribute to the Specular color of a material.
Normalize
This will normalize the light’s contribution in the scene.
Guide Radius
This sets how large the dome is around the scene. The default is set to large and far away, similar to
the outdoors. This radius can be set smaller to simulate a room size environment.
Texture Format
Sphere mapped images can be stored in different schemes; the common schemes are Lat-Long,
MirrorBall, Angular, and Cube Mapped Vertical Cross.
Inspector
Controls Tab
The Controls tab is used to set the color and brightness of the uDome light. The position, direction, and scale of the light source are controlled in the Transform tab.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the intensity of the light. A value of 0.2 indicates 20% light.
Diffuse Response
Controls the amount the light will contribute to the Diffuse color of a material.
Specular Response
Controls the amount the light will contribute to the Specular color of a material.
Normalize
This will normalize the light’s contribution in the scene.
Shaping Focus
This defines the focus point of the light’s lens shaping effect.
Width
Defines how wide the light appears.
Height
Defines how high the light appears.
Controls Tab
The Controls tab is used to set the color and brightness of the uSphere light. The position, direction, and scale of the light source are controlled in the Transform tab.
Color
Use this standard Color control to set the color of the light.
Intensity
Use this slider to set the intensity of the light. A value of 0.2 indicates 20% light.
Exposure
Changes how much the light exposes the scene; similar in effect to Intensity.
Diffuse Response
Controls the amount the light will contribute to the Diffuse color of a material.
Specular Response
Controls the amount the light will contribute to the Specular color of a material.
Normalize
This will normalize the light’s contribution in the scene.
Treat as Point
When checked, the light is simplified to a classic point light source.
Radius
This sets the size of the sphere light.
Transform Tab
The Transform tab can be found in the Inspector in most USD nodes. The controls are the same as the
uTransform node, which can be applied separately.
Translation
Transform controls are used to position the USD shape in 3D space.
Rotation Order
Use these buttons to select the order used to apply the rotation along each axis of the object. For
example, XYZ would apply the rotation to the X-axis first, followed by the Y-axis, and then the Z-axis.
Rotation
Use these controls to rotate the shape around its pivot point. If the Use Target checkbox is selected,
then the rotation is relative to the position of the target; otherwise, the global axis is used.
Pivot
A pivot point is the point around which an object rotates. Normally, an object rotates around its own center,
which is considered to be a pivot of 0,0,0. These controls can be used to offset the pivot from the center.
Scale
If the Lock X/Y/Z checkbox is checked, a single scale slider is shown. This adjusts the overall size of the
object. If the Lock checkbox is unchecked, individual X, Y, and Z sliders are displayed to allow scaling in
any dimension.
Settings Tab
The Settings tab in the Inspector can be found on every tool in the USD category. The controls are
consistent and work the same way for each tool.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
VR Nodes
This chapter details the Virtual Reality (VR) nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
VR Nodes�������������������������������������������������������������������������������������������������������������������� 1623
The equirectangular (lat-long) format often used for 360° video is similar to how a globe is
represented by a flat world map, with the poles at the top and bottom edges of the image and the
forward viewpoint at the center.
TIP: You can create stereo VR using two stacked Lat Long images, one for each eye.
Fusion supports several common spherical image formats and can easily convert between them.
— VCross and HCross: VCross and HCross are the six square faces of a cube laid out in a cross,
vertically or horizontally, with the forward view in the center of the cross in a 3:4 or 4:3 image.
— VStrip and HStrip: VStrip and HStrip are the six square faces of a cube laid vertically or
horizontally in a line, ordered as Left, Right, Up, Down, Back, Front (+X, -X, +Y, -Y, +Z, -Z)
in a 1:6 or 6:1 image.
— LatLong: LatLong is a single 2:1 image in an equirectangular mapping.
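The LatLong mapping can be expressed as a simple coordinate transform: each pixel’s horizontal position covers 0–360° of longitude and its vertical position covers –90° to +90° of latitude. The sketch below converts a normalized lat-long coordinate into a unit view direction; the axis convention (Y up, forward at the image center) is an assumption for illustration:

```python
import math

def latlong_to_direction(u: float, v: float):
    """Convert a normalized lat-long position to a unit view direction.
    u runs 0-1 across 0-360 degrees longitude; v runs 0-1 across
    -90..+90 degrees latitude. Y-up, forward-at-center is assumed."""
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (v - 0.5) * math.pi
    return (math.cos(lat) * math.sin(lon),   # x: right
            math.sin(lat),                   # y: up
            math.cos(lat) * math.cos(lon))   # z: forward

print(latlong_to_direction(0.5, 0.5))  # image center → (0.0, 0.0, 1.0)
```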
Fusion’s “Fix it in post” tools for VR make it easy to perform several important tasks that are common
in these types of productions.
NOTE: The VR category and Lat Long node are available only in Fusion Studio and
DaVinci Resolve Studio.
Inputs
The Lat Long Patcher node includes two inputs. The orange input accepts a 2D image in an
equirectangular format, where the X-axis represents 0–360 degrees longitude, and the Y-axis
represents –90 to +90 degrees latitude. The effect mask input is provided, although rarely used
on VR nodes.
— Image Input: The orange image input accepts an equirectangular (lat-long) 2D RGBA image.
— Effect Mask: The effect mask input is provided, although rarely used on VR nodes.
Inspector
Controls Tab
The Controls tab is used to extract and later reapply a section from an equirectangular image.
Rotation controls allow you to select the exact portion you need to repair.
Mode
— Extract: Pulls a de-warped 90-degree square image from the equirectangular image.
— Apply: Warps and merges a 90-degree square image over the equirectangular image. Because
the square image’s alpha is used, this allows, for example, paint strokes or text drawn over a
transparent black background to be applied to the original equirectangular image, avoiding any
double-filtering from de-warping and re-warping the original.
Rotation Order
These buttons choose the ordering of the rotations around each axis. For example, XYZ rotates first
around the X axis (pitch/tilt), then around the Y axis (pan/yaw), and then around the Z axis (roll). Any of
the six possible orderings can be chosen.
Rotation
These dials rotate the spherical image around each of the X, Y, and Z axes, offering independent
control over pitch/tilt, pan/yaw, and roll, respectively.
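The effect of the Rotation Order setting can be pictured by composing per-axis rotation matrices, applying the first axis in the order string first (matching the XYZ example above). This is a generic sketch of the math, not Fusion’s internal code:

```python
import math

def rot_x(a):  # pitch/tilt
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # pan/yaw
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # roll
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(order, ax, ay, az):
    """Compose per-axis rotations; the first axis in `order` is applied first."""
    mats = {"X": rot_x(ax), "Y": rot_y(ay), "Z": rot_z(az)}
    m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for axis in order:  # "XYZ" applies X, then Y, then Z
        m = matmul(mats[axis], m)
    return m
```

Because matrix multiplication does not commute, the same three angles generally produce different orientations under different orders, which is why the Rotation Order buttons matter.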
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other VR nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
NOTE: The VR category and Pano Map node are available only in Fusion Studio and
DaVinci Resolve Studio.
Inputs
The Pano Map node includes two inputs. The orange input accepts a 2D image in an equirectangular,
cube map or other spherical formats. The effect mask input is provided, although rarely used
on VR nodes.
— Image Input: The orange Image input accepts a spherical formatted 2D RGBA image that gets
converted to another spherical format.
— Effect Mask: The effect mask input is provided, although rarely used on VR nodes.
Controls Tab
The Controls tab is used to determine the format of the input image and the desired output format.
From/To
— Auto: Auto detects the incoming image layout from the metadata and image frame aspect.
— VCross and HCross: VCross and HCross are the six square faces of a cube laid out in a cross,
vertically or horizontally, with the forward view in the center of the cross in a 3:4 or 4:3 image.
— VStrip and HStrip: VStrip and HStrip are the six square faces of a cube laid vertically or
horizontally in a line, ordered as Left, Right, Up, Down, Back, Front (+X, -X, +Y, -Y, +Z, -Z)
in a 1:6 or 6:1 image.
— LatLong: LatLong is a single 2:1 image in equirectangular mapping.
Rotation Order
These buttons choose the ordering of the rotations around each axis. For example, XYZ rotates first
around the X axis (pitch/tilt), then around the Y axis (pan/yaw), and then around the Z axis (roll). Any of
the six possible orderings can be chosen.
Rotation
These dials rotate the spherical image around each of the X, Y, and Z axes, offering independent
control over pitch/tilt, pan/yaw, and roll, respectively.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other VR nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
For more detail on the Spherical Camera node, see Chapter 29, "3D Nodes," in the Fusion
Reference Manual.
Spherical Stabilizer
NOTE: The VR category and Spherical Stabilizer node are available only in Fusion Studio
and DaVinci Resolve Studio.
— Image: This orange image input node requires an image in a spherical layout, which can be any of
Lat Long (2:1 equirectangular), Horizontal/Vertical Cross, or Horizontal/Vertical Strip.
Inspector
Controls Tab
The Controls tab contains parameters to initiate the tracking and modify the results for
stabilization or smoothing.
— Track Backward from End Frame starts tracking backward from the end of the
current render range.
— Track Backward from Current Time starts tracking backward from the current frame.
— Stop ceases tracking, preserving all results so far.
— Track Forward from Current Time starts tracking forward from the current frame.
— Track Forward from Start Frame starts tracking forward from the start of the
current render range.
Append to Track
— Replace causes the Track Controls to discard any previous tracking results and replace them with
the newly-created track.
— Append adds the new tracking results to any earlier tracks.
Stabilization Strength
This control varies the amount of smoothing or stabilization applied, from 0.0 (no change) to
1.0 (maximum).
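One way to picture this control: the full stabilizing correction for a frame is a rotation, and the strength scales how much of that rotation is applied. Scaling the rotation angle about a fixed axis (equivalent to a spherical interpolation from the identity) is a plausible sketch of the idea, not Fusion’s actual algorithm:

```python
import math

def partial_correction(axis, angle_rad, strength):
    """Apply only a fraction of the full stabilizing rotation.
    strength 0.0 leaves the shot untouched; 1.0 applies the full
    correction. The axis stays fixed; only the angle is scaled.
    Illustrative sketch, not Fusion's implementation."""
    return axis, angle_rad * strength

# Half strength turns a 10-degree pan correction into a 5-degree one:
axis, angle = (0.0, 1.0, 0.0), math.radians(10.0)
_, scaled = partial_correction(axis, angle, 0.5)
print(round(math.degrees(scaled), 3))  # → 5.0
```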
Smoothing
The Spherical Stabilizer node can eliminate all rotation from a shot, fixing the forward viewpoint
(Still mode, 0.0) or gently smooth out any panning, rolling, or tilting to increase viewer comfort
(Smooth mode, 1.0). This slider allows either option or anything in between.
Offset Rotation
Often a shot is not entirely level and needs the horizon to be realigned, or perhaps a desired pan
should be reintroduced after fully stabilizing the shot. The Offset Rotation controls allow additional
manual control of the Spherical Stabilizer’s rotation of the footage, for pitch/tilt (X), pan/yaw (Y),
and roll (Z), respectively. Rotation is always performed in the order X, Y, Z.
Common Controls
Settings Tab
The Settings tab in the Inspector is duplicated in other VR nodes. These common controls are
described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the VR category. The controls are
consistent and work the same way for each tool.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off hardware-accelerated rendering using the graphics card in your computer. Enable uses hardware-accelerated rendering whenever possible. Auto uses a capable GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Warp Nodes
This chapter details the Warp nodes available in Fusion.
The abbreviations next to each node name can be used in the Select Tool dialog when
searching for tools and in scripting references.
For purposes of this document, node trees showing MediaIn nodes in DaVinci Resolve are
interchangeable with Loader nodes in Fusion Studio, unless otherwise noted.
Contents
Coordinate Space [CdS]���������������������������������������������������������������������������������������� 1634
Inputs
The two inputs on the Coordinate Space node are used to connect a 2D image and an effect mask,
which can be used to limit the distorted area.
— Input: The orange input is used for the primary 2D image that is distorted.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the distortion to only
those pixels within the mask. An effects mask is applied to the tool after the tool is processed.
The Coordinate Space node can help create motion graphics backgrounds.
1. Add a Text+ node with some text, and then animate it to move along a path from the
top of the frame to the bottom.
As the text moves from top to bottom along the original path, it appears to move from an
infinite distance in the Coordinate Space node. It may be necessary to flip the text using a
Transform node to make it appear the correct way in the Coordinate Space node. Another
common use for the Coordinate Space node is to use it in pairs: two of them set to different
Shape settings with a Drip or Transform node in between. When used in this way, the effect
gets modified while the image remains the same.
Inspector
Controls Tab
The Controls tab Shape menu switches between Rectangular to Polar and Polar to Rectangular.
Consider the following example to demonstrate the two coordinate spaces.
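Rectangular to Polar remaps each pixel’s Cartesian offset from the image center into an angle and a radius; Polar to Rectangular is the inverse. A minimal sketch of the underlying math (illustrative only, not Fusion’s implementation):

```python
import math

def rect_to_polar(x, y, cx=0.5, cy=0.5):
    """Map a pixel position to (angle, radius) about the image center."""
    dx, dy = x - cx, y - cy
    return math.atan2(dy, dx), math.hypot(dx, dy)

def polar_to_rect(angle, radius, cx=0.5, cy=0.5):
    """Inverse mapping: (angle, radius) back to a pixel position."""
    return cx + radius * math.cos(angle), cy + radius * math.sin(angle)

angle, radius = rect_to_polar(0.75, 0.5)
print(polar_to_rect(angle, radius))  # → (0.75, 0.5)
```

Chaining the two conversions with a distortion in between, as the paired-node technique above describes, modifies the effect while leaving the image itself unchanged.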
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Inputs
The two inputs on the Corner Positioner node are used to connect a 2D image and an effect mask,
which can be used to limit the warped area.
— Input: The orange input is used for the primary 2D image that is warped.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes,
paint strokes, or bitmaps from other tools. Connecting a mask to this input limits the Corner
Positioner to only those pixels within the mask. An effects mask is applied to the tool after the tool
is processed.
Controls Tab
The Controls tab includes transform and offset adjustments for the four corners of the image
Mapping Type
This determines the method used to map the image onto the repositioned corners. In Bi-Linear
mode, a straight 2D warp takes place. In Perspective mode, the image is calculated with the offsets
in 2D space and then mapped into a 3D perspective.
Corners X and Y
There are four points in the Corner Positioner. Drag these around to position each corner of the image
interactively. Attach these control points to any of the usual modifiers.
The image input is deformed and perspective corrected to match the position of the four corners.
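As a rough illustration of the Bi-Linear mapping type, a point in the unit square can be interpolated between the four corner positions. This is a sketch of the general technique, not Fusion's actual implementation:

```python
def bilinear_corner_pin(u, v, bl, br, tl, tr):
    """Map normalized coordinates (u, v) in the unit square onto the
    quadrilateral defined by four corners: bottom-left, bottom-right,
    top-left, top-right. This is the straight 2D warp of Bi-Linear mode;
    Perspective mode additionally applies a projective correction."""
    x = ((1 - u) * (1 - v) * bl[0] + u * (1 - v) * br[0]
         + (1 - u) * v * tl[0] + u * v * tr[0])
    y = ((1 - u) * (1 - v) * bl[1] + u * (1 - v) * br[1]
         + (1 - u) * v * tl[1] + u * v * tr[1])
    return x, y

# With the corners left at the image corners, every point maps to itself.
corners = ((0, 0), (1, 0), (0, 1), (1, 1))
```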
Offset X and Y
These controls can be used to offset the position of the corners slightly. This is useful when the
corners are attached to Trackers with patterns that may not be positioned exactly where they
are needed.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Dent
Inputs
The two inputs on the Dent node are used to connect a 2D image and an effect mask, which can be
used to limit the warped area.
— Input: The orange input is used for the primary 2D image that is warped.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the Dent to only those
pixels within the mask. An effects mask is applied to the tool after the tool is processed.
The Dent node can help create lens distortion effects or a motion graphics background.
Inspector
Type
Select the type of Dent filter to use from this menu. All parameters for the Dent can be keyframed.
Dent 1
This creates a bulge dent.
Kaleidoscope
This creates a dent, mirrors it, and inverts it.
Dent 2
This creates a displacement dent.
Dent 3
This creates a deform dent.
Cosine Dent
This creates a fracture to a center point.
Sine Dent
This creates a smooth rounded dent.
Center X and Y
This positions the Center of the Dent effect on the image. The default values are 0.5, 0.5, which center
the effect in the image.
Size
This changes the size of the area affected by the dent. Animate this slider to make the dent grow.
Strength
This changes the overall strength of the dent.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Displace
Inputs
There are three inputs on the Displace node: The primary image, the displacement map foreground
image, and an effect mask.
— Input: The orange image input is a required connection for the primary image you wish
to displace.
— Foreground Image: The green input is also required as the image used to displace the
background. Once connected, you can choose red, green, blue, alpha, or luminance channel to
create the displacement.
— Effect Mask: The optional blue effect mask input expects a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the displacement to only those pixels within the mask. An effects mask is applied to the tool
after it is processed.
The Displace node using a Fast Noise node for the Displace map
Controls Tab
The Controls tab is used to change the style, position, size, strength, and lighting (embossing) of the
displacement.
Type
The Type menu is used to choose the mode in which the Displace node operates. Radial mode uses
the map image to refract each pixel out from the center, while X/Y mode provides control over the
amount of displacement along each axis individually.
NOTE: There is one set of Refraction controls while in Radial mode, and two sets in XY
mode, one for each of the X and Y channels.
Refraction Channel
This drop-down menu controls which channel from the foreground image is used to displace the
image. Select from Red, Green, Blue, Alpha, or Luminance channels. In XY mode, this control appears
twice, once for the X displacement and once for the Y displacement.
Light Angle
This sets the angle of the simulated light source.
Spread
This widens the Displacement effect and takes the edge off the Refraction map. Higher values cause
the ridges or edges to spread out.
Light Channel
Select the channel from the refraction image to use as the simulated light source. Select from Color,
Red, Green, Blue, Alpha, or Luminance channels.
NOTE: The Radial mode pushes pixels inward or outward from a center point, based on
pixel values from the Displacement map. The XY mode uses two different channels from
the map to displace pixels horizontally and vertically, allowing more precise results. Using
the XY mode, the Displace node can even accomplish simple morphing effects. The Light
controls allow directional highlighting of refracted pixels for simulating a beveled look.
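The XY displacement described in the note above can be sketched as follows. The convention that a channel value of 0.5 means "no displacement" is an assumption for illustration, not Fusion's documented behavior:

```python
def displace_xy(x, y, x_channel, y_channel, strength=1.0):
    """XY-mode sketch: offset a pixel position by two map channels.
    Channel values are assumed normalized to 0..1, with 0.5 treated
    as neutral (no displacement)."""
    return (x + strength * (x_channel - 0.5),
            y + strength * (y_channel - 0.5))
```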
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Drip [DRP]
Inputs
The two inputs on the Drip node are used to connect a 2D image and an effect mask, which can be
used to limit the warped area.
— Input: The orange input is used for the primary 2D image that is warped.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the warping to only
those pixels within the mask. An effects mask is applied to the tool after the tool is processed.
Inspector
Controls Tab
The Controls tab is used to change the style, position, size, strength, and phase for animating the
“ripples” of the Drip.
Shape
Use this control to select the shape of the Drip.
Circular
This creates circular ripples. This is the default Drip mode.
Square
This creates even-sided quadrilateral drips.
Horizontal
This creates horizontal waves that move in one direction.
Vertical
This creates vertical waves that move in one direction.
Exponential
This creates a Drip effect that looks like a diamond shape with inverted, curved sides (an exponential
curve flipped and mirrored).
Star
This creates an eight-way symmetrical star-shaped ripple that acts as a kaleidoscope when the phase
is animated.
Radial
This creates a star-shaped ripple that emits from a fixed pattern.
Center X and Y
Use this control to position the center of the Drip effect in the image. The default is 0.5, 0.5, which
centers the effect in the image.
Aspect
Controls the aspect ratio of the various Drip shapes. A value of 1.0 causes the shapes to be
symmetrical. Smaller values cause the shape to be taller and narrower, while larger values cause
shorter and wider shapes.
Amplitude
The Amplitude of the Drip effect refers to the peak height of each ripple. Use the slider to change
the amount of distortion the Drip applies to the image. A value of 0.0 gives all ripples no height and
therefore makes the effect transparent. A maximum Amplitude of 10 makes each ripple extremely
visible and completely distorts the image. Higher numbers can be entered via the text entry boxes.
Dampening
Controls the Dampening, or falloff, of the amplitude as it moves away from the center of the effect. It
can be used to limit the size or area affected by the Drip.
Frequency
This changes the number of ripples emanating from the center of the Drip effect. A value of
0.0 produces no ripples. Increase the slider toward a value of 100 for denser ripples.
Phase
This controls the offset of the frequencies from the center. Animate the Phase value to make the ripple
emanate from the center of the effect.
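Taken together, these controls describe a damped sine wave radiating from the center. The sketch below illustrates the idea in plain Python; the exact falloff curve and frequency scaling Fusion uses are assumptions:

```python
import math

def drip_offset(distance, amplitude, frequency, phase, dampening):
    """Circular Drip sketch: the ripple height at a given distance from
    the center. Amplitude scales the peak height, Frequency sets the
    ripple density, Phase shifts the ripples outward when animated, and
    Dampening (modeled here as exponential decay) fades the effect with
    distance."""
    return (amplitude
            * math.sin(2 * math.pi * frequency * distance + phase)
            * math.exp(-dampening * distance))
```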
Grid Warp
Inputs
The two inputs on the Grid Warp node are used to connect a 2D image and an effect mask, which can
be used to limit the warped area.
— Input: The orange input is used for the primary 2D image that is warped.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the warping to only
those pixels within the mask. An effects mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab contains parameters that configure the onscreen grid as well the type of distortion
applied when a control point on the grid is moved.
Source and Destination
The Source and Destination buttons determine whether the source grid or the destination grid is
currently active. All other controls in this tab affect the grid selected by this control.
Selection Type
These three buttons determine the selection types used for manipulating the points. There are three
options available.
Selected
When in Selected mode, adjustments to the grid are applied only to the currently selected points.
This mode is identical to normal polyline operation.
Region
In Region mode, all points within the area around the mouse pointer move when the mouse button is
clicked. New points that enter the region during the move are ignored. Choosing this option exposes
Magnet Distance and Magnet Strength controls to determine the size and falloff of the area.
Magnetic
In Magnetic mode, all points within the area around the mouse pointer move when the mouse button
is clicked. New points that enter the region during the move are affected as well. Choosing this option
exposes Magnet Distance and Magnet Strength controls to determine the size and falloff of the area.
Magnet Distance
To increase the size of the magnet, increase the value of this slider. Alternatively, adjust the size of
the magnet by holding down the D key while dragging the mouse.
Magnet Strength
The Magnet Strength slider increases or decreases the falloff of the magnet cursor’s effect. At a
setting of 0.0, the magnetic cursor has no effect, and vertices do not move at all. As the values
increase, the magnet causes a greater range of motion in the selected vertices. Use smaller values for
a more sensitive adjustment and larger values for broad-sweeping changes to the grid.
Be aware that changing either of the grid size controls after applying changes to the grid resets the
entire grid. Set the X and Y grid sizes to the appropriate resolution before making detailed
adjustments to the grid.
Subdivision Level
The Subdivision Level determines how many subdivisions there are between each set of divisions.
Subdivisions do not generate vertices at intersections. The more subdivisions, the smoother the
deformation is likely to be, but the slower it is to render.
Center
The Center coordinates determine the exact center of the grid. The onscreen Center control is invisible
while editing the grid. Select the Edit Rect mode, and the grid center becomes visible and available
for editing.
Use the Center control to move the grid through a scene without affecting the animation applied to
the individual vertices. For example, while deforming lips, track the motion of the face with a Tracker,
and connect the grid center to the Tracker. This matches the grid with slight movements of the head
while focusing on the deformation of the lips.
Angle
This Angle control rotates the entire grid.
Size
The Size control increases or decreases the scale of the grid.
Edit Buttons
There are four edit modes available, each of which can be selected by clicking on the
appropriate button.
Edit None
Set the grid to Edit None mode to disable the display of all onscreen controls.
Edit Rectangle
When the grid is in Edit Rectangle mode, the onscreen controls display a rectangle that determines
the dimensions of the grid. The sides of the rectangle can be adjusted to increase or decrease the
grid’s dimension. This mode also reveals the onscreen Center control for the grid.
Edit Grid
The Edit Grid mode is the default. While this mode is enabled, each vertex in the grid can be selected
and moved to deform the image.
Edit Line
The Edit Line mode is beneficial for creating grids around organic shapes. When this mode is
enabled, all onscreen controls disappear, and a spline can be drawn around the shape or object to be
deformed. While drawing the spline, a grid is automatically created that best represents that object.
Additional controls for Tolerance, Over Size, and Snap Distance appear when this mode is enabled.
These controls are documented below.
Copy Buttons
These two buttons provide a technique for copying the exact shape and dimensions of the source grid
to the destination, or the destination grid to the source. This is particularly useful after setting the
source grid to ensure that the destination grid’s initial state matches the source grid before beginning
a deformation.
Point Tolerance
This control is visible only when the Edit Line mode is enabled. The Point Tolerance slider determines
how much tessellation the grid applies to match the density of points in the spline closely. The lower
this value, the fewer vertices there are in the resulting grid, and the more uniform the grid appears.
Higher values start applying denser grids with variations to account for regions in the spline that
require more detail.
Oversize Amount
This control is visible only when the Edit Line mode is enabled. The Oversize Amount slider is used to
set how large an area around the spline should be included in the grid. Higher values create a larger
border, which can be useful when blending a deformation back into the source image.
Snap Distance
This control is visible only when the Edit Line mode is enabled. The Snap Distance slider dictates how
strongly the drawn spline attracts surrounding vertices. If a vertex is close enough to a spline’s edge,
the vertex moves to line up with the spline. The higher the value, the farther the reach of the spline.
The grid uses a Polychange spline. Any adjustment to the control points adds or modifies the
keyframe for all points on that spline.
Render Tab
The Render tab controls the final rendered quality and appearance of the warping.
Render Method
The Render Method drop-down menu is used to select the rendering technique and quality applied to
the mesh. The three settings are arranged in order of quality, with the first, Wireframe, being the
fastest and lowest in quality. The default mode is Render, which produces final resolution,
full-quality results.
Anti-Aliasing
The Anti-Aliasing control appears only as a checkbox when in Wireframe Render mode.
In other modes, it is a drop-down menu with three levels of quality. Higher degrees of anti-aliasing
improve image quality dramatically but vastly increase render times. The Low setting may be an
appropriate option while setting up a large dense grid or previewing a node tree, but rarely for a
final render.
Filter Type
When the Render Method is set to something other than Wireframe mode, the Filter Type menu is
visible and set to Area Sample. This setting prevents the grid from calculating area samples for each
vertex in the grid, providing good render quality. Super Sample can provide even better results but
requires much greater render times.
Wireframe Width
This slider appears only when the Render Method is set to Wireframe. It determines the width of the
lines that make up the wireframe.
Anti-Aliased
This checkbox appears only when the Render Method is set to Wireframe. Use this checkbox to
enable/disable anti-aliasing for the lines that make up the wireframe.
Modify Only/Done
These two options set the mesh to Modify Only and Done modes, respectively. Select Modify Only to
edit the mesh or Done to prevent any further changes to a mesh.
Select All
This option selects all points in the mesh.
Stop Rendering
This option stops rendering, which disables all rendering of the Grid Warp node until the mode is
turned off. This is frequently useful when making a series of fine adjustments to a complex grid.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Lens Distort
One reason to remove lens distortion is to composite with an undistorted layer. For example,
compositing a 3D element over a distorted live-action layer will cause unwanted effects like straight
lines not matching up on the foreground and background. The resulting composite will not look
believable.
Inputs
The two inputs on the Lens Distort node are used to connect a 2D image and an effect mask, which
can be used to limit the distorted area.
— Input: The orange input is used for the primary 2D image that is distorted.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the distortion to only
those pixels within the mask. An effects mask is applied to the tool after the tool is processed.
Lens Distort applied on the live-action media at the beginning of the node tree, and once again at the end
Controls Tab
The Controls tab presents various ways to customize or build the lens distortion model you want.
Camera Settings allow you to specify the camera used to capture the content.
Mode
Undistort removes the lens distortion to create a flattened image. Distort brings the original lens
distortion back into the image.
Edges
Determines how samples that fall outside the frame are treated.
— Canvas: Pixels outside the frame are set to the default canvas color. In most cases, this is black
with no alpha.
— Duplicate: Pixels outside the frame are duplicated. This results in “smeared” edges but is useful
when, for example, applying a blur because in that case black pixels would result in the unwanted
blurring between the actual image and the black canvas.
Camera Settings
The options known from the Camera 3D are duplicated here. They can either be set manually or
connected to an already existing Camera 3D.
Supersampling [HiQ]
Sets the number of samples used to determine each destination pixel. As always, higher
supersampling leads to higher render times. 1×1 bilinear is usually of sufficient quality, but with high
lens distortion near the edges of the lens, there are noticeable differences compared to higher settings.
And finally, one could try to manually eyeball the amount of lens distortion using the control sliders.
To do that, one could either look for horizontal or vertical lines in the footage that are supposed to be
straight and straighten them out using the controls, or shoot a full-frame checkerboard pattern on set
as a reference.
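When eyeballing distortion it helps to know the shape of a typical radial lens model. Fusion's exact model and coefficient names are not specified here; the common Brown-style polynomial, shown as a sketch, maps an undistorted radius to a distorted one:

```python
def distort_radius(r, k1, k2=0.0):
    """Radial lens-distortion sketch: r' = r * (1 + k1*r^2 + k2*r^4).
    r is the distance from the lens center in normalized units.
    Positive and negative coefficients push points outward or pull them
    inward, producing pincushion- or barrel-style distortion."""
    return r * (1.0 + k1 * r**2 + k2 * r**4)
```

Note how the distortion grows with the radius, which is why differences between supersampling settings are most noticeable near the edges of the lens.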
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Perspective Positioner
Inputs
The two inputs on the Perspective Positioner node are used to connect a 2D image and an effect
mask, which can be used to limit the transformed area.
— Input: The orange input is used for the primary 2D image that is transformed.
— Effect Mask: The blue input is for a mask shape created by polylines, basic primitive shapes, paint
strokes, or bitmaps from other tools. Connecting a mask to this input limits the transform to only
those pixels within the mask. An effects mask is applied to the tool after the tool is processed.
Controls Tab
The Controls tab contains the Mapping Type menu and the four corner points used to apply
perspective to the image.
Mapping Type
The Mapping Type menu is used to select the type of transform used to distort the image. Bi-Linear
is available for support of older projects. Leaving this on Perspective is strongly suggested since the
Perspective setting maps the real world more accurately.
Corners X and Y
These are the four control points of the Perspective Positioner. Interactively drag these in the viewers
to position each corner of the image. You can refine their position using the Top, Bottom, Left, and
Right controls in the Inspector.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Vector Distortion
Inputs
— Input: The orange image input is a required connection for the primary image you wish to distort.
If this image has vector channels, they are used in the distortion.
— Distort: The green input is an optional distort image input used to distort the background image
based on vector channels. Once connected, it overrides vector channels in the input image.
— Effect Mask: The optional blue effect mask input expects a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the displacement to only those pixels within the mask. An effects mask is applied to the tool
after it is processed.
Inspector
Scale
Use the Scale slider to apply a multiplier to the values of the distortion reference image.
Center Bias
Use the Center Bias slider to shift or nudge the distortion along a given axis.
Edges
This menu determines how the edges of the image are treated.
— Canvas: This causes the edges that are revealed by the shake to be the canvas color—usually
transparent or black.
— Duplicate: This causes the edges to be duplicated, causing a slight smearing effect at the edges.
Glow
Use this slider to add a glow to the result of the vector distortion.
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail at the end of this chapter in “The Common Controls” section.
Vortex
Inputs
There are two inputs on the Vortex node for the primary 2D image and the effect mask.
— Input: The orange image input is a required connection for the primary image you wish to swirl.
— Effect Mask: The optional blue effect mask input expects a mask shape created by polylines,
basic primitive shapes, paint strokes, or bitmaps from other tools. Connecting a mask to this input
limits the swirling vortex to only those pixels within the mask. An effects mask is applied to the
tool after it is processed.
Controls Tab
The Controls tab contains parameters for adjusting the position, size, and strength of the
Vortex effect.
Center X and Y
This control is used to position the center of the Vortex effect on the image. The default is 0.5, 0.5,
which positions the effect in the center of the image.
Size
Size changes the area affected by the Vortex. You can drag the circumference of the effect in the
viewer or use the Size slider.
Angle
Drag the rotation handle in the viewer or use the thumbwheel control to change the amount of
rotation in the Vortex. The higher the angle value, the greater the swirling effect.
Power
Increasing the Power slider makes the Vortex smaller but tighter. It effectively concentrates it inside
the given image area.
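Conceptually, the Vortex rotates each pixel about the center by an angle that fades to zero at the edge of the affected area. The falloff curve and the exact role of Power in the sketch below are assumptions for illustration:

```python
import math

def vortex(x, y, cx, cy, size, angle_deg, power=1.0):
    """Vortex sketch: rotate a pixel about the center (cx, cy) by an
    angle that falls off with distance, so the swirl is strongest at
    the center and vanishes at the edge of the Size radius."""
    dx, dy = x - cx, y - cy
    dist = math.hypot(dx, dy)
    if dist >= size:
        return x, y  # outside the effect area: unchanged
    falloff = (1.0 - dist / size) ** power
    theta = math.radians(angle_deg) * falloff
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (cx + dx * cos_t - dy * sin_t,
            cy + dx * sin_t + dy * cos_t)
```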
Common Controls
Settings Tab
The Settings tab in the Inspector is also duplicated in other Warp nodes. These common controls are
described in detail in the following “The Common Controls” section.
Inspector
Settings Tab
The Settings tab in the Inspector can be found on every tool in the Warp category. The Settings
controls are even found on third-party Warp-type plugin tools. The controls are consistent and work
the same way for each tool.
Blend
The Blend control is used to blend between the tool’s original image input and the tool’s final modified
output image. When the blend value is 0.0, the outgoing image is identical to the incoming image.
Normally, this will cause the tool to skip processing entirely, copying the input straight to the output.
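The Blend behavior amounts to a simple linear mix of the two images, which can be sketched per pixel as:

```python
def blend(original, processed, amount):
    """Blend sketch: linearly mix the tool's input and output pixel
    values. At 0.0 the result equals the input; at 1.0 it is the fully
    processed output. Values are assumed to be per-channel floats."""
    return [(1.0 - amount) * o + amount * p
            for o, p in zip(original, processed)]
```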
Red, Green, Blue, and Alpha Channel Buttons
These four buttons limit the tool’s effect to the selected color channels. For example, if the Red button
on a Blur tool is deselected, the blur is first applied to the image, and then the red channel from the
original input is copied back over the red channel of the result.
There are some exceptions, such as tools for which deselecting these channels causes the tool to
skip processing that channel entirely. Tools that do this will generally possess a set of identical RGBA
buttons on the Controls tab in the tool. In this case, the buttons in the Settings and the Controls tabs
are identical.
Multiply by Mask
Selecting this option will cause the RGB values of the masked image to be multiplied by the mask
channel’s values. This will cause all pixels of the image not included in the mask (i.e., set to 0) to
become black/transparent.
Correct Edges
This checkbox appears only when the Use Object or Use Material checkboxes are selected. It toggles
the method used to deal with overlapping edges of objects in a multi-object image. When enabled,
the Coverage and Background Color channels are used to separate and improve the effect around
the edge of the object. If this option is disabled (or no Coverage or Background Color channels are
available), aliasing may occur on the edge of the mask.
For more information on the Coverage and Background Color channels, see Chapter 18,
"Understanding Image Channels," in the Fusion Reference Manual.
Use GPU
The Use GPU menu has three settings. Setting the menu to Disable turns off GPU hardware-
accelerated rendering. Enabled uses the GPU hardware for rendering the node. Auto uses a capable
GPU if one is available and falls back to software rendering when a capable GPU is not available.
Comments
The Comments field is used to add notes to a tool. Click in the empty field and type the text. When a
note is added to a tool, a small red square appears in the lower-left corner of the node when the full
tile is displayed, or a small text bubble icon appears on the right when nodes are collapsed. To see the
note in the Node Editor, hold the mouse pointer over the node to display the tooltip.
Scripts
Three Scripting fields are available on every tool in Fusion from the Settings tab. They each contain
edit boxes used to add scripts that process when the tool is rendering. For more details on scripting
nodes, please consult the Fusion scripting documentation.
Modifiers
This chapter details the modifiers available in Fusion.
NOTE: Text3D and Text+ have additional text-specific modifiers, which are covered in
their nodes’ sections.
Anim Curves
The Animation Curves modifier (Anim Curves) is used to dynamically adjust the timing, values, and
acceleration of an animation, even if you decide to change the duration of a comp. Using this modifier
makes it infinitely easier to stretch or squish animations, create smooth motion, add bouncing
properties, or mirror animation curves without the complexity of manually adjusting splines.
When creating Fusion templates for the Edit and Cut page in DaVinci Resolve, the Anim Curves
modifier allows the keyframed animation you’ve created in Fusion to stretch and squish appropriately
as the transition, title, or effect’s duration changes on the Edit and Cut page Timelines.
— Source: This drop-down menu has three options based on how the comp is created from
DaVinci Resolve’s Edit page.
— Transition: This setting is automatically selected when the comp is created from an Edit page
transition effect. If the duration of the transition is updated in the Edit page, the timing of the
animation updates as well.
— Duration: Use this setting when the comp is created from a clip on the Edit page. The
animation timing will update if the clip’s duration changes by trimming.
— Custom: Displays an Input dial to manually control the timing.
— Input: This dial is only visible when Source is set to Custom. It is used to change the
input keyframe value.
— Curve: The Curve drop-down menu selects the interpolation method used between keyframes.
The three choices are Linear, Easing, and Custom.
— Linear: The default Linear interpolation method maintains a fixed, consistent acceleration
between keyframes.
— Easing: Displays interpolation menus for both the start of the curve (In) and the end of
the curve (Out).
— Custom: Opens a mini Spline Editor to customize the interpolation from the start of the
animation to the end.
— Mirror: Plays the animation forward, and after reaching the end, it returns to the starting value.
This causes the initial animation to be twice as fast, since the second half of the comp is used for
the reverse animation.
— Invert: Flips the animation curve upside-down so that the values start high and end low.
— Scale: This number is a multiplier applied to the value of the keyframes. If the Scale value is 2 and
a keyframe has a value of 0, it remains 0. If the Scale value is 2 and a keyframe has a value of 10,
the result is as if the keyframe is set to 20. This can be thought of as the ending value for the
animation. It is best to set this while viewing the last frame in the comp.
— Offset: The offset is added to the keyframe values and can be thought of as the starting value for
the animation. It is best to set this while viewing the first frame in the comp.
— Clip Low: Ensures the output value never dips below 0.0.
— Clip High: Ensures the output value never exceeds 1.0.
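The Scale, Offset, and Clip controls combine as a simple linear remap of the keyframe value, which can be sketched as:

```python
def anim_value(keyframe_value, scale, offset, clip_low=False, clip_high=False):
    """Anim Curves value sketch: the keyframe value is multiplied by
    Scale, Offset is added, and the result is optionally clamped to the
    0.0-1.0 range by Clip Low / Clip High."""
    v = keyframe_value * scale + offset
    if clip_low:
        v = max(v, 0.0)
    if clip_high:
        v = min(v, 1.0)
    return v

# Matches the example above: Scale 2 leaves a keyframe of 0 at 0, and
# turns a keyframe of 10 into 20.
```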
Timing
The Timing parameters adjust the animation timing using relative values.
— Time Scale: Stretches or squishes the animation, causing it to run faster or slower. A value of 1.0
keeps the animation running for the comp’s duration (unless you have customized the animation
using other controls in the Modifier).
— Time Offset: This value delays the animation as a fraction of its total duration. A value of
0.0 applies no delay. A value of 0.5 delays the animation starting point midway into the
comp’s duration.
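Because both Timing values are relative, they can be sketched as a mapping from comp time to a normalized animation position. The order of operations shown here is an assumption for illustration:

```python
def retime(t, duration, time_scale=1.0, time_offset=0.0):
    """Timing sketch: map comp time t (0..duration) to a position in
    the normalized animation. Time Offset delays the start by a
    fraction of the total duration; Time Scale makes the animation run
    faster (>1.0) or slower (<1.0)."""
    progress = t / duration              # 0.0 .. 1.0 through the comp
    return (progress - time_offset) * time_scale
```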
Once you create a Macro from this node tree and save it as a Transition template, you can apply it in
the Edit page Timeline. If you change the transition duration in the Edit page, the animation timing will
update appropriately.
1 In Fusion, create two keyframes that cause text to start at the top of the frame and drop to the
bottom. This automatically creates a Path modifier.
2 In the Inspector’s Modifier tab, right-click over the Displacement parameter and choose Insert >
Anim Curves. The animation is normalized to the duration of the comp.
3 Set the Source menu to Duration, since this is not a transition and we are not customizing
the duration.
4 From the Curve menu, choose Easing, then for the Out menu, choose Bounce.
5 Play the animation to see the Bounce animation.
6 To make the bounce occur halfway down the frame, change the Scale to 0.5.
7 To make the animation run twice as fast, enter 2.0 in the Time Scale parameter.
Once you create a macro from this node tree and save it as a Title template, you can apply it in the
Edit page Timeline. If you change the title’s duration in the Edit page, the animation timing will update
appropriately.
TIP: To view the resulting animation curve in the Spline Editor, select the parameter name
in the Spline Editor’s header. The spline is updated as you change the controls.
Bézier Spline
The Bézier Spline is one of the animation modifiers in Fusion and is typically applied to numerical
values rather than point values. It is automatically applied when you keyframe a parameter or each
time you right-click a number field and select Animate.
Unlike most modifiers, this modifier has no actual Controls tab in the Inspector. However, the Spline
Editor displays the Bézier Spline, and it can be controlled there. The Bézier Spline offers individual
control over each control point’s smoothness using Bézier handles. The smoothness is applied in
multiple ways:
— To make the control points smooth, select them, and press Shift-S. The handles can be used to
modify the smoothness further.
— To make the control points linear, select them, and press Shift-L. These operations can also be
performed using the contextual menu.
— Select the control point(s), right-click, and select Smooth or Linear. The menu also allows the user
to smooth a spline using a convolution analysis called a Savitzky-Golay filter. Select the control
point(s), right-click, and select Smooth Points -Y Dialog.
Ease In/Out
Traditional Ease In/Out can also be modified by using the number field virtual sliders in the Spline
Editor. Select the control points you want to modify, right-click, and select Ease In/Out... from the
contextual menu. Then use the number field virtual sliders to control the Ease In/Out numerically.
Usage
B-Spline Editor
— This animation spline modifier has no actual Controls tab. However, the Spline Editor displays
the B-spline, and it can be controlled there. Notice that, though the actual value of the second
keyframe is 0, the value of the resulting spline is 0.33 due to the unique smoothing and weighting
algorithms of a B-spline.
— The weight can be modified by clicking the control point to select it, holding the W key, and
moving the mouse left and right to lower or increase the tension. This is also done with multiple
selected control points simultaneously.
Calculation
Calculations are used to create indirect connections between parameters. A Calculation can perform a
mathematical expression based on two operands, where each operand can be connected to another
parameter or set manually.
Additionally, using Time offsets and Scale controls in the Time tab, the Calculation control can access
values of a parameter at times other than the current time.
The Calculation’s most common use is for connecting two parameters when one value range or scope
is inappropriate for the other parameter.
NOTE: The Expression modifier is essentially a more flexible version of the Calculation
modifier, with a single exception. It is far easier to manipulate the timing of the operands
provided in the Calculation modifier than it is to do so with an Expression.
Calc Tab
The Calc tab includes two dials used for the connected parameter and value that gets mathematically
combined. The Operator menu selects how the Second Operand value combines with the
parameter’s value.
Operator
Select from the mathematical operations listed in this menu to determine how the two operands are
combined. Clicking the drop-down arrow opens the menu with the following options:
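The combination itself is simple arithmetic: the result is the operator applied to the two operands. A minimal sketch in Python (the operator names below are illustrative, not Fusion's exact menu list):

```python
import operator

# Illustrative model of a Calculation: the first operand is usually a
# connected parameter, the second is set manually, and the Operator
# menu picks the math that combines them.
OPERATORS = {
    "Add": operator.add,
    "Subtract": operator.sub,
    "Multiply": operator.mul,
    "Divide": operator.truediv,
}

def calculation(first_operand, second_operand, op="Add"):
    return OPERATORS[op](first_operand, second_operand)
```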
Time Tab
The Time tab is used to modify the time of the Calculation modifier. The controls here retime the
speed of the effect or offset it in time.
EXAMPLE
The following example uses a calculation to apply blur to a Text node in inverse proportion
to the size of the text.
1. Open a new composition that starts on frame 0 and ends on frame 100.
4. Click the Keyframe button to the right of the Size slider to add a keyframe.
8. Right-click the Blur Size and select Modify With > Calculation from the contextual menu.
This adds a Calculation modifier to the Blur node. At the top of the Inspector, a new set
of controls appears in the Modifiers tab while the Blur node is selected.
10. Right-click the First Operand slider and select Connect To > Text 1 > Size from the
contextual menu.
Although the Blur Size is now connected to the Text Size parameter, this connection
isn’t very useful. The maximum value of the Blur Size control is 0.5, which is hardly
noticeable as a blur.
13. Switch to the Time tab of the modifier and set the First Operand Time Scale to -1.0.
Normally, the first operand gets the value of the control it is connected to from the
same frame as the current time. So at frame 10, the first operand is set to the same
value as the Text size at frame 10. By setting this value to -1, the value is read from one
frame back in time whenever the current time of the composition advances by 1 frame.
However, this means that the Calculation would be reading the value of the Text size at
frame -10 when we are at frame 10 in the composition.
14. To correct for this, set the First Operand Time Offset slider to 100.
15. Return to the Tools tab at the top of the Inspector and press Play (Spacebar) to watch
how the value of the Blur Size relates to the value of the Text Size.
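The retiming in steps 13 and 14 boils down to one formula: the operand reads its connected parameter at the current frame multiplied by the Time Scale, plus the Time Offset. A hedged sketch:

```python
def sample_time(current_frame, time_scale=1.0, time_offset=0.0):
    """Frame at which an operand reads its connected parameter.

    With time_scale = -1 and time_offset = 100, as in the example,
    a 100-frame comp reads the connected value backward in time:
    frame 0 samples frame 100, frame 100 samples frame 0.
    """
    return current_frame * time_scale + time_offset
```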
Inspector
Controls Tab
The Controls tab has two fields for the target and scene input. The target is for the node containing
the original coordinates, while the scene input is used for the scene with the new coordinates.
Target Object
This control is connected to the 3D tool that produces the original coordinates to be transformed. To
connect a tool, drag the node from the Node Editor into the text edit control, or right-click the control
and select the tool from the contextual menu. It is also possible to type the tool’s name directly into
the control.
SubID
The SubID slider can be used to target an individual sub-element of certain types of geometry, such as
an individual character produced by a Text 3D tool or a specific copy created by a Duplicate 3D tool.
Scene Input
This control should be connected to the 3D tool, which outputs the scene containing the object at
the new location. To connect a tool, drag and drop a tool tile from the Node Editor into the text edit
control, or right-click the control and select an object from the Connect To pop-up menu.
Usage
Being an animation spline, this modifier has no actual Controls tab. However, its effect can be seen
and influenced in the Spline Editor.
Custom Poly
The Custom Poly modifier can be added to Polygon masks or paths. Similar in function to the Custom
and pCustom tools, existing points can be repositioned in the polyline, or replaced completely with a
new set of points. The expressions are evaluated for each point on the output polygon. The modifier
can be applied by right-clicking on the “Right-click here for shape animation” text at the bottom of the
Polygon controls, and selecting Insert > Custom Poly from the contextual menu.
Inspector
Controls Tab
The Custom Poly Controls tab is available once you select the Modifiers tab in the Polygon's
Inspector. It has a single point input, and number variables to use in animating the expressions.
The default is one point and four numbers, but they can be expanded to a maximum of nine.
Additional points and numbers can be added from the Config tab.
Polyline Tab
The polyline tab exposes the controls to connect and modify the polyline.
Number of Points
The number of output points can be set, controlling the amount of custom subdivision of the polyline.
A value of zero uses the number of original source points.
It uses most of the same expression variables as the Expression modifier below (i.e., n1..n9, p1x..p9x,
p1y..p9y, math functions, etc.), and it adds:
Expression
An expression is a variable or a mathematical calculation added to a parameter, rather than a straight
numeric value. You can add an expression to any parameter in Fusion, or you can add the Expression
modifier, which adds several tabs to the modifier Inspector. Adding this modifier to a parameter adds
the ability to manipulate that parameter based on any number of controls, either positional or value-
based. This modifier offers exceptional flexibility compared to the more limited Calculation or Offset
modifiers, but it is unable to access values from frames other than the current time.
The Expression modifier accepts nine value inputs and nine position inputs that are used as part of a
user-defined mathematical expression to output a value.
To add the Expression modifier to a parameter, right-click the parameter in the Inspector and choose
Modify With > Expression from the contextual menu. The type of value returned by the Expression
depends entirely on the type of control it is modifying.
When used with a value control (like a slider), the Expression in the Number Out tab is evaluated to
create the result. When used to modify a positional control (like Center), the Point Out tab controls
the result.
The Inspector’s Modifiers tab contains the controls for the Expression modifier, described below.
Inspector
These values can be set manually, connected to other parameters, animated, and even connected to
other Expressions or Calculations.
Config Tab
A good expression is reused over and over again. As a result, it can be useful to provide more
descriptive names for each parameter or control and to hide the unused ones. The Config Tab
of the Expressions modifier can customize the visibility and name for each of the nine point and
number controls.
Random Seed
The Random Seed control sets the starting number for the rand() function. The rand(x, y) function
produces a random value between x and y, generating a new value on every frame. As long as the
setting of this Random Seed slider remains the same, the value produced at a given frame is always
the same. Adjust the Seed slider to a new value to get a different value for that frame.
e The value of e.
dist(x1, y1, x2, y2) The distance between points (x1, y1) and (x2, y2).
x+y x plus y.
x-y x minus y.
-x (0.0 - x).
x*y x multiplied by y.
x/y x divided by y.
x & y 1.0 if both x and y are not 0.0, otherwise 0.0.
x && y 1.0 if both x and y are not 0.0, otherwise 0.0 (identical to above).
x|y 1.0 if either x or y (or both) are not 0.0, otherwise 0.0.
x || y 1.0 if either x or y (or both) are not 0.0, otherwise 0.0 (identical to above).
EXAMPLE 1
To make a numeric control equal to the Y value of a motion path, add an expression to the
desired target control and connect the Path to Point In 1. Enter the formula:
p1y
EXAMPLE 3
Add a Background node set to solid black and a Hotspot node. Set the Hotspot size to 0.08
and set the Strength to maximum. Modify the Hotspot center with an expression. Change
the current frame to 0.
Set n1 to 0.0 and add a Bézier Spline. At frame 29, set the value of n1 to 1.0. Select both
points and loop the spline using the Spline Editor. Now enter the following equations into
the Point Out tab of the expression.
X-Axis Expression
n1
Y-Axis Expression
0.5 + sin(time*50) / 4
Render out a preview and look at the results. (Try this one with motion blur.)
From Image
The From Image modifier only works on gradients, like the gradient on a Background node. It takes
samples of an image along a user-definable line and creates a gradient from those samples.
Unlike other modifiers, From Image is not located in the Modify With menu. This modifier can be
applied by right-clicking a Gradient bar in the Inspector and selecting From Image.
Inspector
Image to Scan
Drop into this box the node from the Node Editor that you want to be color sampled.
Edges
Edges determines how the edges of the image are treated when the sample line extends over the
actual frame of the image to be sampled.
Black
This outputs black for every point on the sample line outside of the image bounds.
Wrap
This wraps the edges of the line around the borders of the image.
Duplicate
This causes the edges of the image to be duplicated as best as possible, continuing the image beyond
its original size.
Color
This outputs a user-definable color instead of black for every point on the sample line outside of the
image bounds.
EXAMPLE
The source image on the left shows the color selection line in red. The image on the right
shows the resulting gradient from that selection.
Gradient Color
It can be applied by right-clicking a parameter and selecting Modify With > Gradient Color.
Inspector
Controls Tab
The Controls tab consists of a Gradient bar where you add and adjust points of the gradient. Start
Time and End Time thumbwheels at the bottom of the Inspector determine the time range the
gradient is mapped into.
Gradient
The Gradient control consists of a bar where it is possible to add, modify, and remove points of the
gradient. Each point has its own color. It is possible to animate the color as well as the position of
the point.
Furthermore, a From Image modifier can be applied to the gradient to evaluate it from an image.
— Once: When using the Gradient Offset control to shift the gradient, the border colors keep their
values. Shifting the default gradient to the left results in a white border on the left; shifting it to
the right results in a black border on the right.
— Repeat: When using the Gradient Offset control to shift the gradient, the border colors are
wrapped around. Shifting the default gradient to the left results in a sharp jump from white to
black; shifting it to the right results in a sharp jump from black to white.
— Ping Pong: When using the Gradient Offset control to shift the gradient, the border colors ping-
pong back and forth. Shifting the default gradient to the left results in the edge fading from white
back to black; shifting it to the right results in the edge fading from black back to white.
Gradient Offset
Allows you to pan through the gradient.
Time Controls
The Start Time and End Time thumbwheels determine the time range the gradient is mapped into.
This is set in frames. The same effect can be achieved by setting the Gradient to Once and animating
the offset thumbwheel.
For more information on the Keyframe Stretcher Modifier controls, see the Keyframe Stretcher
Node in Chapter 49, “Miscellaneous Nodes” in the Fusion Reference Manual or Chapter 110 in the
DaVinci Resolve Reference Manual.
The value produced by the modifier is extracted from the MIDI event selected in the Mode menu. Each
mode can be trimmed so that only specific messages for that event are processed—for example, only
some notes are processed, while others are ignored. The value of the event can be further scaled or
modified by additional factors, such as Scale, Velocity, Attack, and Decay.
It can be applied by right-clicking a parameter and selecting Modify With > MIDI Extractor.
Inspector
Controls Tab
The Controls tab is used to load the MIDI file, modify its timing, and determine which MIDI messages
and events trigger changes in the Fusion parameter.
MIDI File
This browser control is used to specify the MIDI file that is used as the input for the modifier.
Time Scale
Time Scale is used to specify the relationship between time as the MIDI file defines it and time as
Fusion defines it. A value of 1.0 plays the MIDI events at normal speed, 2.0 plays at double speed,
and so on.
Result Curve
The Result Curve can also be used to adjust the output. However, this adjusts the curve of the result.
By default, for any input MIDI data, the results fall linearly between 0.0 and 1.0 (for example, a velocity
127 note generates 1.0, whereas 63 generates approximately 0.5).
The Result Curve applies a gamma-like curve so that middle values can produce higher or lower
results while still maintaining the full scale.
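One way to picture a gamma-like curve that reshapes the middle while preserving full scale (the exact formula Fusion uses is not given here, so treat this as an assumption):

```python
def result_curve(normalized, curve=1.0):
    """Gamma-like shaping of a normalized 0-1 MIDI result (a sketch,
    not Fusion's exact formula). Endpoints 0.0 and 1.0 are preserved;
    curve > 1 pushes middle values down, curve < 1 pushes them up.
    """
    return normalized ** curve
```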
Mode
This menu provides Beat, Note, Control Change, Poly AfterTouch, Channel AfterTouch, or Pitch Bend,
indicating from which MIDI event the values are being read. Beat mode is slightly different in that it
produces regular pulses based on the tempo of the MIDI file (including any tempo maps).
The Beat mode does not use any specific messages; it bases its event timing on the tempo map
contained in the MIDI file.
Combine Events
This menu selects what happens when multiple events occur at the same time. In Notes mode, this
can happen easily. For other events, this can happen if Multiple Channels are selected.
Use this to take the result from the most recent event to occur, the oldest event still happening, the
highest or lowest valued event, the average, sum, or the median of all events currently occurring.
These values can be used to follow actual sounds in the MIDI sequence or just to create interesting
effects. All time values used in the MIDI Extractor are in seconds.
Channels Tab
The Channels tab is used to select the Channels used in the modifier.
Channels
Channels checkboxes select which of the 16 channels in the MIDI file are actually considered for
events. This is a good way to single out a specific instrument from an arrangement.
ABOUT MIDI
A single MIDI interface allows 16 channels. Typically, these are assigned to different
instruments within a device or different devices. Usually, MIDI data is 7 bits, ranging from
0–127. In Fusion, this is represented as a value between 0–1 to be more consistent with how
data is handled in Fusion.
There are numerous different MIDI messages and events, but the ones that are particularly
useful with this modifier are detailed below.
MIDI MESSAGES
— Note On: This indicates that a note (on a specific channel) is being turned on, has a pitch
(0–127, with middle C being 60) and a Velocity (0–127, representing how fast the key or
strings or whatever was hit).
— Control Change: This message indicates that some controller has changed. There are
128 controllers (0–127), each of which has data from 0–127. Controllers are used to set
parameters such as Volume, Pan, amount of Reverb or Chorus, and generic things like
foot controllers or breath controllers.
MIDI EVENTS
— Channel Aftertouch: This event defines that pressure is being applied to the keys (or
strings or whatever) during a note. This represents general, overall pressure for this
channel, so it simply uses a pressure value (0–127).
— Poly Aftertouch: This event defines that pressure is being applied to the keys (or strings
or whatever) during a note. It is specific to each particular note and therefore contains a
note number as well as a pressure value (0–127).
PITCH BEND
The Pitch Bend controller generally specifies the degree of pitch bending or variation
applied to the note. Because pitch bend values are transmitted as 14-bit values, this
control has a range between -1 and 1 and a correspondingly finer degree of resolution.
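The normalizations described in this section can be sketched as follows; the exact rounding Fusion applies is an assumption on our part:

```python
def midi7_to_unit(value):
    """Map a 7-bit MIDI value (0-127) to Fusion's 0-1 range."""
    return value / 127.0

def pitch_bend_to_unit(raw14):
    """Map a 14-bit pitch bend value (0-16383, center 8192) to -1..1.
    This is one straightforward mapping; Fusion's exact rounding is
    not documented here.
    """
    return (raw14 - 8192) / 8192.0
```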
NOTE: Unlike other spline types, Cubic splines have no control handles. They attempt to
automatically provide a smooth curve through the control points.
Usage
Being an animation spline, this modifier has no actual Controls tab. However, its effect can be seen
and influenced in the Spline Editor.
— Offset Angle
— Offset Distance
— Offset Position
Offset Angle
The Offset Angle modifier outputs a value between 0 and 360 that is based on the angle between two
positional controls. The Position and Offset parameters may be static, connected to other positional
parameters, or connected to paths of their own. All offsets use the same set of controls, which behave
differently depending on the offset type used. These controls are described below.
Offset Distance
The Offset Distance modifier outputs a value that is based on the distance between two positional
controls. This modifier is capable of outputting a value based on a mathematical expression applied to
a position.
Offset Position
The Offset Position modifier generates a position (X and Y coordinates) that is based on the
relationship between positional controls. This modifier is the equivalent of a calculation control except
that it outputs X and Y coordinates instead of a value.
It can be applied by right-clicking a control and selecting Modify With > Offset.
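Under the hood, the three Offset outputs reduce to elementary trigonometry on the two positional controls. A sketch (the function names are ours, and the Offset Position mode shown is just one of the operations the Mode menu could apply):

```python
import math

def offset_angle(px, py, ox, oy):
    """Angle from the Position to the Offset, mapped into 0-360 degrees."""
    return math.degrees(math.atan2(oy - py, ox - px)) % 360.0

def offset_distance(px, py, ox, oy):
    """Distance between the Position and Offset controls."""
    return math.hypot(ox - px, oy - py)

def offset_position(px, py, ox, oy):
    """X/Y coordinate derived from both controls; here, their sum."""
    return (px + ox, py + oy)
```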
Offset Tab
The Inspector for all three Offset modifiers is identical. The Offset tab includes Position and
Offset values as well as a Mode menu for selecting the mathematical operation performed by the
offset control.
Position X and Y
The X and Y values are used by the Position to generate the calculation.
Offset X and Y
The X and Y values are used by the Offset to generate the calculation.
Mode
The Mode menu includes mathematical operations performed by the Offset control. Available
options include:
Time Tab
Position Time Scale
This returns the value of the Position at the Time Scale specified (for example, 0.5 is the value at half
the current frame time).
EXAMPLE
2. Create a node tree consisting of a black background and a Text node foreground
connected to a Merge.
3. In the Text Layout tab, use the Center X control to animate the text from the left side of
the screen to the right.
4. Move to frame 0.
5. In the Text tab in the Inspector, right-click the Size control and select Modify With >
Offset Distance from the contextual menu.
6. This adds two onscreen controls: a crosshair for the position and an X control for the
offset. These onscreen controls represent the Position and Offset controls displayed in
the Modifiers tab.
The size of the text is now determined by the distance, or offset, between the two
onscreen controls.
10. In the Offset on Text size section, connect the position value of the Offset to the
existing path by right-clicking the Position control and selecting Connect To >
Path1 Position.
12. Now, the text shrinks near the center of the path (when the distance between the offset
and the path is at its minimum) and grows at its ends (where the distance between the
offset and the path is at its maximum).
Path
The Path modifier uses two splines to control the animation of points: an onscreen motion path
(spatial) and a Time spline visible in the Spline Editor (temporal). To animate an object’s position
control using a Path, right-click the Position control either in the Inspector or in the viewer and select
Path from the contextual menu. This adds a keyframe at the current position. You can begin creating a
path by moving the playhead and dragging the center position control in the viewer. The Spline Editor
shows a displacement spline for editing the temporal value, or “acceleration,” of the path.
Inspector
Controls Tab
The Controls tab for the path allows you to scale, reposition, and rotate the path. It also provides the
Displacement parameter, allowing you to control the acceleration of an object attached to the path.
Size
The size of the path. Again, this allows for later modification of the animation.
X Y Z Rotation
The path can be rotated in all three dimensions to allow for sophisticated controls.
Displacement
Every motion path has an associated Displacement spline in the Spline Editor. The Displacement
spline represents the position of the animated element along its path, represented as a value
between 0.0 and 1.0. Displacement splines are used to control the speed or acceleration of an object’s
movement along the path.
To slow down, speed up, stop, or even reverse the motion of the control along the path, adjust the
values of the points for the path’s displacement in the Spline Editor or in the Inspector.
— A Displacement value of 0.0 in the Spline Editor indicates that the control is at the very beginning
of a path.
— A value of 1.0 indicates that the control is positioned at the end of the path.
— Each locked point on the motion path in the viewer has an associated point on the
Displacement spline.
— Unlocked points have a control point in the viewer but do not have a corresponding point on the
Displacement spline.
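Conceptually, a displacement value picks the point that lies at that normalized arc length along the path. A simplified sketch using straight-line segments (Fusion's paths are actually Bézier splines):

```python
import math

def point_at_displacement(path, d):
    """Return the (x, y) at normalized displacement d (0.0-1.0) along
    a polyline given as a list of (x, y) points. 0.0 is the start of
    the path, 1.0 is the end; values outside that range are clamped.
    """
    lengths = [math.dist(a, b) for a, b in zip(path, path[1:])]
    target = max(0.0, min(1.0, d)) * sum(lengths)
    run = 0.0
    for (x0, y0), (x1, y1), seg in zip(path, path[1:], lengths):
        if target <= run + seg:
            t = (target - run) / seg if seg else 0.0
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        run += seg
    return path[-1]
```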
Heading Offset
Connecting to the Heading adjusts the auto orientation of the object along the path. For instance,
if a mask’s angle is connected to the path’s heading, the mask’s angle will adjust to follow the angle
of the path.
You can change the default path type used when animating a position or center control to
a path (if this is the preferred type of animation). Open the Preferences window and select
the Global Settings. In the Default category, select the Point With menu and choose Path.
The next time Animate is selected from a Position or Center control’s contextual menu, a
path is used.
For example, to add camera shake to an existing path, right-click the crosshair and choose Insert >
Perturb, and then adjust the Strength down to suit. Alternatively, right-clicking the path’s “Right-click
here for shape animation” label at the bottom of the Inspector lets you apply perturb to the path’s
polyline. This works best if the polyline has many points—for example, if it has been tracked or hand-
drawn with the Draw Append pencil tool. A third usage option is to use the Insert contextual menu to
insert the modifier onto the Displacement control. This causes the motion along the path to jitter back
and forth without actually leaving the path.
NOTE: Perturb can only add jitter; it cannot smooth out existing animation curves.
Inspector
Controls Tab
The Controls tab for Perturb is mainly used for controlling the Strength, Wobble, and Speed
parameters of the random jitter.
Value
The content of this control depends on what type of control the modifier was applied to. If the Perturb
modifier was added to a basic Slider control, the Value is a slider. If it was added to a Gradient control,
then a Gradient control is displayed here. Use the control to set the default, or center value, for the
Perturb modifier to work on.
Jaggedness
(Polylines and meshes only) This allows you to increase the amount of variation along the length of
the polyline or mesh, rather than over time. Increasing Jaggedness gives a squigglier polyline or more
tangled mesh, independent of its movement.
Phase
(Polylines and meshes only) Animating this can be used to move the ripple of a polyline or mesh along
itself, from end to end. The effect can be most clearly seen when Speed is set to 0.0.
Strength
Use this control to adjust the strength of the Perturb modifier’s output, or its maximum variation from
the primary value specified above.
Wobble
Use the Wobble control to determine how smooth the resulting values are. Less wobble implies a
smoother transition between values, while more wobble produces less predictable results.
Speed
Increasing the Speed slider value speeds up the rate at which the value changes. This can increase the
apparent wobbliness in a more predictable fashion than the Wobble control and make the jitter more
frantic or languorous in nature.
It can be applied by right-clicking a parameter and selecting Modify With > Probe.
Inspector
Controls Tab
The Controls tab for the Probe modifier allows you to select the node to probe, define the channel
used to drive the parameter, and control the size of the probed area.
Image to Probe
Drag a node from the Node Editor to populate this field and identify the image to probe.
Channel
Select the channel you want to probe. The usual options are:
— Red
— Green
— Blue
— Alpha
— Luma
Once a Probe modifier is present somewhere in your comp, you can connect other nodes’ values to its
outputs as well. The Probe allows you to connect to its values individually:
— Result
— Red
— Green
— Blue
— Alpha
Position X Y
The position in the image from where the probe samples the values.
Evaluation
Sets how the pixels inside the rectangle are computed to generate the output value.
Options include:
Value Tab
The Value tab controls the range or scale of the modifier adjustment, thereby adjusting the sensitivity
of the Probe.
Scale Input
By default, the Probe generates the Black Value when the probed area results in a value of 0 (i.e.,
black), and it generates its White Value when the probed area results in a value of 1 (i.e., white). By
using this range control, you can modify the sensitivity of the Probe.
Black Value
The value that is generated by the Probe if the probed area delivers the result set in Scale Input Black.
White Value
The value that is generated by the Probe if the probed area delivers the result set in Scale Input White.
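The behavior described by the Scale Input and Black/White Value controls amounts to a clamped linear remap of the probed value. A sketch (not Fusion's code; parameter names are ours):

```python
def probe_output(probed, scale_black=0.0, scale_white=1.0,
                 black_value=0.0, white_value=1.0):
    """Map a probed channel value onto the Black/White output range.
    A probed value at scale_black yields black_value, at scale_white
    yields white_value; values in between interpolate linearly and
    values outside are clamped.
    """
    if scale_white == scale_black:
        return black_value
    t = (probed - scale_black) / (scale_white - scale_black)
    t = max(0.0, min(1.0, t))
    return black_value + t * (white_value - black_value)
```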
To publish a static control, right-click the control and select Publish from the contextual menu.
Controls Tab
The Controls tab shows the published control available for linking to other controls.
Published Value
The display of the published control is obviously dependent on which control is published from
which node.
Resolve Parameter
The Resolve Parameter Modifier is used when creating a transition template in Fusion for use in
DaVinci Resolve’s Edit page or Cut page. When building a transition in Fusion, the Resolve Parameter
modifier is added to any control you want to animate. The Resolve Parameter modifier automatically
animates the parameter for the duration of the transition, allowing you to trim the transition in the
Edit page or Cut page.
1 From the Effects Library, add a Fusion Composition to the Edit page Timeline.
2 In Fusion, add a Dissolve node to the Node Editor.
3 In the Inspector, right-click the Background/Foreground parameter, and then choose Resolve
Parameter from the Modifier contextual menu. Adding the modifier to the Background/
Foreground parameter automatically updates the slider if the transition is modified back on the
Edit/Cut page.
4 In the Node Editor, right-click the Dissolve node and choose Macro > Create Macro.
5 When creating a macro that’s to be used as a Fusion transition, it’s important that two inputs and
one output are selected in the Macro Editor. In this example, under the Dissolve heading, enable
the Output, Background and Foreground check boxes.
6 Give the transition a name, then save the macro from the top File menu.
8 Quit and reopen DaVinci Resolve to update the list of transitions in the Effects Library.
9 On the Edit page, open the Effects Library. Navigate to Video Transitions > Fusion Transitions, and
the custom Fusion transition will be listed.
Shake
The Shake modifier is used to randomize a Position or Value control to create semi-random numeric
inputs. The resulting shake can be entirely random. The motion can also be smoothed for a more
gentle, organic feel.
To add the Shake modifier, right-click a parameter and select Modify With > Shake from the
contextual menu. The Shake modifier uses the following controls to achieve its effect.
Inspector
Controls Tab
Random Seed
The Random Seed control contains the value used to seed the random number generator. Given the
same seed, a random number generator always produces the same results. Change the seed if the
results from the randomizer are not satisfying.
Smoothness
This control is used to smooth the overall randomness of the Shake. The higher the value, the
smoother the motion appears. A value of zero generates completely random results, with no
smoothing applied.
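The interplay of Random Seed and Smoothness can be pictured as a seeded generator followed by a moving average (a behavioral sketch only, not Fusion's generator):

```python
import random

def shake_values(seed, frames, smoothness=1):
    """Seeded pseudo-random stream with optional smoothing. The same
    seed always yields the same sequence; higher smoothness averages
    more neighboring samples, producing gentler motion.
    """
    rng = random.Random(seed)
    raw = [rng.random() for _ in range(frames + smoothness)]
    n = max(1, smoothness)
    return [sum(raw[i:i + n]) / n for i in range(frames)]
```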
EXAMPLE
1. Create a new comp, and then add and view a Text node.
3. In the viewer, right-click over the Center control of the text and choose Modify With >
Shake Position.
4. In the Inspector, select the Modifiers tab and set the smoothing to 5.0.
6. Go to frame 0 and in the Inspector click the Keyframe button to the right of both the
Minimum and the Maximum controls.
7. Go to frame 90 and adjust the Minimum to 0.45 and the Maximum to 0.55.
Track
Although there is a standard Tracker node, you can also use a Tracker modifier to add a tracker
directly to a parameter. To apply the Tracker modifier, in the viewer right-click the Center control of of
any transform, text, mask, or other positionable element. From the contextual menu, choose Object x
Center > Modify With > Tracker.
This adds a modifier in the Inspector with a set of controls almost identical to those found in the
Tracker node itself.
For an in-depth explanation of this node, see Chapter 57, "Tracker Nodes," in the Fusion
Reference Manual.
3. Add an Ellipse mask to the Glow in the shape of one of the eyes.
4. Right-click the center of that mask and select Modify With > Tracker > Position.
Since the track is on the mask, the tracker takes the glow as the image for tracking.
This could cause problems since the eye might be very obscured by the glow. A cleaner
source will be the Loader that feeds the glow.
5. Drag the Loader into the modifier Inspector’s Track Source field.
Vector Result
The Vector Result modifier is used to offset positional controls, such as crosshairs, by distance and
angle. These can be static or animated values.
It can be applied by right-clicking a control and selecting Modify With > Vector.
Inspector
Controls Tab
Origin
This control is used to represent the position from which the vector’s distance and angle values
originate.
Distance
This slider control is used to determine the distance of the vector from the origin.
Image Aspect
This slider control is used primarily to compensate for image aspect differences. A square image
of 500 x 500 would use an Image Aspect value of 1, while a rectangular image of 500 x 1000
would use an Image Aspect value of 2. The default for this value is taken from the current Frame
Format preferences using width/height. It may be necessary to modify this control to match the
current image.
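The modifier's output amounts to a polar-to-Cartesian conversion around the origin. A sketch; the direction of the aspect compensation is our assumption based on the width/height default noted above:

```python
import math

def vector_result(origin, distance, angle_deg, image_aspect=1.0):
    """Position offset from an origin by distance and angle.
    Dividing the Y component by the aspect keeps the motion circular
    on non-square frames; Fusion's exact math is not spelled out
    here, so treat this as an illustration.
    """
    ox, oy = origin
    a = math.radians(angle_deg)
    return (ox + distance * math.cos(a),
            oy + distance * math.sin(a) / image_aspect)
```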
EXAMPLE
1. Create a 100-frame comp.
2. Create a simple node tree consisting of a black background and a Text node foreground
connected to a Merge.
3. In the viewer, right-click the Center control of the Merge and choose Modify With >
Vector Result.
This adds a crosshair onscreen control for the Vector distance and angle. The onscreen
control represents the Distance and Angle controls displayed in the Modifiers tab.
4. In the Modifiers tab of the Inspector, drag the Distance control to distance the text
from the Vector origin.
5. Drag the Angle thumbwheel to rotate the text around the Vector origin.
This is different from changing a pivot point, since the text itself is not rotating.
These points are animatable and can be connected to other controls.
6. In the Inspector, right-click the Origin control and choose a path to add a motion path
modifier to the Origin control.
7. Verify that the current frame is set to frame 0 (zero) and use the Origin controls in the
Inspector or drag the Vector Origin crosshair to the bottom-left corner of the screen.
8. On the Vector Angle thumbwheel, click the Keyframe button to animate this control.
9. Go to frame 100 and click at the top-left corner of the screen to move the
Vector Origin crosshair.
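The keyframed rotation in the example above can be sketched numerically: interpolating the angle between its two keyframes swings the text around the origin without the text itself rotating. The keyframe values below are hypothetical, and the linear interpolation is a simplification of Fusion's spline-based animation:

```python
import math

def angle_at(frame, key0=(0, 0.0), key1=(100, 360.0)):
    """Linearly interpolate the keyframed Vector Angle at `frame`."""
    f0, a0 = key0
    f1, a1 = key1
    t = (frame - f0) / (f1 - f0)
    return a0 + t * (a1 - a0)

def position_at(frame, origin=(0.0, 0.0), distance=0.4):
    """Where the text sits: `distance` from `origin` at the animated angle."""
    a = math.radians(angle_at(frame))
    return (origin[0] + distance * math.cos(a),
            origin[1] + distance * math.sin(a))
```

At frame 0 the text sits at (0.4, 0.0); by frame 25 the angle has reached 90 degrees and the text has swung to roughly (0.0, 0.4), a quarter of the way around the origin.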
XY Path
To animate a coordinate control using an XY path, right-click the control and select Modify With >
XY Path from the contextual menu.
At first glance, XY paths work like Displacement paths. To describe the path, change frames and
position the control where it should be on that frame, and then change frames again and move the
control to its new position. Fusion automatically interpolates between the points. The difference is that
no keyframes are created on the onscreen path.
Look in the Spline Editor to find the X and Y channel splines. Changes to the control’s values are
keyframed on these splines. The advantage to the XY path is that it becomes very easy to work with
motion along an individual axis.
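Conceptually, an XY path stores one keyframe spline per axis, and the control's position at any frame is simply each spline evaluated independently, which is why per-axis edits are so easy. The sketch below uses linear interpolation and made-up keyframe values for illustration; Fusion's splines default to smooth interpolation:

```python
def eval_spline(keys, frame):
    """Evaluate one channel spline: `keys` is a sorted list of (frame, value).

    Linear interpolation between keyframes; values hold flat outside
    the keyframed range.
    """
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# Independent X and Y channel splines, as seen in the Spline Editor.
x_spline = [(0, 0.1), (50, 0.9)]
y_spline = [(0, 0.1), (25, 0.5), (50, 0.1)]

def xy_path(frame):
    return (eval_spline(x_spline, frame), eval_spline(y_spline, frame))
```

Editing only `y_spline` changes the vertical motion while leaving the horizontal drift untouched, which is the practical advantage the text describes.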
Inspector
X Y Z Values
These reflect the position of the animated control using X, Y, and Z values.
Center
The actual center of the path. This can be modified and animated as well to move the entire
path around.
Size
The size of the path, allowing the scale of the animation to be modified later.
Angle
The angle of the path, allowing the orientation of the animation to be modified later.
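The Center, Size, and Angle controls transform the path as a whole rather than its individual points: a point on the authored path is rotated by Angle, scaled by Size, and then placed at Center. A sketch of that composition follows; the order of operations is an assumption for illustration:

```python
import math

def transform_path_point(p, center=(0.5, 0.5), size=1.0, angle_deg=0.0):
    """Apply the path's Angle, Size, and Center controls to one path point.

    `p` is expressed relative to the path's own center. Assumed order:
    rotate, then scale, then translate.
    """
    a = math.radians(angle_deg)
    x, y = p
    rx = x * math.cos(a) - y * math.sin(a)   # rotate around the path center
    ry = x * math.sin(a) + y * math.cos(a)
    return (center[0] + size * rx, center[1] + size * ry)
```

For example, rotating a point at (0.2, 0.0) by 90 degrees and doubling the size moves it to roughly (0.5, 0.9) for a path centered at (0.5, 0.5), without touching the path's keyframes.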
Menu Descriptions
For ease of use navigating this manual, each menu item is listed here, and by clicking on the name of the
menu function, you will be taken to the appropriate part of the manual that describes that function.
Fusion
Show Toolbar – Page 33
Toggles the Fusion toolbar on or off.
Reset Composition
Resets a Fusion composition to its initial state.
Import
Specific file format import for Fusion.
Regulatory Notices
Disposal of Waste of Electrical and Electronic Equipment Within the European Union.
The symbol on the product indicates that this equipment must not be disposed of with other waste
materials. In order to dispose of your waste equipment, it must be handed over to a designated
collection point for recycling. The separate collection and recycling of your waste equipment at the
time of disposal will help conserve natural resources and ensure that it is recycled in a manner that
protects human health and the environment. For more information about where you can drop off
your waste equipment for recycling, please contact your local city recycling office or the dealer from
whom you purchased the product.
This equipment has been tested and found to comply with the limits for a Class A digital device,
pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection
against harmful interference when the equipment is operated in a commercial environment.
This equipment generates, uses, and can radiate radio frequency energy and, if not installed and
used in accordance with the instructions, may cause harmful interference to radio communications.
Operation of this product in a residential area is likely to cause harmful interference, in which case the
user will be required to correct the interference at personal expense.
Operation is subject to the following two conditions: (1) this device may not cause harmful
interference, and (2) this device must accept any interference received, including interference
that may cause undesired operation.
Bluetooth®
The DaVinci Resolve Speed Editor is a Bluetooth wireless technology enabled product.
Contains transmitter module FCC ID: QOQBGM113
This equipment complies with FCC radiation exposure limits set forth for an uncontrolled environment.
Contains transmitter module IC: 5123A-BGM113
This device complies with Industry Canada’s license-exempt RSS standards and exception from
routine SAR evaluation limits given in RSS-102 Issue 5.
Certified for Japan, certificate number: 209-J00204. This equipment contains specified radio equipment
that has been certified to the technical regulation conformity certification under the radio law.
This module has certification in South Korea, KC certification number: MSIP‑CRM-BGT-BGM113
Technical Specification for Low Power Radio Frequency Equipment 3.8.2 Warnings
Without permission granted by the NCC, no company, enterprise, or user is allowed to change the
frequency, enhance the transmitting power, or alter the original characteristics or performance of
approved low power radio-frequency devices. Low power radio-frequency devices shall not
influence aircraft security or interfere with legal communications; if interference is found, the user
shall cease operating immediately until the interference is eliminated. The said legal communications
means radio communications operated in compliance with the Telecommunications Management Act.
Low power radio-frequency devices must accept interference from legal communications or ISM
radio wave radiated devices.
The DaVinci Resolve Speed Editor is a Class A digital device. Operation of this product in a residential area
may cause radio frequency disturbance, in which case the user will be required to take appropriate measures.
NCC ID number: CCAO21LP1880T3
Certified for Mexico (NOM), for the Bluetooth module manufactured by Silicon Labs, model
number BGM113A. Includes transmitter module certified in Mexico IFT: RCBSIBG20-2560
Hereby, Blackmagic Design declares that the product (DaVinci Resolve Speed Editor), which uses
wideband transmission systems in the 2.4 GHz ISM band, is in compliance with Directive 2014/53/EU.
The full text of the EU declaration of conformity is available from [email protected]
Blackmagic Design recommends appointing a qualified and licenced electrician to install, test
and commission this wiring system.
Blackmagic Design does not accept responsibility for the safety, reliability, damage or personal
injury caused to, or by, any third-party equipment fitted into the console.
For protection against electric shock, the equipment must be connected to a mains socket outlet with
a protective earth connection. In case of doubt contact a qualified electrician.
To reduce the risk of electric shock, do not expose this equipment to dripping or splashing.
Product is suitable for use in tropical locations with an ambient temperature of up to 40°C.
Ensure that adequate ventilation is provided around the product and that it is not restricted.
When rack mounting, ensure that the ventilation is not restricted by adjacent equipment.
No operator serviceable parts inside product. Refer servicing to your local Blackmagic Design
service center.
The DaVinci Resolve Speed Editor contains a single cell Lithium battery. Keep lithium batteries away
from all sources of heat, do not use the product in temperatures greater than 40°C.
Use only at altitudes not more than 2000m above sea level.
In order to obtain service under this warranty, you the Customer, must notify Blackmagic Design of the
defect before the expiration of the warranty period and make suitable arrangements for the performance
of service. The Customer shall be responsible for packaging and shipping the defective product to a
designated service center nominated by Blackmagic Design, with shipping charges prepaid. The Customer
shall be responsible for paying all shipping charges, insurance, duties, taxes, and any other charges for
products returned to us for any reason.
This warranty shall not apply to any defect, failure or damage caused by improper use or improper or
inadequate maintenance and care. Blackmagic Design shall not be obligated to furnish service under
this warranty: a) to repair damage resulting from attempts by personnel other than Blackmagic Design
representatives to install, repair or service the product, b) to repair damage resulting from improper
use or connection to incompatible equipment, c) to repair any damage or malfunction caused by the
use of non Blackmagic Design parts or supplies, or d) to service a product that has been modified or
integrated with other products when the effect of such a modification or integration increases the time
or difficulty of servicing the product.
THIS WARRANTY IS GIVEN BY BLACKMAGIC DESIGN IN LIEU OF ANY OTHER WARRANTIES, EXPRESS
OR IMPLIED. BLACKMAGIC DESIGN AND ITS VENDORS DISCLAIM ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. BLACKMAGIC DESIGN’S RESPONSIBILITY
TO REPAIR OR REPLACE DEFECTIVE PRODUCTS IS THE WHOLE AND EXCLUSIVE REMEDY PROVIDED TO
THE CUSTOMER FOR ANY INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES IRRESPECTIVE
OF WHETHER BLACKMAGIC DESIGN OR THE VENDOR HAS ADVANCE NOTICE OF THE POSSIBILITY
OF SUCH DAMAGES. BLACKMAGIC DESIGN IS NOT LIABLE FOR ANY ILLEGAL USE OF EQUIPMENT BY
CUSTOMER. BLACKMAGIC IS NOT LIABLE FOR ANY DAMAGES RESULTING FROM USE OF THIS PRODUCT.
USER OPERATES THIS PRODUCT AT OWN RISK.
© Copyright 2023 Blackmagic Design. All rights reserved. 'Blackmagic Design', 'DaVinci', 'Resolve', 'DeckLink', 'HDLink',
'Videohub', and 'Leading the creative video revolution' are registered trademarks in the US and other countries. All other company
and product names may be trademarks of their respective companies with which they are associated. Thunderbolt and the
Thunderbolt logo are trademarks of Intel Corporation in the U.S. and/or other countries. Dolby, Dolby Vision, and the double-D
symbol are registered trademarks of Dolby Laboratories Licensing Corporation.