Nuke 101
Professional Compositing and Visual Effects
Second Edition
Ron Ganbar
NUKE 101
Ron Ganbar
Peachpit Press
Find us on the Web at www.peachpit.com
To report errors, please send a note to [email protected]
Peachpit Press is a division of Pearson Education
Notice of Rights
Footage from This Is Christmas and Grade Zero directed by Alex Norris, www.alexnorris.co.uk.
Notice of Liability
Trademarks
Many of the designations used by manufacturers and sellers to distinguish
their products are claimed as trademarks. Where those designations
appear in this book, and Peachpit was aware of a trademark claim, the
designations appear as requested by the owner of the trademark. All other
product names and services identified throughout this book are used in
editorial fashion only and for the benefit of such companies with no
intention of infringement of the trademark. No such use, or the use of any
trade name, is intended to convey endorsement or other affiliation with
this book.
ISBN-13: 978-0-321-98412-8
ISBN-10: 0-321-98412-9
987654321
Contents
Introduction
Nodes
The Viewer
Merging Images
Changing Properties
Rendering
Manipulating Passes
Understanding Nuke’s Approach to Color
CHAPTER 5: 2D Tracking
Tracker Node Basics
Stabilizing a Shot
CHAPTER 6: RotoPaint
Introducing RotoPaint’s Interface
RotoPaint in Practice
CHAPTER 7: Keying
Introducing Nuke’s Keying Nodes
HueKeyer
Keylight
3D Tracking in Nuke
Wrapping in Groups
Index
Introduction
The Foundry’s Nuke is fast becoming the industry leader in compositing
software for film and TV. Virtually all the leading visual effects studios—
ILM, Digital Domain, Weta Digital, MPC, Framestore, The Mill, and Sony
Pictures Imageworks—now use Nuke as their main compositing tool. This
is not surprising, as Nuke offers a flexible node-based approach to
compositing, has a native multichannel workflow, and boasts a powerful
integrated 3D compositing environment that delivers on the artist’s
needs.
basics, which are important for understanding where things are and how
In the book you will find explanatory text and numbered steps. Ideally,
you should complete each numbered step exactly as it is written—
without doing anything else (such as adding your own steps). Following
the steps exactly as written will give you a smooth experience. Not going
through the steps as they are written might result in the next step not
working properly, and could well lead to a frustrating experience. Each
series of steps is also designed to introduce you to new concepts and
techniques. As you perform the steps, pay attention to why you are
clicking where you are clicking and doing what you are doing, as that will
truly make your experience a worthwhile one.
You can use this book on your own through self-study or in a classroom.
Using the book for self-study: If you’re reading this book at your
own pace, follow the instructions in the previous paragraph for your first
read-through of the chapters. However, as you are not limited by any time
frame, I recommend going through chapters a second time, and trying to
do as much of the work as possible without reading the steps. Doing so
can help you better understand the concepts and tools being taught. Also,
the book leaves a lot of room for further experimentation. Feel free to use
the tools you’re learning to take your compositions further the second
time you run through a chapter.
Using the book in a classroom setting: You can use this book to
teach Nuke in a classroom. As a course, the material is designed to run for
roughly 40 hours, or five eight-hour days. I suggest that the trainer run
through a chapter with the students listening and writing down notes; the
trainer should explain the steps to the class as they are shown on screen
while taking questions and expanding on the text where necessary. Once a
chapter has been presented from start to finish, the instructor should give
students time to run through the same chapter on their own in the
classroom in front of a computer, using the book to read the instructions
and follow the steps. This second pass will reiterate everything the trainer
has explained and, through actual experience, show the students how to
use the software while the trainer is still there to answer questions and
help when things go wrong.
INSTALLING NUKE
While this book was originally written for Nuke version 8.0v1, The
Foundry updates Nuke on a regular basis and the lessons can be followed
using more recent updates. Small interface and behavior updates might
slightly alter the Nuke interface from version to version, especially for so-
called “point” updates (for example, if Nuke version 8.1 was released). I
recommend using this book with Nuke version 8.0v1 if you haven’t
already downloaded the most current version and you want the exact
results that are shown in the book.
You can download Nuke in a variety of versions from The Foundry's web site, www.thefoundry.co.uk, as discussed in the next sections.
1. Nuke PLE (Personal Learning Edition): This license (or lack of) is
free—as in, you pay nothing. You can install Nuke on your computer and
not purchase a license. With the PLE you can use Nuke as much as you
want, although certain limitations apply. These include the placement of a
watermark on the Viewer and on renders, and the disabling of WriteGeo,
Primatte, FrameCycler, and Monitor Output. Keep in mind that Nuke
project files saved with the PLE version can be opened only with the PLE
version.
2. Nuke: This is regular Nuke—the flavor this book covers. Nuke requires a trial license or regular paid license, which should cost about $4,200.
3. NukeX: This license includes all the regular Nuke features with a few
additional high-end tools. These tools include, among other things, the
Camera Tracker, Particles System, PointCloudGenerator, LensDistortion,
DepthGenerator, FurnaceCore plug-ins, and PRman-Render (allowing for
RenderMan integration). NukeX costs around $8,150. Chapter 10 covers
the Camera Tracker and shows how to use it under the NukeX license;
however, the exercises in the chapter can also be done without a NukeX
license.
Downloading Nuke
To download Nuke, follow these steps.
1. Go to www.thefoundry.co.uk/products/nuke-product-family/nuke.
4. Select the latest copy of Nuke for your operating system (Mac,
Windows, or Linux). You can also download older versions of Nuke if
necessary.
1. Go to www.thefoundry.co.uk/products/nuke-product-family/nuke/trial.
2. In the Free Trial page, fill in the form and click Continue.
The System ID, which is the first entry to fill in on this next page, is the
unique code of your computer—the free license will be locked to that
computer. The section below the entry field called Finding The System ID
explains where to find this number on your computer.
3. After you complete the form and click Continue, follow the rest of the
instructions on The Foundry’s web site for how to install the license on
your operating system.
To access the download and install the files on your computer, follow
these steps:
1. Register your book at www.peachpit.com/nuke1012E. If you don't already have a Peachpit account, you will be prompted to create one.
2. Once you are registered at the Peachpit web site, click the Account link,
select the Registered Products tab, and click the Access Bonus Content
link that appears next to the Nuke 101 book image.
A new page opens with the download files listed as individual zip files
ordered by chapter number.
3. Download each zip file and copy them to your hard drive.
4. Unzip the lesson files for each chapter. Each chapter has its own
directory. Some chapters use files from other chapters, so you need to
unzip all the files.
6. Drag the chapter folders you unzipped into the NukeChapter directory
(after you have done so, you can delete the zip files from your system).
ACKNOWLEDGMENTS
I’ve been teaching compositing since 2001. When Nuke started becoming
the tool of choice for a lot of the studios around me, I decided to write a
course that focused on it. I started writing the course in the spring of
2009 with help from The Foundry, whose staff was very kind and
forthcoming. I would specifically like to thank Vikki Hamilton, Ben
Minall, Lucy Cooper, John Wadelton, and Matt Plec.
About a year after that, I approached Peachpit Press with the idea of
turning the course into a book. Karyn Johnson, the book’s senior editor,
took on the project and after a long digestion period I sat down and
started adapting the course into a book. Karyn made sure I had the best
support I could possibly have, and with the help of the wonderful team at
Peachpit, including Corbin Collins and Kelly Kordes Anton, I managed to
complete the book to the high standard that Peachpit expects of their
writers. Thanks also go out to the kind friends and colleagues who gave
me materials to use for the book: Alex Orrelle, Alex Norris, Hector
Berrebi, Dror Revach, Assaf Evron, Menashe Morobuse, and Michal
Boico.
For the second edition, started at the end of 2013, Karyn again stepped up
and pushed me to make the book even more than it was before—striving
for perfection. Rebecca Rider taught me a lot about how badly I write,
fixing my English as much as I would let her. I also got help from a lot of
friends and colleagues again (to add to the previous list): Oded Binnun,
Dor Shamir, Or Kantor, Jonathan Carmona, Itay Greenberg, Paul Wolf,
and Shani Hermoni. Wow. I owe a lot to a lot of people.
Alex Norris gave permission to use footage from two of his short film
productions: This Is Christmas and Grade Zero. This is used in Chapters
1, 6, and 8.
The footage in Chapter 3 is taken from a personal short film called Goose by Dor Shamir (dorshamir.blogspot.com), Shai Halfon, and Oryan Medina (oryanmedina.com).
Special thanks to Geronimo Post&Design. Jonathan Carmona did the 3D
rendering specifically for this book.
The footage in Chapter 7 is taken from a short film called Aya directed by
Mihal Brezis and Oded Binnun (who was also DP), starring Sarah Adler
and Ulrich Thomsen, with thanks to production companies Divine
Productions and Cassis Films.
The bullet in Chapter 8 was rendered especially for this book by Dror
Revach.
The camera and geometry for Chapter 11 was solved by Michal Boico.
And finally: The second edition was a breeze to write compared to the first
one. Maybe that was because I knew what I was getting into—but more
important, my family knew what I was getting them into, and so things
were a lot more relaxed. So thank you to my wife, Maya, and my two sons,
Jonathan and Lenny, who had to bear with my long days and long nights
of writing, gave me the quiet and solitude I needed, and believed (and
prayed) that I would finish this second edition quickly so life could get
back to normal. And so I have.
CHAPTER 1: Getting Started with Nuke
In this lesson, I explain the nuts and bolts of the Nuke interface so you
will feel at ease clicking where you need to. It may seem boring, but it’s a
must. You need to know the components of the Nuke interface because
that is the foundation of all the cool stuff you’ll do later.
The default layout is split into four key areas, called panes, populated by
six panels. Yes, that's right, panes are populated by panels. Confusing terminology, I agree. The first pane, the strip at the very left, is populated
by the Nodes Toolbar panel. The black pane that takes up the top half of
the screen is populated by the Viewer. Beneath that, there’s the pane
populated by the Node Graph, which is also called the DAG (Directed
Acyclic Graph), the Curve Editor, and the Dope Sheet. The large empty
pane on the right is populated by the Properties Bin.
At the top left of every pane there’s a tab with the name of the panel on it
(except for the Nodes Toolbar). Remember, the pane containing the Node
Graph panel also contains the Curve Editor panel and the Dope Sheet
panel. You can click their respective tabs to switch between the three.
You should become familiar with the Content menu. It enables you to split
the current pane either vertically or horizontally, creating another pane in
the process. It also lets you detach the pane or tab from the rest of the
interface, which allows it to float above the interface (I cover several uses
for this technique later on). You can also use the Content menu to
populate the associated pane with any panel, whether it is a Curve Editor,
a Node Graph, a Script Editor, and so on.
When you hover your mouse pointer between the Node Graph and the
Viewer, the cursor changes to show that you can move the divide between
the two panes to make the Viewer bigger or the Node Graph bigger. You
can drag any separating line to change the size of the panes.
Hover your mouse pointer in the Node Graph and press the spacebar on your keyboard to turn the whole window into the Node Graph. Press the spacebar again to get the rest of the interface back. You can do this with any pane; simply
hover your mouse pointer in that pane. This procedure is very useful if
you want to look at only the Viewer.
Nodes Toolbar contains all the different nodes you can use to drive
Nuke. These are split into several sections or toolboxes represented by
little icons.
Pixel Analyzer helps you pick out and analyze colors from the Viewer.
Script Editor is a text window where you can write Python scripts to
automate various features of Nuke.
New Viewer opens a new Viewer where you can view, compare, and
analyze your images.
Using the Content menu, you can change the interface to fit the specific
needs and preferences of different users.
Using the Content menu, you can customize the user interface. You can
then use the Layout menu to save and retrieve the layout configuration.
Let’s practice this in order to place the Progress Bars panel at the bottom
right of the interface.
1. Launch Nuke.
2. Click the Content menu next to the Properties tab near the top right of
the screen (top of the Properties panel).
You now have another pane, which holds no panel, at the bottom right of
the interface (FIGURE 1.5). Let’s populate it with the Progress Bars
panel (FIGURE 1.6).
FIGURE 1.5 The newly created pane holds no panel yet.
4. Click the Content menu in the newly created pane and choose Progress
Bar.
The Progress Bars panel has been created and is populating the new pane
(Figure 1.6). You don’t need too much space for this panel, so you can
move the horizontal separating line above it down to give more space to
the Properties Bin.
5. Click the line separating the Properties Bin and the Progress Bars panel
and drag it down.
I like having the Progress Bars panel docked in a pane at the bottom right
of the interface. It means I always know where to look if I want to see a
progress report. And because it is docked, the Progress Bars panel doesn’t
jump up and interfere when I don’t need it. If you like this interface
configuration, you can save it. You can use the Layout menu bar item to
do that.
The layout has now been saved. Let's load the default layout and then reload this new one to make sure it has been saved.
7. From the Layout menu, choose Restore Layout 1.
You can now see the default window layout. Now let’s see if you can reload
the window layout you just created.
8. From the Layout menu, choose Restore Layout 2.
Presto! You now have full control over Nuke’s window layout.
Hot keys
Like most other programs Nuke uses keyboard shortcuts or hot keys to
speed up your work. Instead of clicking somewhere in the interface, you
can press a key or combination of keys on your keyboard. You are
probably familiar with basic hot keys such as Ctrl/Cmd-S for saving.
NODES
Nodes are the building blocks of the process tree. Everything that happens
to an image in Nuke happens by using nodes.
Creating a node
There are several ways to create nodes. Probably the best way is to choose
a node from the Nodes Toolbar. The Nodes Toolbar is split into several
toolboxes, as shown in TABLE 1.1.
TABLE 1.1 The Various Toolboxes in the Nodes Toolbar
As mentioned, there are other ways to create nodes. You can choose a
node from the Node Graph’s contextual menu, which mirrors the Nodes
Toolbar (FIGURE 1.10).
The easiest way to create a node, if you remember its name and you’re
quick with your fingers, is to press the Tab key while hovering the mouse
pointer over the Node Graph. This opens a dialog box in which you can
type the name of the node. As you type, a “type-ahead” list appears with
matching node names, beginning with the letters you are typing. You can
then use the mouse, or the up and down arrows and the Enter key, to
create that node (FIGURE 1.11).
FIGURE 1.11 Pressing the Tab key in the Node Graph allows
you to create and name a new node.
Let’s practice creating a node by importing a bit of footage into Nuke. This
will also give you something to work with. Do either one of the following:
Click the Image toolbox in the Nodes Toolbar and then click the Read
node.
Hover the mouse pointer over the Node Graph, or DAG, and press the
R key (FIGURE 1.12).
The Read node is a bit special: When creating it, you first get the File
Browser instead of just a node.
NOTE
Nuke doesn’t use the basic operating system’s file browser, as it requires extra features such as video previews and it needs to be consistent across all operating systems.
1. Browse to the Nuke101 folder that you copied onto your hard drive and
click the Chapter01 folder.
2. Click the little arrow at the top-right corner of the browser to bring up
the File Browser’s viewer.
4. Scrub along the clip by clicking and dragging the timeline at the bottom
of the image.
This Viewer makes it easy to see what you are importing before you
import it in case you are not sure what, for example, Untitled08 copy 3.tif
is.
There is now a node in your Node Graph called Read1 (FIGURE 1.15)!
Hurrah!
FIGURE 1.15 The new Read node is now in your DAG near
your Viewer1 node.
THE VIEWER
Another important part of the Nuke user interface is the Viewer. Without
it, you will be lost as far as compositing goes. You use the Viewer to look
at your work as you go, to receive feedback when you are editing nodes, to compare different images, and to manipulate properties on the image.
You will explore each of these as you work through this book.
Notice that aside from the Read1 node you created, there’s also another
node in your Node Graph called Viewer1. Also notice that your Viewer is
called Viewer1. Every Viewer node represents an open Viewer panel. To
view an image in Nuke, simply connect the Viewer node’s input to the
output of the node you want to view. It will then appear in the Viewer
itself.
You can click the Viewer node’s little input arrow and drag it from the
input of the Viewer node to the node you want to view.
You can do the reverse and drag the output of the node you want to
view to the Viewer node.
Either method connects the node to the Viewer node’s first input, called input 1.
The connecting line that appears between the two nodes is called a pipe. It
simply represents the connections between nodes in the Node Graph
(FIGURE 1.17).
Another way to connect a node to the Viewer node’s input 1 is to select the
node you want to view and press the number 1 on the keyboard.
2. Keep hovering over the Node Graph and press 1 on the keyboard.
Viewer1 now shows the image you brought into Nuke—an image of a man
walking under an arch. FIGURE 1.18 illustrates the Viewer’s anatomy.
The buttons are explained in more detail in further chapters.
TABLE 1.2 How to Navigate the Viewer
NOTE
All these hot keys will also let you navigate the DAG and
Curve Editor.
1. While hovering the mouse pointer over the Viewer, press the R key to
view the red channel.
The channel display box on the far left is now labeled R. You can also
change which channel you are viewing by clicking in the channel display
drop-down menu itself (FIGURE 1.19).
FIGURE 1.19 The channel display box shows R for the red
channel.
You’re now back to viewing all three color channels, and the channel
display box changes back to RGB (FIGURE 1.20).
3. Hover your mouse pointer over the Node Graph and press R on the
keyboard. The File Browser opens again.
Notice how pressing R while hovering the mouse pointer in the Viewer
and in the Node Graph produces different results.
You now have another Read node in your interface. If one Read node is
overlapping the other, you can click and drag one of them to make space.
Let’s view the new Read node as well.
5. Select the new Read node by clicking it once and pressing 1 on the
keyboard.
You now see the new image, which is a graded (color-corrected) version
of the previous image. Other than that, the two images are the same.
Viewer inputs
Any Viewer in Nuke has up to 10 different inputs and it’s very easy to
connect nodes to these inputs in order to view them. Simply by selecting a
node and pressing a number on the keyboard (like you pressed 1 before),
you’re connecting the node to that number input of the Viewer. This
results in several different parts of the process tree being connected to the
same Viewer. You are then able to change what you’re looking at in the
Viewer with ease.
Let’s connect the first Read node to the first Viewer input and the second
Read node to the second Viewer input.
1. Select the first Read node (Read1) and press 1 on the main part of the
keyboard.
2. Select the second Read node (Read2) and press 2 on the keyboard
(FIGURE 1.21).
To view the different inputs of a Viewer, hover the mouse pointer over the
Viewer itself and press the corresponding numbers on the keyboard.
3. Hover your mouse pointer over the Viewer and press 1 and 2 several
times. See how the images change from one input to another.
Using this method you can keep monitoring different stages of your
composite as you are working. This is also a good way to compare two
images.
Playing a clip
Playing a clip is kind of mandatory in a compositing package. Let’s see
this piece of moving image playing in realtime.
This aside, you can strive to play clips in realtime. You will need to define
what realtime is for your footage. This footage is 25fps. Let’s set the
Viewer to 25fps. You do this using the fps Input field above the Timebar.
When playing, this Input field shows how fast playback really is instead of
how fast it is supposed to be playing.
Now when you click Play, Nuke loads each frame from the disk, applies
any calculations to it, caches the result in RAM, and presents it in the
Viewer. It will then move on to the next frame. Once Nuke caches all the
frames, it starts to play the cached frames in the Viewer instead of going back to the originals. This allows Nuke better speed in playback. Nuke
now attempts to play the given frame rate. The fps field displays the actual
frame rate that is playing, whether it’s realtime or not.
Frames that have been cached in RAM are represented as a green line
under the playhead in the Timebar as shown in Figure 1.23. Once the
whole Timebar is green, you should be able to get smooth playback.
TIP
The hot keys for playing in the Viewer are easy to use and
good to remember. They are exactly the same as in Avid
Media Composer and Final Cut Pro: L plays forward, K
pauses, and J plays backward. Pressing L and J one after the
other enables you to easily rock ’n’ roll your shot.
2. Click the Play Forward button on the Viewer controls. Let it loop a
couple of times to cache, and then see how fast it is actually playing.
3. Click Stop.
Chances are it played pretty well and close to, if not exactly, 25fps. Let’s
give it something more difficult to attempt.
5. Click Play and again watch how fast Nuke is playing the footage.
Nuke probably isn’t reaching 1000fps, but it should be telling you what it
is reaching. How thoughtful. My system reaches around 150fps at this
resolution.
At the end of each chapter, quit Nuke to start the next chapter with a fresh
project. You can play around more if you like, but when you’re finished...
6. Quit Nuke.
That’s it for now. We will do actual compositing in the next lesson. Thanks
for participating. Go outside and have cookies.
CHAPTER 2: Touring the Interface with a Basic Composite
Usually, you start a composite with one or more images you have brought
in from disk. You manipulate each separately, connect them together to
combine them, and finally render the desired result back to disk. By
taking these steps, you build a series of processors, or nodes, which when
viewed together, look like a tree, which is why it’s called a process tree.
Nuke uses a second analogy to describe this process: that of flowing water.
A tree can also be called a flow. As the image passes from one node to
another, it flows. This analogy uses terms such as downstream (for nodes
after the current node) and upstream (for nodes before the current node). Nuke uses these two analogies interchangeably.
FIGURE 2.1 This is what a basic Nuke tree looks like.
In this figure, you can see a relatively basic tree that is made up of two
images: the smiling doll on the top left and the orange image on the top
right. The images pass through several nodes—a resolution-changing
node called Reformat1, a premultiplication node called Premult1, and a
transformation node called Transform1—until they merge together at the
bottom of the tree with another node called Merge1 to form a composite.
The lines connecting the nodes to each other are called pipes.
Trees usually flow in this way, from top to bottom, but that is not strictly
the case in Nuke. Trees can flow in any direction, though the tendency is
still to flow down.
The black boxes are all images being processed and connected together
with a large variety of nodes. The flow of this tree is down and to the left.
At the very end of this tree is a yellow box, which is where this tree ends.
This makes for a pretty daunting image. However, when you are the one
building this flow of nodes, you know exactly what each part of the tree is
doing.
FIGURE 2.3 The node’s anatomy
The tree flows from the output of one node to the input of the next. Not all
nodes have all the elements shown in Figure 2.3. A Read node, which you
will use again in a moment, has an output only because it is the beginning
of the tree and has no need for an input. Some nodes don’t have a Mask
input (explained in Chapter 3), and some nodes have more than one
input.
In Chapter 1 you learned how to read images from disk using a Read node,
so let’s do that again and learn a few more options along the way.
1. Launch Nuke.
2. While hovering the mouse pointer over the DAG (Directed Acyclic
Graph, also called the Node Graph), press the R key.
• Select one file name, then click the Next button at the bottom right,
select another file, and click the Next or Open button again. This is the
way you’ll multiple-select in this exercise.
4. Select the file called maya.png and click the Next button, then select
background.####.png and click the Open button or press the
Enter/Return key on your keyboard.
You now have two new Read nodes in the DAG (FIGURE 2.4). For some
reason, Nuke reverses the order of the Read nodes. The second image you
brought in is labeled Read1 and should also be labeled
background.0001.png. The first, Read2, should also be labeled maya.png.
This second line of the label is generated automatically by Nuke and
shows the current image being loaded by the Read node.
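The #### in background.####.png is frame-number padding: each # stands for one digit, zero-padded. A quick sketch of how such a pattern maps to filenames (plain Python; the helper name is my own, not a Nuke function):

```python
import re

def expand_padding(pattern, frame):
    """Replace a run of '#' characters with the zero-padded frame number.
    Illustrative helper only -- not part of Nuke's API."""
    return re.sub(r"#+", lambda m: str(frame).zfill(len(m.group())), pattern)

print(expand_padding("background.####.png", 1))    # background.0001.png
print(expand_padding("background.####.png", 125))  # background.0125.png
```

Nuke also accepts printf-style padding in file names, so background.%04d.png refers to the same sequence as background.####.png.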
FIGURE 2.4 After bringing in two files, your DAG should
look like this.
NOTE
If the file called maya.png is Read1, not Read2, you did not
complete the steps as presented. This is not necessarily a
problem, but you’ll have to remember that my Read2 is your
Read1.
5. Click Read2 (maya.png), hover the mouse pointer over the DAG, and
press 1 on the keyboard.
This image is part of a file sequence. It’s a rather dark image of a painter’s
toolbox. I shot this image in HD and, indeed, if you look at the same
corners of the image in the Viewer, you can see that the resolution for this
sequence is 1920×1080. Since this is a defined format (more on this in
Chapter 8), the bottom-right corner displays the name of the format
rather than the resolution. In this image, the bottom-right corner shows
HD.
Your goal in this chapter is to place the doll image inside the artist’s
toolbox—and for it to look believable. Let’s start by placing the doll image
over the background image.
MERGING IMAGES
The definition of compositing is combining two or more images into a
seamless, single image. You definitely need to learn how to do this in
Nuke. In layer-based systems, such as After Effects and Flame, you simply
place one layer on top of another to achieve a composite. In Nuke, you
combine two images by using several different nodes—chief among which
is the Merge node.
TIP
You can also use the hot key M to create a Merge node.
1. Select Read2 (maya.png) and choose Merge from the Merge toolbox.
The Merge node has a slightly different anatomy than a standard node. It
has two inputs rather than just the one (FIGURE 2.6).
FIGURE 2.6 The Merge node you created has a free, still
unconnected, input.
NOTE
The Merge node can connect two or more images together using various
layering operations such as Overlay, Screen, and Multiply. However, the
default operation called Over simply places a foreground image with an
alpha channel over a background image. The A input, already connected,
is the foreground input. The unconnected B input is the background
input. Let’s connect the B input.
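The math behind Over is simple and worth internalizing: for a premultiplied foreground A over a background B, the result is A + B × (1 − a), where a is A's alpha. Here is a minimal per-pixel sketch in Python (illustrative only, not Nuke's implementation):

```python
# A minimal sketch of the Over operation for premultiplied pixels:
# result = A + B * (1 - A.alpha). Not Nuke's actual code.

def over(fg, bg):
    """Composite one premultiplied RGBA pixel (fg) over another (bg).

    Each pixel is a tuple (r, g, b, a) with values in the 0-1 range.
    """
    a = fg[3]
    return tuple(f + b * (1.0 - a) for f, b in zip(fg, bg))

# A fully opaque foreground pixel completely covers the background:
print(over((0.5, 0.2, 0.1, 1.0), (0.9, 0.9, 0.9, 1.0)))  # (0.5, 0.2, 0.1, 1.0)
```

Where the foreground alpha is 0, the formula reduces to B alone, which is why a straight (non-premultiplied) foreground contaminates the result: its RGB is not black where its alpha is black.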
2. Click and drag Merge1’s B input toward Read1 and release it over Read1
(FIGURE 2.7).
FIGURE 2.7 Both inputs are now connected, creating a
composite.
3. Select Merge1 and press 1 on the keyboard to view the result in the
Viewer (FIGURE 2.8).
The image, however, looks wrong—it is washed out in an odd light purple
color.
5. While hovering your mouse pointer over the Viewer, press the A key to
view the alpha channel (FIGURE 2.9).
FIGURE 2.9 The doll’s alpha channel
You can clearly see an alpha image here. It represents the area where the
doll is. The black area represents parts of the doll image that should be
discarded when compositing. So why aren’t they being discarded?
6. Press the A key again to switch back to viewing the RGB channels.
Notice that the black areas in the alpha channel are a light purple color,
the same as the discoloration in your composite. Maybe this is the source
of your problems?
If black areas in the alpha channel are not black in the RGB channels,
the image is straight.
If you are creating the alpha channel (using a key, a shape-creation
node, or another method), you should check whether your operation
created a premultiplied image or not. As you learn the alpha-channel
creation tools, I will cover premultiplication in more detail.
Another reason you need to know the premultiplication state of your
image is because of color correction. Why? Because you can’t color
correct premultiplied images. You are probably not aware of this because
other software packages hide it from you. In Nuke, however,
premultiplication is something you need to take care of on your own. To
get your image to a state where you can color correct it, you will need to
unpremultiply it. You do this by dividing the premultiplied RGB channels
with the alpha, thus reversing the multiplication. This produces a straight
image that you can then color correct; after you’ve done so, you can then
reapply the multiplication. You will learn how to do this later in the
chapter.
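The round trip described above—divide by alpha, color correct, multiply by alpha again—can be sketched per pixel in plain Python. These are hypothetical helper functions for illustration; in Nuke the Unpremult and Premult nodes do this for you:

```python
# A sketch of the math behind unpremultiplying and premultiplying.
# Unpremultiplying divides RGB by alpha so the image can be color
# corrected; premultiplying reverses that. Not Nuke's own API.

def unpremult(pixel):
    r, g, b, a = pixel
    if a == 0.0:                 # avoid dividing by zero where alpha is black
        return pixel
    return (r / a, g / a, b / a, a)

def premult(pixel):
    r, g, b, a = pixel
    return (r * a, g * a, b * a, a)

# Unpremult, color correct (here: double the gain), then premult again:
straight = unpremult((0.2, 0.1, 0.05, 0.5))
graded = tuple(c * 2.0 for c in straight[:3]) + (straight[3],)
print(premult(graded))  # (0.4, 0.2, 0.1, 0.5)
```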
Make sure you supply the Merge node with a premultiplied image as
the foreground image for its Over (most basic and default) operation.
The rule says that if an image’s RGB channels aren’t black where the alpha
channel is black, then it isn’t a premultiplied image. When you looked at
Read2 you noticed exactly this. The black areas in the alpha channel were
a light purple color in the RGB channels (FIGURE 2.10). This means
this image is a...what? A straight image!
FIGURE 2.10 This image shows both the alpha and RGB
channels.
1. While Read2 is selected, press Tab and type pre. A drop-down menu
appears with the Premult option. Use the down arrow key to navigate to
it and press Enter/Return. Alternatively, you can choose the Premult node
from the Merge toolbox (FIGURE 2.11).
FIGURE 2.11 The Premult node sits between Read2 and
Merge1.
There’s now a Premult node called Premult1 connected to Read2. The area
outside the doll’s outline is now black (FIGURE 2.12), indicating that it
is now a premultiplied image. You can proceed to place this image over
the background.
FIGURE 2.12 The area that was light purple is now black.
All that purple nastiness has disappeared. You are now seeing a composite
of the foreground image over the background (FIGURE 2.13).
NOTE
When you disable a node, the flow of the pipe runs through
the node without processing it. You can disable any node, and
doing so is a very handy way of assessing what a node is doing.
What you now see is a classic Nuke tree: two streams flow into one. The
foreground and background connect through the Merge node.
Before you go further, and there’s much further to go, save your Nuke
script. What’s a script? You’re about to find out.
All Nuke script-related functions are accessible through the File menu and
respond to standard hot keys for opening and saving files.
2. In the File Browser that opens, navigate to the Nuke101 directory (that
you copied onto your hard drive from the downloadable files that come
with this book).
3. Click the Create New Folder icon at the upper left of the browser.
5. Name the script by adding the name chapter02_v01 to the end of the
path in the path Input field at the bottom of the File Browser. Nuke adds
the file extension automatically.
6. Press Enter/Return.
Nuke just saved your script. You can quit Nuke, go have a coffee, come
back, open the script, and continue working.
NOTE
By default Nuke autosaves your project every 30 seconds, or if you are not
moving your mouse, it will save once after 5 seconds. But it doesn’t
autosave if you haven’t saved your project yet. You have to save your
project the first time. That’s why it is important to save your project early
on.
Another great feature in Nuke is the Save New Version option in the File
menu. You will save different, updated versions of your script often when
compositing, and Nuke has a smart way of making this easy for you. If you
add these characters to your script name, “_v##”, where the # symbol is a
number, using the Save New Version option in the File menu adds 1 to
that number. So if you have a script called nuke_v01.nk and you click File
> Save New Version, your script will be saved as nuke_v02.nk. Very
handy. Let’s practice this with your script, which you already named
chapter02_v01.nk before.
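Under the hood, the version bump is just string surgery on the "_v##" token. A rough Python sketch of the idea (plain Python for illustration, not Nuke's actual code):

```python
# A sketch of the "_v##" bump that Save New Version performs on a
# script name. Illustrative only, not Nuke's implementation.
import re

def bump_version(name):
    """Increment the _v## number in a file name, preserving its padding."""
    def repl(match):
        digits = match.group(1)
        return "_v" + str(int(digits) + 1).zfill(len(digits))
    return re.sub(r"_v(\d+)", repl, name, count=1)

print(bump_version("nuke_v01.nk"))       # nuke_v02.nk
print(bump_version("chapter02_v01.nk"))  # chapter02_v02.nk
```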
TIP
If you are not sure what a hot key does or you don’t
remember the hot key for something, you can choose Key
Assignment from the Help menu.
The Current Info panel shows that the name of your current script
contains “_v01”. Let’s save this as “_v02”.
10. Press the Q key again. Notice that your script is now called
chapter02_v02.nk.
This same treatment of versions, using “_v01”, is also available when you
are working with image files. For example, you can render new versions of
your images in the same way. You will try this at the end of this chapter.
Because you want to reposition only the foreground image, you need to
connect the Transform node somewhere in the foreground (the doll)
stream. Remember from the explanation of premultiplication earlier in
this chapter that you should always strive to transform premultiplied
images. Now you’re going to insert the Transform node after the image
has been premultiplied. It is important, when compositing in the tree, to
think about the placement of your nodes, as in this example.
1. Click Premult1 once and then press the T key on the keyboard to insert a Transform node after it (FIGURE 2.16).
Sometimes what you want to do is to start a new branch from the output
of an existing node. That lets you manipulate it in a different way, and
then later, most likely, connect it back to the rest of the tree. This is called
branching. To branch, select the node you want to branch from and then
hold Shift while creating the new node. You don’t need a branch now, but
let’s practice it anyway.
But hold on, I was mistaken. I didn’t want you to create another
Transform node, I wanted you to create a Blur node. This happens a lot.
You create a node by mistake and you need to replace it with another
node. This is called replacing. To do this, select the node you want to
replace and then hold Ctrl/Cmd while creating a new node. Let’s try this.
3. Select Transform2 and, while holding down Ctrl/Cmd, click Blur from
the Filter toolbox in the Nodes Toolbar (FIGURE 2.18).
FIGURE 2.18 You have now replaced Transform2 with
Blur1.
To insert a node, select a node in the DAG and then create a new node.
(To create an unconnected node instead, make sure nothing is selected in
the DAG before you create it.)
The next section covers a few more things you need to know about how to
manipulate nodes in the DAG.
Connecting nodes
If you have an unconnected node that you want to connect, or if you have
a node you want to reconnect in a different way, here’s how:
To move a pipe from one node to another, click the end you want to
connect (input or output) and drag it to the node you want to connect it to
(FIGURE 2.19).
FIGURE 2.20 Holding Shift and dragging the input end of
the pipe
Selecting nodes
You’ve been selecting nodes for a while now, but I never really explained
this properly. Well done for succeeding without detailed instructions.
Arranging nodes
It is also good practice to keep a tree organized. The definition of
organized is subjective, but the general idea is that looking at the tree
makes sense, and following the flow from one node to the next is easy.
Here are a few ways to keep an organized tree:
Nodes snap to position when they get close to other nodes both
horizontally and vertically. The snap makes keeping things in line easy.
To arrange a few nodes together, select them, and then press L on the
keyboard. Use this with caution—you can get unexpected results because
Nuke uses simple mathematical rules instead of common sense. You
might expect that the Read nodes would be on top, but sometimes they’re
on the side.
You can also create backdrops around part of your tree that remind
you where you did certain things—a kind of group, if you will. Access
these Backdrop nodes through the Other toolbox.
Now that you know how to do all these things, let’s delete the unneeded
Blur1 node.
Your tree should now have only the added Transform1 node and should
look like FIGURE 2.22.
Now let’s get back to using the Transform node to place the doll in the
correct place in the artist’s toolbox.
CHANGING PROPERTIES
Notice that the Viewer now displays a new set of controls. Some nodes,
such as the Transform node, have on-screen controls, which mirror
properties in the node’s Properties panel but are more interactive and
easier to use. On-screen controls display when a node’s Properties panel is
loaded in the Properties Bin. When a node is created, its Properties panel
is loaded automatically into the Properties Bin. The controls that are now
on screen belong to the newly created Transform1 node. You can use these
controls to change the position, scale, rotation, skew, and pivot of the
image, as shown in FIGURE 2.23.
When you are finished playing, reset the node’s properties so you can
start fresh. You do that in the node’s Properties panel.
2. Right-click (Ctrl-click) an empty space in Transform1’s Properties
panel and then choose Set Knobs to Default from the contextual menu
(FIGURE 2.24).
FIGURE 2.24 This is how you reset a node’s properties.
Now that you have reset the doll, you can proceed to position the doll at
the front, bottom-right of the painter’s toolbox as shown in FIGURE
2.25.
FIGURE 2.25 This is where you want the doll image to end
up.
TIP
• Translate.x = 1048
• Translate.y = –20
• Scale = 0.6
The preceding note poses a question: How, or rather, where, are you going
to color correct the doll image? One way to color correct it is to unpremult
the image after the transformation, color correct it, and then premult it
again. However, there is really no reason to do that. The great thing about
the node/tree paradigm is that you have access to every part of the comp
all the time. You can simply color correct the doll image before you
premult it—right after the Read node.
1. Select Read2.
The Grade node gives you control over up to four channels using the
following properties: Blackpoint, Whitepoint, Lift, Gain, Multiply, Offset,
and Gamma. These properties are covered in Chapter 4.
You are now going to learn how to manipulate various types of properties
in a Properties panel. Grade1’s properties will come to your aid.
You should have Grade1’s and Transform1’s Properties panels loaded into
the Properties Bin. This happened when you first created the nodes.
However, if you closed it by mistake, or want to learn how to close it, keep
reading (FIGURE 2.27).
To load the node’s Properties panel in the Properties Bin, all you need
to do is double-click the node. Newly created nodes’ Properties panels
load automatically.
To remove a node’s Properties panel from the Properties Bin, click the
Close Panel button. You do this for various reasons, chief of which is to
get rid of on-screen controls that are in the way.
NOTE
You can open more than one Properties panel in the Properties Bin at a
time. How many depends on the Max Panels number box at the top left of
the Properties Bin.
The Lock Properties Bin button locks the Properties Bin so that no new
panels can open in it. If this icon is locked, double-clicking a node in the
DAG displays that node’s Properties panel as a floating window.
Using these options, you can create a convenient way to edit properties in
the Properties Bin.
When changing a slider, for example, you are changing the value of the
knob (Nuke’s name for a property). A good thing about Nuke is that all
values are consistent. All transformation and filtering values are in
pixels, and all color values are in a range of 0 for black to 1 for white.
Place the cursor in the Input field and use the up and down arrows to
nudge digits up and down. The magnitude of change depends on the
initial position of your cursor. For example, to adjust the initial value of
20.51 by 1s, insert your cursor to the left of the 0.
Use the virtual slider by clicking and holding the middle mouse button
and dragging left and right in the numeric box. To increase the strength of
your drag, hold Shift. To decrease it, hold Alt/Option.
Hold down Alt/Option and drag up and down with the left mouse
button. In the same way as when you use the arrow keys, the magnitude
depends on where in the Input field the cursor was when you clicked it.
The next two options refer only to color-related properties that have a
Color swatch and a Color Picker button.
Use the Color swatch to sample colors from the Viewer. Sampling
colors changes the value of the Color property. To do so, click the Color
swatch to turn it on, then hold Ctrl/Cmd and drag in the Viewer to pick a
color. This will change the value of your property to mirror the value of
the picked color. When you are finished picking colors, click the Color
swatch to turn it off (to make sure you don’t accidentally pick colors from
the Viewer).
Use the Color Picker button to load the In-panel Color Picker.
NOTE
You can pick a color using three different methods: RGB (red, green,
blue), HSV (hue, saturation, value), or TMI (temperature, magenta,
intensity). Three areas represent the three methods. On the left, the color
wheel represents HSV. Use it by manipulating these three elements:
Change hue by moving the triangle widget around the color wheel, or
by holding down Ctrl/Cmd and clicking and dragging left or right on the
color wheel.
Change saturation by moving the dot from the center of the color wheel
toward the edge, or by holding down Shift and clicking and dragging left
or right on the color wheel.
To change value, hold down Shift-Ctrl/Cmd and click and drag left or
right on the color wheel.
In the middle, you have the horizontal RGB sliders and Fine Tune buttons
that control RGB.
The controls for TMI are on the right. You have the three sliders and
that’s it.
2. Go ahead and play with the controls. When you’re finished, close it by
clicking the Color Picker button again.
Notice you no longer have a slider. Instead, you have four numeric Input
fields (FIGURE 2.30).
Where has your slider gone? Every property in the Grade node can control
all four channels, RGB, and alpha separately, but how can this happen
with just one slider? As long as all four values remain the same, you don’t
need four different Input fields, so just one Input field with a slider is
enough. However, when the colors for the different channels are different,
the slider is replaced by four numeric Input fields—one for each channel.
You can switch between the one slider and the four fields using the
Individual Channels button to the right of the Color Picker button. If you
have four fields and click this button, the interface switches back to
having only a single field. The value you had in the first field is now the
value of the new single field; the other three values are lost.
Using the Animation menu
The Animation button/menu on the right side of any property deals with
animation-related options. Choose options from this menu to create a
keyframe, load a curve onto the curve editor, and set the value of a
property to its default state. The Animation menu controls all values
grouped under this property. So, in the following example, the Animation
menu for the Gamma property controls the values for R, G, B, and alpha.
If you right-click/Ctrl-click the Input field for each value, you get an
Animation menu for just that value (FIGURE 2.31).
1. Click the Animation menu at the far right of the Gamma Property’s
knob and choose Set to Default.
2. Click the Individual Channels button to the left of the Animation menu
to display the four separate Input fields.
• R = 0.778
• G = 0.84
• B = 0.63
• A = 1
Entering these numbers into the Input fields makes the doll image darker
and a little more orange. This makes the doll look more connected to the
background, which is dark and has an orange tint to it.
RENDERING
You should be happy with the look of your composite now, but you have a
way to go. To be safe, however, let’s render a version of the Nuke tree to
disk now. To render means to process all your image-processing
instructions into images that incorporate these changes.
1. Click Merge1 at the bottom of the tree and press the W key. You can
also choose Write from the Image toolbox in the Nodes Toolbar or use the
Tab key to create a Write node by starting to type the word write
(FIGURE 2.32).
FIGURE 2.32 Inserting a Write node at the end of your tree
You need to define the path you’re going to write to, the name of the file,
and the type of file you’re going to write to disk. You use the same
student_files folder you made earlier in your Nuke101 folder and render a
TIFF image sequence there called doll_v01.
2. In the Write1 Properties panel, look at the File property line. At the far
right is an icon of a folder (the top one; the bottom one belongs to
something else called Proxy). Click it (FIGURE 2.33).
By navigating to this path, you have chosen the path to which you want to
render. The first step is complete.
In the field at the bottom, you need to add the name of the file you want to
create at the end of the path. When you do so, make sure you do not
overwrite the path.
You are going to render a file sequence instead of a movie file such as a
QuickTime file. File sequences are generally faster for Nuke to process
because Nuke doesn’t have to unpack the whole movie file to load in just
one image.
But what will you name this file sequence? How are you going to define the
number of digits to use for frame numbers? The next section explains
these issues.
When you tell Nuke to write a file sequence, you need to do four things:
Give the file a frame padding structure—for example, how many digits
will be used to count the frames? You can tell Nuke how to format
numbers in two ways. The first is by using one # symbol for each digit. So
#### means four digits and ####### means seven digits. The second
method is using %0xd, where x is the number of digits to use. If you
want two digits, for instance, you write %02d. I find this second method
easier to decipher. Just by looking at %07d, I can tell that I want seven
digits. If I use the other method, I actually have to count. Please note that
you have to add frame padding yourself; Nuke won’t do this for you. Nuke
uses the # symbol by default when it displays file sequence names.
Give your file an extension such as png, tif, jpg, cin, dpx, iff, or exr
(there are more to choose from).
Separate these three parts of the file name with dots (periods).
The first bit can be anything you like, though I recommend not having any
spaces in the name due to Python scripts, which have a tendency not to
like spaces (you can use an underscore if you need a space). The second
bit needs to be defined in one of the ways mentioned previously, either the
%0xd option or the #### option. The last bit is the extension to the file
type you want to render, such as jpg, exr, sgi, tif, and so on.
To render a single file, such as a QuickTime file, simply give the file a
name and the extension .mov. No frame padding is necessary.
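To see how both padding notations resolve to the same file names, here is a small Python sketch. The helper is hypothetical, purely for illustration; it is not how Nuke expands paths internally:

```python
# A sketch showing that "####" and "%04d" both mean four-digit,
# zero-padded frame numbers. Hypothetical helper, not Nuke's API.

def frame_path(pattern, frame):
    """Expand a '####' or '%0xd' style sequence name for one frame."""
    if "#" in pattern:
        pad = pattern.count("#")
        return pattern.replace("#" * pad, str(frame).zfill(pad))
    return pattern % frame   # handles the printf-style %0xd notation

print(frame_path("doll_v01.####.tif", 7))  # doll_v01.0007.tif
print(frame_path("doll_v01.%04d.tif", 7))  # doll_v01.0007.tif
```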
1. In the field at the bottom, at the end of your path, add the name of your
file sequence. For this example, use doll_v01.####.tif. Don’t forget the
dots and the v01 (FIGURE 2.34).
2. Click Save.
You now have the path and file name under the File property. You might
have noticed that the Write1 Properties panel changed a little. It now
accommodates a property called Compression, which is, at the moment,
set to a value called Deflate.
You are now ready to render using the Render button on Write1’s
Properties panel (FIGURE 2.36). You can also use the Render All and
Render Selected commands in the Render menu in the menu bar.
Call me superstitious, but even with the autosave functionality, I still like
to save on occasion, and just before a render is one of those occasions.
The Render panel lets you define a range to render. The default setting is
usually fine.
The render starts. A Progress Bar panel displays what frame is being
rendered and how many are left to go (FIGURE 2.37).
TIP
Using the Content menu, you can place the Progress Bar
panel in a specific place in the interface. I prefer the bottom
right, where you placed it in Chapter 1. This way the progress
bar doesn’t pop up in the middle of the screen and hide the
image, which I find annoying.
You now need to wait until the render is finished. Once it is, you probably
want to look at the result of your render. Let’s do that.
Every Write node can double as a Read node. If the Read File check box is
selected, the Write node will load the file that’s written in its File property
(FIGURE 2.38).
FIGURE 2.38 The little Read File check box turns the Write
node into a Read node.
TIP
All nodes can have thumbnails. You can turn them on and off
by selecting a node and using the Alt/Option-P hot key.
7. In Write1’s Properties panel, click the Read File check box to turn it on.
Your Write node should now look like the one in FIGURE 2.39. It now
has a thumbnail like a Read node does.
8. Make sure you are viewing Write1 in the Viewer and then click the Play
button to play forward.
Let the green line along the Timebar fill up; when it does, you should be
able to see realtime playback.
9. When you’re finished, remember to click Stop and deselect Read File.
So, not too shabby. However, you still have a little work to do. If you look
carefully, you might notice that the doll’s feet are actually on top of the
front edge of the artist’s toolbox instead of behind it, so the doll does not
yet appear to be inside the toolbox. Another problem is that around
halfway through the shot, the background darkens, something that you
should mirror in the doll. Let’s take care of the doll’s feet first.
2. In the File Browser that opens, navigate to the chapter02 folder and
double-click mask.tif to import that file.
3. Select Read3 and press 1 on the keyboard to view this image in the
Viewer.
What you see in the Viewer should be the same as what appears in
FIGURE 2.40. You now have a red shape at the bottom of the image.
Mattes are usually white, not red. How will you use this? Do you need to
key it, perhaps? Let’s take a better look at this image.
4. While hovering the mouse pointer over the Viewer, press the R key to
view the red channel (FIGURE 2.41).
This is what I usually expect a matte to look like: white on black. Let’s see
what the other channels look like.
5. While hovering the mouse pointer over the Viewer, press the B key to
view the blue channel, then the G key to view the green channel, then the
A key to view the alpha channel, and finally the A key again to view the
RGB channels.
Did you notice that all the other channels are black? This image was saved
like this to conserve space. There is information in only one channel, the
red channel, rather than the same information being in all four channels,
which would add nothing—it would just make a bigger file. Just
remember that your matte is in the red channel.
The Merge node’s default layering operation is Over, which places one
image over another. But Merge holds many more layering operations. You
look at a few throughout this book.
Now you will use another Merge node to cut a hole in the doll’s branch
before it gets composited over the background. Because you want to cut
this hole after the doll has been repositioned—but before the composite
takes place—place the new Merge node between Transform1 and Merge1.
6. Select Read3 and press M on the keyboard to create another Merge
node (FIGURE 2.42).
Merge2 has been created with its A input connected to Read3. You need to
connect Merge2’s B input to Transform1 and Merge2’s output to Merge1’s
A input. You can do this in one step.
The Stencil operation does exactly that: It creates a hole in the B input
where the alpha channel of the A input is white. Let’s change the
operation to Stencil.
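In algebraic terms, Stencil computes B × (1 − a), where a is the A input's alpha: wherever the matte is white, the background is cut to black. A per-pixel Python sketch (illustrative only, not Nuke's implementation):

```python
# A sketch of the Stencil operation's math: result = B * (1 - A.alpha).
# Not Nuke's actual code.

def stencil(fg, bg):
    """Cut fg's alpha out of a premultiplied bg pixel (r, g, b, a)."""
    hole = 1.0 - fg[3]                # fg[3] is the A input's alpha
    return tuple(c * hole for c in bg)

# Where the matte is white (alpha 1), the background is cut to black:
print(stencil((1.0, 1.0, 1.0, 1.0), (0.5, 0.4, 0.3, 1.0)))  # (0.0, 0.0, 0.0, 0.0)
```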
FIGURE 2.45 The doll’s feet are still in front.
The Shuffle node looks like a matrix built from a source at the top to the
destination on the right. To understand the flow of the Shuffle node,
follow the arrows from the In property at the top until you reach the Out
property on the right.
You need to move the red channel of the mask image, so you’ll have a copy
of it in the alpha channel. Using the check boxes, you can tell the Shuffle
node to output red into the alpha.
A new node called Shuffle1 has been inserted between Read3 and Merge2
(FIGURE 2.47). Shuffle1’s Properties panel was loaded automatically
into the Properties Bin.
FIGURE 2.47 Shuffle1 is now inserted after Read3.
2. In Shuffle1’s Properties panel, select all the check boxes in the leftmost
(red) column, as shown in FIGURE 2.48. This places the R channel in the R,
G, B, and alpha channels.
FIGURE 2.48 Setting up the Shuffle node to copy the red
channel into all the four channels
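Per pixel, this Shuffle setup simply copies the red value into every output channel. A tiny Python sketch of the effect (illustrative, not Nuke's API):

```python
# A sketch of what this Shuffle setup does per pixel: every output
# channel reads from the input's red channel. Not Nuke's actual code.

def shuffle_red_to_all(pixel):
    """Copy the red value of an (r, g, b, a) pixel into all four channels."""
    r = pixel[0]
    return (r, r, r, r)

# The red-only matte now has its matte shape in the alpha channel too:
print(shuffle_red_to_all((0.8, 0.0, 0.0, 0.0)))  # (0.8, 0.8, 0.8, 0.8)
```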
Now that you copied the red channel into the alpha channel, Merge2
works and the doll’s feet now appear to be behind the wooden box
(FIGURE 2.49).
FIGURE 2.49 That’s more like it. The doll’s feet are now
behind the front of the toolbox.
1. Select Merge1 and press 1 on the keyboard to make sure you are viewing
it in Viewer1.
2. Press the Play button in the Viewer. Let the clip cache once, and then
enjoy your handiwork.
Did you notice the dark flash that occurred during the playback? It starts
at frame 42. You will need to make the doll mimic this light fluctuation.
So far, all the values you set for various properties were constant values—
they did not change from frame to frame. Now you need to change those
values over time. For that purpose you have keyframes.
NOTE
In Nuke, practically every property can be keyframed. Here, you are going
to create another Grade node and use that to change the brightness of the
doll branch to match the changes to the lighting in the background.
Your tree should now have a new Grade node in it called Grade2
(FIGURE 2.51).
TIP
If, unlike my tree, your tree is really messy, it will be hard for
you to understand the flow of information and read your tree.
Sure, this is a simple tree. But if it were bigger, and you saved
it and went back to it over the weekend, things might no
longer make sense. Keeping an organized tree is always a
good idea.
Frame 42, which you are on at the moment, is the last frame of bright
lighting you have before the background starts to darken. This will be the
location of your first keyframe, where you lock the brightness of the doll
to the current brightness level.
FIGURE 2.52 Set Key holds the current value at the current
frame in place.
This creates a keyframe for the four values (R, G, B, and Alpha) associated
with the Gain property. Notice that the Input field turns a blue color—this
is to show that a keyframe is present on this frame for this property
(FIGURE 2.53).
FIGURE 2.53 The blue Input field indicates this frame has
a keyframe.
6. Advance one frame by hovering over the Viewer and pressing the right
arrow on your keyboard.
Notice that the color of the property’s Gain field is now showing a subtler
blue color (FIGURE 2.55). This indicates that there is animation for the
property, but there is no keyframe at this point in time.
7. Play with the Gain slider until you reach a result that matches the doll’s
brightness to that of the background. I stopped at 0.025.
8. Advance another frame forward to frame 44 and adjust the gain again.
I stopped at 0.0425.
9. Repeat the process for frames 45 and 46 as well. I had 0.41 and then
1.0.
We have now created several keyframes for the Gain property, resulting in
animation. The animation can be drawn as a curve in a graph called an
Animation curve, in which the X axis represents time and the Y axis
represents value. Let’s view the Animation curve for this property in the
Curve Editor.
10. Choose Curve Editor from the Gain property’s Animation menu
(FIGURE 2.56).
You can see the curve for the animation you just created (FIGURE 2.57).
The Curve Editor is explained in more detail in Chapter 6.
Look at Grade2. What’s that on the top right? Notice it has a little red
circle with the letter A in it (FIGURE 2.58). Do you wonder what that’s
for? It’s an indicator, which I explain in the next section.
Indicators on nodes
Several indicators may appear on Nodes in the Node Graph, depending on
what you are doing. TABLE 2.1 describes what each indicator means.
TABLE 2.1 Node Indicators
Having little “tells” like these indicators on the nodes themselves really
helps you read a tree. The A indicator, for example, can help you figure
out which of your two Grade nodes is the one you added animation to.
You should now be happier with your comp (FIGURE 2.59). The doll
appears to be standing inside the artist’s toolbox, and the light change is
matched. Let’s save and render a new version of the composite.
Nuke’s versioning system, which you used at the beginning of this chapter
to save a new version of the script, also works with Read and Write nodes.
Remember from earlier in this chapter how you set up your Write node to
have a “_v01” in the file name? Well, that’s what’s going to change. Using
a hot key, change this to one version up, meaning “_v02”.
1. Select Write1 and press Alt/Option-Up Arrow to change the file
name to v02.
When the Render is finished, compare the previous render to this one in
the Viewer. Let me walk you through this.
1. While hovering your mouse pointer over the DAG, press R on the
keyboard to display a Read File browser.
You now have Read4 with the previous render; Write1 can become the
new render by turning it into a Read node.
3. Double-click Write1 and, in the Properties panel that opened, click the
Read File check box.
To compare the two versions, you will now load each one to a different
buffer of the Viewer and then use the Viewer’s composite controls to
compare them.
With the Composite controls set in this way, there’s a new axis on the
Viewer—the image to the left of this axis is Write1 and the image to the
right of this axis is Read4.
You can move the axis using the controls shown in FIGURE 2.62.
FIGURE 2.62 The Viewer Composite control’s axis
You can clearly see that the half doll on the left has been darkened, while
the half doll on the right is still bright. You can even compare the two
halves while playing in the Viewer.
Look at them roll! The two streams are playing side by side, and you can
see that only the stream on the left shows the doll darkening when
needed. Also, only the doll on the left has its feet appearing behind the
artist’s toolbox. Well done! See how much you’ve advanced? And it’s only
Chapter 2!
10. Use the hot key Ctrl/Cmd-S to save your script for future reference.
This ends the practical introduction. You should now start to feel more
comfortable working with the interface to get results. In the next chapter,
you work with a much bigger tree. Get ready to be dropped into the pan.
3. Compositing CGI with Bigger Node Trees
Once the layers are composited, it is very easy for the compositor to
change the look of the beauty pass, because he or she has easy access to
everything that makes up the look of the render. For example, because the
light is separate from the color, it’s easy to color correct the light so it is
brighter (adding more light), or to change the reflection so it is a little
blurry, making the object’s material look a little worn. Rendering just a
single final image from the 3D software, on the other hand, means that
changing the look (such as adding blur) will be more difficult.
All channels in a script must exist as part of a channel set (also called a
layer). You’re probably familiar with the default channel set, RGBA,
which includes the channels with pixel values for red, green, and blue,
plus the alpha channel for transparency. Channel names always include
the channel set name as a prefix, like this: setName.channelName. So the
red channel is actually called rgba.red.
Most image file types can hold only one channel set—RGBA. The PNG and
JPEG file formats hold only RGBA; however, TIFF and PSD can hold
more channels. All the layers you create in Photoshop are actually other
channel sets. One of the better multilayer file formats out there is called
OpenEXR. It can support up to 1,023 channels, is a 32-bit float format
(meaning it doesn’t clip colors above white and below black), and can be
saved in a variety of compression types. 3D applications are using this file
format more and more. Luckily, Nuke handles everything that comes in
with OpenEXR very well. OpenEXR has the .exr extension and is simply
called EXR for short.
Bringing in a 3D render
To start the project, bring in a 3D render from your hard drive (files
downloaded per the instructions in this book’s introduction).
3. Click the Play button in the Viewer to look at the clip you brought in.
Let it cache by allowing it to play once; it will then play at normal speed.
4. Click Stop and use the Timebar to go to the last frame: 65 (FIGURE
3.1).
This shot is part of a short film called Goose by three talented people: Dor
Shamir, Shai Halfon, and Oryan Medina. Goose was created as a personal
project while the three were working at Geronimo Post&Design. You can
(and should!) view the whole thing here: www.vimeo.com/33400042
(http://www.vimeo.com/33400042).
5. By pressing Ctrl/Cmd-S, save your script (Nuke project files are called
scripts, remember) in your student_files folder. Name it
chapter03_v01.
The three buttons at the top left of the Viewer are the Channel buttons.
The button on the left shows the selected channel set; by default the
RGBA set is chosen. The button in the middle shows which channel to
display when viewing the alpha channel; by default it is rgba.alpha. The
button on the right shows which channel from the set you have currently
chosen to view; by default, it is RGB.
If you want to view the second channel of a channel set called Reflection
Pass, for example, you need to change the leftmost button to show the
Reflection Pass channel set, and then set the third button to the green
channel (G is the second letter in RGBA—hence, the second channel).
1. Click the Channel Set Viewer button (the one on the left) and view all
the channel sets you can now choose from (FIGURE 3.3).
FIGURE 3.3 The list of all the channel sets available in the
stream being viewed
This list shows the channels available for viewing for the current stream
loaded into the Viewer. In this case, Read1 has lots of extra channel sets
besides RGBA, so they are available for viewing.
As a side note, the Other Layers submenu at the bottom shows channels
that are available in the script, but not through the currently viewed node.
For example, if we had another Read node with a channel set called
Charlie, it would show up in the Other Layers submenu.
2. Switch to the Col channel set (short for Color) by choosing it from the
Viewer drop-down menu.
Figure 3.4 and your Viewer show this pass, simply called the color pass,
which represents the unlit texture as it’s wrapped around the 3D object.
Essentially it is the color of the object before it’s been lit.
There are many ways to render separate passes out of 3D software. Not all
of them include a color pass, or any other pass you will use here. It is up to
the people doing the 3D rendering and compositing to come up with a
way to render that makes sense to the production, whatever it may be.
Having a color pass makes sense for this production because we need to
be able to play with the lighting before we apply it to the texture.
TABLE 3.1 Lemming Render Passes
Table 3.1 shows a list of the different render passes incorporated in this
EXR file sequence. We won’t necessarily use all the passes available to us.
You can use the LayerContactSheet node to look at all the passes you
have.
1. While viewing the RGBA channel set, select Read1 and attach a
LayerContactSheet node from the Merge toolbox.
FIGURE 3.5 Clicking the Show Layer Names check box will
display the channel set names in the Viewer.
You can immediately see all the different channel sets laid out, with their
names. This makes life very easy. The LayerContactSheet node is a very
good display tool, and you can keep it in the Node Graph and refer to it
when you need to (FIGURE 3.6).
3. Delete LayerContactSheet1.
USING THE BOUNDING BOX TO SPEED UP
PROCESSING
The bounding box is an element that Nuke uses to define the area of the
image that is considered for processing. The images you have worked with
so far had bounding boxes all over them; you just may not have noticed
them. To understand the bounding box, let’s first look at the image properly.
4. Grab the top edge of the Crop controls and drag it to frame the pilot.
5. Close all Properties panels by clicking the Empty Properties Bin button
to hide the controls.
The dotted line that formed where you placed the top edge of the Crop
controls is the bounding box (FIGURE 3.7). The numbers at the top
right in Figure 3.7 indicate the top-right location of the bounding box, in
pixels, on the X and Y axes. The resolution of the image itself didn’t
change, but pixels outside the bounding box will not be considered for
processing.
The Auto Crop function is now looking for pixels that are black. Black
pixels surrounded by nonblack pixels will remain inside the new bounding
box. However, black pixels that don’t have any nonblack pixels between
them and the edge of the frame will be considered unimportant, because
they are adding nothing to the image, so the new bounding box will not
include them.
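The logic described above can be sketched in plain Python: find the smallest rectangle that contains every nonblack pixel. This is an illustrative helper only, not Nuke's actual Auto Crop code; the function name and the luminance-per-pixel image format are assumptions made for the sketch.

```python
def auto_crop(image):
    """Return (x, y, r, t): the smallest box holding every nonblack pixel.

    `image` is a list of rows of luminance floats, bottom row first to
    match Nuke's Y-up convention. A made-up illustration, not Nuke's
    actual Auto Crop implementation.
    """
    coords = [(x, y) for y, row in enumerate(image)
                     for x, px in enumerate(row) if px > 0.0]
    if not coords:
        return (0, 0, 0, 0)  # fully black frame: nothing to keep
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    # Right and top edges are exclusive. Black pixels enclosed by
    # nonblack ones stay inside automatically, because the box is a
    # rectangle spanning the extremes.
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)
```

Note how black pixels that touch the frame edge simply fall outside the min/max extremes, exactly as described above.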
10. When the process finishes (it may take some time), click the
AutoCropData tab.
Here you can see the four values (X, Y, Right, and Top) that are now
changing from frame to frame according to the location of nonblack
pixels. Keyframes were created on every frame, as you can tell by the
bright blue color of the Input fields (FIGURE 3.9).
You can type an expression by hand (in fact, you will do this in Chapter 5)
or you can make one by clicking and dragging, as you will do now:
Crop1’s Box property’s four Input fields turn light blue, which means you
have successfully linked Crop1’s property to CurveTool1’s property
(FIGURE 3.11).
FIGURE 3.11 The light blue color of the Input fields shows
there is animation.
Let’s learn a little bit more from this and see what this expression looks
like.
The Expression panel shows the four expressions for the four
subproperties of the Box property (X, Y, R, and T).
• The second part tells Nuke which node, in this case CurveTool1, to look
in for the property.
You can see this is a very simple command, one that you can easily enter
yourself. Simply use the node name and then the property name you need
to copy values from.
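For reference, the drag-linked expressions take the simple form nodeName.knobName.subproperty. Assuming the CurveTool's result knob is named autocropdata (check the actual knob name in your own script), the four Box fields would read something like:

```
Crop1.box.x  ->  CurveTool1.autocropdata.x
Crop1.box.y  ->  CurveTool1.autocropdata.y
Crop1.box.r  ->  CurveTool1.autocropdata.r
Crop1.box.t  ->  CurveTool1.autocropdata.t
```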
Let’s see the result of this expression in the Viewer—but first close the
open Expression panel.
5. Clear the Properties Bin by using the button at the top left of the
Properties Bin.
In Figure 3.13 you can see that the bounding box has changed
dramatically. If you move back and forth a couple of frames, you will see
that the bounding box changes to engulf only the areas where something
is happening. Having a bounding box that engulfs only the pilot, thereby
reducing the size of the image that is being processed, speeds up
processing.
1. Using a Read node, read another sequence from your hard drive:
chapter03/sky.####.jpg.
2. Load Read2 into the Viewer, make sure you are viewing the RGB
channels, and click the Play button. When done, stop, and go to frame 1 in
the Timebar (FIGURE 3.14).
This is your background, some sky for the daredevilish flight of our little
pilot.
When working with 3D renders, a good first step is to start with what’s
technically called a slap comp. Slap comps are exactly what they sound
like—a quick comp slapped together to see if things are working correctly.
Slap comps tell you whether the animation is working with the
background, whether the light direction is correct, and so on.
5. While hovering the mouse pointer over the Viewer, press Ctrl/Cmd-P to
switch to Proxy mode.
6. Click Play in the Viewer to view your slap comp (FIGURE 3.15).
You can see that the result is already not bad. The good thing is that the
pilot and the sky are moving in a similar way. However, the pilot still isn’t
looking as if he belongs in the scene. His color is all wrong for the
background, and the scene simply doesn’t feel as if he was there, shot on
location, which is the illusion compositing magic is all about.
7. Click the Stop button in the Viewer and go back to frame 1 in the
Timebar.
8. You no longer need the slap comp, so select Merge1 and delete it.
You build the passes like this: First combine the GI and Lgt passes using a
Merge node’s plus operation; then multiply the result with the Col pass
using another Merge node. Then merge with a plus operation in this
order: the Ref, Spc, and SSS passes. This is the way this image was put
together in the 3D software that created it, so this is how you recombine it
in Nuke.
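As per-pixel math, the recombination order described above looks like this. It's a sketch only: each argument is one channel value of one pixel, and rebuild_beauty is a name made up for illustration.

```python
def rebuild_beauty(gi, lgt, col, ref, spc, sss):
    """Recombine render passes per pixel, mirroring the Merge order
    described in the text: (GI + Lgt) * Col, then plus Ref, Spc, SSS.
    """
    diffuse = (gi + lgt) * col  # light times color gives the diffuse pass
    return diffuse + ref + spc + sss
```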
There are two ways to work with channel sets in Nuke. The first is by
working down the pipe, and the second is by splitting the tree.
Working down the pipe
To start layering passes, first combine the Lgt and GI passes with a plus
operation. Let’s connect a Merge node.
1. Select Crop1 and insert a Merge node after it by pressing the M key.
Merge1 has two inputs. The B input is not connected. You want to
combine two different channel sets that live inside the Read1 branch, so
you simply connect the B input to the same output that the A input is
connected to—the output of Crop1.
So how are you going to tell Merge1 to use different channel sets for this
operation? So far you have been combining the RGBA of images, which is
the default state of the Merge node. However, you can change that using
the pull-down menus for each input (FIGURE 3.17).
Let’s go ahead and use these pull-down menus to combine the GI and Lgt
passes.
4. From the A channel’s pull-down menu, choose the Lgt channel set.
6. Since you need to add the two images, change the Operation property,
using the pull-down menu at the top, to Plus (FIGURE 3.18).
The output pull-down menu for Merge1 was set to RGBA (the default).
You can change that if you want the output to be placed elsewhere. The
combination of the Lgt and GI passes is now the RGBA. You want to
multiply that with the Col pass.
9. From the A channel’s pull-down menu, choose the Col channel set.
10. Since you want to multiply this pass with the previous passes, choose
Multiply from the Operation pull-down menu (FIGURE 3.21).
What you just created is a diffuse pass. A diffuse pass is the combination
of the light and color of an object. To see what you did, use the mix slider
to mix the A input in and out (FIGURE 3.22).
FIGURE 3.22 Using the mix slider, you can make the A
input transparent.
11. Play with the mix slider to see the Col pass fade in and out, and to see
what it does to the image. When you’re finished, set the mix slider to 1.
You should now have something that looks similar to FIGURE 3.23.
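The mix slider is effectively a dissolve between the untouched B input and the full merge result. A rough per-pixel sketch of that behavior (an illustration, not Nuke's actual implementation):

```python
def merge_with_mix(a, b, operation, mix):
    """Apply a Merge operation to single-channel pixel values a and b,
    then dissolve the result back toward the B input with mix:
    mix = 0 gives pure B, mix = 1 gives the full merge result."""
    ops = {
        "plus": a + b,
        "multiply": a * b,
    }
    result = ops[operation]
    return b * (1.0 - mix) + result * mix
```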
You can continue doing this until you combine all the passes. However, I
find working in this way restricts the advantage of the tree—having easy
access to every part of it. I prefer to have the Merge node just combine
two streams and then have the streams available outside the Merge node.
Shuffling the channel sets inside the Merge node restricts that. Having
everything out in the open, as it will be as you build this in the next
section, makes everything a lot more visual and apparent, and this gives
you easier access and better control.
12. If you want, save this script using File > Save As, and give it a new
name.
13. Click and drag to create a marquee to select Merge1 and Merge2, and
then press the Delete key to delete them.
One of the interface elements you’re going to use a lot is the Dot. The Dot
is a circular icon that enables you to change the course of a pipe, making
for a more organized tree.
2. Select the newly created Dot, and then insert a Shuffle node from the
Channel toolbox.
3. Make sure you are viewing Shuffle1 in the Viewer, and then change the
In 1 property to GI.
5. Select the In 2 alpha to alpha check box to direct the alpha from input 2
to the alpha output (FIGURE 3.24).
Because you will do this a lot, it is a nice reminder if you label your
Shuffle nodes according to the name of the input channel set. You could
simply rename the node, but that is not advised. Instead, each node has a
label, accessed via the Node tab in its Properties panel.
NOTE
Whatever you type in the Label Input field will appear on the node in the
DAG.
You can simply type this for every pass. However, you can also use a little
scripting to automate this process.
Breaking down what you typed, the brackets mean you are writing a TCL
script. The word knob means you are looking for a knob (knob =
property). The word in is the name of the knob (the pull-down knob, in
this case).
The result of this script shows the value of the property called In.
Therefore, you will see that the node in the DAG still appears as GI.
9. To make this a little more readable, add a space and the word pass
after the script, so it reads like this: [knob in] pass (FIGURE 3.26).
The word pass is just a word—because it’s outside the TCL brackets, it will
simply appear as the word (it’s not part of the script). The node in the
DAG now shows the label GI pass (FIGURE 3.27).
FIGURE 3.27 Shuffle1 in the DAG will display the new label
with the TCL script resolved.
Now, just by looking at the Node Graph, you can see that this is your GI
pass branch. You will have a similar setup for your other passes.
10. Insert an Unpremult node from the Merge toolbox after Shuffle1.
Use the GI pass as your background for all the other passes. It will serve
as the trunk of your tree. The rest of the passes will come in from the right
and connect themselves to the trunk of the tree. To do the next pass, you’ll
first create another Dot to keep the DAG organized.
11. While nothing is selected, create a Dot by pressing the . (period) key
on your keyboard.
12. Connect the newly created Dot’s input to the previous Dot.
13. Drag the Dot to the right to create some space (FIGURE 3.28).
14. Hold down Ctrl/Cmd and drag the yellow Dot in the middle of the
pipe between the two Dots to create a third Dot. Drag it to the right and
up so it forms a right angle between the two Dots (FIGURE 3.29).
Now you will create the content for this new branch by copying
everything, changing a few properties, and rearranging a few nodes.
15. Select both Shuffle1 and Unpremult1, and press Ctrl/Cmd-C to copy
them.
18. Double-click Shuffle2 to display its Properties panel and choose Lgt
from the drop-down menu (FIGURE 3.31).
Notice how the label changed to reflect this in the DAG, thanks to our TCL
script (FIGURE 3.32).
21. Make sure Merge1’s Properties panel is open and change the
Operation property’s drop-down menu from Over to Plus (plus means to
add).
The two lights are added together much in the same way as light works in
real life (FIGURE 3.34).
FIGURE 3.34 The Lgt pass added. The highlighted line will
be duplicated for each additional pass.
Now for the next pass—the Col pass (short for color pass).
23. Select the horizontal line of nodes that starts with the Dot and ends
with Merge1; press Ctrl/Cmd-C to copy it (it’s highlighted in Figure 3.34).
24. Select the Dot at the bottom right of the tree and press Ctrl/Cmd-
Shift-V to paste. Notice that you hold Shift as well; this pastes the copied
nodes as a new branch.
25. Drag the newly pasted nodes down a little and connect Merge2’s B
input to Merge1.
You have just multiplied the Col pass with the composite. This resulted in
the diffuse pass, which is the lit textured object. FIGURE 3.35 shows the
tree making up the diffuse pass. Next you’ll finish adding the rest of the
passes.
Repeat this process three more times to connect the reflection (Ref),
specular (Spc), and subsurface scattering (SSS) passes. Each time you will
copy the same branching line of nodes, branch-paste it at the last bottom-
right Dot, connect the new Merge node, and change the Shuffle node’s In
property, and sometimes the Merge node’s Operation property.
28. Repeat the process. Select the last line starting from the bottom-right
Dot and ending in the bottom-most Merge node, copy, and then branch-
paste to the bottom-right Dot. Connect the new Merge’s B input to the
previous Merge node.
29. This time, change the Shuffle node’s In property to Ref. From
Merge3’s Operation property, choose Plus.
30. Repeat the process, this time for the Spc pass. The Merge operation
for the Spc pass needs to be Plus.
31. Go through the process a third time. Choose SSS for the Shuffle node’s
In property, and make sure the Merge operation is set to Plus.
32. Make sure you are viewing the output of the last Merge node, which
should be called Merge5 (FIGURE 3.36).
Notice the main trunk of your tree always follows the B input. This is
important, as this is the way the Merge node operates. Now try this:
In the Viewer, you can see that the image doesn’t change; however, let’s
see what happens when we change the Mix property in Merge5.
You’d expect the SSS pass to fade in and out. Instead, the entire image
with the exception of the SSS pass fades in and out. This is because the
Merge node treats the B input as the background image—the one that’s
not changing—and the A input as the added material.
36. With Merge5 selected, press Shift-X again to swap the inputs back.
37. Play with Merge5’s Mix property again. When you’re done, leave it on
1.
The order of inputs—A and B—is important for another reason: the math
order of operations. For the Plus operation, there’s no difference as 1 + 3
is the same as 3 + 1. But for the Minus operation, the order means A
minus B, which will work on some occasions, but most of the time, you
want to subtract A from B, as B is your untouched background. The
operation From does exactly what Minus does, only the other way around:
It subtracts A from B.
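Written out directly, the two subtraction operations are just mirror images of each other. Here a and b stand for pixel values from the A and B inputs; from_op is a made-up name only because from is a Python keyword:

```python
def minus(a, b):
    """Merge operation 'minus': A minus B."""
    return a - b

def from_op(a, b):
    """Merge operation 'from': the reverse, B minus A."""
    return b - a
```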
Using these passes, you completed the basic building of the pilot’s beauty
pass. Your tree should look like FIGURE 3.37.
FIGURE 3.37 The tree at the end of the beauty pass build
FIGURE 3.38 Nasty edges
If you look carefully, you might notice that as you added and multiplied
various images together, you were also doing the same to the alpha
channel. This means the alpha channel you now have in the pipe is a
massively degraded one. You need to revert to the original unchanged
one. The original alpha exists elsewhere in the tree—on the right side of
the tree, where all the Dot nodes are, where you source all your branches.
To copy the channel from one branch to another, you use a node that’s
very similar to the Shuffle node you’ve been using. This one is called
ShuffleCopy, and it allows you to shuffle channels around from two inputs
instead of just one. You need to copy the RGBA pass’s alpha channel from
the branch on the right to the RGBA’s alpha channel in your trunk on the
left.
1. Select Merge5 (the one with the SSS pass in the A input) and insert a
ShuffleCopy node after it from the Channels toolbox.
By default, the ShuffleCopy node is set to copy the alpha channel from the
selected channel set in input 1 and the RGB from the selected channel set
in input 2, which is what you’re after.
Since you started this tree by unpremultiplying all the passes, it’s time to
premultiply again now that you have a correct alpha channel.
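Per channel, unpremultiplying and premultiplying are simple division and multiplication by alpha. A sketch of the idea, guarding black alpha against division by zero:

```python
def unpremult(rgb, alpha):
    """Divide a color value by alpha so grades and filters act on
    'straight' color; leave pixels with zero alpha untouched."""
    if alpha == 0.0:
        return rgb
    return rgb / alpha

def premult(rgb, alpha):
    """Multiply the color by alpha again once corrections are done."""
    return rgb * alpha
```

Round-tripping through both operations returns the original value wherever alpha is nonzero, which is why the tree can safely unpremultiply at the top and premultiply at the bottom.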
4. View Premult1 in the Viewer. Make sure to display the last frame.
You have now rebuilt the beauty pass with the correct alpha. You also
ensured that you have easy access to all the various passes for easy
manipulation (FIGURE 3.40).
3. Make sure you are viewing Merge6 to see the whole tree’s output
(FIGURE 3.41).
FIGURE 3.41 The pilot over the background
The pilot doesn’t look like he’s shot with the same camera and at the same
time as the background. There’s a lot we can do to fix this. Let’s start with
some basic tree manipulation.
MANIPULATING PASSES
Now, by playing with the various branches, you can change the colors of
elements that make up the pilot image—such as the amount of light falling
on the objects, how bright or sharp the specular highlights are, and so
forth.
The overall feeling here is that the pilot is too warm in relation to the sky.
Let’s color correct the Lgt pass and the Spc pass to produce cooler colors.
Because you’re after a cooler color that better matches the sky, it would be
a good idea to start by picking the color from the Viewer and then making
it a little brighter.
2. Click the Color swatch next to the gain slider (FIGURE 3.42).
3. Hold Ctrl/Cmd-Shift in the Viewer and drag a box around the clear
blue sky area between the clouds.
NOTE
Three modifier keys change the way sampling colors from the
Viewer works. The Ctrl/Cmd key activates sampling. The Shift
key enables creating a box rather than a point selection. The
resulting color is the average of the colors in the box. The
Alt/Option key picks the input image rather than the output
image—meaning, picking colors before the Grade node
changes them.
5. To make the color brighter, click the Color Picker button for the Gain
property to display the In-panel Color Picker (FIGURE 3.43).
6. Drag the far-right slider (the I from TMI) up so that the Green slider
reaches about 1.0.
7. Hold down Shift-Alt and click and drag in the color wheel on the left to
push saturation up a little, then close the In-panel Color Picker.
Let’s do this to the specular pass as well. It contributes a lot to the color of
the light falling on the object as the object accepts a lot of specular
highlight. I also want to reduce the overall strength of the specular pass so
I’ll make it darker overall.
10. Manipulate the Gain properties until you are happy with the result. I
have R = 0.8, G = 0.825, and B = 1.
Next, add a little softening to the specular pass. You’re going to use a blur,
so you need to do this before the Unpremult operation. Remember:
Always apply filters to premultiplied images.
11. Move Grade2 and Unpremult5 to the left to clear up some room
between them and the Spc pass Shuffle node (should be Shuffle5).
To make the specular pass a little weaker, use Merge4’s Mix property to
mix it back a little.
14. Make sure Merge4’s (the Merge that’s adding the specular pass)
Properties panel is open by double-clicking it.
The specular pass is softer now and sits better with the other passes. The
specular part of your tree should look like FIGURE 3.44.
FIGURE 3.46 This is the Mask input. It says so when you
click and drag it.
The Mask input limits the area where the node operates. It receives a
single channel and makes the node operate only where that channel is
white. Gray areas in the mask image are going to make the node operate
in that percentage—a 50% gray means a 50% effect, and black means no
effect. This makes it really easy to make a node affect only a specific area.
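That percentage behavior is a simple linear blend between the node's input and output, driven by the mask value. An illustrative sketch, not Nuke's internals:

```python
def apply_with_mask(original, processed, mask):
    """Blend a node's output back toward its input by the mask value:
    white (1.0) gives the full effect, 50% gray half the effect,
    and black (0.0) no effect at all."""
    return original * (1.0 - mask) + processed * mask
```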
1. With nothing selected, create a Noise node from the Draw toolbox and
view it in the Viewer.
The Noise node creates noise that can be controlled. You can make lots of
different things with this node, from cloud-looking elements to TV noise
elements. Let’s use this node to make a moving cloud-like texture.
2. Let’s start with the scale of the clouds. I’d like them bigger and wider.
So click the 2 button on the right of the X/Ysize property in Noise1’s
Properties panel.
3. In the two input fields that are exposed, type 700 in the first and 350
in the second (FIGURE 3.47).
4. Click once inside the Z property’s Input field, then press = on the
keyboard to bring up the Expression panel.
FIGURE 3.48 Your noise-generated clouds should look
something like this now.
This expression’s value grows by 1/20 every frame. How do I know?
Because I used the term frame in the expression. Frame means the
current frame number, so in the next frame the value will be 1/20 higher.
The number 20 simply controls the speed. Changing the number to 2 will
make for a much faster-moving noise.
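You can verify the expression's behavior with a one-line model; frame and the divisor are the only moving parts (noise_z is a name invented for the sketch):

```python
def noise_z(frame, speed=20.0):
    """Evaluate the animation expression frame/20: the Z value grows
    by 1/speed each frame, so a smaller divisor animates faster."""
    return frame / speed
```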
Finally, I want the clouds to move from left to right. You’ll set keyframes
to create the animation. You need to switch to the Transform tab.
9. While on frame 1, right-click the Translate.x Input field and choose Set
key.
10. Go to the last frame, frame 65, and type 2500 in the Translate.x
Input field to create another keyframe.
11. In the Viewer, click Play to watch the cloud element you just created.
When you’re done, click Stop and go back to frame 1.
What’s left now is to use this image to darken the sunlight. You use
another Grade node for that, of course.
12. Click to select Grade1. That’s the one we used to make the sunlight
cooler.
14. Drag the Mask input out from the right-hand side of Grade3 to Noise1
and let go.
Look toward the bottom of Grade3’s Properties panel. You can see the
Mask input properties there. At the moment, they show the Mask is active
and the channel used is rgba.alpha, which is fine by me. FIGURE 3.49
shows this as well.
It’s a good idea to always keep your tree organized. Do that now.
16. Move Grade3 so it’s to the left of Grade1, and move Noise1 above
Grade3, much like in FIGURE 3.50.
17. Click Merge6, then press 1 to view your whole tree in the Viewer. Click
Play.
Can you see how the light moves on the side of the plane? Nice.
Let’s brighten the eyes of the pilot—the whole area that’s visible through
his glasses. First let’s see if I have a matte for that as one of my ID
channels.
3. While hovering the mouse pointer over the Viewer, press R, then G,
then B, and finally B again (pressing the same channel key twice returns
the Viewer to RGB).
The three mattes are real black-and-white mattes, just as an alpha
channel is a matte; they’re simply held in the RGB channels instead.
4. Go through the five ID passes available and look for a channel that
represents everything that’s behind the glasses. Looking carefully, you
find what we’re looking for in ID3.blue, as shown in FIGURE 3.52.
In order to brighten all the light that’s affecting the shot, you are going to
need to work on both lighting passes at the same time. The right place to
do this is right after where you joined the lighting passes: Merge1.
6. Select Merge1 by clicking it, then press G to create another Grade node.
So now you’ve brightened everything, not just the eyes. You have that
Mask input, but instead of having to connect it, you can simply call up a
channel that already exists in your stream.
10. Move the Mix slider up and down to see what Grade4 does now. You
can see it works only on the area where ID3.blue is white.
12. View the whole tree by clicking Merge6 and pressing 1 on the
keyboard.
We still want to have a bit of contrast in those eyes, so let’s make some by
changing the Gamma value to a lower one.
13. In the Input field for Grade4’s Gamma property, type 0.65.
The glass now feels glossier, and the eyes pop out a little more—which is
great (FIGURE 3.54).
We’re done manipulating the look of the pilot for now. Only a few things
are left to do before the composition is ready. Let’s hit it.
The ones we have with Read1 are the depth pass, the ID passes, the
normals pass, and the motion vector pass. None of these actually makes
up the shader we built earlier, but they all aid the compositor in fixing
problems and reaching a better-looking composite.
1. To view the MV channel set, make sure you are viewing Premult1 in the
Viewer, then from the Channel Set Viewer button, choose MV.
This is the MV pass. The Red channel describes the movement in X, and
the Green channel describes the movement in Y. The Blue channel isn’t
used.
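Conceptually, a vector-blur style node reads those two channels as a per-pixel offset. A minimal sketch of the lookup; the function name and the exact role of the Multiply factor here are assumptions for illustration:

```python
def displaced_position(x, y, mv_red, mv_green, multiply=1.0):
    """Offset a pixel by its motion vector: the red channel encodes
    movement in X, green encodes movement in Y, and the multiply
    factor scales the overall blur length."""
    return (x + mv_red * multiply, y + mv_green * multiply)
```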
FIGURE 3.55 Inserting a VectorBlur node after the
Premult1 node
5. The Multiply property defines how much motion blur the image will
have overall. Play with this value to see the image changing. I ended up
leaving it on 7 for this shot.
Your character now has motion blur. Aren’t you happy? Now for more
blurring...
Look at that: a pretty red image. But what’s that? It’s cropped at the top?
Why’s that (FIGURE 3.56)?
At the beginning of this chapter, you cropped the image according to the
auto crop calculation of CurveTool1. This calculation was performed on
the RGBA channel set and so it didn’t include areas that are not black in
other channels. If you call up the depth pass from the bottom point in the
stream, you get a cropped version of that element. Therefore, to get an
uncropped version, you have to go back up to a place before the crop and
pick a pass from there.
3. At the top of the tree, with nothing selected, create a Dot and connect
its input to Read1. Drag it to the right until it passes the rest of the tree.
4. Select the new Dot and insert another Dot, connected to it. Drag the
new Dot down until it passes the rest of the tree (FIGURE 3.57).
8. View ShuffleCopy2 in the Viewer. You should still be viewing the depth
pass and can see that it is no longer cropped.
10. Insert a ZDefocus node from the Filter toolbox after ShuffleCopy2.
11. In the on-screen controls, drag the focal_point control to the eye of
the pilot (FIGURE 3.59).
Using the on-screen focal_point control actually changes the Focus Plane
property in the Properties panel. Let’s animate it so that the eye is always
in focus.
14. Drag the focal_point control to the eye again. Notice how this created
a new keyframe for the Focus Plane’s property.
15. To really get a strong depth of field, change both the Size and the
Maximum properties to 50.
Now you can look at this composite and see the result of all the work you
did. You have the pilot in the sky. Those with more experienced eyes may
notice that there is a lot more work to do to get this composite to look as
good as it can—but that’s more of a compositing lesson than a Nuke
lesson.
What’s important now, though, is that you have access to every building
block of what makes up the way the composite looks. Having the separate
passes easily accessible means your work will be easier from here on out.
For now, though, leave this composite here. Hopefully this process helped
teach you the fundamental building blocks of channel use and how to
manipulate a bigger tree (FIGURE 3.61).
4. Color Correction
Wow. This is a bit naive, calling a lesson “Color Correction.” It
should be a whole course on its own. But this book is about
more than that, and limited space reduces color correction to a
single chapter. So let me start by explaining what it means.
Whatever reason you have for color correcting an image, the process will
work according to the way Nuke handles color. Nuke is a very advanced
system that uses cutting-edge technology and theory to work with color.
Therefore, it is important to understand Nuke’s approach to color so you
understand color correcting within Nuke.
Linear: Linear can mean lots of things, but here, in terms of color, I
mean linear color space. A computer monitor doesn’t show an image as
the image appears in reality, because the monitor is not a linear display
device. It has a mathematical curve called gamma that it uses to display
images. Different monitors can have different curves, but most often, they
have a gamma curve called sRGB. Because the monitor is not showing the
image as it appears in reality, images need to be “corrected” for this. This
is usually done automatically, because most image capture devices apply an
sRGB curve too, in the opposite direction. A middle-gray pixel displayed on
a monitor doesn't show as true middle gray, because it is being affected by
the gamma curve. Since your scanner, camera, and image-processing
applications all know this, they compensate by applying the reverse gamma
curve to that gray pixel, which negates the monitor's effect.
This process represents basic color management. However, if your
image’s middle gray value isn’t middle gray because a gamma curve has
been applied to it, it will react differently to color correction and might
produce odd results. Most applications work in this way, and most people
dealing with color have become accustomed to this. This is primarily
because computer graphics is a relatively new industry that relies on
computers that, until recently, were very slow. The correct way to
manipulate imagery—in whatever way—is before the gamma curve has
been applied to an image. The correct way is to take a linear image, color
correct it, composite it, transform it, and then apply a reverse gamma
curve to the image to view it correctly (as the monitor is applying gamma
correction as well and negating the correction you just applied). Luckily,
this is how Nuke works by default.
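The round trip described above can be sketched numerically. This is a generic illustration of the idea, not Nuke's internal code; the transfer function used here is the standard sRGB formula (IEC 61966-2-1):

```python
def srgb_encode(x: float) -> float:
    """Apply the sRGB transfer curve (linear -> display values)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(x: float) -> float:
    """Remove the sRGB transfer curve (display values -> linear)."""
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

# A linear middle gray of 0.18 is stored/displayed as roughly 0.46:
encoded = srgb_encode(0.18)

# Decoding brings it back to linear, which is where Nuke does its math:
linear = srgb_decode(encoded)
print(round(encoded, 3), round(linear, 3))
```

The point of the round trip is that color corrections, merges, and transforms happen on the `linear` value, and the encode step is applied only at viewing time.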
NOTE
Nuke has many color correction nodes, but they are all built out of basic
mathematical building blocks, which are the same in every software
application. The next section looks at those building blocks.
The midtones, meaning the colors in the image that are neither dark
nor bright
In Nuke, and in other applications that support colors beyond white and
black (float), there are two more potential parts to the dynamic range: the
super-whites and the sub-blacks.
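Because float pixels are not clipped at 0 and 1, super-whites and sub-blacks survive later operations. A small generic illustration (plain Python, not Nuke code):

```python
# In 8-bit integer images, values clip at the ends of the range:
def to_8bit(v: float) -> int:
    return max(0, min(255, round(v * 255)))

# In float, a super-white of 2.0 survives a darkening operation...
super_white = 2.0
darkened = super_white * 0.5         # back to 1.0 -- detail recovered

# ...whereas a clipped 8-bit white has lost that headroom for good:
clipped = to_8bit(2.0) / 255 * 0.5   # 0.5 -- the extra stop is gone
print(darkened, clipped)
```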
1. Launch Nuke.
4. With Read1 selected, go to the Color toolbox and click Add in the Math
folder.
You have now inserted a basic color-correcting node after the car image.
Let’s use it to change the color of the image and see its effect.
5. In Add1’s Properties panel, click the Color Picker button to display the
In-panel Color Picker. Play with the R, G, and B colors to see the changes
(FIGURE 4.1).
You can see that everything changes when you’re playing with an Add
node—the highlights, midtones, and even blacks (FIGURE 4.2). An Add
operation adds color to everything uniformly—the whole dynamic range.
Every part of the image gets brighter or darker.
7. Select Read1 again and branch out by holding the Shift key and clicking
a Multiply node from the Math folder in the Color toolbox.
You can see very different results here. The highlights get a strong boost
very quickly whereas the blacks are virtually untouched.
10. Repeat the previous process for the Gamma node. Remember to
branch from Read1 (FIGURE 4.4).
You can see that gamma deals mainly with midtones. The bright areas
remain untouched and so do the dark areas.
You should now have three different, basic, math-based color correctors
in your Node Graph that produce three very different results, as shown in
FIGURE 4.5.
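The three math nodes you just branched come down to three small formulas. A sketch of the conventional operations (Nuke's Gamma node raises the input to 1/gamma; treat the exact conventions as an assumption):

```python
def add(v, k):        # uniform offset: moves the whole dynamic range
    return v + k

def multiply(v, k):   # scales away from 0: highlights move the most
    return v * k

def gamma(v, g):      # bends the midtones: 0 and 1 stay locked
    return v ** (1.0 / g)

samples = (0.0, 0.5, 1.0)   # a black, a midtone, and a white pixel
for op, k in ((add, 0.1), (multiply, 1.5), (gamma, 2.0)):
    print(op.__name__, [round(op(v, k), 3) for v in samples])
```

Running this shows the three signatures described above: Add moves all three samples equally, Multiply leaves 0 put and boosts the white most, and Gamma leaves both 0 and 1 put while lifting only the midtone.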
11. Select Read1 and then Shift-click RolloffContrast in the Color toolbox
to create another branch.
12. While viewing RolloffContrast1, open its Properties panel and play
with the Contrast value (FIGURE 4.7).
You can see how, when you increase the contrast above 1, the lowlights get
pushed down and the highlights are pushed up.
13. Keep the Contrast property above 1 and bring the Center value down
to 0.
Now you can see that the result of the RolloffContrast operation is very
similar to that of the Multiply node. In fact, they are virtually identical.
When setting the center value at 0, you lock that value in place. The value
0 is locked in place when you’re multiplying as well.
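A basic contrast operation can be written as scaling around a pivot. This is the generic formula, not RolloffContrast's exact implementation (which adds its soft-clip rolloff on top):

```python
def contrast(v: float, amount: float, center: float) -> float:
    """Scale values away from the center pivot."""
    return (v - center) * amount + center

# With center at 0, contrast collapses into a plain multiply:
assert contrast(0.5, 2.0, center=0.0) == 0.5 * 2.0

# With the usual pivot, lowlights go down and highlights go up:
print([round(contrast(v, 2.0, center=0.5), 2) for v in (0.2, 0.5, 0.8)])
```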
You haven’t gone through an operation called Lift yet, but the
RolloffContrast operation is virtually the same as that operation. With
Lift, the value 1 is locked in place, and the further the values are away
from 1, the bigger the effect. You will go through Lift when you learn
about the Grade node later in this chapter.
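Lift, as described, is the mirror image of Multiply: the value 1 is the locked pivot. A sketch of the conventional lift formula (treat the exact math Nuke uses as an assumption):

```python
def lift(v: float, amount: float) -> float:
    """Raise the blacks while keeping white (1.0) locked in place."""
    return v * (1.0 - amount) + amount

# White is untouched; black moves all the way up to the lift amount,
# and values in between move proportionally to their distance from 1:
print(lift(1.0, 0.2), lift(0.0, 0.2), lift(0.5, 0.2))
```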
When dealing with color, usually you need to control the lowlights,
midtones, and highlights separately.
The Add operation adds the same amount of color to every part of the
dynamic range.
USING AN I/O GRAPH TO VISUALIZE COLOR
OPERATIONS
Studying an I/O graph (input versus output graph) is a great way to
understand color operations. The X axis represents the color coming in,
and the Y axis represents the color going out. Therefore a perfectly
diagonal line represents no color correction. The graph shows what the
color operation is doing and the changes to the dynamic range.
To view an I/O graph like this, you can bring in a premade script I made.
1. Choose File > Import Script to load another script from the disk and
merge it with the script you have been building.
Notice that when you imported the script (which is only four nodes), all of
its nodes were selected. This is very convenient as you can immediately
move the newly imported tree to a suitable place in your Node Graph.
3. Make sure the imported tree is not sitting on top of your existing tree.
Move it aside to somewhere suitable, as in FIGURE 4.8.
• The first node is a Reformat node, which defines the resolution of your
image—in this case, 256×256 pixels. Notice that its input isn’t connected
to anything. This is a good way to set a resolution for your tree.
• The second node is a Ramp. This can be created from the Draw toolbox.
This node generates ramps—in this case, a black to white horizontal ramp
from edge to edge.
• The third node is a Backdrop node used to highlight areas in the tree.
You can find it in the toolbox called Other. It indicates where to add your
color correction nodes in the next step.
• The fourth and last node is an Expression node, a very powerful node. It
can be found in the Color > Math toolbox. It lets the user write an
expression with which to draw or manipulate an image. You can do a lot of
things with this node, from simple color operations (such as adding or
multiplying, though this is wasteful) to complex warps or even redrawing
images altogether. In this case, you use the node to plot the values of the
horizontal black-to-white ramp (the ramp from above) as white pixels at
the corresponding height in the image. A gray value of 0.5 in the ramp
generates a white pixel halfway up the Y resolution of the Expression
node's output. The leftmost pixel of the ramp is black, so it shows as a
white pixel at the bottom of the screen; the middle pixel has a value of
0.5, so it shows as a white pixel in the middle of the screen; and the
rightmost pixel has a value of 1, so it draws a white pixel at the top.
Together, these white pixels form a diagonal line (FIGURE 4.9). Changing
the color of the ramp changes the line. This happens on each of the three
color channels individually.
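You can mimic what the Expression node is doing with a few lines of ordinary code: for every column, light up the row whose height matches the ramp's value in that column. This is a conceptual sketch in plain Python, not Nuke expression syntax:

```python
WIDTH = HEIGHT = 8   # a tiny stand-in for the 256x256 Reformat

# A horizontal black-to-white ramp: the value grows with x.
ramp = [x / (WIDTH - 1) for x in range(WIDTH)]

# For each column, draw a white pixel at the height given by the value.
graph = [[' '] * WIDTH for _ in range(HEIGHT)]
for x, value in enumerate(ramp):
    y = round(value * (HEIGHT - 1))
    graph[y][x] = '#'

# Printing the top row last puts y=0 at the bottom, like the Viewer.
for row in reversed(graph):
    print(''.join(row))
```

An unchanged ramp produces the perfectly diagonal line, i.e. "no color correction"; insert any color operation between the ramp and the plot and the line bends accordingly.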
FIGURE 4.9 The I/O graph in its default state
Let’s start using this I/O Graph tree. You will insert a Color node in
between Ramp1 and Expression1 and look at the resulting I/O graph.
5. Insert an Add node from the Color > Math toolbox after Ramp1, as
shown in FIGURE 4.10.
FIGURE 4.10 Add2 has been inserted after Ramp1 and will
change your I/O graph.
You can see, as in FIGURE 4.11, that the Add operation changes the
whole dynamic range of your graph and, therefore, for any image.
Let’s replace your Add node with a Multiply node. You’ve never done this ⬆
before, so pay attention.
7. With Add2 selected, Ctrl/Cmd-click the Multiply node in the Color >
Math toolbox to replace the selected node with the newly created one.
The Multiply operation has more effect on the highlights than the
lowlights. When you are moving the slider, you can see that the 0 point
stays put, and the further away you go from 0, the stronger the effect
becomes.
Let’s try gamma. Maybe you don’t know what a gamma curve looks like.
Well, here’s your chance to learn.
10. Replace Multiply2 with a Gamma node by holding down Ctrl/Cmd and
clicking Gamma in the Color > Math toolbox.
11. Load Gamma2’s In-panel Color Picker and play with the sliders for R,
G, and B.
FIGURE 4.13 Notice that only the middle part of the graph
moves.
The Gamma operation changes the midtones without changing the blacks
or whites. You can tell that the points at the furthest left and at the
furthest right are not moving.
Contrast is next.
The contrast operation pushes the two parts of the dynamic range away
from one another (FIGURE 4.14).
FIGURE 4.14 A basic contrast curve. Though it’s not curvy,
it’s still called a curve.
14. Play around with RolloffContrast2’s center property. When you are
finished, set the value to 0.
Here you can see what actually happens when you play with the center
slider. It moves the point that defines where the lowlights and highlights
are. When leaving the center at 0, you can see that the curve is identical to
a Multiply curve (FIGURE 4.15).
FIGURE 4.16 Moving the slider up to 1 is actually a Lift
operation.
This is a Lift operation, which is covered later in this chapter. Your white
point is locked, while everything else changes—the opposite of Multiply.
RolloffContrast has one other property you can see in the I/O graph: Soft
Clip, the property that gives the node its name. It smooths out the edges
of the curve so that colors don't suddenly clip to black or white with a
harsh transition.
16. Move the center slider to 0.5 and start to increase the Soft Clip slider.
I stopped at 0.55.
FIGURE 4.17 shows what happens when you increase the soft clip. This
creates a much more appealing result, which is unique to this node.
If you have a fair amount of experience, you must have noticed that the
I/O graph looks a lot like a tool you may have used in the past—something
applications such as Adobe After Effects call Curves. In Nuke, this is
called ColorLookup, and it is discussed in the next section.
Let’s try this node on both the image and the I/O graph itself.
The interface for this node has the narrow curves list on the left and the
curve area on the right. Choosing a curve at left displays that curve at
right, which enables you to manipulate it. There are five curves. The first
controls all the channels, and the next four control the R, G, B, and alpha
separately. You can have more than one curve appear in the graph
window on the right by Shift-clicking or Ctrl/Cmd-clicking them in the
list.
You’ve just created another point. You can move it around and play with
its handles. If you look at the I/O graph on the Viewer, you can see that it
mimics what you did in the ColorLookup node. They are exactly the same
(FIGURE 4.19).
4. Select Read1 and Shift-click the ColorLookup node in the Color toolbox
to branch another output.
6. Play around with ColorLookup2’s curves. You can play with the
separate RGB curves as well.
When matching colors, the normal operation is to match black and white
points between the foreground and background (only changing the
foreground), then match the level of the gray midtones, and finally match
the midtone hue and saturation.
TABLE 4.2 Grade Node Properties
NOTE
1. If you want, you can save your script. When you are finished, press
Ctrl/Cmd-W to close the script and leave Nuke open with an empty script.
You will quickly composite these images together and then take your time
in color matching the foreground image to the background.
4. Select Read1 and press the M key to insert a Merge node after it.
The composite is almost ready. You just need to punch a hole in the
foreground car so it appears to be behind the snow that’s piling up on the
windshield. For that, you'll bring in another image (you will learn how to
create mattes yourself in Chapter 6).
Here you can see this is a matte of the snow. It is a four-channel image
with the same image in the R, G, B, and alpha. You need to use this image
to punch a hole in your foreground branch. To do this, you need another
Merge node.
8. Drag Merge2 on the pipe between Read1 and Merge1 until the pipe
highlights. When it does, release the mouse button to insert Merge2 on
that pipe (FIGURE 4.22).
FIGURE 4.22 Inserting a node on an existing pipe
You can see here that this is not the desired result. You still need to
change the Merge2 operation to something that will cut the B image with
the A image. This operation is called Stencil. Stencil is used often to
combine mattes in the same way we’re using it now. The reverse of this
operation, which is just as important, is called Mask, which masks the B
image inside the A image’s alpha channel. Mask holds image B inside the
alpha channel of image A, and Stencil holds image B outside image A.
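The Merge operations mentioned here are all small per-pixel formulas on the A input, the B input, and A's alpha channel. A sketch of the standard compositing definitions (assuming premultiplied inputs, as Merge expects):

```python
def over(a, b, a_alpha):
    """A over B: A plus whatever of B shows through A's alpha."""
    return a + b * (1.0 - a_alpha)

def stencil(a, b, a_alpha):
    """Hold B outside A's alpha -- punch A's shape out of B."""
    return b * (1.0 - a_alpha)

def mask(a, b, a_alpha):
    """Hold B inside A's alpha -- keep B only where A exists."""
    return b * a_alpha

# Where the snow matte's alpha is 1, Stencil removes the car pixel
# entirely, while Mask would keep it:
print(stencil(a=1.0, b=0.8, a_alpha=1.0), mask(a=1.0, b=0.8, a_alpha=1.0))
```

Note that Stencil and Mask are exact complements: for any pixel, their two results sum back to the original B value.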
Looking at your comp, you can see that it now works—short of a color
difference between the foreground and background. Let’s use a Grade
node to fix this shift.
11. Select Read1 and press the G key to insert a Grade node after it.
As you know from Chapter 2, you are not allowed to color correct
premultiplied images. It is often hard to tell if an image is premultiplied
or not, but in this case it is. You can also look at the RGB versus the alpha
channels and see that the areas that are black in the alpha are also black
in the RGB.
Since you can’t color correct premultiplied images, you have to unpremult
them. You can do this in one of two ways: using an Unpremult node
before the color correction (in this case, Grade1) and then a Premult node
after it, or using the (Un)premult By Switch in your Color nodes. Let’s
practice both.
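Both methods perform the same division and multiplication around the color correction. The math itself is tiny; here is a sketch of why correcting a premultiplied pixel goes wrong (generic formulas, not Nuke source):

```python
def unpremult(rgb: float, alpha: float) -> float:
    """Divide the stored color by alpha to recover the true color."""
    return rgb / alpha if alpha else 0.0

def premult(rgb: float, alpha: float) -> float:
    """Multiply the color by alpha for storage/compositing."""
    return rgb * alpha

# A half-transparent pixel whose true color is 0.8:
alpha = 0.5
stored = premult(0.8, alpha)              # 0.4 as stored in the image

# Correct path: divide out alpha, grade, multiply alpha back in.
graded = unpremult(stored, alpha) + 0.4   # offset the true color by 0.4
correct = premult(graded, alpha)          # 0.6

# Wrong path: grading the premultiplied value directly.
wrong = stored + 0.4                      # 0.8 -- semitransparent areas
print(correct, wrong)                     # get over-brightened
```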
12. Bring the Grade1’s Offset property up to around 0.4 (FIGURE 4.25).
You can see that the whole image, except the dashboard area, turned
brighter, even though you are correcting only the car image. This is due to
the lack of proper premultiplication. Let’s do the two-node method first.
13. Click Read1 and, from the Merge toolbox, add an Unpremult node.
14. Click Grade1 and, from the Merge toolbox, add a Premult node and
look at the Viewer (FIGURE 4.26).
The problem has been fixed. This is one way to use proper
premultiplication. Let’s look at another.
15. Select Unpremult1 and Premult1, and press the Delete key.
The resulting image looks exactly as it did before (in Figure 4.26). This
technique does exactly the same thing as the first method, just without
using other nodes. I usually prefer the first method, as it shows clearly in
the DAG that the premultiplication issues are handled. However, if you
look at Grade1 in the DAG now, you will see that, although the change is
not as noticeable, Grade1 is showing that it is dividing the RGB channels
by the alpha channel. The label now says "rgb/alpha" (FIGURE 4.28).
FIGURE 4.28 The node’s label changes to show the
Unpremult and Premult operations are happening inside
the node.
Let’s use the second method you have set up already. You will now be
color correcting an unpremultiplied image but outputting a premultiplied
image.
After a little rearranging, the tree should look like the one in FIGURE
4.29.
FIGURE 4.29 Your tree should look like this at this point.
Using CurveTool and Pixel Analyzer to match black and white points
Think back to the introduction of this section; how are you going to find
the darkest and lightest points in these two images so you can match them
together?
One way, which is valid and happens often, is by using your eyes to gauge
which are the darkest and brightest pixels. However, the computer is so
much better at these kinds of things, and it doesn’t have to contend with
light reflections on the screen and other such distractions.
The node to use for this is the CurveTool node, which you used in Chapter
3 to find the edges of the pilot element. You can also use this node to find
out other color-related stuff about your image. Let’s bring in a CurveTool
node to gauge the darkest and brightest point in the foreground and use
that data to stretch the foreground image to a full dynamic range.
This time you are going to use the Max Luma Pixel Curve Type. This finds
the brightest and darkest pixels in the image.
The purpose of this operation is to find the darkest and lightest pixels in
the image. When switching to this tab, you see two sections: one showing
the lightest pixel (Maximum) and one showing the darkest pixel (Minimum).
For each, the X and Y location and the RGB values are displayed.
Looking closely, you can see that the value of the minimum pixel is 0 in
every property. This is because this image is a premultiplied image, and as
far as CurveTool is concerned, all that black in the image is as much a part
of the image as any other part of it. You need to find a way to disregard
that black area. Let’s do the following.
What you did here was replace, momentarily, the black background with a
middle gray background. This way, you get rid of the black and replace it
with a color that is neither the darkest nor the lightest in the image. This new
image is the image you want to gauge using the CurveTool. You’ll need to
move the pipe coming in to CurveTool1 (FIGURE 4.32).
12. Switch to the MaxLumaPixel tab again and have a look (FIGURE
4.33).
Now you can see that the minimum values are far from being all 0. You
are getting a true result that shows the lightest and darkest pixels. Let’s
make use of them.
13. Close all Properties panels in the Properties Bin to clear some room.
16. Click the 4 icon next to Grade1’s Blackpoint, Whitepoint, Lift, and
Gain to enable the four fields.
The foreground image’s dynamic range now spans from a perfect black to
a perfect white. This enables you to push those colors to new black and
white points to match these points to the background image. You can use
another CurveTool to find those points in the background image, but just
for fun, let’s use the Pixel Analyzer for that this time.
The Pixel Analyzer is a new panel in Nuke 8.0. It helps you analyze the
pixel values in your image.
19. From the Properties Bin’s Content menu, choose “Split Vertical”.
20. From the newly created pane’s Content menu, choose Pixel Analyzer
(FIGURE 4.35).
FIGURE 4.35 The Pixel Analyzer now lives at the bottom
right of the interface.
Notice that, this time around, a line of red dots appears on the screen. All
those points accumulate to fill the five color boxes in the Pixel Analyzer
with values.
Clicking any of the boxes shows the values of that color below—RGBA and
HSVL (Hue, Saturation, Value, Luminance).
Dragging on the screen is all well and good, but the whole frame is what
you need to know about. There’s a feature for that too.
24. Close all Properties panels in the Properties Bin to clear some room.
Because the Pixel Analyzer is a panel and not a node, you can’t link to it,
but you can very easily copy the values across from the Pixel Analyzer to
the property where the values are needed by dragging.
26. Drag from the Pixel Analyzer’s Min box to Grade1’s Lift Color swatch
to copy the values across (FIGURE 4.37).
27. Do the same from the Max box to the Gain Color swatch.
28. You don’t need the Pixel Analyzer anymore, so from its Content menu
choose Close Pane.
You have now matched the foreground’s shadows and highlights to those
of the background (FIGURE 4.38).
As you can see from the image, the shadows and highlights are matched,
but the image is far from looking matched. The midtones, in this case,
make a lot of difference.
1. Hover your mouse pointer in the Viewer and press the Y key to view the
luminance. ⬆
To change the midtones now, use the Gamma property. You can see that
the whitish snow on the right is a darker gray than the whitish car. Let’s
bring down the whitish car to that level.
3. Bring the Gamma slider up to 0.85 and bring the Multiply slider down
a bit to 0.9 (FIGURE 4.39).
4. Hover your mouse pointer in the Viewer and press the Y key to view the
RGB channels (FIGURE 4.40).
OK, so the midtones’ brightness is better now, but you need to change the
color of the car’s midtones. At the moment, the car is too warm for this
winter’s day. Matching color is a lot more difficult because you always
have three options: red, green, and blue. Matching gray is a lot easier
because you need to decide only whether to brighten or darken it.
However, as each color image is made out of three gray channels, you can
use the individual channels to match color too. Here’s how.
5. Hover your mouse pointer in the Viewer and press the R key to view
the red channel (FIGURE 4.41).
Now you are looking only at levels of gray. If you change the red sliders,
you will get a better color match while still looking only at gray.
6. Display the In-panel Color Picker for the Gamma property by clicking
the Color Picker button.
You also want to change the Multiply and Offset values to achieve a
perfect result. This is because, even though you matched the black point
and white point, the distance of the car from the camera means the black
point will be higher and the white point lower. At the end of the day, it will
look right only when it does match, math aside.
7. Click the Color Picker button for the Multiply and Offset properties.
Your screen should look like FIGURE 4.42.
8. Since you are looking at the red channel in the Viewer, change the red
sliders for Gamma, Multiply, and Offset until you are happy with the
result; little changes go a long way. I left mine at Gamma: 0.8, Multiply:
0.82, and Offset: 0.02.
9. Display the green channel in the Viewer, and then move the green
sliders to change the level of green in your image. My settings are
Gamma: 0.85, Multiply: 0.89, and Offset: 0.025.
10. Do the same for the blue channel. My settings are Gamma: 1.05,
Multiply: 1, and Offset: 0.065.
This is as far as I will take this comp. Of course, you can use your already
somewhat-developed skills to make this a better comp, but I’ll leave that
to you.
2. Press the R key and bring in, from the chapter04 folder, the car.png
image again.
3. While the newly imported Read1 node is selected, press the C key to
create a ColorCorrect node. You can also find the ColorCorrect node in the
Color toolbox.
In this tab (it’s similar to ColorLookup, isn’t it?) you have three graphs, all
selected. One represents the shadows, another the midtones, and a third
the highlights (FIGURE 4.45).
6. Click the Test check box at the top of the graph (FIGURE 4.46).
FIGURE 4.46 The test shows the parts of the dynamic
range in the Viewer.
This shows a representation in the Viewer of what parts of the image are
shadow, midtone, and highlight. Highlights are represented by white,
midtones as gray, and shadows as black. Green and magenta represent
areas that are a mix of two ranges.
7. Click the Test button at the top of the graph again to turn it off.
The ranges are fine for this image, so we won’t change anything and will
continue working.
You will now give this image a dreamy, car commercial look—all soft
pseudo blues and bright highlights. If you don’t define the look you are
after in the beginning, you can lose yourself very quickly.
Before changing the color of this image, I’ll show you my preferred
interface setup for color correcting.
10. Hover your mouse pointer in the Viewer and press the spacebar to
maximize the Viewer to the size of the whole interface (FIGURE 4.48).
Since the Properties panel is floating, it is still there. This way, you can
look at the image at its maximum size without wasting space on things
like the DAG, yet you are still able to manipulate the ColorCorrect node.
What I am aiming for is something like that in FIGURE 4.49. You can
try to reach this look yourself, or you can follow my steps.
FIGURE 4.49 This is the image look I am referring to.
11. Let’s start by desaturating the whole image a little, so in the Master set
of properties, set the Saturation property to 0.5.
Now for the shadows. I would like to color the shadows a little bluer than
normal.
Remember, in addition to the In-panel Color Picker, you can also use the
Floating Color Picker. To use this, Ctrl/Cmd-click the Color Picker button.
The benefit of using the Floating Color Picker is that all sliders also have
input fields, so you can enter values numerically.
13. From the Hue slider, choose a blue hue. I selected 0.6. Now change
the Saturation for the shadows.gamma color. I set it to 0.31. Finally,
adjust the brightness, or Value slider in the Floating Color Picker. I have it
at 1.22 (FIGURE 4.51).
FIGURE 4.51 Setting the shadow’s Gamma properties using
the Floating Color Picker
15. You have a lot more work in the midtones. First, set the Saturation to
0 so that the midtones turn black and white.
16. To create a flatter palette to work on, set the Contrast for midtones at
0.9.
18. Use the Gain property to tint the midtones by Ctrl/Cmd-clicking the
Color Picker button for Midtones/Gain.
19. In the Floating Color Picker that opens, click the TMI button at the
top to enable the TMI sliders (FIGURE 4.52).
If you need to make the Floating Color Picker bigger, drag the bottom-
right corner of the panel.
20. Now, for a cooler looking shot, drag the T (temperature) slider up
toward the blues. I stopped at 0.72.
21. To correct the hue of the blue, use the M (magenta) slider to make this
blue either have more magenta or more green. I went toward the green
and left it at –0.11.
As always, only the RGB values affect the image. You just used TMI sliders
to set the RGB values in an easier way.
23. You'll now increase the highlights a little, so let's start by setting the
Contrast to 1.5.
24. To color correct the highlights, first click the 4 icon to enable the
individual Gain input fields.
25. Click in the right side of Gain’s first input field (for the red channel)
and use the up and down arrow keys on your keyboard to change the red
value. I left it on 0.75 (FIGURE 4.54).
26. Leave the next field (green) where it is, but use the arrow keys in the
blue field to increase blue. Because I want everything to be a little bluer, I
left mine at 1.5.
The first stage of the color correction is finished. Let’s bring back the rest
of the interface.
You haven’t learned to create complex mattes yet, but in this case, you
really need only two radial mattes. You can create those easily using the
Radial node in the Draw toolbox.
If you use the Grade node as it is, the whole image gets brighter. You’ll
need to use Grade1’s mask input to define the area in which to work.
2. With nothing selected, create a Radial node from the Draw toolbox
(FIGURE 4.56).
3. View Radial1.
It creates a radial, see? I told you. By moving the edges of the radial box,
you can change its shape and location.
4. View Grade1.
5. Drag Radial1’s edges until it encompasses the back wheel (FIGURE
4.57).
FIGURE 4.57 Radial1 encompasses the back wheel.
You’ll need another Radial node to define the second wheel. (You can add
as many Radial nodes as you need. Everything in Nuke is a node,
remember?)
8. To make use of the radials, take the mask input for Grade1 and attach it
to the output of Radial2, as in FIGURE 4.59.
This means whatever you now do in Grade1 affects only where the radial’s
branch is white.
10. Some of the deep blacks have become a little too gray, so decrease the
Blackpoint property a bit. I left mine at 0.022.
At this point, the grading is finished. Mask inputs can be very important
in color correction because a lot of times you want to color correct only an
area of the image. But remember not to confuse mask inputs with mattes
or alpha channels. The use of the mask input is solely to limit an effect—
not to composite one image over another or to copy an alpha channel
across.
5. 2D Tracking
Tracking makes it possible to gauge how much movement is
taking place from frame to frame. You can use this movement
information to either cancel the movement by negating it—
called stabilizing—or transfer the movement to another
element, called match-moving. Tracking is done via the
Tracker node, which got a big overhaul with Nuke 7.
FIGURE 5.1 Tracker point anatomy
STABILIZING A SHOT
In this first exercise you track a shot in order to stabilize it, meaning that
you stop it from moving. To give you something to stabilize, bring in a
sequence from the files you’ve copied to your hard drive.
3. Zoom in close to the area in the image where the spoon is, place your
mouse pointer over the edge of the spoon handle, and don’t move it
(FIGURE 5.2).
Notice how much film weave there is in this plate. Film weave is the
result of the celluloid moving a little inside the camera when it is
shooting. You will now fix that weave and add some flares to the candle
flames.
6. Make sure you’re viewing the output of Tracker1 in both the Viewer and
frame 1.
The Tracker node’s Properties panel loads into the Properties Bin, but
we’ll leave that for now. More importantly, a new toolbar that looks like
FIGURE 5.3 appears in the Viewer. Using these controls, you can track a
feature in the image. You’ll start by defining the pattern box.
FIGURE 5.3 The Tracker node’s Toolbar
7. In the Tracker Toolbar in the Viewer, click the Add Track button. It
turns red to show that it’s selected.
If you leave this button on, every click in the Viewer will create another
tracker point. However, there’s a quicker way to create a tracker point.
Ctrl/Cmd-Alt/Option-clicking in the screen simply creates a tracker point
in that location.
10. In the Viewer, click the center of the tracker point, where it says
track1, and move it to the edge of the spoon handle (FIGURE 5.4).
The Tracker node starts to follow the pixels inside the pattern box from
frame to frame. A progress bar appears, showing you how long it will be
until the Tracker (shorthand for Tracker node) finishes. When the Tracker
finishes processing, the tracking part of your work is actually finished.
Anything beyond this is not really tracking—it’s applying a Tracker’s
result.
You can see the tracking data the Tracker accumulated in the track_x and
track_y columns in the Properties panel as keyframes (FIGURE 5.6).
The first tab of the Tracker Properties panel is used mainly for viewing the
accumulated data, not using it.
12. Move back in the Timebar using the left arrow key.
Look at the track_x and track_y input fields and how they change to
reflect the position of the pattern box in each frame.
If you subtract the X and Y values in frame 1 from the X and Y values in
frame 2, the result is the movement you need to match the tracked
movement. If you take that number and invert it (5 becomes –5), you
negate the movement and stabilize the shot. You can do this for any
frame. The frame you are subtracting—in this example, it is frame 1—is
called the reference frame.
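The subtraction described above is easy to sketch in plain Python. This is illustrative arithmetic only, not the Nuke API, and the tracked positions are hypothetical:

```python
# A minimal sketch of the stabilization math, assuming hypothetical
# tracked positions per frame (plain Python, not the Nuke API).
tracked = {1: (512.0, 300.0), 2: (517.0, 298.0), 3: (509.5, 304.0)}
REF_FRAME = 1  # the frame you subtract from the others

def stabilize_offset(frame):
    # Movement relative to the reference frame, inverted (5 becomes -5)
    # to negate the motion and stabilize the shot.
    rx, ry = tracked[REF_FRAME]
    x, y = tracked[frame]
    return (-(x - rx), -(y - ry))

print(stabilize_offset(2))  # (-5.0, 2.0)
```

Note that the reference frame itself always gets an offset of zero, which is exactly why the stabilized shot doesn't move on that frame.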
Now that you’ve successfully tracked the position of the spoon from frame
to frame, you probably want to use this to stabilize the shot. This is done
in another tab in the Tracker Properties panel.
The Transform tab holds the Properties with which you can turn the
tracked data into transformations.
This is all you need to do to stabilize the shot. To make sure the shot is
now stabilized, compare it in the Viewer to the unstabilized shot.
16. In the Viewer, change the center Viewer composite control drop-down
menu from – to Wipe.
17. Reposition the axis from the center of the frame to just above the
spoon handle.
Notice that Tracker1’s transform controls are in the way. You need to get
rid of these to see the Viewer properly.
18. Close all Properties panels by clicking the Empty Properties Bin
button at the top of the Properties Bin.
19. Press Play in the Viewer and look at both sides of the wipe (FIGURE
5.8).
20. Click Stop and switch the center Viewer composite control drop-down
menu back to –.
21. Click Tracker1 and press the 1 key to make sure you’re viewing that
part of the branch.
For this shot, the director asked you to “bloom” each candle flame a little.
To do so, you will add a flare to each candle flame using a node called, you
guessed it, Flare from the Draw toolbox.
NOTE
The Flare node is a great, very broad tool for creating various
types of flares and lighting artifacts.
22. Select Tracker1 and, from the Draw toolbox, add a Flare node
(FIGURE 5.9).
FIGURE 5.9 A new node, Flare1, is inserted after Tracker1.
23. Drag the center of the flare onto the center of the rightmost candle
flame using the on-screen controls, as shown in FIGURE 5.10.
I don’t have room to explain every property of the Flare node. It’s a very
involved and very artistic—and hence, subjective—tool, which makes it
difficult to teach. And let’s face it, it has a lot of properties. I encourage
you to play with it and learn its capabilities, but in this case, I ask you to
copy some numbers from the list in the next step.
24. Copy the following values into the corresponding properties one by
one and see what each does. When you’re done copying, if you want, you
can change them to suit your taste:
• Radius = 0, 0, 50
• Corners = 12
• Edge Flattening = 6
The result of the Flare node should look a lot like FIGURE 5.11.
FIGURE 5.11 The flare after treatment
Now use this Flare node to place three more flares in the shot.
26. Make sure Flare1 is still selected and paste another flare by pressing
Ctrl/Cmd-V.
You want to move the location of Flare2 to the second candle from the
right. This can get confusing because when you paste a node, its
Properties panel doesn’t load into the Properties Bin automatically, so
moving the on-screen control now moves the position for Flare1. You have
to be careful which on-screen control you’re using.
30. Repeat the process twice more for the other two candles.
Your image should have four flares on it—one on each candle. Your tree
should look like FIGURE 5.12.
FIGURE 5.12 Four Flare nodes inserted one after the other
What’s left to do now is to bring back the film weave you removed in the
first step. You already have all the tracking data in Tracker1. You can now
copy it and insert it at the end of the tree to bring back the transformation
that you removed before.
33. Click the Transform tab and choose Match-move from the Transform
drop-down menu.
You can see that the motion data was returned as it was before, except
now it has flares, which are weaving just like the rest of the picture.
37. Save your project in your student_files directory with a name you find
fitting.
38. Press Ctrl/Cmd-W to close the Nuke script and create a fresh one.
• One point tracking: This can produce movement in the horizontal and
vertical positional axes only.
• Two point tracking: This can produce movement in the same way as
one point tracking, but it can also produce 2D rotation and scale.
By tracking the right number of points, you can tell Nuke to create the
right kind of movement using the Tracker node—whether it stabilizes or
match-moves that movement. You do this by adding more tracks, tracking
each one in turn, and then using the T, R, and S check boxes in the
Properties panel, shown in FIGURE 5.13, to tell Nuke to use a track to
create T (transform), R (rotation), and S (scale) movement.
FIGURE 5.13 These three check boxes tell Nuke what kind
of movement to create.
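As a rough illustration of why two points are enough for rotation and scale, here is the underlying arithmetic in plain Python. The coordinates are hypothetical, and this is not the Nuke API:

```python
import math

# A minimal sketch of how two tracked points yield 2D rotation and
# scale: compare the segment between them in a reference frame with
# the same segment in the current frame. Coordinates are hypothetical.
def rotation_and_scale(p1_ref, p2_ref, p1_cur, p2_cur):
    def delta(a, b):
        return (b[0] - a[0], b[1] - a[1])
    vr = delta(p1_ref, p2_ref)   # segment in the reference frame
    vc = delta(p1_cur, p2_cur)   # the same segment in the current frame
    rotation = math.degrees(math.atan2(vc[1], vc[0]) - math.atan2(vr[1], vr[0]))
    scale = math.hypot(*vc) / math.hypot(*vr)
    return rotation, scale

rot, s = rotation_and_scale((0, 0), (100, 0), (10, 10), (10, 120))
# Here the segment between the two trackers turned 90 degrees and
# grew 10 percent between the two frames.
```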
2. Save this Nuke script to your student_files folder and call it frame_v01.
3. Make sure you are viewing Read1 in the Viewer and press the Play
button to watch the clip (FIGURE 5.14). Remember to stop playback
when you’re done and to return to frame 1.
What you are seeing is a picture frame on a table with a handheld camera
rotating around it. You need to replace the picture in the frame. The
movement happens in more than two dimensions (though, of course, as
far as the screen is concerned, everything is moving in just two
dimensions). The picture changes its perspective throughout the shot.
That means you have to track four individual points, one for each corner
of the frame, to be able to mimic the movement.
The Tracker node has the ability to track as many tracking points as you
would like. Simply keep adding more tracking points using the same
button we used before.
5. Make sure you are viewing Tracker1 and that you are on frame 1 in the
Timebar.
8. Repeat this twice more for the top-right and top-left corners of the
picture frame (in that order please).
You now have four tracking points on the four corners of the picture
frame, as shown in FIGURE 5.15.
FIGURE 5.15 Creating all four tracking points
Take a look at the Properties panel for Tracker1; you can see it is now
populated with the four trackers you created. You can select the tracking
points either here or in the Viewer and use this list to delete tracks, or as
another way to add new tracks using the buttons at the bottom (FIGURE
5.16).
Let’s adjust the reference pattern and search box for each tracker.
9. Using the on-screen controls, resize the pattern boxes so they’re a lot
smaller and the search area boxes so they’re bigger, similar to what you
see in FIGURE 5.17.
FIGURE 5.17 Adjusting the reference and search areas
10. Select all tracking points by hovering the mouse pointer over the
Viewer and pressing Ctrl/Cmd-A.
11. Click the Track Forward button in the Toolbar and hope for the best.
Chances are the track went well, but in case it didn’t, here’s a list of things
to do to improve your tracking:
Don’t track more than one tracking point at a time, as you just
attempted. It makes things more difficult. Select a single tracker, focus on
getting that one right, then move on.
It’s all about the pattern and search areas. Select good ones and you’re
home free. A good pattern has a lot of contrast in it, is unique in that it
doesn’t look like anything around it, and doesn’t change much in its
appearance except its location from frame to frame.
If you try once and it doesn’t work, go back to the last good frame and
click the Clear Fwd button (short for forward) to delete the keyframes
already generated (FIGURE 5.18). You can click Clear All to clear
everything, and Clear Bkwd (short for backward) if you are tracking
backward.
TIP
The Traffic Lights button turns the path the keyframes made in the
Viewer into a clear gauge of the quality of the track.
FIGURE 5.20 Clicking the Traffic Lights button makes the
trackers’ paths colorful.
NOTE
The fact that there are red areas doesn’t make your track a
bad one. A reference pattern rarely stays exactly as it is; red
might just indicate that the pattern you are tracking changed,
while the match itself is still the best one and still correct.
Look at those colors. Green dots show areas where the current reference
pattern matches the original reference pattern. Red dots show areas with
the greatest deviation from the original reference pattern.
Let’s focus on track point 1. On my track, halfway through the clip, I have
a red section. Let’s pretend that the tracker actually went off course there
and we want to fix it.
The following instructions rely on the track I have. If you want to follow
my instructions exactly, then do the following:
5. Connect the newly imported Tracker1 to Read1 and then view it in the
Viewer. Double-click it to open its Properties panel.
6. Make sure Track 1 (the actual tracking point, not the node) is selected
by clicking it in the Properties panel (FIGURE 5.21).
At the top left of the Viewer, you see two boxes. The leftmost one, entitled
Track 1, displays the zoomed-in reference pattern on the current frame.
The next box is the keyframe box for the keyframe you created on frame 1.
It is labeled Frame 1 at the bottom of the box, indicating as much. You
will use these boxes, among other things, to create more keyframes and
make sure they match the existing ones.
Let’s start by locking some good tracking results in place. The first frame,
for example, is good, but so is the last frame—as you can tell, because it’s
green also.
7. Go to the last frame, frame 100, and click the Add Keyframe button in
the Toolbar (FIGURE 5.22).
Now you have a new keyframe box to the right of the keyframe box for
frame 1. This one is for frame 100.
8. Click the keyframe box for frame 1, then the one for frame 100.
The reference pattern box on the left flicks between showing frame 1 and
100. This is because clicking the keyframe boxes in the Viewer changes
the frame you are on.
The reference pattern box is not just a pretty box to look at, it’s useful too.
Dragging in this box actually changes the location of your tracker at that
frame.
9. Click between frame 1 and frame 100 using the keyframe boxes.
10. Adjust frame 100 so it looks closer to what frame 1 looks like.
Now let’s make another keyframe by finding the worst area and correcting
that.
11. Drag the Timebar until you reach the reddest part of the tracker’s
path, according to the colors produced by having the Traffic Lights button
switched on. I ended up at frame 68.
13. Adjust the new keyframe by dragging in the reference box. Switch
between the three keyframes to see what you’re matching to.
I reached something that’s better than what I had before. So that’s good—
for this frame. But what about all the rest of the frames? This is where
keyframe tracking comes in. Keyframe tracking means tracking not only
to find the reference pattern in the next frame, but also toward the next
keyframe. This functionality is driven by some smart algorithms inside
the tracker.
14. Click the Keyframe Track All button in the Toolbar shown in
FIGURE 5.23.
15. Go to frame 86 in the Timebar. This is a rather red frame for me.
16. Drag in the reference box to better match this frame as well, and
release the mouse button.
Look at the tracker go! It starts tracking immediately after you change
that reference pattern. What actually happened is that you created
another keyframe on frame 86 by changing the location of the tracker.
Because this is a keyframe-based track, it updated automatically to allow
for the newly added keyframe. You should now have less red in your
colored path (FIGURE 5.25).
FIGURE 5.25 It is very easy to improve your track with
keyframe-based tracking.
You can carry on fixing Track 1, move on to the other three trackers, or
just leave it all as it is. I’m sure it’s good enough already.
You can’t use the Tracker node’s Transform options to create perspective
transformations. That functionality is reserved for the CornerPin node.
Here’s what is next on the to-do list: how to use the accumulated tracking
data outside the Tracker node.
1. From the Chapter05 directory, bring in the file called statue.jpg with a
Read node and view it in the Viewer (FIGURE 5.26).
You will insert part of this Buddha image into the frame.
2. Select Read2 and, from the Draw toolbox, insert a Rectangle node.
This node creates, as you might guess, rectangular shapes. In this case,
you want to create the shape only in the alpha channel.
To create a nice framing for the statue, drag the rectangle until you are
happy with how it looks.
4. Drag the rectangle or enter values in the Properties panel to frame the
image. The Area property will show values similar to 500, 790, 1580, and
2230 (FIGURE 5.27).
FIGURE 5.27 The edges of Rectangle1’s on-screen controls
will end up being the edge of your picture in the frame.
5. You now need to multiply the RGB with the alpha to create a
premultiplied image. Attach a Premult node from the Merge toolbox after
Rectangle1.
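The multiplication step 5 describes is simple per-pixel arithmetic; here is a minimal Python sketch of it (illustrative only, not how Nuke is implemented internally):

```python
# A minimal sketch of what a Premult node does per pixel: each of the
# RGB channels is multiplied by the alpha channel, and alpha is passed
# through unchanged. Values are hypothetical, in the usual 0.0-1.0 range.
def premult(pixel):
    r, g, b, a = pixel
    return (r * a, g * a, b * a, a)

print(premult((1.0, 0.5, 0.25, 0.5)))  # (0.5, 0.25, 0.125, 0.5)
```

Where the rectangle's alpha is 0, the RGB goes to black, which is exactly what lets the Merge node composite only the area inside the rectangle.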
Because you’re not going to use the Tracker node to move the picture
around, you need another transformation tool. CornerPin is designed
specifically for this kind of perspective operation, so you will use it in this
exercise. CornerPin moves the four corners of the image into new
positions—exactly the data the Tracker node accumulates.
The developers at The Foundry know this. Smart people. So they made it
easy for you to create a CornerPin node right from the Tracker node with
all the correct expressions already in place to tie the CornerPin node to
the Tracker node.
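For the curious, the perspective mapping a corner pin performs can be sketched in plain Python: solve for the 3×3 projective transform (homography) that carries four source corners onto four destination corners, then push points through it. The corner coordinates below are hypothetical, and this is a math illustration, not Nuke code:

```python
# A minimal sketch of the corner-pin math: build the standard 8x8
# linear system for a homography from four point pairs and solve it
# with Gauss-Jordan elimination (pure Python, hypothetical corners).
def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def corner_pin(src, dst):
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = solve(A, b) + [1.0]          # fix the last element at 1
    return [h[0:3], h[3:6], h[6:9]]  # 3x3 matrix as nested lists

def warp(H, x, y):
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 5), (95, 12), (90, 108), (4, 96)]
H = corner_pin(src, dst)
# warp(H, ...) now maps each src corner onto its dst corner.
```

This is why four tracked points are exactly what a CornerPin needs: four correspondences pin down the eight unknowns of a perspective transform.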
7. Click track 1’s row in Tracker1’s Properties panel, then Shift-click track
4’s row to select all four trackers.
Because you can have as many tracking points as you want in the list, you
must choose four trackers to match the four corners of an image.
8. At the bottom of the Tracker tab, under the Export separator, make
sure CornerPin2D (Use Current Frame) is selected, and click the Create
button to the right of it.
FIGURE 5.28 Your tree should look like this now.
What actually happened when you clicked the Create button is that a
script (some clever bit of programming, that is) automatically created a
CornerPin2D node and connected its properties to Tracker1’s properties
with expressions. Expressions might sound scary to some, but many
others already use them. You already used expressions in Chapters 3 and
The green line that appeared in the DAG (shown in Figure 5.28) when you
created the CornerPin2D node shows that a property in CornerPin2D1 is
following a property in Tracker1.
The From pins, available in the From tab, tell the CornerPin node where
the original corners are. To make this happen, follow these steps:
TIP
The location of the four pins should be at the corners of Rectangle1’s Area
property. So let’s type an expression link between the two.
What you did here is very similar to what happened automatically in
the previous lesson. You pointed the current property at another by
providing its address as NodeName.KnobName.SubKnobName. Aren’t you
proud? Now let’s do the rest.
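The dotted-address idea can be pictured with a plain Python dictionary. This is only an analogy for how an address like Rectangle1.area.x walks from node to knob to sub-knob; the sub-knob names here are illustrative, not Nuke's expression engine:

```python
# An analogy for NodeName.KnobName.SubKnobName addressing: splitting
# the dotted address walks down through nested lookups to a value.
# The values echo the Area numbers used earlier; the sub-knob names
# are illustrative.
nodes = {"Rectangle1": {"area": {"x": 500, "y": 790, "r": 1580, "t": 2230}}}

def resolve(address):
    obj = nodes
    for part in address.split("."):
        obj = obj[part]
    return obj

print(resolve("Rectangle1.area.x"))  # 500
```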
6. Click the three other From Animation menus and edit their expressions
in sequence according to this list (x for the first and y for the second):
7. View CornerPin2D1’s output in the Viewer (make sure you are viewing
RGB) and switch back to the CornerPin2D tab in CornerPin2D1’s
Properties panel.
You can see here that the To points are sitting on the edges of the image,
which is what you were after (FIGURE 5.31).
Now you need to place the new, moving image on top of the frame
background.
8. Select CornerPin2D1 and press the M key to insert a Merge node after
it.
FIGURE 5.32 Merging the foreground and background
branches
FIGURE 5.33 The picture now sits in the frame, but it could
use some more work.
The picture sits beautifully and snugly in the frame, doesn’t it? But
maybe it’s too snug.
11. In the Rectangle1 Properties panel, set the Softness value to 20.
You also need to blur the image a little—it was scaled down so much by
the CornerPin node that it has become too crisp and looks foreign to the
background.
12. Attach a Blur node to CornerPin2D1 and set its size value to 2.
You have now added motion to an image that didn’t have it. When that
happens outside the computer, the image gets blurred according to the
movement. You should make this effect happen here as well.
This sets motion blur at 100% quality. It is that easy to add motion blur to
motion generated inside Nuke. Let’s look at the final result.
15. Click Play in the Viewer and have a look (FIGURE 5.34).
FIGURE 5.34 This is what the final result should look like.
One thing is missing: some kind of reflection. You will take care of that in
Chapter 10.
17. Save your script. This is very important, because you will need it again
for Chapter 10 (FIGURE 5.35).
FIGURE 5.35 The final Nuke tree should look like this.
6. RotoPaint
Roto and paint are two tools I thought would be long gone by
now. In fact, I thought that back in 1995 when I started
compositing. Nowadays I realize these tools are always going to
be with us—at least until computers can think the way we do.
Fortunately, the tools for roto and paint are getting better and
better all the time and are being used more and more often.
Learning how to use these two techniques is even more
important today than it used to be.
NOTE
The paint technique is simply a way to paint on the frame using brushes
and similar tools we’ve all grown accustomed to, such as a clone tool
(sometimes called a stamp tool). Tools used for this technique are rarely
available in compositing systems as ways to paint beautiful pictures, but
instead they are used to fix specific problems, such as holes in mattes or
spots on pretty faces, and to draw simple, but sometimes necessary,
doodles (but I can’t really call them paintings).
In Nuke, roto and paint are combined in one tool, RotoPaint, that can
generate both roto shapes and paint strokes as well as handle a few other
tasks. Therefore sometimes I refer to this single node as a system (the
RotoPaint system), because it is more than just a simple node, as you will
soon find out.
In the Viewer, you will see a third line of buttons (the Tool Settings bar) at
the top and a new line of buttons (the Toolbar) on the left (FIGURE 6.1).
In the Toolbar on the left, you can choose from the various tools that are
displayed and click each icon to display a menu of more tools (FIGURE
6.2).
The Tool Settings bar at the top lets you define settings for the selected
tools. This bar changes depending on the tool you choose (FIGURE 6.3).
The first two tools in the Toolbar on the left are the Select and Point tools,
which enable you to select and manipulate shapes and strokes. The rest of
the tools are the actual drawing tools and are split into two sections:
shape-drawing tools (the roto part) and stroke-drawing tools (the paint
part). The shape-drawing tools are listed in TABLE 6.1.
TABLE 6.2 Stroke-Drawing Tools
Painting strokes
Try drawing something to get a feel for how this node works. Start by
drawing some paint strokes.
At the top of the Viewer, you will see the Tool Settings bar change to
reflect your selection. You now have controls for the Brush tool, including
opacity, size, color, and more (FIGURE 6.4).
3. With the Brush tool selected, start painting on screen. Create a few
strokes. Change the brush size, color, and opacity. (Use the Color Picker
button to change the color.)
The settings from the Tool Settings bar are mirrored in the Properties
panel. You can find generic controls, such as color and opacity, for all
shapes and strokes for RotoPaint at the center of the Properties panel
(FIGURE 6.5).
Here you can find other controls applicable only to strokes, such as brush
size, hardness, spacing, and more.
You can play with all those controls for as long as you like (FIGURE 6.7).
FIGURE 6.7 Can you tell I didn’t even use a tablet to draw
this?
All the strokes you drew have disappeared. The Tool Settings bar allows
you to specify the length of strokes using the last unlabeled drop-down
menu on the right. The menu shows Single, meaning just a single frame
stroke. The good thing is, though, that you can change stroke time lengths
after you draw them.
Editing strokes
In a tree-based compositing application—and in Nuke specifically—
creating paint strokes is done with timing in mind. Sometimes you need
to create a stroke that will last only a frame, sometimes it’s an infinite
stroke (one that lasts throughout the whole length of the composition and
beyond). Sometimes you need a stroke to start at a specific frame and go
all the way to the end of the composition, and sometimes you need it to
appear just in a specific range.
You can change the length of strokes using the Lifetime Type property in
the Lifetime tab (FIGURE 6.8), the drop-down menu, or the buttons
(the functionality is the same). The options are
Now click the All Frames button. Your last stroke now exists throughout
the clip and beyond—infinitely. Even if you make your comp longer now,
no matter how long you make it, the stroke will be there.
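These lifetimes can be pictured as frame ranges. Here is a minimal Python sketch, as an analogy rather than the Nuke API, with the range forms mirroring the options just described:

```python
# A minimal sketch of stroke lifetimes as frame ranges (plain Python,
# not the Nuke API). Infinite ends model strokes that outlast the comp.
INF = float("inf")

single = (25, 25)         # a stroke that exists on frame 25 only
to_end = (25, INF)        # from frame 25 through the end, and beyond
all_frames = (-INF, INF)  # exists infinitely, however long the comp gets

def visible(span, frame):
    lo, hi = span
    return lo <= frame <= hi

print(visible(single, 26))       # False
print(visible(all_frames, 999))  # True
```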
Painting in vectors
You can make changes on the fly in Nuke because the RotoPaint system is
vector based. In other words, your mouse strokes aren’t really drawing
pixels. When you “draw” a stroke, a path called a vector is created, which
mathematically describes the shape of the stroke. You can then use this
vector to apply a paint stroke in any number of shapes and sizes and, for
that matter, functions. This means you can change your
paint strokes after you draw them, which is very powerful and saves a lot
of time.
To be able to change stroke settings after they have been created, you
need a way to select strokes. You do that in the Stroke/Shape List window,
also called the Curves window (FIGURE 6.9).
FIGURE 6.9 The Stroke/Shape List window
NOTE
You can also edit strokes by changing the vector that draws
them.
Your second-to-last stroke now exists infinitely. Now for the rest of the
brush strokes.
3. Select all the brush strokes by clicking the topmost one, scrolling down,
and Shift-clicking the last one.
7. Click the Select tool (the arrow-shaped tool at the top of the RotoPaint
Toolbar).
8. To select stroke points, you need to enable the Show Paint Selection
button on the Tool Settings bar (FIGURE 6.10).
FIGURE 6.11 The box around selected vector points allows
for greater manipulation.
1. Double-click the Paint tool to access the Eraser tool, and then draw on
screen where you drew before.
The Eraser erases previous paint strokes. It does not erase the image (if
you have one connected in RotoPaint’s input) in any way, and it’s a paint
stroke just like any other.
2. Switch back to the Select tool and click to select a stroke to delete.
2. With nothing selected in the DAG, press the P key to create another
RotoPaint node, RotoPaint2.
7. After you’ve closed the shape, you should automatically be set back to
the Select tool. If you are not, switch back to the Select tool by clicking it
in the Toolbar.
You can now keep changing and editing the shape. You can move points
around, change what kind of point you have, delete points, or add points.
8. Go ahead and play with the points and tangents using the hot keys to
move your shape around.
9. You can also drag around points to create a marquee selection and
then use the box that pops up to move the points as a group
(FIGURE 6.13).
You can blur the whole spline from the Properties panel.
10. Click the Shape tab, then drag the Feather property’s slider to blur
outside the spline. You can also reduce the slider to below 0 to blur within
the spline.
The Feather Falloff property lets you choose how hard or soft the
feathering will be (FIGURE 6.14).
FIGURE 6.14 A high Feather setting with a low Feather
Falloff setting
11. Reset the Feather Falloff property to 1 and the Feather property to 0.
You can also create specific feathered areas instead of feathering the
whole shape. I sometimes call these soft edges.
12. Pull out a soft edge from the Bézier itself by holding Ctrl/Cmd and
pulling on a point or several points (FIGURE 6.15).
NOTE
You don’t have to remember all these hot keys. You can see
all of them in the context menu by right-clicking/Ctrl-clicking a
point.
13. To remove a soft edge (and bring back the secondary curve to its
origin), select the point and press Shift-E, or right-click/Ctrl-click a point
and choose Reset Feather from the drop-down menu.
Animating a shape
Now you will try animating a shape.
1. Advance to frame 11 in the Timebar and, with the Select All tool, click a
point to move your Bézier around a little and change the shape.
2. Move the Timebar from the first keyframe to the second keyframe and
see the shape animating from one to the other.
FIGURE 6.16 Changing the shape will result in a keyframe
for which the Autokey check box is selected.
The Autokey check box is selected by default, meaning the moment you
start drawing a shape, a keyframe for that shape is created and gets
updated on that frame as you are creating the shape. Moving to another
frame and changing the shape creates another keyframe and thus creates
animation. If, before you start drawing, you deselect the Autokey check
box, drawing will not create a keyframe. If you then move to another
frame and change the shape, no keyframe will be created either, and
neither will animation. If you have a keyframe and then deselect the
Autokey check box, and then you change the shape, this is only temporary
—if you move to the next frame, the shape immediately snaps to the last
available keyframe. You have to turn on the Autokey check box if you
want to keep the new shape after you change it.
4. Move the Timebar to see the shape animating between the three
keyframes.
You can also animate other things for the shape, such as its overall
feather.
5. Go to frame 1. In the Shape tab, click the feather’s Animation menu and
choose Set Key (FIGURE 6.17).
6. Go to frame 11.
The color of the numeric box is now light blue to indicate that some
animation is associated with this property, but no keyframe is at this
frame.
Using the Animation menu, you can add and delete keyframes. But you
need greater control over animation. Sure, you can add keyframes as you
have been doing, but what about the interpolation between these
keyframes? What kind of interpolation are you getting? And what about
editing keyframes? You know how to create them, but what about moving
them around in a convenient graphical interface? This is what the Curve
Editor is for.
FIGURE 6.18 There is usually a Curve Editor option, but
this is not the case with RotoPaint.
1. Click the Curve Editor tab in the Node Graph pane to switch to it
(FIGURE 6.19).
In the Curve Editor shown in Figure 6.19, the window on the left shows
the list of properties (I call it the Properties List window), whereas the
window on the right shows the actual curves for the properties as they are
being selected. I call this the Graph window.
You can now see somewhat of a hierarchy in the Properties List window.
The first item we’ll talk about is called Bezier1. If you have already drawn
more than one shape or stroke and you select them in the Stroke/Shape
List window in the Properties panel, they are displayed here instead of
Bezier1. Under Bezier1, you have the name of the property that has
animation in it—Feather, and under that you have W and H, for Width
and Height. True, you operated only the Feather property with one slider,
but this slider is actually two properties grouped together, one for Width
and another for Height.
These curves are already loaded into the Graph window on the right.
2. Select both the W and H Feather properties by clicking the first and
then Shift-clicking the second. Click the Graph window on the right to
select that window.
3. Press Ctrl/Cmd-A to select all keyframes and then press F to fit the
curves to the window (FIGURE 6.20).
The Curve Editor now shows two curves, and all the keyframes for both
curves are selected. You have selected two curves that are exactly alike,
which is why it appears as if you are seeing only one curve.
One thing you can do in the Curve Editor is change the interpolation of a
keyframe. You can switch between a smooth keyframe and a horizontal
one, for example.
5. Select the middle keyframe on the curve and press the H key to make
the point a horizontal one (FIGURE 6.21).
FIGURE 6.21 Making a point on a curve horizontal
TABLE 6.4 lists some hot keys for different types of interpolation on
curves.
You can move around in the Curve Editor in the same way that you move
around in other parts of the application. You use Alt/Option-click-drag to
pan around. Use + and – to zoom in or out, or use the scroll wheel to do
the same. You can also use Alt/Option-middle button-drag to zoom in a
nonuniform way. Pressing the F key frames the current curve to the size of
the window.
Many other functions are available in the Curve Editor, but I won’t cover
them all now. Here’s one last function that enables you to edit the whole
curve using simple math.
8. Drag to select the whole curve (or press Ctrl/Cmd-A to select all
keyframes).
FIGURE 6.23 You can access more features through the
contextual menu.
10. In the Move Animation Keys dialog box that opens, enter x+5 in the X
field to move your curve 5 frames forward (FIGURE 6.24).
Watch your curve move five frames forward. What happened here is you
asked to move all X values by the current X value plus five. So the whole
curve moved five frames forward.
13. This time enter x/2 in the X field and press Return/Enter.
Watch your curve shrink. This is because you asked all X values to be half
their current value. Because X is time, your animation will now be twice
as fast.
These examples and the other features in the contextual menu enable you
to manipulate your curve in many ways.
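The two expressions can be sketched in plain Python, treating the keyframe times as a simple list (illustrative only, not the Curve Editor's implementation):

```python
# A minimal sketch of what the Move Animation Keys expressions do to
# keyframe times (X is time): x+5 shifts every key 5 frames later, and
# x/2 halves every key time, making the animation twice as fast.
keys = [1, 11, 21]  # hypothetical keyframe frame numbers

shifted = [x + 5 for x in keys]
halved = [x / 2 for x in keys]

print(shifted)  # [6, 16, 26]
print(halved)   # [0.5, 5.5, 10.5]
```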
14. If you want to save this project, do so. Otherwise, press Ctrl/Cmd-W
to close this project and create another one. If Nuke quits altogether for
some reason, just start it again.
This concludes the introduction to both the Curve Editor and the
RotoPaint node. That’s enough playtime! Let’s move on to practical uses.
ROTOPAINT IN PRACTICE
The exercise you are about to perform deals with a variety of issues that
RotoPaint can fix. You will paint single frames, clone with the Clone
brush, and create shapes.
This exercise also uses techniques you have already learned: matching
colors and tracking. To save space, I will not explain how to do these
steps, and as a result, you start this comp halfway through. If you’re
feeling confident enough and want to challenge yourself, try and get to
this stage on your own after you look at the starting point provided.
Let’s begin.
1. From the File menu, choose Open, navigate to the chapter06 directory,
and double-click RotoPaint01_start.nk (FIGURE 6.25).
FIGURE 6.25 This script’s starting point
This shot is taken from a new short film by Alex Norris called Grade Zero.
You should check it out on his website, www.alexnorris.tv
(http://www.alexnorris.tv).
What you see here is a foot that shouldn’t be there—yes, that one at the
top. It doesn’t work in that cut and the director (that would be Alex—and
you do what Alex tells you to) says to take it off (FIGURE 6.26).
There’s a fair amount of movement in this shot—not just the camera, but
the actors are moving and the lights change too. All of these need to be
dealt with.
4. Click Read2 and press 1 to view it. Switch between viewing the RGB
and the alpha. When you’re done, stay on RGB.
Read2 is a patch. I made it using just frame 1, some roto, corner pinning,
and paint work. But it’s just a single frame with an alpha channel.
• Premult1: The alpha channel for Read2 shows a lot of black, but the RGB
doesn’t. This means it is a straight or unpremultiplied image, and so it
needs premultiplying before it can be filtered or merged.
• Merge1: I’ll let you figure out this one on your own.
• CurveTool1: I showed you how to execute this node to figure out the light
changes in Chapter 4. Here it’s used to gauge the lighting change of the
background.
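For reference, the premultiply step in that list is simple per-pixel arithmetic: each color channel is scaled by the alpha value. A minimal sketch in plain Python (the function name is mine, not Nuke's API):

```python
def premult(rgb, alpha):
    """Premultiply a straight pixel: scale each channel by its alpha."""
    return tuple(channel * alpha for channel in rgb)

# A straight (unpremultiplied) pixel with 50% alpha:
print(premult((0.8, 0.4, 0.2), 0.5))  # → (0.4, 0.2, 0.1)
```

A merge or filter that assumes premultiplied input would otherwise let the black RGB areas bleed incorrectly, which is why Premult1 sits before Merge1.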
The picture went very dark. It shouldn’t be; in fact, on frame 1, it
should not change at all.
CurveTool1’s output is the overall brightness of the image, which in our
case is relatively dark. It has no notion of a starting point, so when you
copy the data from CurveTool1 into a color correction, it takes an image
that already has a certain brightness and darkens it further. What you
need is only the change in brightness, much like in the Tracker: there,
you don’t care about the location of the reference in frame 1, only how
much it moved by frame 2, frame 3, and so on. So what you need to do is
invert the color correction using only frame 1’s data. This leaves frame 1
unchanged and returns only the changes in relation to frame 1.
You have now copied over just frame 1’s values. But remember, you
wanted to invert the color correction. The Grade node has just the
solution for that.
The color of the image is now back to normal as you are color correcting
with Grade1 and then inverting the operation with Grade2. On the rest of
the frames you will see the change in color only in relation to frame 1.
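Numerically, the two Grade nodes reduce to a simple ratio: multiply by the measured brightness, then divide by frame 1's brightness. A sketch with made-up CurveTool numbers (this is the net arithmetic, not Nuke's API):

```python
def relative_gain(brightness, ref_frame=0):
    """Per-frame gain relative to a reference frame: the reference
    frame gets gain 1.0, so only the change in brightness remains."""
    ref = brightness[ref_frame]
    return [b / ref for b in brightness]

# Hypothetical CurveTool output (overall brightness per frame):
print(relative_gain([0.5, 0.45, 0.55]))  # → [1.0, 0.9, 1.1]
```

Frame 1 comes out at exactly 1.0 (unchanged), and every other frame carries only its lighting change relative to frame 1, which is what the patch needs.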
A lot of things are working already. The patch is moving with the
background and the colors match throughout the shot. The main problem,
seen in FIGURE 6.28, is that the knife and the fingers holding it are
hidden behind the patch where the shoe used to be.
5. Draw a Bézier shape around the knife and the fingers holding it. You
can see my shape in FIGURE 6.29.
FIGURE 6.29 Draw around the knife and fingers.
You immediately have a keyframe on frame 1. Now you need to follow this
hand over time and keep adding keyframes so the shape stays around the
knife throughout the shot. Here are some tips:
• Rotoscoping is just like animation. You should add keyframes where the
motion changes. Adding keyframes arbitrarily produces bad roto.
• Scroll through your clip and write down the frames you should add
keyframes in. The first and last frames are obvious choices; the other
choices depend on what’s mentioned in the previous tip.
• Once you are done adding keyframes in the frames you thought
appropriate, look for the places in between those keyframes where the
difference between the object and the shape is biggest. Adjust these.
• You don’t need the rest of the interface when you’re rotoscoping, so just
press the spacebar while hovering the mouse pointer in the Viewer to
maximize it.
TIP
If the shape is ever blocking you from seeing where you want
to draw, you can turn it off by hovering the mouse pointer over
the Viewer and pressing the O key. Press O again to redisplay
the overlays.
I checked and it seems like good frames for adding keyframes are these: 1,
8, 12, 18, 19, 23, 28, 32, 42, 47, 55, and 58. Go ahead and check yourself,
and see if your list and mine are similar.
6. Follow and adjust the shape throughout the shot until you have
matched the shape to the knife.
FIGURE 6.30 Using another Merge node to cut a hole in
our patch
Mine looks pretty good and I hope yours does as well. If not, fix it until it
does. There’s just one thing on mine, and I’m sure on yours too, that’s
bothering me: the matte is too sharp. It is too sharp for two reasons:
first, this area is not 100% in focus, and second, there’s motion blur that
needs to be matched.
11. To fix this, with RotoPaint1’s Properties panel open, select your Bézier
shape and switch to the Shape tab.
12. Enable motion blur by clicking the On check box to the right of the
Motion Blur slider (FIGURE 6.31).
13. Select RotoPaint1 in the DAG and press B to insert a Blur node.
All is well and looks good, but if your roto is a little like mine, you will see
a little bit of a dark edge around the knife. You can solve this with the
Feather property in RotoPaint1.
16. Bring the Feather slider down to about –3 or so—until you lose the
dark edge.
This just about concludes this part of the exercise. But what’s that there?
There’s a remnant of a hand on the patch we used. See it here, in
FIGURE 6.32?
For this fix, you will use a brush you haven’t practiced with before: the
Clone brush. This brush uses pixels from one location in the frame and
copies them to another, basically allowing you to copy areas.
1. Make sure Read2 is selected and attach a RotoPaint node to it. Make
sure you are viewing RotoPaint2.
NOTE
4. This is a single frame at this point, and because you want the hand to
be gone for the duration, change the Lifetime Type drop-down menu from
Single to All in the Tool Settings bar (FIGURE 6.33).
The Clone brush copies from one location in the image to another. You set
it up by Ctrl/Cmd-clicking the point you want to copy from and then,
without letting go, dragging to the point where you want to copy to.
In this case, you need to keep aligning your brush with the diagonal lines
of the checkered floor.
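Mechanically, a clone stroke is just a pixel copy with a fixed source-to-destination offset. A toy sketch in plain Python (not Nuke's API), with a small grid of values standing in for the floor:

```python
def clone_stroke(img, src, dst, size):
    """Copy a size-by-size block of pixels from src=(x, y) to dst=(x, y).
    `img` is a list of rows; a new image is returned."""
    out = [row[:] for row in img]
    (sx, sy), (dx, dy) = src, dst
    for j in range(size):
        for i in range(size):
            out[dy + j][dx + i] = img[sy + j][sx + i]
    return out

# Clone the clean 2x2 checker at (0, 0) over the unwanted area (9s) at (2, 2):
floor = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
print(clone_stroke(floor, (0, 0), (2, 2), 2))
```

Because the copied pixels carry the source's grain and texture, the patch blends in; the alignment to the checker diagonals is exactly the offset you set with the Ctrl/Cmd-drag.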
5. Make sure you’re zoomed in nicely (by using the + and – keys) to the
area where you want to work and then Ctrl/Cmd-click the area you want
to copy from. Then, without letting go, drag toward the hand—where you
want to copy to—and then release (FIGURE 6.34).
6. You can now paint in one direction along the diagonal line. Notice that
your paint strokes are now copying grain and all the other texture from
your source area.
Remember to change the size of your brush and the direction and distance
of your clone source once in a while so you don’t get a repetitive pattern.
As you can see in FIGURE 6.35, I went a bit far with my painting. The
black checker should have stopped earlier. Let’s erase this mistake.
FIGURE 6.35 Good thing there’s an Eraser tool!
8. Repeat the process until you are happy with the results. Remember
that you can change properties as well—such as Size and Opacity—that
might help.
Let’s move on. We have only one little thing left to do...
In this case, even though you already have two RotoPaint nodes in the
tree, you still need a third one, and it needs to be in a different location in
the tree. But even if you could use one of the existing RotoPaint nodes, it’s
better to keep things separate. After all, that’s the good thing about the
node tree. So, go ahead and use another RotoPaint node for painting out
the dust. You can also use another little trick to clean up the dust quickly
and easily:
NOTE
3. Clear the Properties Bin and then double-click RotoPaint3 to open just
that node’s Properties panel.
To make it easy to paint out the dust, you are going to reveal back to the
next frame using the Reveal brush, in the hope that the speckle of dust
won’t appear in the same location frame after frame. This is faster than
cloning on each frame, but it works only if there is little to no
movement in the clip or if the movement in the clip has been stabilized
beforehand.
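The reveal-forward trick can be sketched as a per-pixel choice between the current frame and a time-offset frame (plain Python, not Nuke's API; a 1-D row of pixels stands in for the image):

```python
def reveal(frames, t, mask, dt=1):
    """Where mask is 1, replace frame t's pixel with the pixel from
    frame t + dt -- the Source Time Offset idea behind the Reveal brush."""
    current, source = frames[t], frames[t + dt]
    return [s if m else c for c, s, m in zip(current, source, mask)]

# The dust speckle (value 9) exists only on frame 0, so revealing
# forward one frame under the painted mask removes it:
frames = [[0, 9, 0], [0, 0, 0]]
print(reveal(frames, 0, mask=[0, 1, 0]))  # → [0, 0, 0]
```

This is also why the technique fails on a moving clip: with movement, frame t + 1 no longer lines up with frame t under the painted mask.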
5. Take a pen and paper and start writing down which frames have dust.
Here’s the list I made for the first 20 frames: 1, 2, 5, 9, 10, 13, 19. You
should find all the rest of the frames that have dust in them and write
them down.
As mentioned earlier, you will use the Reveal brush in this example. Now
let’s set up the Reveal brush and reveal one frame forward from the
current frame.
6. Select the Reveal brush from RotoPaint3’s toolbar in the Viewer. The
Reveal tool is the second tool in the Clone tool’s menu (FIGURE 6.38).
7. Once you’ve selected the Reveal brush, make sure Single is chosen from
the Lifetime Type drop-down menu in the Tool Settings bar.
The magic of this setup happens in the Reveal’s Source Time Offset
property (FIGURE 6.39), which determines the frame offset for the clip
from which you reveal. You are revealing from the background—the same
source as you are painting on—but offset one frame forward.
8. Change the Source Time Offset field at the top of the Viewer from 0 to
1. This is the field represented by the Δt label.
FIGURE 6.40 Choosing a reveal source from the drop-
down menu
10. This kind of work is more easily done with a big screen, so hover your
mouse pointer over the Viewer and press the spacebar briefly to maximize
the Viewer. In case you need to go back to the full interface, press the
spacebar again.
11. Zoom in so the image fills the frame by pressing the H key.
12. Go to frame 1 and paint over the dust speckle to reveal back to the
next frame and, in effect, remove the speckle.
14. When you’re finished, click Play in the Viewer to make sure you didn’t
miss anything. If you did, repeat.
15. Save your Nuke project in the student_files folder and restart Nuke
with a new empty project.
You used RotoPaint three times here for three different purposes. There’s
a lot more you can do with this versatile tool; another example is coming
next.
NOTE
For this shot, you have been asked to dirty up the windshield and write
CLEAN ME on it, as some people do when they walk past a dirty car. The
magic is going to be that, in this case, the message needs to appear as if
it’s being written by an invisible hand.
3. Since you want to create the Bézier only on the alpha channel, deselect
the Red, Green, and Blue boxes at the top of RotoPaint1’s Properties panel
(FIGURE 6.42).
TIP
5. Refine your shape by changing the Bézier handles and moving things
around until you’ve created a matte you’re happy with.
6. Hover your mouse pointer over the Viewer and press the A key to view
the alpha channel.
You have now created your first matte! I kept promising in previous
chapters you would. Now the moment has finally arrived.
3. View Keymix1.
4. I don’t want this much noise, so change Keymix1’s Mix property to 0.2.
5. The matte you created is a bit sharp, so select RotoPaint1 and press the
B key to insert a Blur between RotoPaint1 and Keymix1.
7. Close all Properties panels by using the Clear All Properties panels at
the top right of the Properties Bin (FIGURE 6.45).
FIGURE 6.45 A dirty car window
You finally have a dirty car window. Now let’s move on to punch a hole in
the matte by writing CLEAN ME on it. You’ll use RotoPaint1 to create the
hand-drawn text CLEAN ME.
3. To see what you’re doing, temporarily hide Bezier1 by clicking the Eye
icon to the right of the name Bezier1 (FIGURE 6.48).
4. Select the Brush tool and hover your mouse pointer over the car
window.
6. Make sure you’re drawing on all frames by choosing All from the drop-
down menu in the Tool Settings bar.
TIP
8. Turn Bezier1 back on by clicking to bring back the Eye icon in the
Stroke/Shape List window to the right of its name.
You won’t see the writing anymore. This is because both the shape and the
strokes draw in white. However, RotoPaint is a mini-compositing system
in its own right. You can tell all the strokes to punch holes in the shape
just as you can with a Merge node.
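Per pixel, Minus is just subtraction of the strokes' alpha from the shape's alpha. A sketch (the clamp to the usual 0–1 matte range is my assumption, not necessarily Nuke's exact math):

```python
def minus_blend(shape_alpha, stroke_alpha):
    """Subtract the strokes' alpha from the shape's alpha, clamped to
    [0, 1] -- punching a hole in the matte, like Merge's minus operation."""
    return max(0.0, min(1.0, shape_alpha - stroke_alpha))

print(minus_blend(1.0, 1.0))  # → 0.0  (a solid stroke cuts a hole)
print(minus_blend(1.0, 0.0))  # → 1.0  (unpainted areas keep the shape)
```

So the white-on-white strokes that were invisible a moment ago now carve black letters out of the dirt matte.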
9. Select all the strokes called Brush# by clicking the first and then Shift-
clicking the last in the Stroke/Shape List window (FIGURE 6.50).
10. Change the Blending Mode drop-down menu from Over to Minus
(FIGURE 6.51).
FIGURE 6.52 Your car should look something like this.
The only thing left to do is animate the writing of the words. For that, you
use the Write On End property in the Stroke tab and the Dope Sheet.
To get started with a Dope Sheet, the first thing to do is create two
keyframes for all the brush strokes on the same frames. You then stagger
the keyframes using the Dope Sheet so the letters appear to get drawn one
after the other.
1. All the brush strokes that draw the text should already be selected in
the Stroke/Shape List window. If they aren’t, select them again.
5. Right-click/Ctrl-click the field and choose Set Key from the contextual
menu (FIGURE 6.53). This sets keyframes for all the selected brush
strokes.
At the moment, all the letters are being written at once. You can stagger
this so that the first letter is written first and so on.
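What you are about to do in the Dope Sheet amounts to sliding each stroke's pair of Write On End keyframes a fixed number of frames later than the previous stroke's. A sketch with made-up frame numbers (the five-frame step matches the spacing used below):

```python
def stagger(keyframe_lists, step=5):
    """Offset each stroke's keyframes so stroke i starts step * i frames
    after the first, producing a staircase like the one in the Dope Sheet."""
    return [[k + step * i for k in keys]
            for i, keys in enumerate(keyframe_lists)]

# Three strokes, each animating Write On End between frames 1 and 6:
print(stagger([[1, 6], [1, 6], [1, 6]]))  # → [[1, 6], [6, 11], [11, 16]]
```

Each stroke finishes writing just as the next one starts, which is what sells the invisible-hand effect.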
9. In the bottom-left pane, switch from the Node Graph tab to the Dope
Sheet tab.
The window on the left, which I call the Properties List window, shows the
list of properties that are open in the Properties Bin. The window on the
right, which I call the Keyframe window, shows the actual keyframes for
each property.
10. Click the little – symbol to the left of the CLEAN_ME submenu in the
Dope Sheet’s Properties window, then click it again to open it back up.
You now see this holds all the keyframes for the brush strokes you added
to the CLEAN_ME folder (FIGURE 6.55).
Each of these strokes has keyframes associated with it—in your case,
keyframes for the actual shape that was drawn and two keyframes for the
Write On End property you animated.
I really want to stagger only the Write On End property. Moving the
stroke’s drawing keyframe won’t have any effect on the animation or
the shape, though, and it’s easier to move both of them as one entity, so
that’s what we’ll do.
11. Click the – symbol to the left of all the Brush# properties.
12. Using the Range fields at the bottom right of the Dope Sheet, frame
the Keyframe window between 0 and 20 (FIGURE 6.56).
You can now see the keyframes more clearly (FIGURE 6.57). Each
vertical box represents a keyframe.
Dragging the box that appears around the keyframes, you can move them
in unison. You can also use the box to scale the animation by dragging on
its sides. Of course, you can also move each keyframe individually
(FIGURE 6.58).
14. Click the center of the box and drag the keyframes until the first one is
at frame 6. You can see the frame numbers at the top and bottom
(FIGURE 6.59).
So you are starting to do what you set out to do: stagger the animation.
You need to keep doing this for every subsequent brush stroke for another
five frames. Here’s another way to do this.
15. Select brushes 3 through the end by clicking Brush3 and then Shift-
clicking the last brush stroke—in my case, Brush10 (FIGURE 6.60).
At the bottom of the Dope Sheet window there is a Move button and next
to it is a field (FIGURE 6.61). You can use this button to change the
location of keyframes without needing to drag.
17. Click the Move button twice to move all the keyframes selected by 10
frames (FIGURE 6.62).
Next time you click Move, you don’t need to move Brush3 anymore, so
you need to deselect it before clicking. Do so again without Brush4, then
again without Brush5, and so on.
It is probably getting hard to see what you are doing because as you are
pushing keyframes forward, they are going off screen. You need to
reframe your Dope Sheet again.
When you are finished, the Dope Sheet should look like FIGURE 6.63.
FIGURE 6.63 The staggered staircase of keyframes in the
Dope Sheet
This concludes the animation creation stage. All you have to do now is sit
back and enjoy this writing-on effect you have made.
22. Switch back to the Node Graph and, with Keymix1 selected, press 1 to
view it. Then click Play.
I hope you are enjoying the fruits of your labor (FIGURE 6.64).
The RotoPaint node is indeed very powerful and you should become good
friends with it. I hope going through these examples helped.
7. Keying
Keying is the process of creating a matte (an image that defines
a foreground area) by asking the compositing system to look for
a range of colors in the image. This is also sometimes called
extracting a key. It’s a procedural method, which makes keying
a lot faster than rotoscoping, for example. However, it has its
own problems.
Because you want the computer to remove a color from the image (blue or
green, normally) you want the screen to be lit as evenly as possible to
produce something close to a single color. This is, of course, hard to do
and rarely successful. Usually what you get is an uneven screen—a screen
that has many different shades of the screen color.
Because you have to shoot especially for keying and you can’t shoot on
location, you have to do a lot of extra work to make a shot like this work.
Extracting a key is not an easy process, and problems—for example, holes
in the matte and fine edges such as hairs—are always an issue. Also,
standing an actor in the middle of a green-painted studio means the actor
will have a strong green discoloration (called spill) that you will have to
remove somehow—a process called spill suppression or just spill for
short.
But still, keying is very often the better method of working, and is used
extensively in the VFX (visual effects) industry.
Most applications try to create a magic keying tool that gets rid of the
screen with a couple of clicks. However, this hardly ever works. Most
likely, you have to create a whole big composition just for extracting the
matte. Nuke’s tree-based approach makes it easy to combine keys, mattes,
and color correction together to reach a better overall matte and corrected
(spill suppressed) foreground.
BASIC KEYING TERMS
There are four basic types of keying techniques. Without going into a
complete explanation of the theory of keying, here are the four
techniques:
IBK: The Image Based Keyer (IBK) was first developed at Digital
Domain, which also originally developed Nuke. This keyer is designed to
work with uneven green- and bluescreen elements. So instead of having
one blue or green shade, you have many. The IBK, which consists of two
nodes—the IBKColor and IBKGizmo—creates a color image representing
all those different screen colors (but without the foreground element) and
then uses that to key instead of a single color (FIGURE 7.3).
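The idea can be sketched as screen-difference keying against a per-pixel clean plate rather than a single color. The formula below is a deliberate simplification of mine, not the shipped IBK math:

```python
def ibk_alpha(pixel, clean):
    """Alpha from how 'green' a pixel is relative to the clean-plate
    screen color at the same position (greenscreen case).
    Simplified screen-difference keying, not the actual IBK internals."""
    r, g, b = pixel
    rc, gc, bc = clean
    pixel_greenness = g - (r + b) / 2.0
    clean_greenness = gc - (rc + bc) / 2.0
    alpha = 1.0 - pixel_greenness / clean_greenness
    return max(0.0, min(1.0, alpha))

# A pixel matching the clean plate keys out; a neutral pixel stays solid:
print(ibk_alpha((0.1, 0.6, 0.1), clean=(0.1, 0.6, 0.1)))  # → 0.0
print(ibk_alpha((0.5, 0.5, 0.5), clean=(0.1, 0.6, 0.1)))  # → 1.0
```

Because `clean` can vary per pixel, an uneven screen still keys cleanly, which is the whole point of generating the clean plate with IBKColour first.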
FIGURE 7.3 A basic IBK tree that includes the two IBK
nodes
Keylight: This plug-in from The Foundry is included free with Nuke.
It keys only blue- and greenscreens—and it does the job very well,
producing results I find unprecedented.
Ultimatte: This plug-in from Ultimatte is also bundled with Nuke. Yet
another great keyer, it has control over the matte, edge, transparency, and
spill.
Now, let’s jump right in and try a few of these keying nodes.
This shot is taken from a wonderful short (not that short actually, at 40
minutes) called Aya by Mihal Brezis and Oded Binnun. The actress is
Sarah Adler. I was very lucky to get approval to use the shot in the book.
It’s a small drama between two characters driving, and the whole thing is
shot on greenscreen, not that you would guess it watching the film.
This is a pretty flat greenscreen element. Even so, as always, it will still
pose all sorts of problems: there are a lot of wispy hairs that we
hopefully can retain, and a fair amount of green spill on the white
areas of the woman’s shirt and in the dark midtones of her skin that
needs to be fixed.
HUEKEYER
The HueKeyer is a straightforward chroma keyer. It has one input for the
image you want to key. HueKeyer produces an alpha channel by default,
and does not premultiply the image.
1. Select Read2 and insert a HueKeyer node from the Keyer toolbox after
it (FIGURE 7.6).
2. Make sure you are viewing the output of HueKeyer1. Switch to viewing
the alpha channel (FIGURE 7.7).
You can see that already there’s a very promising alpha in there. This is
because, by default, HueKeyer’s designed to key out a range of greens and
cyans. This greenscreen is, surprisingly, very good. Let’s see where it is on
the graph.
4. Hover your mouse pointer over the greenscreen area and look at
HueKeyer1’s graph (FIGURE 7.8).
You can see by the yellow line that’s moving about that the greenscreen
sits somewhere around 3.1 on the hue, or X axis. The dark areas of the car
interior are somewhere around 2.3, by the way.
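For reference, that 0–6 hue axis corresponds to the standard HSV hue divided by 60 degrees (that this is the exact mapping HueKeyer's graph uses is my assumption). On that scale pure green sits at 2.0 and pure cyan at 3.0, so a reading around 3.1 indicates a strongly cyan-leaning screen:

```python
import colorsys

def hue06(r, g, b):
    """Hue on a 0-6 scale (HSV hue / 60 degrees) -- my assumption
    about the X axis of the HueKeyer graph."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h * 6.0

print(round(hue06(0.0, 1.0, 0.0), 3))  # pure green → 2.0
print(round(hue06(0.0, 1.0, 1.0), 3))  # pure cyan → 3.0
```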
Now you are going to edit the curve in the graph by moving some of the
points that are already there. You can do it by hand, but I’ll start you off
by typing numbers instead.
6. In the Curve List window, click the Amount curve so you see only that
curve in the graph (FIGURE 7.9).
The other curve controls the amount of saturation, which you can edit the
same way. However, doing so has little effect in this case, so you don’t
need it.
8. Double-click the X value next to the point itself. This displays an input
field.
9. Change the value from 2 to 2.6, which tells the keyer not to key out the
dark areas of the car interior. Press Enter/Return (FIGURE 7.11).
10. Select the point at X = 3, Y = 1 and drag it to the right until it reaches
somewhere around 3.1 on the X axis. This gives a little bit more softness
to the hair.
Notice that when you start to drag along one axis, the movement locks to
that axis. This is very convenient because it allows you to change
only what you want to change without having to hold any modifier keys
(FIGURE 7.12).
Surprisingly, this is not a bad key for such a simple keyer. Although some
keying nodes can extract only green- or bluescreens, you can use the
HueKeyer to extract a key from any range of colors. The downside to the
HueKeyer is that it can’t key different amounts of luminance (although
the sat_thrsh curve, or saturation threshold, does allow you to control
saturation). In other words, it doesn’t have fine-tuning capabilities.
Put the HueKeyer aside for now and we’ll move on to another keying
node.
Incidentally, if you have a clean plate that was shot—as in, you asked the
actors to clear out of the frame and you took a picture without them—you
can connect the IBKGizmo’s C input to it instead of the IBKColour’s
output. This way of doing things is even better because it really gives you
the best source to work with.
The screen element you have is pretty flat, but still, the IBK can turn out a
great result here as well, especially on fine hair detail.
1. Select Read2 and hold Shift to branch an IBKColour node from the
Keyer toolbox.
2. Select Read2 again and branch an IBKGizmo node from the Keyer
toolbox.
You are connecting the background image to the Keyer so that the spill
suppression factors in the background colors.
The first thing you need to do is set up the IBKColour node to produce a
clean plate. When that’s done you set up the IBKGizmo node to produce a
key.
5. View IBKColourV3_1 in the Viewer. Also make sure that its Properties
panel is open in the Properties Bin.
You should see black. When adjusting the IBKColour node, the first thing
to do is state which kind of screen you have: green or blue.
You now see the greenscreen image with a black hole where the woman
and car used to be (FIGURE 7.14).
FIGURE 7.14 This is the first step in producing the clean
plate.
The main things left to remove are the fine hair details around the edges
of the black patch. If you leave them there, you are actually instructing the
IBKGizmo to get rid of these colors, which is not what you want to do.
The next step adjusts the Darks and Lights properties. For a greenscreen
element, you adjust the G (green) property. For a bluescreen, adjust the B
(blue) property.
Your cursor is now to the right of the 0 you just entered. Using the up and
down arrow keys, you can now nudge the value in hundredths.
9. Decrease the value by pressing the down arrow slowly. You are looking
for the magic value that does not bring in too much black into the green
area but still reduces the number of hairs that are visible on the edge of
the black patch. Nudge the value down to –0.12 (FIGURE 7.17).
You have actually gone too far. A lot of green areas (especially the mirror)
were filled in black. You need to go back a little.
10. I ended up moving the cursor again to the right and adding another
digit (the thousandths), and my final value is –0.085. Anything else started
eating into the mirror. Find your magic value.
Now you should do the same with the Lights property: slowly bring it
down. But I tried it already, and moving it doesn’t change the green area
in any way that contributes, so leave it as it is. You can always change it
later.
The next property to adjust is the Erode property. This will eat in further
from the black patch and reduce the amount of hair detail you have. It
might also introduce some unwanted black patches, so be careful.
11. Start dragging the slider to increase the Erode property value. Watch
out for the mirror and the green area behind the woman’s neck. I left
mine at 0.2 (FIGURE 7.18).
Finally you need to adjust the Patch Black property until all your black
patches are filled. There’s a lot of non-greenscreen image area here, so the
value is large.
12. Bring up the Patch Black property until all your black areas are filled
with green. The slider goes up to 5, but you need 15 to get the result you’re
after. To do this, you can enter 15 in the field (FIGURE 7.19).
Now that you have a clean plate, you need to adjust the IBKGizmo settings
to get a nice-looking key. Let’s move over to the IBKGizmo.
Again, the first thing to take care of here is setting the screen color.
You can already see that the alpha has some really good things happening
in it. First, the whole foreground (where the woman is) is mostly white.
Also, all the fine hair detail is preserved beautifully. There is just a little
noise in the black and white areas that needs to be cleaned up.
• Red Weight: This property changes the density of the generated matte
by adding or subtracting the red areas of the image.
• Luminance Level: This property no longer has any effect and will be
removed in the next version of the software.
16. Select IBKGizmoV3_1 and insert a Merge node after it by pressing the
M key.
17. Connect Merge1’s B input to Read1’s output (FIGURE 7.21).
FIGURE 7.21 Your tree should look like this after adding
the Merge node.
18. Make sure you are viewing the RGB channels of Merge1 in the Viewer.
Notice that the foreground is a little transparent—you will fix this. You
can make the matte whiter by using the Red Weight and Blue/Green
Weight properties (FIGURE 7.22).
19. While looking at your screen, adjust the Red Weight and Blue/Green
Weight properties until the car isn’t transparent. Don’t go too far or you
will compromise the density of the edges of your matte. I ended up with
0.79 for the Red Weight and 0.425 for the Blue/Green Weight.
To get a little more hair detail, select the Luminance Match check box and
then move the slider a little.
You can see that the hairs are a little softer and the edges of the mirror are
tighter as well.
22. Move the Screen Range property a little so that you reduce the
amount of noise on the background a little—without changing the
foreground. I left mine at 0.83.
If you don’t think this property did any good to the overall key, turn it off.
I’m leaving it on, though.
This is as far as you can get the matte. It is hardly perfect, but for this
greenscreen element, it is the best you can do with just the IBK. You will
get to use this matte later, using another technique. For now, you can still
adjust the spill a bit more.
The IBKGizmo has some controls remaining at the very bottom for edge
correction. These properties are Screen Subtraction, Use Bkg Luminance,
and Use Bkg Chroma (FIGURE 7.23). Let’s see what these do.
26. Use Bkg Luminance is deselected by default. Select it to see its effect,
and leave it selected when you’re finished.
27. Use Bkg Chroma is deselected by default. Select it to see its effect, and
leave it on when you’re finished.
28. In the Viewer, click Play. When you’re done viewing, click Stop and go
back to frame 1.
Even though the matte is noisy, the final composite still looks pretty good.
You know it is not perfect, but it holds pretty well. The noisy black areas
of the matte are getting color corrected in the RGB channels through the
last two check boxes you turned on. This makes all that noise pretty much
invisible.
You will learn to make this key look even better later in this chapter. For
now, move on to the third, and last, keying node: Keylight.
KEYLIGHT
Keylight, like the IBK earlier in this chapter, is a blue- and greenscreen
keyer. It is not designed to key out any color, just green- and bluescreens.
It does that job very well, and many times, all you have to do is choose the
screen color and that’s it. Keylight also tackles transparencies and spill
exceptionally well.
1. Select Read2 and Shift-click the Keylight icon in the Keyer toolbox on
the left (FIGURE 7.25).
2. Make sure you are viewing the output of Keylight1, and viewing the
RGB channels.
• Source: The first and main input—often the only input you will use.
This is where the element to key should go in. This input should already
be connected to your greenscreen element.
• Bg: You can connect a background image here. Because Keylight also
suppresses spill, it can use the color of the background for that
suppression (and it does so by default if the input is filled). Keylight can
also actually composite over the background, although this is rarely done.
• InM: Stands for inside matte (also called holdout matte). If you have a
black and white image (roto or other rough key), you can use it with this
input to tell Keylight not to key out this area. This can also be called a core
matte.
• OutM: Stands for outside matte (also called garbage matte). If you have
a black and white image (again through a roto or a rough key), you can
use it with this input to tell Keylight to make all this area black.
When using Keylight, the first thing to do is connect the background if you
have it.
Now you can begin keying by choosing the screen’s main green pigment.
4. Click the Color Swatch for the Screen Colour property to activate it
(FIGURE 7.27).
NOTE
This is what I have set for Screen Colour property: 0.075, 0.24, 0.1.
This drag action is a very important one. It is the basis for all other
operations that come later. Try several different colors first, and then
choose the best one. Every time you release the mouse button and click
again, you are changing the selected color.
6. Deselect the Color Swatch to turn off Viewer sampling; then do a quick
Ctrl/Cmd-click in the Viewer to get rid of the red box.
TIP
7. You can look at the alpha channel now by hovering the mouse pointer
over the Viewer and pressing the A key. The matte won’t be perfect, but it
should look something like FIGURE 7.29.
You can now start to tweak the matte. Keylight has a lot of different
properties, but the main one is Screen Gain. This is a multiplier, which
hardens the matte. By harden I mean that it pushes the contrast up—the
dark grays toward black and the light grays toward white.
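The visible effect can be sketched as a contrast scale around a mid-gray pivot with a clamp. This is my simplification of the result, not Keylight's internal screen-subtraction math:

```python
def push_contrast(alpha, gain, pivot=0.5):
    """Scale matte contrast around a pivot, clamped to [0, 1] -- a rough
    model of what raising Screen Gain does to the extracted matte."""
    return max(0.0, min(1.0, (alpha - pivot) * gain + pivot))

print(push_contrast(0.2, 2.0))  # dark gray → 0.0 (pushed toward black)
print(push_contrast(0.8, 2.0))  # light gray → 1.0 (pushed toward white)
```

Note how everything near the pivot survives while both tails clip: that clipping is exactly where fine hair detail gets lost when the gain is pushed too hard.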
As you can see, the background part of the matte is completely black now.
However, I’ve achieved this at the expense of the white areas, which are
getting grayer, not to mention the fine detail in the hair that’s been lost.
Be very cautious when you use this value; it should rarely if ever reach
these high values.
You can find controls similar to Screen Gain, but a little finer, under the
Tuning submenu below Screen Gain.
10. Click to twirl down the triangular arrow in order to display the Tuning
submenu (FIGURE 7.31).
Here you see four properties. The bottom three are Gain controls for the
three dynamic ranges: shadows, midtones, and highlights. The first
property defines where the midtones are—if this was a generally dark
image, the midtones would be lower.
11. Bring down the Shadow Gain property to about 0.81. This should fill
the grays in the dark background area.
12. To fill in the white areas of the matte with more white, adjust the
Highlights Gain down. I stopped at 0.19.
NOTE
If your matte looks different than mine, that means you picked
different colors for the Screen Colour property. That’s OK. But
you’ll have to play around with the Shadow Gain, Midtones
Gain, and Highlights Gain properties to make the foreground
areas white and the background areas black. The Screen
Matte properties, which I explain in the next step, also need to
be adjusted in a different way.
13. To fill in more, adjust the Midtones Gain down a little, so you do not
add gray to the background. I reached 0.97.
14. Click the arrow next to Tuning to hide these controls, and then click
the arrow next to Screen Matte to display those options instead (FIGURE
7.32).
You can adjust the Clip Black and Clip White properties to remove any
remaining gray pixels in the matte. If you lose too much detail, you can
use the Clip Rollback property to bring back fine detail.
15. Adjust Clip Black and Clip White to get rid of all the gray pixels in the
matte. I ended up with 0.03 in the Clip Black property and 0.78 in Clip
White.
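Conceptually, Clip Black and Clip White rescale the matte so that values at or below the black point become solid black and values at or above the white point become solid white (a sketch with my own function name, using the values from step 15):

```python
def clip_matte(value, clip_black=0.03, clip_white=0.78):
    # Values below the black point go to 0, values above the white
    # point go to 1; the range in between stretches back to 0..1.
    scaled = (value - clip_black) / (clip_white - clip_black)
    return min(1.0, max(0.0, scaled))
```

A pixel at 0.02 clips to pure black, one at 0.9 to pure white, and the midpoint of the remaining range lands at 0.5, which is why aggressive clipping throws away the soft gradations that Clip Rollback then tries to recover.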
The View drop-down menu at the top of the Properties panel changes the
output of the Keylight node to show one of several options. Here are the
important ones to note:
FIGURE 7.33 Choosing output mode in the View property
16. To see the final result with the background, change the View property
to Composite.
Here you can see your greenscreen element composited over the
background. It is not bad at all (FIGURE 7.34).
Usually, using post-key operations, as you did before with the properties
in the Screen Matte submenu, significantly degrades the matte. Edges
start to bake and wiggle (two terms used to describe a noisy matte that
changes in an unnatural way from frame to frame), and the level of detail
starts to deteriorate. A good way to get a better key is to use the tree itself
to build up a series of keys together that produce the best result.
The result of the Keylight node is pretty good too, but not great. You can
use the Screen Matte properties to create a very hard matte, but it has
well-defined white and black areas. Maybe you can use the Keylight node
and the IBK nodes together to create a perfect key.
First let’s use Keylight1 to create a garbage matte—a matte that defines
unwanted areas.
2. Make sure Keylight1’s matte doesn’t have any grays in it (aside from the
edges). Do this by adjusting the Clip Black and Clip White properties
further. I ended with 0.05 for Clip Black and 0.65 for Clip White
(FIGURE 7.35).
Now what you have in front of you is a hard matte, or a crunchy matte,
and you can use it in two ways: as a garbage matte or as a core matte.
In this case, however, you don’t need a core matte, because the core of the
IBK is fine. You do need a garbage matte because the outside of the IBK
matte has a fair amount of gray noise. However, if you use the matte as it
is now, you will lose a lot of the fine detail the IBK produced. You need to
make the matte you have here bigger, and then you can use it to make the
outside area black. For that you need to use one of three tools: Erode,
Dilate, and Erode (yes, really, keep reading).
Erode, Dilate, and Erode
Three tools in Nuke can both dilate (expand) and erode (contract) a
matte. Their names are confusing, and since each tool has two names, it
gets even more confusing. They are called one thing in the Filter toolbox,
where you find them, but once you create them, they are called something
else in the interface. Yes, it is that crazy. I do wish they would simplify
this little source of confusion. Here's the rundown:
Erode (Fast): A simple algorithm that allows only for integer dilates
or erodes. By integer I mean it can dilate or erode by whole pixel values. It
can do 1, 2, 3, and similar sizes, but it can’t do a 1.5 pixel width, for
example. This makes it very fast, but if you animate it, you see it jumping
from one size to another. Once created, the control name becomes Dilate.
Erode (Filter): A more complex algorithm that uses one of four filter
types to give more precise control over the width of the Dilate or Erode
operation. This allows for subinteger (or subpixel) widths. Its name
changes to FilterErode when created.
Erode (Blur): A blur-based algorithm that softens the matte as it erodes
or dilates it, also allowing subpixel widths. Once created, its name
becomes simply Erode.
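The whole-pixel behavior of Erode (Fast) can be sketched in one dimension as a running maximum over a pixel window, expanding the white areas (a conceptual sketch, not the node's implementation; eroding would use min instead of max):

```python
def dilate_1d(matte, size):
    # Integer dilate: each pixel takes the maximum over a
    # (2*size + 1)-pixel neighborhood. Only whole-pixel sizes are
    # possible, which is why animating the node jumps between sizes.
    n = len(matte)
    return [max(matte[max(0, i - size):min(n, i + size + 1)])
            for i in range(n)]
```

For example, `dilate_1d([0, 0, 1, 0, 0], 1)` grows the single white pixel by one pixel on each side.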
1. Select Keylight1 and insert an Erode (Fast) from the Filter toolbox after
it (FIGURE 7.36).
2. You also need a Blur node, so insert one after Dilate1 (the Erode you
just created).
3. View Blur1 in the Viewer and make sure you are viewing the alpha
channel (FIGURE 7.37).
FIGURE 7.37 In the Node Graph, the new Erode node is
labeled Dilate1.
Now you can see what you are doing and have full control over the size of
your matte. You can increase the Dilate Size property to expand the matte.
You can then increase the Blur Size property to soften the transition.
The matte defines areas that need to remain as part of the foreground
rather than be thrown out; this is because usually it’s the white areas that
define a shape, not the black areas. If you invert this image, it defines the
areas that are garbage.
Now you need to combine the garbage matte and the matte coming out of
the IBK; use a regular Merge node for that.
1. Select Read2 and Shift-click to branch out a HueCorrect node from the
Color toolbox (FIGURE 7.42).
To reduce the amount of green in the brown color of the woman’s hair and
the beige car interior, you first need to find out where that color is in the
graph. This works in a similar way to the way it does in the HueKeyer
node.
The resulting yellow line in the graph shows a hue at around 1.9. This is
the area where you need to suppress green. To suppress the green color,
use the G_sup curve in the Curves List window on the left.
3. Click G_sup in the Curves List window.
5. Bring down the point you just made to somewhere closer to 0.5 on the
Y axis.
6. To get rid of more spill, also bring down the point at the X value of 2 to
something closer to 0.3 on the Y axis. Then click it again and drag it to the
right so it’s at about 2.8 on the X axis.
7. Grab the point at the X value of 3 and bring that lower, to about 0.1,
then drag to the right to about 3.5.
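What those curve edits accomplish is similar in spirit to a classic green-limit despill, which caps green at the average of red and blue (a common despill formula shown for comparison; it is not what HueCorrect literally computes with its hue curves):

```python
def despill_green(r, g, b):
    # Limit green to the average of red and blue; pixels with no
    # green spill pass through untouched.
    return r, min(g, (r + b) / 2.0), b
```

A spill-contaminated pixel such as (0.4, 0.8, 0.3) has its green pulled down to 0.35, while a neutral pixel is left alone.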
This has taken care of the spill in the image. Now you have a branch for
the matte, ending with Merge2, and a branch for the fill, ending with
HueCorrect1. You need to connect them.
This now copies the alpha channel from the matte branch to the RGB
channels from the HueCorrect branch, which is exactly what you need.
Now you can use the output of Premult1 as the A input for a new Merge
node.
11. With Premult1 selected, click M to insert a Merge node after it.
FIGURE 7.45 The current tree
You can see the difference in the result between Merge1 and Merge3. One
is a little better than the other as far as spill goes.
Now you have a tree consisting of three keyers, but you use only two of
them for output. You have one branch for the matte and another for the
fill. Though no one keyer gave you a perfect result, the tree as a whole did
manage that. If you need to further tweak the matte, do so in the matte
branch using eroding nodes, blurs, roto, and merges. If you need to
further correct the color of the foreground image, do so in the fill branch
and add color-correcting nodes there. If you need to move the foreground
element, do so after the keying process finishes, between Premult1 and
Merge1 (or Merge3). The same goes for filtering, such as blur.
NOTE
Try to take this composite to the next level. You need only the tools that
you’re already familiar with. I’ve also created a version of this Nuke script,
called Chapter07_end.nk, in this chapter’s folder. You can open it and see
what I did.
8. Compositing Hi-Res Stereo Images
So far you’ve been using Nuke easily enough, but you might
have noticed some things were missing—things you might be
accustomed to from using other applications. For example, you
haven’t once set the length, size, or fps (frames per second)
speed of a project. In many other applications, these are some
of the first things you set.
The Project Settings panel appears in the Properties Bin. The bottom part
of the Root tab is filled with goodies (FIGURE 8.1).
When the Lock Range box is selected, the Frame Range property stays as
it is. If it isn’t selected, when you bring in a longer Read node, Nuke
updates the Frame Range fields to accommodate this longer node.
The fps field determines the speed at which the project is running: 24 is
for film, 25 is for PAL (video standard in Europe), and 30 is for NTSC
(video standard in the United States). These numbers have very little
meaning in Nuke—Nuke cares only about individual frames. It doesn’t
care if you later decide to play these frames at 25fps or 1500fps. Setting
the fps field just sets the default fps for newly created Viewers and for
rendering video files such as QuickTime.
The Full Size Format drop-down menu sets the default resolution for
nodes that create images from scratch, such as Constant, Radial, and
RotoPaint. You can always set the resolution in other ways as well, including
using Format drop-down menus in the node’s Properties panel or
connecting the input to something that has resolution (such as a Read
node). You have done this several times before, but when you set the
resolution in the Project Settings panel, you no longer have to worry about
it.
The next four properties—the Proxy Mode check box, the Proxy Mode
drop-down menu, Proxy Scale, and Read Proxy File—control proxy
settings that are discussed later in this chapter.
1. Click the LUT tab at the top of the Project Settings panel (FIGURE
8.2).
Let's review the two main nonlinear color spaces, sRGB and log, and
understand why they exist.
• sRGB shows the reverse of what the nonlinear monitor you’re using is
displaying (yes, that means all your monitors, even that really good new
Apple LED cinema monitor—yes, even that one). sRGB is applied to show
you the real image on the nonlinear monitor. You can click the sRGB
curve in the Curves List window on the left to see the color correction that
will be applied to an sRGB image. This curve will be used just like in a
ColorLookup node.
• Log (sometimes called Cineon) exists to compress and better mimic the
large range of colors found on a celluloid negative. It's used
when scanning film to digital files. You can click the Cineon curve in the
Curves List window on the left to see the color correction that will be
applied to a log image. With the arrival of digital cinema cameras such as
the ARRI Alexa and RED, there is now more than one type of log.
There’s the Cineon log type that I just mentioned, but other logs, such as
AlexaV3LogC and REDLog, match those cameras respectively.
When you bring an image into Nuke, Nuke needs to convert it to linear so
that all images that come in are in the same color space and so that
mathematical operations give you the results you are looking for (a blur
on a log image gives very different results than a blur on a linear image).
To convert all images to linear, Nuke uses lookup tables (LUTs). LUTs are
lists of color-correcting operations similar to curves in the ColorLookup
node. Nuke uses these LUTs to correct an image and make it linear, and
then convert it back to whatever color space it came from or needs to be
for display or render.
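For example, the sRGB LUT applies the standard sRGB decoding curve on read and its inverse for display (these are the textbook formulas, shown here as a sketch rather than Nuke's exact implementation):

```python
def srgb_to_linear(c):
    # Standard sRGB decoding (IEC 61966-2-1): linear segment near
    # black, 2.4-exponent curve elsewhere.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Inverse curve, the kind of correction a Viewer applies for an
    # sRGB monitor.
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

Converting to linear, processing, then converting back for display round-trips cleanly, which is the whole point of the LUT pipeline.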
The LUT tab is split into two areas, the top and the bottom. The top area
is where you create and edit lookup tables. The bottom part sets default
LUTs for different image types.
2. Click Cineon at the top-left list of available LUTs to see the Log
Colorspace graph (FIGURE 8.3).
What you see now is a standard Cineon curve. Studios that have their own
color pipelines can create other LUTs and bring them in here to better
customize Nuke and make it part of their larger-scale pipeline.
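The standard Cineon curve maps 10-bit printing-density code values to linear light, with reference black at code 95 and reference white at code 685 (this is the classic Kodak formula; a studio's custom LUT may differ):

```python
def cineon_to_linear(code):
    # 0.002 density per code value, negative gamma 0.6. Black (95)
    # maps to 0.0, white (685) to 1.0, and codes above 685 preserve
    # the negative's over-range highlights as values above 1.0.
    black = 10 ** ((95 - 685) * 0.002 / 0.6)
    gain = 1.0 / (1.0 - black)
    return (10 ** ((code - 685) * 0.002 / 0.6) - black) * gain
```

Note that the top code value, 1023, decodes to a linear value far above 1.0; that highlight headroom is exactly why log encoding is used for film scans.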
NOTE
By default, as you can see set out in FIGURE 8.4, Nuke assumes that all
images that are 8 and 16 bit were made (or captured) on a computer and
have the sRGB color space embedded. It assumes that log files (files with
an extension of .cin or .dpx) are log/Cineon, and that float files (files that
were rendered on a computer at 32-bit float) are already linear. Also it
rightly assumes that your monitor is sRGB and sets all Viewers to sRGB as
well.
This tab manages Views, which let you have more than one screen per
image. This functionality can be used for multiscreen projects, like big
event multiscreen films, but it is usually used for stereoscopic projects.
Stereoscopic projects are made with two views, one for each eye—you’ve
seen this effect in practically every blockbuster since James Cameron’s
Avatar. In everyday speech, they are called 3D films; professionals call
them stereoscopic.
The Views tab lets you set up as many views as you like. The Set Up Views
For Stereo button at the bottom allows you to quickly set up two views
called Left and Right for a stereo workflow (FIGURE 8.6).
Throughout this chapter you use various controls in the Project Settings
panel, so get used to pressing the S key to open and close it.
2. Navigate inside BulletBG to the full directory, which holds a pair of
stereo image sequences: bulletBG_left.####.dpx and
bulletBG_right.####.dpx. One represents what the left eye will see and
the other what the right eye will see.
You now have two Read nodes. Read1 is the left sequence, and Read2 is
the right sequence.
4. Click Read1 (left) and press the 1 key to view it in Viewer input 1. Then
click Read2 (right) and press the 2 key to view it in Viewer input 2.
5. Hover your mouse pointer over the Viewer and press the 1 and 2 keys
repeatedly to switch between the two Views.
NOTE
You should see a woman holding a gun in front of her (FIGURE 8.7).
Switching between the two inputs is like shutting one eye and then
opening the first and shutting the other—one is supposed to look like it
was shot from the direction of the left eye and the other like it was shot
from the direction of the right eye.
6. Stay on Viewer input 1. Have a look at the bottom right of the image
(FIGURE 8.8).
What you see is the resolution of the image you are viewing. This is a log-
scanned plate from film with a nonstandard resolution of 2048×1240.
Normally a 2K scanned plate has a resolution of 2048×1556. (Plate is
another word for image, used mainly in film.)
Setting formats
Now you need to define the resolution you are working in so that when
you create image nodes, they will conform to the resolution of the back
plate. You will give this resolution a name, making it easier to access.
1. Double-click Read1 to make sure its Properties panel is at the top of the
Properties Bin.
2. Click Read1’s Format property and review the options in the drop-
down menu (FIGURE 8.9).
FIGURE 8.9 The Format drop-down menu
The Format drop-down menu lists all the defined formats available. The
image you brought in doesn’t have a name, just a resolution: 2048×1240.
You can add a name to it, and by that, define it. Because it is already
selected, you can choose to edit it.
3. Choose Edit from the Format drop-down menu, which brings up the
Edit Format dialog box (FIGURE 8.10).
The Full Size W and Full Size H fields represent the resolution of the
image, which should already be set for you. The pixel aspect at the bottom
is for non-square pixel images, such as PAL widescreen and anamorphic
film.
NOTE
If you look at the bottom-right corner of the image in the Viewer, you will
see that it now says Christmas2k instead of 2048×1240 (you might need
to press 1 and 2 on the keyboard while hovering in the Viewer to refresh).
You have now defined a format. You can set the Full Size Format property
in the Project Settings panel to the newly defined format as well.
6. Hover over the DAG and press the S key to display the Project Settings
panel.
Now when you create a Constant node, for example, it will default to the
Christmas2k format and resolution.
1. Double-click Read1 to load its Properties panel into the Properties Bin.
2. At the bottom right of the Properties panel, select the Raw Data check
box (FIGURE 8.12).
This property tells the Read node to show the image as it is, without any
color management applied. The image looks washed out and lacks any
contrast (FIGURE 8.13).
So why did you see it looking better before? This is where Nuke’s color
management comes in. Every Read node has a Colorspace property that
defines what LUT to use with an image to convert it to a linear image for
correct processing (FIGURE 8.14).
FIGURE 8.14 The Colorspace drop-down menu
Even after you choose which LUT to use and apply this change, this still
isn’t the image you saw in the Viewer before. It was automatically
corrected a second time in the Viewer so you could see the correct image
for your sRGB screen. The setting to make this second correction is in a
drop-down menu in the Viewer, and unless you are fully aware of what
you are doing, you should leave it set as it is (FIGURE 8.15).
The log image you now see is also corrected by the Viewer—if it wasn’t you
wouldn’t see a real linear image.
I mentioned the default LUT settings before, and here they are again at
the bottom of this tab. When you imported this file, Nuke knew (because
of its .dpx extension) that this was a log Cineon file, so it automatically
assigned it the Cineon LUT. Now that Raw Data is checked, Nuke is no
longer using this LUT to do the color conversion.
Another way to convert the image from log to linear is to perform the
correction yourself rather than using Nuke’s color management.
Before you change the Color Management setup, save a copy of the color-
managed image so you have something to compare to.
5. Select Read1 and press the 2 key to load it into Viewer1’s second buffer.
7. Click the Pause button at the top right of the Viewer to disable any
updating on Viewer1’s 2nd buffer (FIGURE 8.16).
8. While hovering over the Viewer, press the 1 key to view the first buffer.
Now that you have the uncorrected, unmanaged log image, let’s see the
alternative method for applying color space conversion.
10. Make sure Read1 is selected in the DAG and, from the Color toolbox,
insert a Colorspace node. This node converts between different color
spaces (such as sRGB, Log, etc.).
11. As you are bringing in a Cineon image, choose Cineon from the In
drop-down menu (FIGURE 8.17).
FIGURE 8.17 The In drop-down menu in Colorspace1
12. Hover over the Viewer and press the 1 and 2 keys to compare between
the two types of color management you used.
The images look the same. These two different ways to color manage
produce the same result. The first method is quicker and uniform. The
second method is more customizable, but it means more work because
you need to apply it to every Read node.
15. Switch to Viewer1’s second buffer and deselect the Pause button.
Stereo views
You have two Read nodes in the DAG, both representing the same image,
but through a different eye. For the most part, everything you do to one of
these images you also do to the other. For the illusion to work, the two
images need to feel like they are indeed images seen from the audience’s
left and right eyes. Hence color correction applied to one eye should also
be applied to the other eye.
Doing this seems like it would be very annoying, though, because you’d
have to keep maintaining two trees and copying nodes from one to the
other—or it would be this way if Nuke didn’t have its Views system.
Using Views, you can connect two Read nodes into one multiview stream
and from then on build and manipulate only one tree. If you do need to
work on just one View, you will be able to do so per node—or if needed,
split the tree at a specific point into its two Views and join them again later.
Let’s connect the two Read nodes into one stream. But before we do that,
you need to define this project as a Stereo Multiview project.
At the moment, only one View appears in the View list: Main. That’s
normally the case, but now let’s replace that with two Views called Left
and Right. You can do this manually using the + (plus) and – (minus)
buttons at the top of the list. However, since Left and Right views are the
norm, a button at the bottom of the list enables this as well.
After clicking this button, your Views list should change to display Left
and Right instead of Main. At the top of the Viewer you should also see
two new buttons that allow you to switch between the two Views you
created, as shown in FIGURE 8.20.
4. Select the Use Colours in UI? check box at the bottom of the Views tab.
In the Project Setting’s Views tab, notice the red-colored box next to the
Left view and a green-colored box next to the Right view. Selecting the
Use Colours in UI? check box makes these colors reflect in the Views
buttons in the Viewer and they will be used to color the pipes connecting
left and right specific parts of your trees.
Now that you’ve set up the multiview project, you can proceed to connect
your two Read nodes together into one multiview stream.
All View-specific nodes are held in the Views toolbox in the Node toolbar.
You use the node called JoinViews to connect separate Views into one
stream.
5. With nothing selected in the DAG, click JoinViews from the Views
toolbox to create one.
6. Connect JoinViews1’s left input to Read1 and the right input to Read2
(FIGURE 8.21).
7. Make sure you are viewing JoinViews1 in the Viewer and use the Left
and Right buttons to switch between the views.
You can see the two Read nodes' output in the Left and Right views now
instead of through separate Viewer inputs. This is just the beginning of
working with Views; later in this chapter you will do more.
For now, you have just one more thing to set up so that you can work
quickly with such large-scale images.
Using proxies
Working in 2K (over 2000 pixels wide, that is), which is becoming the
norm, can become very slow, very quickly. Because compositing software
always calculates each and every pixel, giving it more pixels to work with
dramatically increases processing times, both for interactivity and
rendering.
For example, a PAL image of 720×576 has 414,720 pixels, almost eight
times fewer than a normal 2K frame of 2048×1556 with 3,186,688 pixels!
So, as you might guess, the 2K frame is that much slower to work with.
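The arithmetic is easy to verify:

```python
pal = 720 * 576       # 414,720 pixels in a PAL frame
two_k = 2048 * 1556   # 3,186,688 pixels in a full 2K frame
ratio = two_k / pal   # roughly 7.7 times as many pixels to process
```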
Nuke has a few ways to make working with hi-res images faster. First, you
can use the Viewer Downscale Resolution drop-down menu. This menu
lets you scale down the display resolution. Input images are scaled down
by the selected factor, and then they are scaled up in the Viewer by the
same factor. This creates a speedier workflow with just a quality
difference in the Viewer (FIGURE 8.22).
You can see the apparent change in quality. The first time you load a
frame it will still take a little longer because it still needs to access the full-
resolution image.
This time around, Nuke took no time at all to show this frame. From now
on, working with both frames 1 and 2 will be very quick because you will
be working with 1/32nd of the resolution. Note that if you are using a very
fast system with a fast hard drive, this change in speed might be
negligible.
But this is just the tip of the proverbial iceberg. Nuke has a full-blown
Proxy System that handles the switch between low-res and hi-res images
for the whole project.
5. Press the S key again while hovering over the DAG to make sure your
Project Settings panel is at the top, and click the Root tab.
• The first area is the Read node. All images coming in through a Read
node are scaled down by a scale ratio. The Proxy System takes care of that.
A Proxy Scale of 0.5, for example, halves the resolution of all images.
• The third area is the Viewer. It’s inconvenient that anytime a proxy scale
changes, the Viewer shows images at a different size. Because of this, the
Proxy System scales up the Viewer to compensate for the change in
resolution. All this is done automatically and is controlled using one of
two ways to define the change of scale (FIGURE 8.23).
The drop-down menu for the Proxy Mode property shows two types of
proxy: Format and Scale. Format lets you choose another defined format
for the size of the proxy. On the other hand, Scale allows you to choose a
ratio by which to scale the image down, as a derivative of the Full Size
Format dimensions.
By default, the Scale option is selected and under it the Proxy Scale
property lets you choose the ratio by which to scale everything (FIGURE
8.24).
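With Scale selected, the proxy resolution is just the full-size format multiplied by the ratio (the rounding to whole pixels here is my assumption for illustration; Nuke tracks the exact scale internally):

```python
def proxy_resolution(full_w, full_h, scale):
    # A Proxy Scale of 0.5 halves both dimensions, leaving one
    # quarter of the pixels to process.
    return round(full_w * scale), round(full_h * scale)
```

For the Christmas2k format, a 0.5 scale gives a 1024×620 proxy.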
6. Under the Root tab in Project Settings, choose Format from the Proxy
Mode drop-down menu.
Let’s use the Proxy System to make working with this project faster. Use
the Scale property rather than Format, and scale down to a third of the
original resolution.
9. To activate the Proxy System, select the box next to the Proxy Mode
property to turn it on (FIGURE 8.26).
You don’t have to load the Project Settings panel every time you want to
turn on proxy mode. To toggle proxy mode on and off, you can also either
use the Proxy Mode button in the Viewer, as shown in FIGURE 8.27, or
press the hot key Ctrl/Cmd-P.
Instead of using on-the-fly proxies as you are using now, you can read
smaller-resolution images directly from a specified location on the hard
drive. This way, the images are already there and will always be there, and
you never need to process them—which results in a quicker workflow
throughout.
10. Create an unattached Write node and connect its input to Read1.
11. Click the little folder button beside the Proxy property (FIGURE
8.28).
FIGURE 8.28 This time you should use the Proxy property
instead of the File property.
13. Name your sequence the same as the full-resolution one by adding
bulletBG_left.####.dpx to the end of the path at the bottom, and press
Enter/Return.
Just like in the Read node, there is a Colorspace property here as well.
Because the tree itself is in linear color space, the image needs to be
reconverted to Cineon so it will be like the image that came in. All this is
set by default in the LUT tab of the Project Settings panel.
14. Make sure the Proxy Mode check box is selected in the Viewer.
Since you are rendering Read1 and not a multiview tree (only after
JoinView1 does your tree become a multiview tree), you need to use the
Views drop-down menu in the Render panel to select only the Left view.
16. In the Render panel that opens, use the Views drop-down menu to
deselect the Right check box (FIGURE 8.29) and then click OK to start
the render.
Because proxy mode is on, your image is now being scaled to a third of its
size. Nuke is now using the Proxy property in the Write node instead of
the File property it usually uses, and it’s actually rendering third-
resolution images.
When the render finishes, tell your Read1 node that there are prerendered
proxies on the disk.
17. Copy and paste the path from the Write1 Proxy field to the Read1
Proxy field.
Copying the file path like you just did doesn’t update the Read node. You
can easily fix that.
18. To update the Read node, click the folder icon to the right of the Proxy
property field, select the sequence that’s in there, and then press
Enter/Return. If the Viewer still doesn’t update, switch between having
the proxy mode off and on using the button in the Viewer.
19. Switch proxy mode on and off. Now, when you enable and disable
proxy mode, you actually switch between two file sequences on the disk.
Now you need to do the same for the right view, or Read2. You can use the
same Write node—just move its input and change some settings.
Now you need to change Write1's proxy path from left to right. You can do
it using a unique Nuke feature: nodes cut or copied to the clipboard are
plain text that you can edit.
1. Click to select Write1 and press Ctrl/Cmd-X to cut it from the tree.
Pasted into a text editor, the node appears as a few lines of text (FIGURE
8.30) that start with set cut_paste_input.
Look further down in this text. A line starts with the word Proxy and
immediately after the word is a file path.
4. In the file path, find the word left and change it to right without
changing anything else or adding any spaces.
5. Select the whole text block that you pasted and copy it by pressing
Ctrl/Cmd-C.
Perhaps this is a little bit of overkill, but you now have the Write node you
need, it is inserted where you want it, and the path has changed for the
right eye.
7. Make sure proxy mode is on because you now want to render the lo-res
version of the image.
9. Click OK.
When this render finishes, you have prerendered proxy files for both
views. You need to tell Read2 to use the files you just generated.
10. Copy and paste the path and filename from Write1’s Proxy field to
Read2’s Proxy field.
11. To update the Read node, click the folder icon to the right of the Proxy
field, select the sequence that’s in there, and click Enter/Return.
12. You don’t need Write1 anymore; you can delete it.
Also, notice you had to run through the proxy-making stages twice, once
for the left eye and once for the right eye. This is not strictly necessary.
There are other ways to work with Read and Write nodes when it comes to
multiview projects and trees.
Instead of bringing in two separate Read nodes, one for each eye, you can
bring in a single Read node and use a variable to tell Nuke that both a Left
and a Right view exist. A variable is simply a placeholder that tells Nuke
to look for something else. In this case, the variable will be %V, and it tells
Nuke that it needs to replace this variable with whatever is in the Views
list. In this case, it looks for Left and Right.
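The substitution itself is simple string replacement; for each view being read or rendered, Nuke swaps the view's name in for the variable (a sketch of the idea):

```python
def resolve_view_path(path, view):
    # %V is replaced with the full view name; Nuke also supports
    # %v, which takes just the view's first letter.
    return path.replace("%V", view).replace("%v", view[0])
```

So `bulletCG_%V.####.exr` resolves to `bulletCG_left.####.exr` for the Left view and `bulletCG_right.####.exr` for the Right view.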
1. Create a new Read node and navigate the File Browser to the
chapter08/BulletCG/full/ directory.
Here you can see there are two sequences. One is called
bulletCG_left.####.exr and the other bulletCG_right.####.exr.
Replacing Left or Right with %V will enable you to use only one Read
node.
2. Click the first sequence in the list, the one indicating the left eye.
3. In the path bar at the bottom, replace the word “left” with %V and
press Enter/Return (FIGURE 8.32).
You now have a single Read node in the DAG: Read3. If you look at it
carefully, you can see there is a little green icon with a V in it at the top-
left corner (FIGURE 8.33). This indicates that this node has multiple
views available. Let’s see this in the Viewer.
6. Switch between viewing the Left and Right views using the Viewer
buttons (FIGURE 8.34).
You can see the difference between the two eyes. This bullet is coming
right at you. You can see more of its left side on the right of the screen
with the left eye, and more of its right side on the left of the screen with
the right eye. (Take a pen, hold it close to your face, and shut each eye and
you can see what I mean.)
7. View the alpha channel in the Viewer, then switch back to the RGB
channels. Notice there is an alpha channel here.
Having an alpha channel affects some of what you are doing here. As a
rule, you shouldn’t color correct premultiplied images without
unpremultiplying them first. Having an alpha channel is supposed to raise
the question of whether this is a premultiplied image. You can see in the
Viewer that all areas that are black in the alpha are also black in the
RGB—which indicates that this is indeed a premultiplied image. Just to make
sure, I asked the 3D artist who made this to render out premultiplied
images, so I know for a fact that’s what they are.
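The rule exists because a premultiplied pixel's RGB has already been scaled by its alpha; dividing the alpha back out restores the original colors so a color correction doesn't distort semitransparent edges (a minimal sketch of the per-pixel math):

```python
def unpremult(r, g, b, a):
    # Divide out the alpha before color correcting ...
    if a == 0.0:
        return 0.0, 0.0, 0.0
    return r / a, g / a, b / a

def premult(r, g, b, a):
    # ... and multiply it back in afterward.
    return r * a, g * a, b * a
```

In Nuke this is what the Unpremult and Premult nodes do, wrapped around the color correction.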
Now that you’ve brought in this stereo pair of images, you can make
proxies for them in one go.
Notice that again you are using the %V variable. This means you can
render the two views, and the names of the views—Left and Right—will be
placed instead of the variable.
10. Because you want to render the alpha channel as well, change the
channel set for Write1 from RGB to All to be on the safe side.
This is an EXR file, which has a lot of properties you can change. The
original image is 32 bit and you will keep it that way.
Notice that the render range here is only 1–50, whereas the background
sequence is 1–60. You’ll deal with this later.
13. In the Render panel, change the Views drop-down menu so that both
Left and Right are turned on (FIGURE 8.36).
16. When the render is complete, copy and paste the proxy path from
Write1 to Read3. Again click the folder icon to the right of the Proxy field,
select one of the file sequences that are in there, change the view to %V,
and click Enter/Return.
The proxy setup is ready. You have prerendered proxies for all files,
individual or dual-view streams. Now you can go ahead and create the
composite, which is very simple.
Retiming elements
The first creative way to deal with the lack of correlation between the
lengths of these two elements is to stretch (slow down) the bullet element
so it’s 60 frames long. Slowing down clips means that in-between frames
need to be invented. You can do that in three ways:
TIP
Several different nodes in the Time toolbox deal with timing. You’re going
to look at two different options for slowing down elements.
1. Select Read3 and insert an OFlow node after it from the Time toolbox.
4. Go to frame 60 in the Timebar and set the Frame field to 50 (the last
frame of the original element you brought in).
If you look at the sequence frame by frame, you can clearly see that there
are some original frames and some newly created frames. Change that by
increasing the ShutterTime property in the OFlow node; doing so
increases the blending that’s happening in between frames.
OFlow is a very good tool. However, its downside is that it is also very
slow. You are working in proxy mode at the moment, so things are quick
and easy. But when you go back to the full-res image, you will feel how
slow it is. Let’s try another retiming node.
7. Delete OFlow1.
8. Click Read3 and, from the Time toolbox, insert a Retime node.
You can specify speed using a combination of the Speed, Input Range, and
Output Range properties.
9. Check the boxes next to the Input Range and Output Range properties.
10. Enter 60 in the second field for the Output Range property.
Again, you can see that movement isn’t exactly smooth. Retime is set to
blend frames at the moment. Setting it to blend or freeze is done in the
Filter property. Box means blending, while Nearest means freezing. None
means slowing down animation done inside Nuke—meaning keyframes—
and will create smooth slow motion for those. None of these three options
will produce appealing results, however. Slowing down crisp, clean
movements like the one you have here for the bullet is always a bit
difficult and creates jerky movement. Let's delete this retiming node.
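The frame mapping behind this kind of stretch can be sketched as follows. This is a hedged sketch for intuition; the function names are mine, not Nuke's, and the Retime node's actual filtering is more involved.

```python
# A sketch of the frame mapping behind stretching a 1-50 input range to a
# 1-60 output range; function names are illustrative, not Nuke's.

def source_frame(out_frame, in_first=1, in_last=50, out_first=1, out_last=60):
    """Map an output frame to the (possibly fractional) input frame it samples."""
    t = (out_frame - out_first) / (out_last - out_first)
    return in_first + t * (in_last - in_first)

def nearest(frame):
    """'Nearest' filter: freeze on the closest whole frame."""
    return round(frame)

def box_weights(frame):
    """'Box' filter: blend the two surrounding frames by distance."""
    lo = int(frame)
    w = frame - lo
    return [(lo, 1.0 - w), (lo + 1, w)]

f = source_frame(30)      # about frame 25.08 of the input
```

Most output frames land between two source frames, which is exactly why blending or freezing is needed at all.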
12. Click Stop.
Notice that the element you were working on was part of a multiview
branch. You didn’t even notice. You were viewing only one view—either
Left or Right, depending on what your Viewer was set to—but the other
view was being manipulated as well. That’s the beauty of working in this
way.
15. Enter 1 and 50 in the two Frame Range fields (FIGURE 8.39).
The longer length of the background element doesn’t matter that much
anymore. Now you will use the end of the element and trim the beginning,
rather than the other way around. You need to shift the position of two
Read nodes in time.
Using the Dope Sheet, you can shift the position of Read nodes in time.
18. Click the Dope Sheet tab in the same pane as the DAG (FIGURE
8.40).
Using the bars on the right panel of the Dope Sheet, one for the first Read
node and one for the second Read node, you can shift the timing of the
Read nodes.
19. In the panel on the right, click the first file property and move it back
(to the left) until your out point on the right reaches frame 50 (FIGURE
8.41).
That’s it for retiming. Now you can proceed to placing the foreground over
the background.
FIGURE 8.42 The slap comp (a quick, basic comp). You can
see it needs more work.
At this point, you should be getting pretty good playback speed because of
the small proxy you are using. What you see now is the bullet leaving the
gun’s barrel (the trigger was pulled in a previous shot). The color of the
bullet is wrong, and it should start darker, as if it’s inside the barrel. Let’s
make this happen.
This ensures that the RGB channels are divided by the alpha and then
multiplied again after the color correction.
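The divide-then-multiply round trip can be sketched on a single pixel. This is a minimal illustration of the idea, not the internals of Nuke's Unpremult/Premult nodes, and the simple gain stands in for whatever color correction you apply.

```python
# Minimal per-pixel sketch of unpremult -> grade -> premult; the real
# nodes apply this across every pixel of the image.

def unpremult(r, g, b, a):
    """Divide RGB by alpha; leave fully transparent pixels untouched."""
    return (r / a, g / a, b / a, a) if a > 0 else (r, g, b, a)

def premult(r, g, b, a):
    """Multiply RGB back by the alpha."""
    return (r * a, g * a, b * a, a)

def gain_only_grade(r, g, b, a, gain):
    """Stand-in for a color correction: a simple gain."""
    return (r * gain, g * gain, b * gain, a)

# A semi-transparent edge pixel of a premultiplied image:
pixel = (0.2, 0.2, 0.2, 0.5)
corrected = premult(*gain_only_grade(*unpremult(*pixel), gain=2.0))
# Correcting the unpremultiplied values keeps the soft edges consistent.
```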
You can see here that the metal of the gun and the metal of the bullet have
very different colors. Although they aren’t necessarily supposed to be the
same kind of metal, at the moment, it looks as if there is different-colored
light shining on them, and their contrast is very different as well. You use
the Gamma property to bring down the midtones and create more
contrast with the bright highlights that are already there; then use the Lift
property to color the shadows to the same kind of red you can find in the
background image. Finally, use Gain to tweak the highlights to match.
8. Adjust the Gamma, Lift, and Gain to color correct the bullet so it looks
better. I ended up with Lift: 0.035, 0.005, 0.005; Gamma: 0.54, 0.72,
0.54; and Gain: 0.95, 1, 1.35.
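For intuition, here is one common lift/gamma/gain formulation applied per channel. Treat it as a sketch only; Nuke's Grade node has more controls (blackpoint, whitepoint, and so on), so this is an approximation of the behavior, not its exact math.

```python
# One common lift/gamma/gain formulation, applied per channel. A sketch
# for intuition, not the exact math of Nuke's Grade node.

def lgg(value, lift=0.0, gamma=1.0, gain=1.0):
    v = lift + value * (gain - lift)   # lift raises the blacks, gain scales the whites
    v = max(v, 0.0)
    return v ** (1.0 / gamma)          # gamma below 1 pulls the midtones down

# Red channel with the values used above (Lift 0.035, Gamma 0.54, Gain 0.95):
mid = lgg(0.5, lift=0.035, gamma=0.54, gain=0.95)
```

Note how a gamma of 0.54 darkens a midtone value of 0.5 considerably, which is exactly the extra contrast against the highlights described above.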
Take a look; the bullet should look better. Now for the animation—but,
before you get started, first make sure you learn what to do if you want to
change the color of just one of the views.
You now have three new nodes in your DAG, not just one. SplitAndJoin
actually creates as many nodes as it needs to split the number of views out
to individual branches—usually two views, each with a OneView node—and
then it connects them up again using a JoinViews node. You can now
insert whatever node you want in whatever branch you want. Here’s one
method.
The other method is to split just specific properties inside nodes, because
you need them.
Look at the Gain property. There’s a new button on the right that lets you
display the View menu (FIGURE 8.45).
4. Click the View button to display the View drop-down menu, which
allows you to split off one view to be controlled separately.
5. Choose Split Off Left from the View drop-down menu for the Gain
property (FIGURE 8.46).
6. Click the little arrow next to the Gain property (FIGURE 8.47).
FIGURE 8.47 You now have two properties for Gain, one
for each view.
This method lets you manipulate each view separately for this property
and still have overall control over the rest of the properties as well.
You can also reverse this using the View drop-down menu again.
7. Choose Unsplit Left from the View drop-down menu for any one of the
Gain properties.
You are back where you were before—controlling the two views together.
Now it’s time to change the color of the bullet as it’s leaving the barrel.
9. Create a keyframe for both the Gain property and the Gamma property.
10. Go to frame 1 in the Timebar.
11. Change these two properties so the bullet looks dark enough, as if it’s
still inside the barrel. I set the Gain to 0.004 and the Gamma to 1.55, 1.5,
1.35.
This is all you are going to do at this stage to this composite. It’s been a
long lesson already.
You now have a Write node set up to render to the hard drive. You haven’t
set up a proxy filename, just the full-resolution file name. If you render
now, with proxy mode turned on, the render will fail. You can, of course,
create another folder and call it bullet_third if you want to render a
proxy. Right now, however, render the full-resolution image.
Notice that as you render a PNG file sequence, which is an 8-bit file
format, you are rendering it to the sRGB color space. Nuke is taking one of
your elements, which is in Cineon color space, and another element, which
is in linear color space, working with both of them in linear, and then
rendering out an sRGB color space PNG sequence. It’s all done
automatically and clearly presented in front of you.
5. Click the Render button. In the Render panel that appears, make sure
the Frame Range property is set correctly by choosing Global from the
drop-down menu, and make sure the Proxy setting is off.
This render might take a little while because of the hi-res images you are
using. Good thing you deleted that OFlow node—it would have taken a lot
longer with that one.
When the render is complete, bring in the files so you can watch them.
You can do it in the Write node.
6. Select the Read File box at the bottom of Write1’s Properties panel.
You can now view your handiwork. The thing is, however, that this is a
stereo pair of views you have here, and you can watch only one eye at a
time.
If you have anaglyph glasses (red and cyan glasses, not the fancy kind you
find in cinemas), you can use them to watch the true image you are
producing.
7. In case you don’t have anaglyph glasses, click SideBySide from the
Views/Stereo toolbox to see your two views side-by-side (FIGURE
8.49).
FIGURE 8.49 Choose between the Anaglyph node or the
SideBySide node to view both views.
If you use the Anaglyph node, your image now looks gray with red and
cyan shifts of color on the left and right. This shift in color makes viewing
the image with anaglyph glasses correct in each eye (FIGURE 8.50).
If you use the SideBySide node, you see both eyes, one next to the other
(FIGURE 8.51).
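What the two viewing nodes do with a stereo pair can be sketched on a single pixel and scanline. This is only the basic channel routing; the real Anaglyph node also desaturates (hence the gray look) and offers offset controls, which this sketch omits.

```python
# Rough sketch of the two stereo viewing modes. The real Anaglyph node
# also desaturates the input; this shows only the red/cyan routing.

def anaglyph_pixel(left_rgb, right_rgb):
    """Left eye supplies red, right eye supplies green and blue (cyan)."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])

def side_by_side_row(left_row, right_row):
    """SideBySide simply places the two views next to each other."""
    return left_row + right_row

p = anaglyph_pixel((0.8, 0.4, 0.2), (0.7, 0.5, 0.3))   # -> (0.8, 0.5, 0.3)
```

Red-cyan glasses then undo the routing: the red filter passes only the left eye's image, the cyan filter only the right eye's.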
9. The Nuke 3D Engine
3D SCENE SETUPS
In Nuke, 3D scenes are built out of four main elements: a camera, a piece
of geometry, a Scene node (optional), and a ScanlineRender node to
render the 3D data into a 2D image.
Camera: The rendering node views the scene through the camera
element. You can also import camera properties from fbx files.
Geometry: The geometry can be a primitive (a simple piece of
geometry) created in Nuke, such as a card or a sphere, or it can be
imported from another application as an obj file, an obj sequence of files
(for animating objects), or an fbx file.
Scene: The Scene node connects all the pieces that make up the scene
you want to render; this includes all pieces of geometry, lights, and
cameras. This means if you have only one element, you don’t need this
node. By using this node you are saying that all these elements live in the
same space.
Using these elements together, and some other nodes that are not part of
the basic 3D setup, you can do remarkable things. I cover a few
techniques in this chapter and the next two chapters.
You’ll find all the tools that deal with the 3D engine in the 3D toolbox. I’m
not going to go over everything here because it’s just too expansive, and I
need to leave something for the advanced book, don’t I? But never fear;
you will still get to do some pretty advanced stuff.
1. With nothing selected in the DAG, create a Camera node from the 3D
toolbox.
3. Select Camera1 and Scene1 and insert a ScanlineRender node from the
3D toolbox.
The ScanlineRender node connects itself to the Camera and Scene nodes
in the correct inputs (FIGURE 9.2). Eventually you will need some kind
of geometry or this setup won’t mean much, but there’s still time.
FIGURE 9.2 The beginning of a 3D setup
At the moment you don’t see much in the Viewer, and you shouldn’t.
Although you have a Camera node and a ScanlineRender node, you have
nothing to shoot through the camera. You don’t see anything in the
resulting 2D image. However, in the 3D world, you should be seeing at
least a camera. Nuke’s Viewer functions as both a 2D and a 3D Viewer.
You can easily switch between 2D and 3D viewing modes by hovering the
mouse pointer over the Viewer and pressing the Tab key. Alternatively,
you can choose 2D or 3D (or other views such as Top and Front) from the
View Selection drop-down menu at the top right of the Viewer, as seen in
FIGURE 9.3.
5. Hover the mouse pointer over the Viewer and press the Tab key.
You are now in the 3D view, as you can see if you look at the View
Selection drop-down menu. There’s your camera; it sits in the middle of
the virtual world at 0, 0, 0. All 3D elements are generated at this position.
You are seeing the camera from Nuke’s perspective camera, which you can
use to view your 3D scene. You navigate the perspective camera, and
hence the 3D Viewer, using a combination of magic and, well, mainly hot
keys.
Use the scroll wheel (if you have one) or the + and – keys to zoom in
and out.
Now that you know how to move around the 3D world, you can use this
knowledge to move around a little.
You can, of course, use the Properties panel to move the camera, but often
this is unintuitive. Instead, you want to use the on-screen Camera axis
controls. To do this, first select the Camera element in the Node Graph and
make sure that it's loaded in the Properties Bin. Only nodes that are
loaded in the Properties Bin are available for manipulation with the on-
screen controls.
Now that you have the axes, you can click them and move the camera
around. You can also use an individual axis to move in one direction. The
reason the axes are in three colors is to help you figure out which is which.
The order of colors is usually red, green, blue; the order of directions is
usually X, Y, Z. In other words, the red-colored axis controls the X
direction, the green one controls the Y direction, and the blue controls the
Z—very convenient and easy to remember.
3. In the Viewer, hold down Ctrl/Cmd and watch the axes change to show
the rotation controls (FIGURE 9.6).
The circles that appear on the axis when you hold down Ctrl/Cmd allow
you to rotate the camera around one of three axes. This is done in the
same way as the translation axes: Red for X rotation, green for Y rotation,
and blue for Z rotation.
These are the basic techniques for moving things around in the 3D viewer.
Feel free to play around with the camera and get a feel for it. You will reset
it in a minute anyway, so you can really go to town here.
Nuke ToolSets is a Toolbar toolbox, just like any other toolbox, with one
major difference: You populate this toolbox yourself by selecting groups of
nodes in the DAG and clicking a few buttons. Everything is saved—not
just the nodes, but the connections between the nodes, and every property
in each of the nodes (as well as animation, or expressions, or anything
else) (FIGURE 9.7).
FIGURE 9.7 The ToolSets toolbox
As all animation is saved with the ToolSet, let’s reset the camera.
You are presented with the panel shown in FIGURE 9.9. In it you need
to call your ToolSet by a name. You can also create submenus and later
choose to place more ToolSets in those submenus. Create a submenu
called 3D, and in it call this ToolSet Setup. Let’s see how.
5. Click Create.
6. Click the ToolSets toolbox again and have a look (FIGURE 9.10).
FIGURE 9.10 Your new, very own ToolSet inside your new,
very own submenu
FIGURE 9.11 This is how to delete a ToolSet.
Now that you have this ToolSet ready, and creating these three nodes is
just a click away, you won’t feel bad about starting a new Nuke script.
The Clear command simply starts a new empty script without asking any
questions. It doesn’t even start you off with a Viewer. But don’t worry.
We’ll create one.
Let’s start by bringing in the shot so you can see what I’m referring to.
3. Now that you know how to use the Project Settings panel, set up the
script by opening the Project Settings panel and changing Full Size
Format to HD and fps to 25. Close the Project Settings panel.
When this creature inserts its head between the bars of the grill, I would
like some magical butterflies to pop out from underneath said grill. You
use a particle system for this effect. A particle system is a technique that
reproduces chaotic systems by emitting a lot of objects and controlling
them with physics-derived forces. Not only are particle systems pretty
advanced stuff, but they are available only for NukeX owners; as a
consequence, this book doesn’t cover them. Instead, you’ll be loading
them as a script later on.
You know where to place the particle system in the 3D world by turning
the 2D image you brought into a 3D object using a world position pass. Bit
by bit I walk you through this process. Fear not. Let’s start with that last
bit: world position pass? What’s that?
FIGURE 9.12 Viewing other channel sets in the Viewer
2. Hover your mouse pointer in the Viewer and look at the bottom right to
see the values of pixels there (FIGURE 9.13).
The position pass produces a very colorful image. The colors you see have
little to do with what this pass actually is. The values show what’s
important here as they represent a position in 3D space. For example,
take a look at the pixel my mouse pointer is hovering over in Figure 9.13;
it is located at X = 15.17, Y = 7.67, and Z = 1.79.
Another pass, which is a little less important in this case but is worth
mentioning, is called normals. The normals pass is very similar to the
world position pass in that it represents the pixels in space, only the
normals pass represents the angle of the pixel in world coordinates. Let’s
use this valuable information.
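The idea behind turning a position pass into points can be sketched directly: each pixel's RGB triplet simply is an XYZ location. The function and pixel values below are illustrative (the first one matches the kind of value shown in Figure 9.13), not the PositionToPoints node's actual code.

```python
# Sketch: in a world position pass, each pixel's RGB triplet is an XYZ
# location, so turning the image into points is just reading channels.
# The pixel values here are illustrative.

def position_to_points(position_pixels, alphas):
    """Build a 3D point for every pixel the render actually covered."""
    return [
        {"x": p[0], "y": p[1], "z": p[2]}
        for p, a in zip(position_pixels, alphas)
        if a > 0.0              # uncovered pixels produce no point (the holes)
    ]

points = position_to_points([(15.17, 7.67, 1.79), (0.0, 0.0, 0.0)], [1.0, 0.0])
# A normals pass stores data the same way, but the RGB triplet is the
# surface normal's direction rather than a position.
```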
The Viewer pops over to the 3D view since you are no longer looking at an
image. The image you had is now a 3D object, or it will be, once you
change a couple of properties.
You can see immediately that some pixels appeared in the Viewer. These
are not really pixels but points—small 2D planes, like pieces of paper,
floating in 3D space (FIGURE 9.14).
6. Move around your 3D viewer a little to see what you make of these
points.
If you’re already feeling good controlling the 3D space, you can probably
see the whole image pretty well. But in any case, this image is supposed to
be seen from a specific camera.
Importing a camera
For this exercise, you need the 3D camera to act like the camera used to
shoot this footage. The artist who made this shot already had a camera in
the software he used to produce this element. He already exported the
camera and has supplied you with a file to import. How convenient. You
probably want to learn how to import. Here goes.
First you need a 3D setup. Luckily you have a shortcut you can use to
produce one.
1. With nothing selected in the DAG, choose 3D/Setup from the ToolSets
toolbox (FIGURE 9.15).
The top-most property is a little check box called Read From File. Click it
to use the File tab, which is where you read the camera location and other
properties from a file (FIGURE 9.16).
5. To import the camera file, click the folder icon to the right of the file
property to load the File Browser.
7. A dialog box asks if it’s OK to destroy the current camera data; click
Yes.
An fbx file can hold the properties of many cameras across various takes.
From the two drop-down menus, you will choose a take name and a node name.
These usually include a lot of default presets that you wouldn’t normally
use, depending on which application you used when you exported the
camera file. These should already be set up correctly, but just to be on the
safe side, take these steps.
FIGURE 9.18 This is how your File tab should look at this
point.
10. Switch to the Camera tab.
You can see that the Translate and Rotate Input fields are all filled with
values, and that they are all grayed out—unavailable for changing. This is
how the animation is carried across. If the file on disk changes, the values
here change.
The camera has disappeared from the Viewer (or at least it did on mine).
To see it again, tell the Viewer to frame the selected element.
12. Hover your mouse pointer over the Viewer and press the F key to
frame the Viewer to the selected element (FIGURE 9.19).
The great thing here is that you can already see the relationship between
the camera and the points created with the PositionToPoints node. Points
exist only in the area inside the camera’s field of view.
13. From the Viewer Camera drop-down at the top right of the Viewer,
where it says Default, as you can see in FIGURE 9.20, choose Camera1.
What’s this? Why are you seeing the 2D image all of a sudden? Well,
you’re not. You’re seeing the points from the correct angle—the angle they
were originally shot from. So now the image makes sense.
14. Move around the viewer a little so you can get a sense of what you’re
looking at. When you’re done, choose Camera1 from the drop-down menu
again to reset the camera (FIGURE 9.21).
There are a lot of holes in this object because only areas on the object that
are seen from the camera can be re-created with this method. But this is
OK. You are using this only to indicate location.
The particle system itself is a NukeX feature not covered in this book.
However, I still want you to have the freedom to
use this as a 3D element, so you’ll use the element in its original form—a
Nuke script.
This book is written with a regular Nuke license in mind. The particle
system in Nuke is part of the NukeX series of nodes. The nice thing about
it is that even if you have only a Nuke license, you can still use NukeX
features; you just can’t change their properties.
3. If you are running Nuke, you get a message telling you that you are
using NukeX tools. Click OK to make it go away (FIGURE 9.22).
If you are running NukeX, you will not get a message, and you can change
the settings of any of the particle nodes later.
The group of nodes you are importing is automatically selected when you
bring it in. This makes it easy for you to move it to a convenient location.
4. Move the freshly imported nodes somewhere that doesn’t coincide with
the location of other nodes (FIGURE 9.23).
FIGURE 9.23 The tree you just brought in should look like
this.
The tree you brought in is missing one key element, and that’s the particle
image itself. Bring it in now and connect it.
6. At the very top of the imported tree, above two Crop nodes, is a dot
node. Connect Read2’s output to the input of this dot (FIGURE 9.24).
FIGURE 9.24 This is how you should connect the butterfly
Read node.
Let’s follow this tree down from Read2 so you understand the basics of it.
As you read, follow the tree and view each node I mention in the Viewer.
• The two Crop nodes under Read2 split the butterfly into two—the left
half and the right half.
• Card2 and Card3 are two flat 3D surfaces. They represent a paper-thin
object in 3D space. The butterfly images (left and right) texture those
surfaces. There’s an expression on one of the Card nodes that is being
referred to by the other Card node to make the two halves rotate, making
the butterfly appear to move.
• Scene2 connects the two parts of the butterfly to TransformGeo1, a 3D
transformation node that rotates the combination of the two Card nodes
to better face the camera.
9. Click Play. When you’re done watching the butterfly flutter, press Stop.
• The nodes below Card1, and all the rest of the nodes in this tree that I
haven’t discussed, are either something you have already learned about or
part of the actual particle system, which is too expansive to discuss here.
• ParticleSettings1 is the last node in this tree. It is, then, the output of this
tree.
What you’re seeing here is the output of the particle system. It’s important
to note several things: The butterflies emit from world center, and they
emit on frame 1.
Let’s move the particle system into place, both in position and time.
You now have an axis that moves the whole particle system in 3D space.
The controls are the same as those for the camera you practiced earlier.
Use the axes to move, and hold down Ctrl/Cmd to rotate.
You double-click the first one so you can see it in the Viewer. You double-
click the second one so you can access the camera in the Viewer Camera
drop-down menu.
3. From the Viewer Camera drop-down menu at the top right of the
Viewer, choose Camera1.
You can now see the axis controlling the particle system in the Viewer in
relation to the creature (FIGURE 9.26). I want you to place the particle
system exactly where the creature’s head is when it’s in between the bars
of the grill.
The creature’s head is inserted into that grill. You can position
TransformGeo2 now.
I ended up with X = 2.2 and Z = 11.7. This brings TransformGeo2 and the
particle system that’s attached to it to where the head is. However, the
butterflies should be emitting from under the grill, so let’s move the
particle system down too.
Even after all this, the timing is all wrong. The butterflies are being
emitted on frame 1, but the creature pokes his head in only at frame 30 or
so.
The area on the right is the particle system. The area on the left is the
reference object generated with the PositionToPoints node. The purple
area at the bottom is the 3D setup. Having separate trees like this gives
you a lot of freedom to try out different things, but for the purpose of this
project, you need to connect all of them. Start by connecting the particles.
After all this you come to the point of the ScanlineRender node. You are
rendering, in the same way that external 3D software renders, the 3D
geometry through a camera and into a 2D image that can be manipulated
in Nuke with any 2D node, like Merge, Grade, and Blur.
8. Hover the mouse pointer in the Viewer and press V to start drawing a
Bézier shape. Draw along the grill as I draw in FIGURE 9.31.
FIGURE 9.32 shows the butterflies being masked out of the bottom grill,
and indeed, they already appear to be coming in from under the floor. The
mask’s edge is a little sharp.
FIGURE 9.33 Make sure your tree looks like this—and keep
it organized!
APPLYING MATERIALS TO OBJECTS
Making some of the butterflies fly in front of the creature and some
behind is not as easy as masking the grill. You need to know which
butterflies are in front and which are behind, and then mask only the ones
behind. That’s not something worth attempting because potentially it can
be a lot of work, and you might not get it right. And why should you?
There are easier ways.
There is already an object in the scene that sits where the creature is and
can be used as a mask—the creature made out of points. Let’s see it in the
render.
4. Switch between the inputs 1 and 2 by hovering the mouse pointer in the
Viewer and pressing 1 and 2.
When you do this, the good thing is that some of the butterflies do
disappear behind the creature’s head. The bad thing is that everything
becomes fatter in input 1. This is because the PositionToPoints node
doesn’t produce precise geometry. The points created aren’t the right size
and never will be. You can get closer by using the PositionToPoints node's Point Size
property (FIGURE 9.35), but PositionToPoints simply isn’t designed or
intended for this. You should still use the untreated Read1 as the
background, but you can use PositionToPoints1 as a matte in 3D space.
An object usually has a material assigned to it. The Card nodes making up
the butterflies have the butterfly images assigned to them as a material;
the butterfly image is connected to the Img input of the Card nodes
(FIGURE 9.36). If you did not assign a material, you would still have the
cards and be able to see them in the 3D viewer, but nothing would render
out through the ScanlineRender node.
FIGURE 9.36 Most Geometry has an Img input that
materials connect to.
FIGURE 9.38 The creature now masks the butterflies that
are behind it.
FIGURE 9.39 See how the butterflies exist only where the
creature doesn’t?
Finishing touches
Almost everything is solved; just a couple of simple tweaks remain. The
color of the butterflies is not right, and neither
is how sharp they look.
3. Change the color of the butterfly so that it’s warmer and maybe a little
brighter. I ended up changing just Gain to these values: R = 1.425, G =
1.18, B = 0.85.
6. Close all Properties panels, click Play in the Viewer, and enjoy your
work (FIGURE 9.40).
If the render is too slow for you, you might want to render the output to
hard drive instead of waiting by the Viewer. You know how to do that by
now, so you don’t need me to explain. If you’re not sure, refer back to
Chapter 2.
This is essentially it. You should be pretty pleased with yourself! You’ve
done some pretty advanced stuff already, and you can see your tree is
larger than what you've had until now. You've also learned some
important building blocks here, mainly how to use the Camera, Scene,
and ScanlineRender nodes.
10. Camera Tracking
If you don’t have a NukeX license, please read on anyway because only a
small part of this chapter covers steps you won’t be able to perform.
NOTE
Since you’ll be tracking with the CameraTracker node, you need to launch
NukeX rather than standard Nuke. If you don’t have a NukeX license,
finish reading this section with your standard version of Nuke. Read the
next section, “3D Tracking in Nuke,” but do not perform the steps. Then,
continue reading and performing steps from the “Loading a Pre-generated
CameraTracker Node” section. If you do have a NukeX license, do all the
steps in “3D Tracking in Nuke” but do not perform the steps in “Loading a
Pre-generated CameraTracker Node.”
Next, you will use Nuke’s Camera Tracker to extract the camera
information from the scene. Another useful thing that will come out of
this process is the point cloud, which represents locations of several
elements in the shot.
3D TRACKING IN NUKE
Nuke’s Camera Tracker works in three stages:
Track is the actual tracking process. Nuke looks at the image and finds
tracking points according to high-contrast and precisely defined locations
(such as dots and crosses), much as you would choose tracking points
manually.
Solve solves the 3D motion. Nuke looks at the various tracked points it
has, throws away redundant points that didn’t track well, and, by looking
at the movement, figures out where the camera is and where each tracking
point is located in space.
1. Select Read1. This should be the clip containing the frame on the table.
3. Make sure you are viewing the output of the newly created
CameraTracker1 (FIGURE 10.3).
Tracking features
Your CameraTracker node is ready, but you can make several adjustments
to make for a better track.
Several properties in this tab can help you achieve a better track. Here are
some important ones:
• Detection Threshold: The higher the number, the more precise the
tracker has to be in finding trackable points.
• Feature Separation: The higher the number, the farther apart the
tracking features have to be. If you increase Number of Features, reduce
this value.
• Preview Features: This check box shows the points that will be used
for tracking in the Viewer—handy when adjusting the other detection
properties described here.
This shot can be tracked well without changing anything, but for added
accuracy, up the amount of features for tracking. Make sure you preview
this change first.
See how more features were added? More features means better accuracy,
but a slower operation. You don’t mind waiting a few more seconds, do
you?
This tab has several properties that change the accuracy of the track by
telling Nuke more and more about the camera and how it was used. The
more you know, the better the track.
3D tracking works only on stationary objects. Objects that are moving will
distract the solving process, as their movement is not generated from the
camera’s movement alone. If there’s a moving element in the scene, you
should create a roto for it, feed it into the CameraTracker’s Mask input,
and then use this property to tell the CameraTracker node which channel
to use as a mask. If you’re not sure where CameraTracker’s Mask input is,
drag the little arrow on the side of the node (FIGURE 10.7).
• Focal Length: Here you can tell Nuke what lens was used on the shoot,
if you know it. You can give values as precise or as rough as you have.
You'll do this in a moment.
• Film Back Size: This is the size of the sensor used to capture the
image. The sensor can be the size of the camera’s back in a film camera or
of the chip in a digital camera. If exact scale doesn’t matter in the end
result of your track—that is, if you don’t need to match to a specific real-
world scale—then a ratio such as 4:3 or 2:1 will be enough, depending on
your format.
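A ratio suffices because focal length and film back together determine the camera's field of view, via the standard pinhole relationship. The sketch below illustrates this; the 24.89 mm width is a typical Super 35 sensor, used here only as an example.

```python
# Why a film back ratio can suffice: focal length and film back together
# determine the field of view via the standard pinhole relationship.
# The 24.89 mm width below is a typical Super 35 sensor, as an example.

import math

def horizontal_fov_deg(focal_mm, film_back_width_mm):
    """FOV = 2 * atan(film back width / (2 * focal length))."""
    return math.degrees(2.0 * math.atan(film_back_width_mm / (2.0 * focal_mm)))

fov = horizontal_fov_deg(35.0, 24.89)    # roughly 39 degrees
```

Scaling focal length and film back by the same factor leaves the field of view unchanged, which is why only their ratio matters when absolute scale doesn't.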
The camera in the shot is a free camera, so you’ll leave the setting as is.
It’s not always apparent whether the camera is free—in fact, the best way
to know is by being on set and writing down whether it is or not. However,
in this case it is apparent the camera is moving because there is parallax
in the shot.
Nothing was done to remove lens distortion from the image, and so the
default setting of No Lens Distortion needs to change.
As for the focal length, just by looking at the clip you can tell one thing:
There is no zoom change, and so the default choice of Unknown Constant
is correct.
For now, you will simply run the tracking process and hope for the best.
7. Click the Track button at the bottom of the Properties panel (FIGURE
10.8).
The CameraTracker node automatically tracks the length of the Input clip.
It’s doing it right now, as you can see from the progress bar that appears.
If you want to change the tracking length, change the property called
Range in the CameraTracker tab (FIGURE 10.9).
The Camera Tracker will track forward from the start; when it reaches the
end, it will track backward and then stop. You can see all the tracking
points, or features, the Camera Tracker tracked (FIGURE 10.10).
1. Click the Solve button under the Track button (FIGURE 10.11).
FIGURE 10.11 The Solve button
Nuke is now processing the information and trying to re-create the real-
world situation out of the features it tracked. The progress bar indicates
how long this is taking.
The Error property field returns the overall level of error produced by the
solve operation. A value of 1 is considered bad (FIGURE 10.13). I have
(and so should you) an error of 0.76, which is not bad at all. But let’s see
ways to reduce this further.
• Error – (Min; RMS; Track; Max): These four curves show different
levels of error. The difference between them is difficult to explain;
however, reducing them helps get a better solve.
• Min Length; Max Track Error; Max Error: These three curves
show the setting chosen in the three properties of the same name that can
be found just under the curves window (FIGURE 10.15).
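To build an intuition for what these curves report, here is a minimal sketch. The error values are invented for illustration and this is not Nuke's API; it only shows how min, RMS, and max statistics summarize a track's per-frame reprojection error:

```python
import math

# Hypothetical per-frame reprojection errors (in pixels) for one track.
errors = [0.42, 0.61, 0.38, 0.95, 0.50]

min_error = min(errors)
max_error = max(errors)
# RMS (root mean square) weighs large errors more heavily than a plain mean.
rms_error = math.sqrt(sum(e * e for e in errors) / len(errors))

print(min_error, rms_error, max_error)
```

Lowering the thresholds in the next steps simply rejects the tracks whose statistics exceed the chosen values.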
Let’s use the curves and properties to refine our solve and reduce the
overall level of error.
3. Click to select Track Len – Min in the curve list, then Ctrl/Cmd-click
the Min Length curve.
Now you should have both these tracks selected and shown in the curve
window.
4. Click the curve window, then press F to frame the two selected curves.
The squiggly curve you see in FIGURE 10.16 (presented in pink) shows
the minimum length of track in every frame. The straight curve shown in
green at the bottom is the property called Min Length.
FIGURE 10.16 The Track Len – Min curve in pink and Min
Length in green.
5. While watching both the Viewer and the curve window, use the slider to
change the Min Length property to 13.
What you are doing is chopping off tracks that are too short to be
considered. As you are climbing up the squiggly curve in the curve
window, more tracks are turning red in the Viewer, meaning they will no
longer be used for solving.
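Conceptually, Min Length is just a threshold on track length. A tiny illustrative sketch, with invented track data rather than Nuke's actual API:

```python
# Hypothetical tracks: (name, length in frames).
tracks = [("t1", 40), ("t2", 8), ("t3", 25), ("t4", 12)]

MIN_LENGTH = 13  # the value chosen in step 5

kept = [name for name, length in tracks if length >= MIN_LENGTH]
rejected = [name for name, length in tracks if length < MIN_LENGTH]

print(kept)      # stay green and are used for solving
print(rejected)  # turn red in the Viewer
```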
6. Click to select Error – RMS in the curve list, then Ctrl/Cmd-click the
Max Track Error curve.
7. Click the curve window, then press F to frame the two selected curves.
Here you see one level of error in the solve, and again you use the
corresponding slider to chop off any features that damage the solve.
8. Bring the Max Track Error slider down to 0.5 and look at the Viewer
(FIGURE 10.17).
FIGURE 10.17 A lot of features have turned red and won’t
be considered in the solve.
A lot of points turned red. Too many, in fact. If you keep going like this,
you won’t have any features left and will get a very bad solve indeed.
9. Click the curve window, then press F again to reframe the window.
10. Change the Max Track Error field to 0.91. This marks only the peak at
the end of the clip as unwanted, leaving plenty of features to use for
solving (FIGURE 10.18).
Do this again for the Error – Max curve and the Max Error property.
Again, you want to chop off only the peaks.
11. Click to select Error – Max in the curve list, then Ctrl/Cmd-click the
Max Error curve (FIGURE 10.19).
12. Click the curve window, then press F to frame the two selected curves
(FIGURE 10.20).
FIGURE 10.20 Find the peaks in the Error – Max curve.
The curve peaks are apparent in the Error – Max curve. Changing the Max
Error property to something around 4.7 separates the peaks from the rest
of the curve.
13. In the Max Error Input field, type 4.7.
So by using the three sliders and the curve window, you determined what
features to remove. You now need to remove these tracks so they won’t be
used at all.
14. Click the Delete Rejected button (FIGURE 10.21). In the dialog that
opens, click Yes.
All the tracks marked red have now been removed. However, there are
still amber tracks. You should delete those as well.
15. Click the Delete Unsolved button. In the dialog that opens, click Yes.
You should now be left with only green colored tracks (FIGURE 10.22).
All these changes have already done wonders to the Solve Error value. It
has dropped to 0.55 and you can see it at the top of the AutoTracks tab.
It’s the same property you saw in the CameraTracker tab; it’s just
mirrored here for convenience (FIGURE 10.23).
In order for all these changes to take effect in the output of the
CameraTracker node, you should solve again.
17. Click Solve, and in the dialog that opens, click Yes.
When you solve again, the progress bar re-emerges as the work is completed
using the tracked features that you haven’t deleted. When my solve
finished, I ended up with an Error value of 0.53. The work done here
reduced the error from 0.76 to 0.53. That’s a big change, and it’s a good
thing we did it.
FIGURE 10.24 Choosing what to export using the Export
drop-down menu and the Create button
You now have three new nodes in the DAG (FIGURE 10.25). You have a
Camera node, which is the main thing you were looking for. You have a
Scene node, which simply connects everything, but is really redundant
and will not be used. And you have a node called
CameraTrackerPointCloud1. This last node serves as a placeholder for the
point cloud data. If you want to export the point cloud so it can be used in
another application as a 3D object, you can connect a WriteGeo node from
the 3D toolbox to it and render that out as an obj file.
Not having a NukeX license means you can't click the three buttons in the
CameraTracker tab: Track, Solve, and Create. But you can still use a pre-
generated CameraTracker node as you do in the remainder of this
chapter. Here’s how to load the pre-generated CameraTracker node.
2. Click File > Import Script and, from the chapter10 directory, import a
script called CameraTrackerScene.nk.
You are now ready to continue reading the rest of the chapter.
In this case, the table is parallel to the ground so you will use the table as
the ground plane. Let’s look at the point cloud in the 3D Viewer.
1. If you are not already viewing the 3D scene, hover your mouse pointer
over the Viewer and press the Tab key to switch to the 3D Viewer.
3. Zoom in a bit and rotate so you can see the scene from the right
(FIGURE 10.26).
You can see that the points are spread out in a way that shows something
flat at the bottom and something upright on the right. The bottom thing is
the points representing the table while the upright points represent the
picture frame. You want to make the table flat, not diagonal like it is at the
moment.
You can pick points only in the 2D view, but it’s better to see your
selection in the 3D view—so you need two Viewers. Let’s create a second
Viewer.
5. In the new pane, choose New Viewer from the Content menu
(FIGURE 10.28).
FIGURE 10.28 Creating a new Viewer in a new pane
You now have a new Viewer and a new Viewer node in the node graph.
You need to connect what you want to see to that new Viewer.
You now have two Viewers, one showing the scene in 2D and the other in
3D (FIGURE 10.30).
Let’s select features, or points, on the table. You define the ground plane
by selecting as many features on the table as you can; by double-checking
that they are OK in the 3D Viewer, you eliminate any remaining errors.
Notice that when you’re selecting points in the 2D view, they are
highlighted in the 3D view. Make sure the points you select are part of the
main point cloud area representing the table in the 3D view.
You can see the points I selected in FIGURE 10.31, both in 2D and 3D
view.
FIGURE 10.31 Selecting points from the point cloud
You can see how the point cloud in the 3D Viewer jumped so it’s aligned
with the grid in the 3D Viewer. The camera is now pointing down, which
fits the real-world scenario better too (FIGURE 10.33).
9. Choose Restore Layout 1 from the Layout menu in the menu bar.
11. Select Viewer1 in the DAG and press 1 to turn it back on.
This removes the extra Viewer and brings the interface back to its default
setting.
This ends the Camera Tracker part of the lesson, but what now? You need
something to use as a reflection.
This is a complete Nuke script of its own. You will now load in this script
and use its output as a reflection map.
This will be invaluable when you bring in the panorama script. If you
don’t make sure this space is free, the newly imported script will overlay
the current nodes in the DAG, and you’ll have one big mess.
When you import another script, all the nodes that came with it are
selected automatically, which makes it easy to move them.
5. Drag the newly imported nodes to the right of the rest of the nodes
(FIGURE 10.34).
ScanlineRender nodes
Reflection maps tend to be full panoramic images, usually 360 degrees.
That’s exactly what you have in this imported panorama. It consists of 15
still frames shot on location with a 24-degree rotation from frame to
frame. All these frames sit in 3D, forming a rough circle and producing a
360-degree panorama.
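A quick sanity check on those numbers confirms that 15 cards spaced 24 degrees apart do close a full circle:

```python
# 15 cards, each rotated 24 degrees further than the last.
frames = 15
step_degrees = 24

angles = [i * step_degrees for i in range(frames)]

print(frames * step_degrees)  # 360: the cards cover a full circle
print(angles[-1])             # 336: the last card's rotation
```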
However, this is only a panorama in the 3D world, not the 2D one. You
need to convert this into a 2D panorama for a reason explained later.
1. Double-click Scene2 (the Scene node at the end of the panorama script
you just brought in).
2. Hover your mouse pointer over the Viewer and press the Tab key to
switch to the 3D Viewer. If you are already in the 3D Viewer, there’s no
need to do this.
3. Navigate back and up in the 3D Viewer by using the + and – keys and
by Alt/Opt-dragging (FIGURE 10.35).
5. Select Scene2 (the Scene node connecting our 15 Card nodes) and
insert a new ScanlineRender node from the 3D toolbox.
6. In ScanlineRender2’s Properties panel, change the Projection Mode
drop-down menu from Render Camera to Spherical (FIGURE 10.36).
Now you can see the whole panorama as a 2D image. There’s no need to
connect a camera to the ScanlineRender this time, as the camera has
absolutely no meaning. You don’t want to look at the scene from a specific
angle—you just want to look at it as a panorama. However, you do need to
input a background to the ScanlineRender, as spherical maps are a ratio
of 2:1.
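The 2:1 ratio comes straight from the geometry of a spherical (lat-long) map: it covers 360 degrees horizontally but only 180 vertically. A small sketch of that relationship; the 2000-pixel width anticipates the Reformat values used a few steps later:

```python
H_DEGREES = 360  # a full turn around the horizon
V_DEGREES = 180  # pole to pole

width = 2000  # the Reformat width chosen later in this exercise
height = width * V_DEGREES // H_DEGREES

print(height)          # half the width
print(width / height)  # the spherical-map aspect ratio of 2:1
```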
Usually you use the Reformat node to change the resolution of an image.
For example, if you have an image that’s full HD (1920×1080) and you
want it to be a 720p format (1280×720), this is the node to use. It can also
change the format of an image beyond just resolution, can change its pixel
aspect ratio, and, depending on the properties, can crop an image (though
that’s rarely used).
10. Select the Force This Shape check box to enable both the Width and
Height fields. Otherwise you will be able to specify only Width; Height
will be calculated with aspect ratio maintained.
11. Enter 2000 in the Width field and 1000 in the Height field.
FIGURE 10.38 The Reformat node determines the
resolution of ScanlineRender1’s output.
4. Select all the points inside the picture frame. You can do this by
marqueeing them or Shift-clicking them.
5. From the Viewer’s contextual menu, choose Create, and then Card
(FIGURE 10.39). (Right-click/Ctrl-click to display the contextual
menu.)
The Viewer now switches to the 3D view (if it doesn’t, you can press Tab
to do so) and you can see that a Card was created at the location of this
feature in the point cloud. It is placed well and at the correct angle. This
saves you a lot of work (FIGURE 10.40).
In the DAG, you now have a new Card node that should be called Card16.
This is the Card the CameraTracker node dutifully created for us
(FIGURE 10.41).
FIGURE 10.41 The new Card node is created at the center
of the DAG.
You now have practically all the elements you need: a camera, a reflective
surface, and something for it to reflect. Time to connect it all up. Only two
pieces to the jigsaw puzzle are missing; they are discussed in the next
section.
What Nuke does have is a light source called Environment, which shines
light, colored by an input image instead of a single color, onto surfaces.
These surfaces need strong specular material characteristics. Specular
means the object (or part of it) reflects a light source directly to the
camera. A specular material can also be generated inside Nuke.
Let’s connect the whole thing up. First you’ll need to create the
Environment light and connect the panorama to it, then connect
everything up with a Scene node and finally another ScanlineRender
node.
When you do so, the Environment node projects the panorama on every
piece of geometry that has a strong specular material. You need to add a
specular shader to your Card16 node.
The newly created Scene3 node connects the card and the Environment
light so that they work together (FIGURE 10.42).
FIGURE 10.42 Another Scene node to connect the
Environment light and the shaded Card16
6. Select Camera1 and Scene3, then press the Tab key and type Scan. Use
the Arrow Down key until you reach ScanlineRender, then Shift-Enter to
branch another ScanlineRender node (FIGURE 10.43).
What you see now is the reflected surface showing the part of the
panorama that is in front of it. You can choose another part of the
panorama if you want, but first make the render a little quicker.
10. View the output of ScanlineRender1. That’s the one showing the
panorama (FIGURE 10.44).
As you can see in front of you, the bounding box, representing the part of
the image that is usable, is all the way around the image—even though a
good 66% of the image is black and unusable. You will make the reflection
render faster by telling Nuke that the black areas are not needed.
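The savings are easy to quantify. Assuming the usable panorama band occupies only about a third of the image height (the band height below is an assumption, not a value from the exercise), cropping the bounding box to it avoids roughly two thirds of the work:

```python
width, height = 2000, 1000  # the panorama's Reformat resolution
band_height = 333           # assumed pixel height of the visible band

full_pixels = width * height
cropped_pixels = width * band_height
savings = 1 - cropped_pixels / full_pixels

print(f"{savings:.0%}")  # fraction of pixels Nuke no longer considers
```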
12. Move the top and bottom crop marks in the Viewer so they are sitting
on the top and bottom edges of the actual panorama (FIGURE 10.45).
This will make for a more optimized (faster) tree.
14. Select Scene2 and insert a TransformGeo node from the 3D/Modify
toolbox.
15. While still viewing ScanlineRender2, change TransformGeo1’s
Rotate.y property and see what happens (FIGURE 10.46).
As you bring the rotate value up, or clockwise, the image rotates
counterclockwise and vice-versa. This is because what you are watching is
a reflection, not the actual image. When you rotate the real image, the
reflection, naturally, rotates in the other direction.
1. Navigate the DAG to the left, where you have the statue image
connected to the Rectangle node.
2. Click Blur1 to select it and then press the 1 key to view it.
3. View the alpha channel by hovering your mouse pointer in the Viewer
and pressing the A key.
That’s the matte you need. No doubt about it. You just need to get it to the
other side of the DAG.
I’m not sure where you placed all the bits and pieces of the somewhat
disjointed tree I asked you to build. They might be far away. You are now
going to drag a pipe from this part of the tree to that one, and I don’t want
you to lose it on the way. Let’s make this a little easier for you.
6. Drag this new Dot to the right, or wherever you placed your
ScanlineRender2 node.
Now you need to hold the ScanlineRender2 node inside this matte, which
is connected to the Dot.
7. Select the Dot and press the M key to insert a Merge node.
To really understand how well this reflection works, render it and view it
in the Viewer. You should know how to do this by now, so I won’t bore you
with instructions. (If you need a refresher, see Chapter 1.)
Remember that, if you want, you can still pick a different part of the
panorama image to be reflected. You can do that with TransformGeo1.
Just change Rotate.y as before. I left mine at –80. I also tidied up my tree
a little so I can read it better. You can see it in FIGURE 10.49.
There are many other uses for camera tracking. Once you have the camera
information for a shot, a lot of things that you would consider difficult
suddenly become very easy. For example, I toyed with placing the doll
from Chapter 2 on the table. Try it.
11. Camera Projection
1. From the chapter11 folder, import Ginza.png using a Read node and
view it in the Viewer (FIGURE 11.1).
FIGURE 11.1 The image you will use for the camera
projection
This is a photo I took of the Ginza Hermès building in Tokyo. You will
now make this still photograph move with true perspective and parallax
as well as add some more elements to it. Its resolution is 1840×1232. Not
exactly standard, but that doesn’t matter. You will make the final
composite a 720p composite. Let’s set up the project.
2. While hovering the mouse pointer over the DAG, press the S key to
display the Project Settings panel.
3. Set up the project as follows: Check Lock Range and choose New from
the Full Size Format drop-down menu. Then, enter the following:
• Name = 720p
• W = 1280
• H = 720
For camera projection to work, you need a 3D scene that is set up in the
same way as the shot that was taken. This means you need geometry for
the buildings as well as a camera located where the camera that took the
photo was (I asked somebody who uses 3D software to produce this). You
just need to import the elements.
NOTE
Nuke’s Camera Tracker can track still frames, not just clips.
However, you need several different still frames with enough
change in perspective between them to achieve that, which is
something we don’t have because we have only a single
image.
Yes, you can use a Read node to bring in geometry. When you select a
geometry file instead of an image file, the Read node turns into a ReadGeo
node.
FIGURE 11.3 The Alembic import dialog
This Alembic file holds three pieces of geometry called Bldg01, Bldg02,
and Bldg03. You need to bring these in separately. In order to do so, you
need to set each building item as a parent—meaning make it an item of its
own.
8. Click the Create Parents As Separate Nodes button at the bottom of the
dialog (FIGURE 11.5).
FIGURE 11.5 The Alembic Read node can split into several
ReadGeo nodes with the button.
You now have three ReadGeo nodes in the DAG, one for each parent
(FIGURE 11.6). Let’s view them in the Viewer.
These cubes represent the three buildings. Notice that the geometry is a
very basic representation, not a perfect copy. This is adequate, because
the texture is detailed enough all on its own.
11. Select the Read From File box in Camera1’s Property panel.
12. Switch to the File tab and click the folder icon at the end of the File
property.
You need to choose which camera from the imported data to use.
FIGURE 11.8 The camera has a good angle, but its location
and scale are off.
The geometry and camera were both generated in Autodesk’s 3ds Max.
This 3D application uses a different scale than Nuke. Because of that,
some elements get transferred with the wrong location or size. The ratio
in this case is 1:1000. Therefore, dividing each translation property by
1000 brings the camera into its correct location. Reducing the size of the
camera to a value of 1 will fix the scaling issue.
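Since the 3ds Max scene is 1000 times larger than the Nuke scene here, the fix is a straight division. A sketch with hypothetical raw values, chosen so they land on the translate values listed in step 19:

```python
# Hypothetical raw translate values as imported from 3ds Max (scene units).
raw_translate = (26764.0, 24250.0, 48711.0)
SCALE_RATIO = 1000.0  # 3ds Max units per Nuke unit in this scene

nuke_translate = tuple(v / SCALE_RATIO for v in raw_translate)
print(nuke_translate)  # the camera's corrected position
```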
Because Read From File is selected, the properties will keep updating
from the file. To change some of the properties, you need to deselect the
box now.
16. Deselect the Read From File box, click in the Translate.x field, and
move the cursor to the end.
19. When you are finished, check that your values are approximately
these:
• Translate.x: 26.764
• Translate.y: 24.250
• Translate.z: 48.711
20. Change the Scale.x, Scale.y, and Scale.z from 100 to 1 by typing in
their respective Input fields.
FIGURE 11.10 The correct location for the camera
You now need to use this texture setup on the geometry. One way to do
that is to use the ApplyMaterial node you used in Chapter 10. The simpler
way is to use ReadGeo1’s input and connect it to Project3D1’s output.
The first building is now textured in the areas where you have texture for
it. If you look at this textured geometry through the camera, it looks right.
This technique allows some movement around this object—but only
limited movement, as this texture is really designed to be viewed only
from the angle in which it was shot. If you look at it straight on, you will
see noticeable stretching (deforming and scaling of the texture unevenly
over the geometry).
25. Select Project3D1, copy it, and, with nothing selected, paste.
28. For sanity's sake, arrange the nodes in a way that makes sense to you. I
use a lot of Dots to help with that. Create a Dot by pressing the . (period)
key (FIGURE 11.14).
FIGURE 11.14 Using Dots to tidy up a script
Let’s build the rest of the scene to connect up this whole thing. You need a
Scene node, another Camera node, and a ScanlineRender node.
29. Create Scene and ScanlineRender nodes. Make sure that Scene1’s
output is connected to ScanlineRender1’s Obj input.
Now, why do you need a camera? You already imported a camera into the
scene. Well, the camera you have, Camera1, is being used as a projector, if
you recall. So you need another camera to shoot through. You also want to
be able to move the camera to get the 3D movement you set out to
achieve. If you move the projection camera, the texture moves, so that’s
not allowed. It’s easier if the camera you shoot the scene with is based on
the projection camera as a starting point, so that’s what you will now do.
Now you have two cameras. This can be confusing, so rename them to
easily figure out which is which.
You don’t need a BG input because you don’t need an actual background,
and the resolution is set by the Project Settings panel.
You can see the projection setup working; the three buildings are textured
and the sky is missing (FIGURE 11.15).
Keep the default position for the ShotCamera so you always know where
you started. Use an Axis node to move the camera instead. An Axis node is
another type of translating node, much like TransformGeo.
FIGURE 11.16 Connecting Axis1 to ShotCamera
Now you have an Axis with which you can move the camera.
3. While on frame 1, choose Set Key from the Animation drop-down menu
for Axis1’s Translate properties. Do the same for Axis1’s Rotate properties.
The kind of movement you will create for the shot starts at the current
location (which is why you added a keyframe on frame 1) and then moves
up along the building and forward toward the building while turning the
camera to the left.
4. Go to frame 100.
5. Play with Axis1’s Translate and Rotate properties until you reach
something you are happy with. The values I ended up with were
Translate: 100, 1450, –2200 and Rotate: –3, 7, 0.
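Between the two keys, Nuke interpolates the values over time. As a loose illustration only (Nuke's default curves are smooth, not linear, and the start value below is invented), the idea can be sketched as:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b, with t in [0, 1]."""
    return a + (b - a) * t

# Keys: frame 1 at the start value, frame 100 at the end value.
start_frame, end_frame = 1, 100
start_ty, end_ty = 0.0, 1450.0  # Translate.y; the start value is assumed

frame = 50
t = (frame - start_frame) / (end_frame - start_frame)
mid_ty = lerp(start_ty, end_ty, t)
print(round(mid_ty, 2))  # the in-between value near mid-shot
```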
Now watch the animation you did in the Viewer. To get a good rendering
speed, it’s smart to watch this animation in the 3D view.
Previewing 3D
The 3D view uses simple filtering and your GPU (the processing power of
your graphics card) for fast playback, which makes it very convenient as a
preview for motion. The quality and colors are all wrong, which makes the
3D view bad for previewing a look, but for animation it is just fine.
1. Hover the mouse pointer over the Viewer and press Tab to switch to the
3D view.
2. As you want to see what ShotCamera is doing, from the Camera drop-
down menu choose ShotCamera.
Choosing ShotCamera from the menu locks the 3D Viewer to it, meaning
the Viewer will show you what the camera sees within the rectangular
markings defining its window of view (FIGURE 11.18).
FIGURE 11.18 The rectangular box shows what will render
through the camera.
This is great. It’s already useful. Let’s say the editor really needs to get this
shot’s motion into the cut as quickly as possible. Editorial doesn’t mind
that the look is not done yet, but the motion? That’s what they care about.
Luckily you can render out the 3D view to your hard drive with everything
you see in front of you. In some applications this is called a playblast; in
Nuke it is called Viewer Capture. The button for this is located at the
bottom right of the Viewer, as seen in FIGURE 11.19.
You are now presented with the Capture dialog. The default behavior is to
save the files in a temporary directory and load them into FrameCycler, a
playback application that comes bundled with Nuke. Since Nuke 7.0,
playback functionality has been built into Nuke, so FrameCycler isn't used
much anymore and is not covered here.
Furthermore, editorial is breathing down my neck to get this clip, so
saving to a temporary location just isn’t what I need to do. Let’s change
some properties.
6. Click the check boxes next to Customise Write Path and next to No
Flipbook.
7. Click the folder icon next to the Write Path at the bottom of the panel.
In the File Browser that opens, navigate to your student_renders folder
and call the file tmp.%04d.jpg. Click Open (FIGURE 11.20).
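The %04d token in the write path is ordinary printf-style frame padding, so each captured frame gets a unique, zero-padded file name:

```python
pattern = "tmp.%04d.jpg"

# Each frame number is substituted into the %04d slot, padded to 4 digits.
print(pattern % 1)    # tmp.0001.jpg
print(pattern % 100)  # tmp.0100.jpg
```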
8. Click OK.
A progress bar comes up showing the render as the Viewer moves one
frame forward at a time. This should be a rather short process.
When the render finishes, nothing happens. Let’s bring in the render and
view it.
9. While hovering the mouse pointer over the DAG, press R to read a clip.
Navigate to the student_renders folder and double-click tmp.%04d.jpg.
10. In the DAG, select Read2 and press 1 to view it (FIGURE 11.21).
That’s exactly how it should be. You captured the 3D view with everything
that's in it. All on-screen controls and markings are part of the Viewer
Capture. Not only that, but the size of the capture equals the size of the
Viewer panel in the interface. Before sending this out to editorial you
probably want to crop this. But don’t worry about it now. The important
thing is you learned how to capture the Viewer.
As you were viewing the camera work in the 3D view, no doubt you
noticed that you had a wonderful tracking shot. Look again. Notice that on
the left of building02 (the back building) something odd is happening. As
more of building02 is exposed, you get more of the texture of building01.
This is because there is no more building02 texture—only what was shot.
You have to create more texture for that part of the building.
1. Create an unattached RotoPaint node from the Draw toolbox and insert
it between Read1 and Project3D2 (FIGURE 11.23).
The good thing about your tree is that each geometry has its own pipe
coming from the texture. This gives you the option of tweaking the texture
of each geometry separately.
Use the RotoPaint node to clone some texture where you’re missing it.
4. Using the Tools Settings bar, make sure you’re painting on all the
frames.
5. Align your brush using the lines on the building. Because the
perspective is changing throughout the height of the building, you have to
change the alignment as you go up and down. Start painting (FIGURE
11.24).
Painting should take some time because of the perspective change on the
buildings as you go down toward the ground level. Take your time and
you should be fine. You should have something like FIGURE 11.25 in the
end.
The render should be slower than it was when you were viewing a preview
in the 3D view. The texture should be fixed now, and the shot should
come to life as “more” of building02 is being exposed. This kind of move
cannot be achieved using normal 2D transformations.
1. Read in the SkyDome.png image from the chapter11 directory and view
it in the Viewer (FIGURE 11.26).
3. Change the Output Format resolution by selecting New from the drop-
down menu and then creating a format called 2K 2:1 with a resolution of
2048×1024 (FIGURE 11.27).
4. Create a sphere geometry from the 3D/Geometry toolbox and attach its
input to SphericalTransform1’s output.
You’re probably not seeing much. This is because spheres are, by default,
created with a size of 1. The buildings, on the other hand, are very big.
Because your perspective camera (the one you’re viewing the 3D scene
through) is probably set to view the buildings, the sphere is too small to
be seen. You’ll have to scale the sphere up. It’s supposed to be as big as
the sky, after all.
8. Select ScanlineRender1 and press the 1 key to view it. Make sure you
are viewing 2D.
You just did a sky replacement. A pretty sophisticated one as well. There’s
still work to do on it, though—as in, compositing.
Cloning nodes
To have two layers available for compositing outside the 3D scene, you
need to split the geometry into two sets and render each separately. You
will need two ScanlineRender nodes. However, with two ScanlineRender
nodes, changing properties in one means you have to change properties in
the other. This is because you want both of them to render the same kind
of image.
Instead of having to keep changing two nodes constantly, you can create a
clone of the existing ScanlineRender node. A clone means that there are
two instances of the same node. One is an exact copy of the other. Both
have the same name, and if you change a property in one, it is reflected in
the other.
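The clone relationship works like two references to one object: change it through either name and both see the change. As a loose analogy in Python (this is not Nuke's actual mechanism, just the reference idea):

```python
class Node:
    """A stand-in for a node with a dictionary of knobs (properties)."""
    def __init__(self, **knobs):
        self.knobs = dict(knobs)

scanline = Node(samples=1)
clone = scanline  # a second name for the very same object, not a copy

clone.knobs["samples"] = 10       # change it through one name...
print(scanline.knobs["samples"])  # ...and the other reflects it
```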
1. Select ScanlineRender1 and press Alt/Option-K to create a clone.
Notice that cloned nodes have an orange line connecting them to show the
relationship. They also have an orange C icon at the top left of the node to
indicate they are clones.
You have now made a composite between two separate 3D scenes. The
ScanlineRender nodes are cloned, so if you change something in one it
changes in the other.
If you look carefully, you’ll see that the edges of the buildings are very
jagged. This is because you are using just one sample to render (FIGURE
11.30).
Final adjustments
The sky is a little blue for this image. You will color correct it to match the
foreground better.
There must be another part of the sky dome that fits the buildings better—
something with a highlight that matches the highlight on the buildings.
You can use the Sphere geometry’s Rotate properties to pick a different
part.
The composite is looking better, but the edges of the buildings still need a
little work. You want to erode the matte slightly. Because this changes the
relationship between the RGB and alpha channels, you need to break up
the premultiplied buildings image first.
5. Insert an Erode (filter) after Unpremult1 and a Premult node after that
(FIGURE 11.32).
This concludes this part of the exercise. However, let’s make this shot a
little more interesting by adding a billboard screen!
3. Using the leftmost Viewer channel button, see the different channel
sets you have available (FIGURE 11.34).
You can see four additional channel sets: BG1, BG2, BG3, and Fly_Ginza.
These are the separate layers making up the complete Photoshop file.
Let’s turn this file into a tree.
5. In the Viewer, view the PSDMerge3 at the end of the new tree.
An entire new tree appears in the DAG (FIGURE 11.36). You have easily
and successfully rebuilt the Photoshop setup and have controls over each
and every layer separately.
Now you want to create some kind of “dot LED” look, so you need some
kind of grid-looking element. You can make this kind of element inside
Nuke. The main thing you need is a grid of spaced dots. Use a Grid node
from the Draw toolbox for that.
6. Click Read3 to select it, then Shift-click to branch a Grid node from the
Draw toolbox, and then view it in the Viewer.
You don’t need it to composite over this image. You do this just to get the
right resolution. You can replace the whole image with the grid image.
7. Check the Replace box in Grid1’s Properties panel.
You need to create exactly even squares, as wide as they are tall. In this
node, the number of lines is set through the Number property, which
defines how many lines are drawn across the image's resolution in X and
Y. To make sure you end up with perfectly even squares, you need to take
the resolution into account and make a quick calculation. The nice thing
is that Nuke can make this calculation for you as you type.
In input fields, use the words “width” and “height” to ask Nuke for the
current resolution as a number. This is very useful for a lot of things. If
you need to know the middle of something, for example, you can just
enter width/2 for the horizontal middle point.
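As a concrete illustration, here is the arithmetic Nuke performs when you type an expression such as width/16 into the Number fields, sketched in plain Python (the 2048×1024 frame size is a hypothetical example; Nuke substitutes the real image resolution for width and height):

```python
# Hypothetical frame size; Nuke fills in the real values for width/height.
width, height = 2048, 1024

cell = 16                   # desired square size in pixels
number_x = width / cell     # what typing "width/16" produces in the field
number_y = height / cell    # what typing "height/16" produces

print(number_x, number_y)   # 128.0 64.0 -> every cell is a 16x16 square
```

The same trick gives you the frame's midpoint: width/2 and height/2.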
8. At the far right of Grid1’s Properties panel, click the 2 next to the
Number field to enable both fields (FIGURE 11.37).
You now have perfect squares that are 16 pixels wide and 16 pixels tall.
This is all I wanted you to do here. Of course, you can take this a lot
further if you’d like.
Now let’s create the frame for this lovely screen to sit in. The frame is
going to be a simple affair for now—just a color with an alpha. Later on,
you will add a little something to it to make it look more believable. You’ll
make it in context—while looking at the final composite—which will make
your work even better.
To make this into a frame, start with a copy of this Constant, make it
smaller, and use it to cut a hole in the original.
5. While you have Constant1 selected, insert a Transform node after it.
This is the frame. There’s nothing much to see before you place it over the
image itself.
10. Insert another Merge node after Merge2 and connect its B input to
PSDMerge3.
You now have a screen with a frame. Hooray! But remember, you used a
separate tree for this. Now you need to connect it up (FIGURE 11.42).
4. View Merge4.
Notice that the building has a very strong highlight at the top right. The
frame doesn’t.
6. Select Constant1 and insert a Ramp node after it from the Draw
toolbox.
The Ramp node is a simple two-point gradient that produces one color
that fades along the ramp. Use this to create a highlight on the frame.
7. Move Ramp1’s p1 on-screen control to the top right of the frame
and move p0 to roughly the center of the frame (FIGURE 11.44).
1. Insert a Write node (remember, just press the W key) at the end of this
comp, right after Merge1.
2. Click the little folder icon to tell Write1 where to render to and what to
call your render. Navigate to your student_files directory and name your
render ginza_moving.####.png.
3. Make sure you are not in proxy mode and click the Render button.
12. Customizing Nuke with Gizmos
Gizmos are what this chapter is about. They are nodes that you create by
adding together other nodes and wrapping them into a custom tool with a
user interface, knobs, sliders, and even on-screen controls. Creating these
lets you automate mundane tasks—which you would otherwise perform
repeatedly—or create completely new functionality.
This might seem a little advanced for some readers, but worry not. If you
follow the instructions slowly, you will be presented with one of the most
powerful tools Nuke has to offer!
Nuke also has the Rectangle node and the Radial node, which can at least
let you create these two basic shapes. The way they are built, though,
makes it very difficult to create perfect squares and circles and position
them. Instead, both nodes are controlled by a box that defines the four
edges containing the shape. This is great for creating mattes, but if you
need a 100-pixel-diameter circle, you need to perform calculations and
take the time to create it (FIGURE 12.1).
FIGURE 12.1 The box controls both the location and size of
the shape generated by the Radial node.
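For example, to get a 100-pixel-diameter circle at a known center out of the Radial node, you have to work out the four box edges yourself. A pure-Python sketch of that calculation (the center coordinates are hypothetical):

```python
def radial_box(cx, cy, diameter):
    """Return the (x, y, r, t) box edges for a circle of the given diameter
    centered at (cx, cy) -- the values you would type into the Area property."""
    radius = diameter / 2
    return (cx - radius, cy - radius, cx + radius, cy + radius)

print(radial_box(1024, 778, 100))  # (974.0, 728.0, 1074.0, 828.0)
```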
Though Nuke doesn’t have better design tools by default, that doesn’t
mean you can’t create better design tools with some creative thinking and
some technical know-how. So, in this chapter, you are going to turn the
Rectangle and Radial nodes into something that’s easier to use. And
instead of having to repeat this process every time, I’ll use this chapter to
show you how to build a tool that generates this functionality for you
automatically. Neat, huh?
Let’s begin:
1. In a new, empty script, create a Rectangle node from the Draw toolbox
and view it in the Viewer (FIGURE 12.2).
The left side (called X) of the rectangle needs to be at 100 pixels left of half
the width of the image. The bottom needs to be at 100 pixels under half
the height of the image. Width and height are properties that you can use
in expressions. Isn’t that lucky?
2. From Area’s Animation menu, choose Edit Expressions (FIGURE
12.4).
FIGURE 12.4 You can edit the expression of this group of
properties by choosing this option.
As Nuke’s 0,0 point is at the bottom left of any image, going left means
subtracting 100 pixels. Moving down also means subtracting. Up and to
the right is adding.
The figure of 100 stands for half the size of the eventual square. The
expression width/2 signifies the position for the center of the shape.
X: width/2-100
Y: height/2-100
R: width/2+100
T: height/2+100
5. Make sure what you typed is the same as what appears in FIGURE
12.5, then click OK.
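Under the hood, Nuke simply evaluates each of those little expressions with the incoming image's width and height plugged in. A rough pure-Python stand-in for that evaluation (the 2048×1556 frame size is a hypothetical example; Nuke's real expression language is far richer than this):

```python
# The four Area expressions from the steps above.
exprs = {"x": "width/2-100", "y": "height/2-100",
         "r": "width/2+100", "t": "height/2+100"}

# Hypothetical frame size standing in for the incoming image.
frame = {"width": 2048, "height": 1556}

# Evaluate each expression with width/height bound, builtins disabled.
area = {k: eval(e, {"__builtins__": {}}, dict(frame)) for k, e in exprs.items()}
print(area)  # {'x': 924.0, 'y': 678.0, 'r': 1124.0, 't': 878.0}
```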
But hold on. What if this now needs to be a 400×400 square? You need to
open Edit Expressions again and change every one of the four expressions
you just typed. Not very user friendly, is it? There is another, better way:
create a User Knob.
1. Create a NoOp node from the Other toolbox.
On the right, the two top buttons are Add and Pick. Both are ways to
create new knobs. Add lets you choose from a list of available knob types,
such as a slider, a Size Knob, a Text Input knob, and so forth. Pick lets you
choose from existing properties (discussed later in the chapter).
You want to be able to change the size in both X and Y. You can use a
Width/Height Knob that, by default, appears as one slider but can be split
into two separate Input fields.
The Width/Height Knob panel opens so you can define the knob’s
properties. There are three properties:
• Name is the true name of the knob, which is used for calling the knob
through expressions.
• Label is what appears in the white text to the left of the slider. The Name
and Label don’t need to be the same—although they usually are.
Sometimes, however, when one knob name is already taken, you want the
Name to be unique so you use something longer. But you still need the
Label to be short and easily readable.
• Tooltip appears when you hover your mouse pointer over a property for
a few seconds. It’s a little bit of helper text that can remind you (and other
users) of the functionality of the knob.
4. In the Name Input field, enter shapeSize. Spaces are not allowed in
the Name property, but you can use an underscore in it if you like.
5. In the Label input field, enter shapeSize. Spaces are allowed in the
Label property, but I like to keep things consistent. Suit yourself.
FIGURE 12.11 The User Knobs panel now has two knobs in
it.
You now have two lines in the User Knobs panel. The first one reads User
and the second one reads shapeSize. You made only the one called
shapeSize. User was created automatically and is the name of the tab
where the User Knob appears. If you don’t want this tab to be called User,
click to select this line, and then click the Edit button and rename it in the
panel.
8. Click the Done button in the User Knobs panel (FIGURE 12.12).
You have just created a tab called User. Inside it you have a knob called
shapeSize. (I don’t call it a property because it doesn’t do anything yet. If
you play with the slider, nothing happens, because this is just a knob.)
10. Click the 2 icon—the individual channels button—to the right of the
slider you just created.
You now have two Input fields instead of the slider (FIGURE 12.13).
Note that the first Input field is called W and the second H.
FIGURE 12.13 Your Width/Height Knob’s two Input fields
are exposed.
You will replace the size part in Rectangle1’s Area property’s expression
with a link to this knob. To call up this property from another node, first
call the name of this node, then the name of your property, and then the
Input field you need.
12. In the first line, for the X property, replace the number 100 (in the
line width/2-100) with NoOp1.shapeSize.w/2. The whole expression
should now read: width/2-NoOp1.shapeSize.w/2.
15. Finally, for the T property, replace the number 100 with
NoOp1.shapeSize.h/2 (FIGURE 12.14).
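Written out as ordinary arithmetic, the four linked expressions reduce to one small function. A pure-Python sketch, with hypothetical frame and knob values:

```python
def area_from_knobs(width, height, size_w, size_h):
    """Box edges for a shape of size (size_w, size_h) centered in the frame."""
    x = width / 2 - size_w / 2    # width/2-NoOp1.shapeSize.w/2
    y = height / 2 - size_h / 2   # height/2-NoOp1.shapeSize.h/2
    r = width / 2 + size_w / 2    # width/2+NoOp1.shapeSize.w/2
    t = height / 2 + size_h / 2   # height/2+NoOp1.shapeSize.h/2
    return x, y, r, t

print(area_from_knobs(2048, 1556, 200, 200))  # (924.0, 678.0, 1124.0, 878.0)
```

Changing shapeSize now resizes the box symmetrically around the frame center, which is exactly what the linked expressions do in Nuke.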
17. Go ahead and play with your shapeSize Input fields. You can bring the
shapeSize value back to being a slider or leave it as two Input fields. The
expression will not break. You can see the box on screen change size
horizontally and vertically. When you’re finished, go back to a value of
200 and a single slider.
Right. Now that you have this, you should make a knob that controls the
position of the shape. The 2D Position knob is perfect for that. Not only
does it have values in X and Y, it also produces an on-screen control for
you.
19. Click User to select it and then click Add > 2d Position Knob.
20. In both the name and the label Input fields, enter shapeCenter.
21. In the Tooltip Input field, enter Controls center location of shape
(FIGURE 12.15).
22. Click OK. In the User Knobs panel, click Done (FIGURE 12.16).
FIGURE 12.16 The new knob appears under the previous
knob.
You now have another User Knob called shapeCenter at the top of the
Properties panel. It’s at the top because you selected User before you
created the knob. This placed the newly created knob under the User
Knob, which is the tab itself.
FIGURE 12.19 The square’s center follows the shapeCenter
on-screen control.
The size changes around the newly defined center. Your hard work is
paying off. You have two properties—one for each property of the
rectangle—defined as User Knobs.
I’ve already explained and shown that a node is just some text and that
you can manipulate it as such. Let’s use this functionality to copy the
expressions from Rectangle1 to Radial1.
FIGURE 12.20 This is your node. Just a little bit of text.
Here you go. This little bit of text makes up this Radial node. For example,
the xpos and ypos lines say where the node was located in the DAG when
copied. The line that starts with area is the line we care about. Notice
there’s a space before the word area.
5. Back in Nuke, copy (Ctrl/Cmd-C) the Rectangle node.
The Area properties should be the same. Using a text editor makes
changing the value to an expression a very simple process.
8. In the Rectangle part, select the whole area line starting with the space
before the word area and going all the way (which might span several
lines) to the two squiggly brackets (}}) (FIGURE 12.22).
10. In the Radial line select the whole line starting with the space before
area and going all the way to the end. Press Ctrl/Cmd-V to paste the
copied text over it.
Radial should look like FIGURE 12.23. The value for the Area property
has been copied from Rectangle1 to Radial1. It’s time to bring this node
back into Nuke.
11. Select the entire text that defines Radial1, and press Ctrl/Cmd-C to
copy.
13. View Radial1 in the Viewer and double-click Radial1 to load its
properties in the Properties Bin.
The circle fills exactly the same area the square filled, but the circle is soft.
That solved the softness issue. Now we just need to connect it all together.
1. With nothing selected in the DAG, create a Dot node by pressing the .
(period) key.
2. Connect both inputs (for Rectangle1 and Radial1) to the output of the
Dot (FIGURE 12.25).
Note that the input of the Dot is the input to the shape-generating tree
you are making. The tree has two branches: the Rectangle1 branch and
the Radial1 branch. You need a way to have the outputs of the two
branches connect to a switchable output. Enter the Switch node. It’s
designed exactly for this.
The Switch node has an unlimited number of inputs. Every input is assigned
a number, starting at zero for the first input. The Switch node has a single
property called Which (FIGURE 12.27). The number given in this
property corresponds to an input number, and the selected input is passed
through to the output. The image itself does not change, which makes this
node simple, straightforward, and very useful for Gizmos.
When Which is at 0 you should see a square, and when it’s at 1 you should
see a circle. If it’s the other way around, you have your inputs crossed.
Let’s make a User Knob in NoOp1 for this. This time a pull-down menu is
a good idea, and you can choose from Rectangle and Radial in the menu.
9. Click User to select it and then click Add > Pulldown Choice (FIGURE
12.28).
You are presented with the Pulldown Choice panel. The first two Input
fields are the same as all the other knob creation panels you have already
used, but the input field labeled Menu Items is new.
Here you have a multiline Input field. Every line has a numerical value
starting at 0 and going up. So when you choose the first line, it produces a
value of 0, and when you choose the second line, it produces a value of 1.
This is great because the Which property in Switch1 needs these exact
values. A value of 0 shows the rectangle branch and a value of 1 shows the
radial branch.
11. In the Menu Items field, type Rectangle, press Enter, then type
Radial.
FIGURE 12.29 This is how you set up the Pulldown Choice
panel.
You now have a single input, a single output, and a way to choose the
output. You’re almost ready. The next stage is to wrap up this tree in
what’s called a group.
WRAPPING IN GROUPS
In Nuke, a group is a node that holds another tree inside of it. You can
still access what’s inside it and manipulate it, but you can also treat it as
one entity. It’s useful to wrap little functionalities, or trees, that you know
are taking care of a certain element of your comp in a group. Some
compositors use them more and some less. To create a Gizmo, first you
have to create a group.
Like every other node, a Group node can hold User Knobs.
1. Select all nodes in the DAG except for Viewer1 and press Ctrl/Cmd-G.
The Group Output panel that pops up wants to know which node that you
selected will be the Group node’s output. Just as in any other node in
Nuke, a Group node can have many inputs, but only one output. In this
case, there are only two nodes for which the output is not in use: Switch1
and NoOp1. The output of the tree is the output of Switch1—not NoOp1,
which is unconnected to anything and is used only as a holder of Knobs
(FIGURE 12.32).
2. In the Group Output panel that pops up, make sure that Switch1 is
selected and click OK (FIGURE 12.33).
Your tree was replaced with a Group node called Group1. Notice it has two
inputs: One is the Dot’s input, and the other is NoOp1’s input. Let’s look
inside.
You should now also have the Group1 Properties panel loaded. Notice it
has a new button that other nodes lack—the S, or Show button (FIGURE
12.34). Clicking this button opens another Node Graph panel and shows
the tree that is held inside the Group node.
In this DAG, you can also see three new nodes you didn’t place here:
Input1, Input2, and Output1. These nodes were created when you created
the group. The Input nodes appear as inputs in the main DAG; these are
inputs 1 and 2 that you saw before. And the Output node indicates where
to output the tree, normally at the bottom-most node. You need only one
input: the one connected to the Dot. The Input node connected to NoOp1
was created because NoOp1 had a free input, and that’s how the group
creation works.
Since Nuke creates these nodes automatically, I don’t know which node is
Input1 and which is Input2 in your tree. Delete the one called Input2:
deleting Input1 triggers a bug in which the deleted input still appears in
the main Node Graph even though it no longer exists as a node. If Input2
happens to be the node connected to the Dot, delete it anyway and then
connect the output of Input1 to the input of the Dot.
4. Click Input2 and click Delete to delete it. If you need to, connect the
Dot’s input to the output of Input1.
Now you would like to change the shape size or position, but your Knobs
are not available—they moved to within the group with NoOp1. This is not
convenient at all. Furthermore, this Group node can hold Knobs without
needing a NoOp that does nothing. The Group node can both group trees
together and hold User Knobs. It’s time to put that NoOp1 node to rest
and move all its functionality into Group1. First let’s start by generating
some knobs, including a few new ones.
Now you’re going to add another knob you didn’t have before. There’s
quite a lot of functionality at the top of both Rectangle1 and Radial1 that
you lost by just creating the knobs you did. This functionality deals with
the resolution and channels the shape is drawn in. It would be good to
carry them across as well. Instead of adding these knobs from the Add
menu, you can pick them from the existing knobs in the tree held within
the group using the Pick button.
7. Click the Pick button, which brings up the Pick Knobs To Add panel
(FIGURE 12.36).
FIGURE 12.36 The Pick Knobs To Add panel
From the Pick Knobs To Add panel, you can expose any knob from the
tree held in the group. You just need to find the right one. You’re looking
for a node called Rectangle1, a tab called Rectangle, and a property called
Output.
8. Navigate and open submenus until you find the Output property, as
shown in FIGURE 12.37. Then select it and click OK.
In the User Knob panel you now have two entries: User and Output with
some curly brackets containing some code. You can click this Output entry
and then click Edit to change parts of it, but there’s no need. I will have
you change the name of the tab, though (FIGURE 12.38).
You now have a new type of knob that you created (well, sort of—you
actually copied it): This one contains a set of pull-down menus called
output. Changing the output knob changes which channel set the Shape
is created in (FIGURE 12.39).
FIGURE 12.39 The pull-down menu you just picked
You need to pick a few more properties: Replace, Invert, and Opacity.
14. Find the property Replace in Rectangle1, select it, and click OK.
15. Repeat this for Invert and Opacity. Make sure you add the properties
after the last added property by selecting the last knob in the knob list
before you click Pick. Also, make sure you select from the Rectangle node
and not the Radial node.
When you’re done you should see the list in FIGURE 12.40 in the User
Knobs panel.
You now have three other properties in the Properties panel. The panel, as
shown in FIGURE 12.41, has the two properties, Replace and Invert, one
below the other. It would be good to have them next to each other instead.
17. Right-click to load the User Knobs panel, select the Invert line, and
then click Edit.
18. Deselect Start New Line to tell this knob to stay on the previous line
created by the previous knob (FIGURE 12.42).
You should now have the Replace and Invert check boxes next to each
other (FIGURE 12.43). Well done.
FIGURE 12.43 Two knobs on the same line
20. Clear the Properties Bin, then double-click Radial1 (from the Group1
Node Graph), and then double-click Group1 (from the regular Node
Graph).
Pull-down menus such as the Output property and check boxes such as
the Replace property don’t have Animation menus. So what do you click
to drag? It’s different in each case, so pay attention.
FIGURE 12.44 The little button with the = sign is the Link
menu.
The colors of Radial1’s Output property are now muted (FIGURE 12.45).
This signifies a link. Clicking Radial1’s Output property’s Link menu
allows you to edit the link or remove it altogether.
22. In Group1’s Properties panel, Ctrl/Cmd-click the actual check box for
the Replace property, then drag and let go on the equivalent check box in
Radial1’s Properties panel (FIGURE 12.46).
FIGURE 12.46 After you create the link, the check box
turns a light blue color.
The Replace check box for Radial1 turns light blue to signify it now has a
link. You can right-click this check box if you want to change the
expression or remove it.
23. Now continue linking between Group1 and Radial1 for Invert and
Opacity.
When you’re done, Radial1’s Properties panel should look like FIGURE
12.47.
This is it. Both Rectangle1 and Radial1 are following these four knobs.
Currently there are no knobs in Group1 that can replace the three knobs
in NoOp1. Let’s run through this. First add a divider line graphic element
to the panel to separate the top half of the properties from the bottom—
just like the one in the original Radial and Rectangle nodes.
25. With Opacity selected in the knobs list, click the Add button, and from
the pull-down menu choose Divider Line.
26. With Unnamed (that’s the Divider Line you just added) selected,
choose Pulldown Choice from the Add button/menu.
27. For both the Name and Label fields, enter shapeType. In the Menu
Items field, enter Rectangle, then press Enter/Return, then type
Radial. In the Tooltip field, enter Choose type of shape. Click OK
(FIGURE 12.48).
31. For both Name and Label, use shapeSize. The Tooltip should be
Controls shape size. Click OK.
You should now have nine User knobs in the User Knob panel in the order
you can see in FIGURE 12.49. If the order is different, you can rearrange
it by selecting a knob in the panel and using the Up and Down buttons.
33. Set the values again for shapeCenter and shapeSize to 1024, 778, and
200, respectively.
You now want to change all the expressions that point to NoOp1 so they
point to the Group’s User Knobs. Let’s do one.
34. Make sure the Properties panel for Rectangle1 is loaded in the
Properties Bin.
35. Click in the area.x Input field and press = to load its Edit expression
panel.
Now you need to replace the NoOp1 part of the expression with
something—but what? Because Rectangle1 is sitting inside Group1,
Group1’s properties are considered part of Rectangle1’s properties, and so
no Node name needs to be called, just the Property name. Instead of
calling up Group1 by name, simply ask Rectangle1 to look for the
shapeCenter and shapeSize Knobs without mentioning a node.
That’s right. Here’s something to note about Nuke scripts that I have
mentioned before: They are actual scripts—not just the nodes themselves
as you worked with earlier in this chapter, but the entire script. This
means they are made out of normal human-readable text, which means
you can change them using a text editor. The search and replace functions
in text editors are uniquely suited for this type of work.
2. Navigate to your student_files directory where you will find your last
saved Nuke script, which should be called Chapter12_v01.nk.
You need to find every occurrence of the text “NoOp1.” (the dot is
important) and replace it with nothing (meaning, don’t type anything in
the Replace box).
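This is plain text substitution, so a few lines of Python would do the same job as the text editor. A sketch, run here against a single illustrative line rather than the whole saved script file:

```python
# One Area line roughly as it appears inside the saved .nk script.
line = ("area {{width/2-NoOp1.shapeSize.w/2} {height/2-NoOp1.shapeSize.h/2} "
        "{width/2+NoOp1.shapeSize.w/2} {height/2+NoOp1.shapeSize.h/2}}")

# The trailing dot matters: replacing "NoOp1." (not "NoOp1") strips only
# the node-name prefix and leaves the knob references themselves intact.
fixed = line.replace("NoOp1.", "")
print(fixed)
```

To process the real file, you would read the whole script as text, apply the same replace, and write it back out.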
4. Find the Replace function in your text editor. Normally it’s in the Edit
menu. It might be called Find instead, or the Replace function may be
included within Find. In Windows Notepad, press Ctrl-H; in Mac
TextEdit, press Cmd-Option-F.
5. Use the search function to find the words “NoOp1.” and replace each
instance with nothing (FIGURE 12.52).
FIGURE 12.52 Setting up the Find dialog box in Mac OS
10.8’s TextEdit.
6. You can use the Replace All function (your text editor should have one).
7. Save the changes and then double-click the file again to open it in Nuke
once more.
9. Click the S button to open the Group1 Node Graph (FIGURE 12.53).
You can now see that NoOp1 is no longer being used. There are no green
expression lines pointing to it. All the expressions pointing to it have been
changed to point to the new knob in the group.
10. Change properties in Group1 and see the shape change in the Viewer.
The knobs you created in the group now control everything you need. You
no longer need the NoOp1 node.
It’s a good idea to name your group something that makes sense for the
Gizmo.
3. At the bottom of this window you can find the Export As Gizmo button
(FIGURE 12.54). Click it. (Note that only Group nodes have this
button.)
On Linux and Mac, folders that start with a dot are hidden by default, so
you need another way to reach the folder. On Mac, press Cmd-Shift-G to open
the Go To Folder dialog box, enter ~/.nuke/ (“~” means your home
folder), and press Enter.
In this folder you can drop all kinds of files defining your user preferences
and added functionality. These can include favorite directories, new
formats, Python scripts, ToolSets, and Gizmos among other things.
You can see the ToolSets directory in your .nuke directory. That’s where
you find the 3D Setup ToolSet you made in Chapter 9.
2. Drag the Shape.gizmo from the Desktop into your .nuke directory. This
is all you need in order to install the Gizmo.
3. Launch Nuke.
Where did the Gizmo go? You haven’t told Nuke in which toolbox or
which menu you want it to appear.
1. Click the Other toolbox, click All Plugins at the bottom, then click
Update (FIGURE 12.55).
Shape1 is created. In its Properties panel, you can see all the knobs you
made, play with them, and make shapes that fit your needs.
The shape is right at the top right of the image. You can use the
shapeCenter on-screen control to move it. You can also add as many
Shape nodes after this one as you want in order to add more and more
shapes. If you’re missing more functionality, go back to the group, add the
functionality, re-create the Gizmo, and replace it in the .nuke folder.
Life is just that easy when you can make your own tools.
3. At the bottom of this tab, find the Copy To Group button. Click it
(FIGURE 12.58).
This takes the Gizmo and reverses the operation, turning it back to a
group.
4. Double-click the newly created group, and click the S button at the top
right of its Properties panel.
You can now see the original tree that made up this Gizmo (FIGURE
12.59).
Gizmos can be found all around the Web. There’s a big repository at
nukepedia.com (http://nukepedia.com). Downloading Gizmos, installing them,
and then looking at how they’re built is a great way to learn advanced
features in Nuke.
You can also create real buttons inside real toolboxes for your Gizmos
instead of using the Update button. This requires a little Python scripting.
If you are interested in that, take a look at the Appendix at the end of the
book.
Appendix. Customizing Nuke with Python
NOTE
In this appendix, I cover only how to use Python to do very basic things
that I believe are absolutely necessary wherever you use Nuke—except in
a big organization that has people who take care of all that customization
for you. But even after you master these simple procedures, I encourage
you to learn Python, because it can become a very powerful tool to have
at your disposal.
This seems to be an easy request, but for you to make this happen, first
you need to learn a little about how Nuke’s Python customization setup
works. Remember, Nuke is a production tool that relies on hardcore
knowledge of advanced systems. You need to know a little Python
scripting to be able to create a button in the interface. Let’s start.
1. Open Nuke.
2. From the Content menu on the right pane, choose Script Editor
(FIGURE A.1).
FIGURE A.1 Loading the Script Editor
The Script Editor is split into two panes: The bottom half is where you
write your commands (the Input pane), and the top half provides
feedback about what’s happening when you execute your commands (the
Output pane).
There is a row of buttons at the top. TABLE A.1 explains what each
button is called and what it does, from left to right.
Python uses different terminology for certain user interface elements and
functions. I point out these distinctions as we encounter them.
Under Python, Nuke calls the various panels, panes, and toolboxes
“menus,” and the Nodes Toolbar you use all the time to grab nodes from is
a menu simply called Nodes. Here, you add a new toolbox menu to the
Nodes menu and call it User. You then tell Nuke to place a link to your
Gizmo in the new User menu.
CREATING A BUTTON WITH PYTHON
Your first line of code calls up the Nodes menu and calls it by a name
(toolbar) so that you can easily refer to it in the next lines of code.
Calling a command by name is referred to as assigning a variable in
scripting. In this case, the Nodes menu has the variable toolbar assigned
to it.
1. As the first line in your Script Editor’s input pane, enter the following:
toolbar = nuke.menu('Nodes')
2. Click the Run button at the top of the Script Editor to execute this
command.
The command you wrote disappears and the following appears in your
Output pane:
toolbar = nuke.menu('Nodes')
# Result
This means the command executed with no error. Let’s make sure the new
variable toolbar is now defined.
3. In the Input pane, enter toolbar and click the Run button.
Keep adding to the script, line after line, so you end up with one long
script that contains all the commands you wrote, one after another. Use
the Previous Script button to bring the previous command back to the
Input pane.
4. Click the Previous Script button twice to bring back the full command,
toolbar = nuke.menu('Nodes')
Now let’s create the new menu inside the newly defined variable,
toolbar.
5. Press Enter/Return to start a new line and type the next command:
userMenu = toolbar.addMenu('User')
You just created a new menu called “User” with the addMenu command.
(In interface terms, you made an empty toolbox called User in the Nodes
Toolbar.) All this is performed inside the toolbar variable. This new
menu is also assigned a variable called userMenu.
You can now run these two commands to create an unpopulated menu.
If you look carefully at your Nodes Toolbar, you will notice you now have
a new toolbox at the bottom with a default icon (FIGURE A.2).
7. Hover your mouse over this new menu and you will see from the pop-up
tooltip that it is called User.
If you click the menu, nothing happens because it’s still empty.
9. Press Enter/Return to start a new line and then enter the following:
userMenu.addCommand('Shape', "nuke.createNode('Shape')")
10. Click the Run button at the top of the Script Editor to execute this list
of commands.
This new command creates a menu item labeled Shape (the first part of
the command) that, when clicked, creates a node from the Gizmo called
Shape (the second part of the command).
12. Now clicking Shape creates a Shape Gizmo (provided you have it in
your .nuke directory from Chapter 12).
Essentially, this is it. You have created a menu called User under the
Nodes Toolbar and placed a link to your Shape Gizmo in it. You can,
however, make it more interesting (keep reading).
1. Click the Previous Script button to bring the three lines of code back to
the Input pane.
2. Change the third line so it looks like this (with the little bit at the end
added):
userMenu.addCommand('Shape', "nuke.createNode('Shape')", '^+z')
3. Click the Run button at the top of the Script Editor to execute this
command.
NOTE
The last bit of code tells Nuke to use Ctrl-Shift-Z as the hot
key for the Shape Gizmo (FIGURE A.4). (You can also
type that out as Ctrl+Shift+Z or Cmd+Shift+Z—but you have
to type Alt, not Option.) The symbols for the hot keys work as
follows:
^ Ctrl/Cmd
+ Shift
# Alt/Option
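To make the symbol scheme concrete, here is a tiny hypothetical parser that spells out what a shortcut string like '^+z' means (purely illustrative; this is not Nuke's own code):

```python
MODIFIERS = {"^": "Ctrl", "+": "Shift", "#": "Alt"}

def describe_hotkey(spec):
    # Leading symbols are modifiers; whatever is left is the key itself.
    mods, i = [], 0
    while i < len(spec) and spec[i] in MODIFIERS:
        mods.append(MODIFIERS[spec[i]])
        i += 1
    return "-".join(mods + [spec[i:].upper()])

print(describe_hotkey("^+z"))  # Ctrl-Shift-Z
print(describe_hotkey("#s"))   # Alt-S
```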
FIGURE A.4 The new menu populated by a link
to the Gizmo and a hot key to boot
You should be very proud of yourself here. Pat yourself on the back. But
there’s one problem: If you quit Nuke and restart it, all this disappears,
which is a shame. But there is a remedy...
NOTE
You can use Nuke’s Script Editor to create this file—but it’s best for testing
things quickly as you write them, not for long coding sessions. You can
use any text editor you prefer instead. I recommend using very simple text
editors (like Notepad on Windows and TextEdit on the Mac) or code-
oriented text editors (such as ConText or Notepad++ on Windows and
TextMate or TextWrangler on the Mac). Whatever you do, don’t use big
word processing software because it adds all sorts of styling and
formatting code to your text file, which Nuke can’t read. The text that
appears in your file is what should be in your file when Nuke reads it. This
is not the case with Microsoft Word’s .doc files, for example.
In this case, you already created all the code inside Nuke’s Script Editor.
You just need to save it in your .nuke directory and call it menu.py.
1. Click the Previous Script button to bring back the three previous lines
of commands (including the hot key part of the code as well).
3. In the browser window, navigate to your .nuke directory. (If you can’t
see the .nuke directory in your home directory, start typing .n and it will
come up.)
Your menu.py file is now saved with the commands for creating the User
menu in the Nodes Toolbar and adding a link to the Shape Gizmo in it.
Really? Can we, please? After all, nothing says “we care” like an icon.
Icons in Nuke are 24×24 pixels in size and are saved as either .png or
.xpm files in the .nuke directory. Let’s make a couple.
The first one is an icon for the User toolbox itself: simple white text on a
black background. The second is an icon for the Shape Gizmo: a white
square and circle on a black background.
If you want to design your own icons instead, that’s up to you. Just
remember to make them with the requirements I’ve mentioned in mind.
Otherwise, follow these steps:
3. Name the new Format icon and set the File Size to 24 and 24. Click
OK.
4. With Constant1 selected, insert a Text node after it from the Draw
toolbox.
When you select text in the Message field later, property
adjustments apply to that selection; with nothing selected,
those properties have no effect. By the way, you can also
select the text in the Viewer itself.
6. In Text1’s Message property field, type User. Select the text you just
typed by click-dragging over it.
The icon for the User menu is ready! Now you will render it (FIGURE
A.6).
11. Click the File Browser icon and navigate to your .nuke directory.
13. Click the Render button and, in the Render panel that opens, click OK
(FIGURE A.7).
⬆
FIGURE A.7 The first icon’s tree
Now let’s make the second icon, this time for the Gizmo itself.
14. Copy and paste the three existing nodes to get another, unattached
tree.
15. View Write2 in the Viewer.
16. Double-click Text2 and change the Message property to Shape. Make
sure the word Shape is selected.
17. Change the Size property to 8. Due to a small bug, you might have to
do this twice for the change to take effect.
20. Copy Shape1 and paste it to insert another Shape node after Shape1.
23. Click the Render button and, in the Render panel that opens, click
OK.
24. You can save your icon project in your student_files directory if you’d
like.
Both icons are ready and saved in the .nuke directory, making them
available to be called via Python.
Now you need to tell Nuke (in the menu.py file) to attach these icons to
the menu and Gizmo commands.
25. Open a Script Editor in the right-hand pane the same way you did at
the beginning of the chapter.
26. Click the Load Script button, navigate to the .nuke directory, and
double-click menu.py.
toolbar = nuke.menu('Nodes')
userMenu = toolbar.addMenu('User')
userMenu.addCommand('Shape', "nuke.createNode('Shape')", '^+z')
27. Change the second and third lines so they look like this:
toolbar = nuke.menu('Nodes')
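The modified second and third lines appear to have been lost in extraction. Based on the description that follows (an icon name between apostrophes, with no directory path), they would look something like this; the file names icon.png and Shape.png are my assumption, matching the two icons rendered earlier:

```python
# Nuke-only code: attach icon files (looked up in the .nuke directory)
# to the User toolbox and the Shape menu item. File names are assumed.
userMenu = toolbar.addMenu('User', icon='icon.png')
userMenu.addCommand('Shape', "nuke.createNode('Shape')", '^+z', icon='Shape.png')
```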
When you add the last bit of code, you tell Nuke to use the icon file
whose name appears between the apostrophes. You don't need to include a
full directory path to the icon file name, since Nuke simply looks for the
icon in the .nuke directory.
29. In the File Browser, navigate to your .nuke directory and click the
menu.py file.
30. Click Save. When asked, approve overwriting the existing file by
clicking OK.
You should now have icons associated with your toolbox and Gizmo
(FIGURE A.9). Hurrah!
First, you should know that you can add many more Gizmos to your
already created User menu. Do that by simply duplicating the third line
and changing the name of the Gizmo, the hot key (if you want any), and
the name of the icon file.
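As a sketch, adding a second, hypothetical Gizmo named Glow2D would mean duplicating the third line like this (the Gizmo name, hot key, and icon file are examples only):

```python
# Hypothetical extra menu entry; assumes a Glow2D Gizmo exists in .nuke.
userMenu.addCommand('Glow2D', "nuke.createNode('Glow2D')", '#+g', icon='Glow2D.png')
```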
But that’s just skimming the surface. You can add many more things to
the menu.py file. One example is adding a format to the default format
list. By adding a line like the following, you add a format called 720p to
the format list with all the right sizes. Just add this line at the bottom of
your menu.py (you can add a couple of spaces to separate the Gizmo code
and the format code) and then choose File > Save:
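The format line itself did not survive extraction. In Nuke's Python API, adding a 1280×720 format named 720p would look something like this:

```python
# Nuke-only code: register a new format as "width height name".
nuke.addFormat('1280 720 720p')
```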
Index
NUMBERS
2D images, viewing as 3D objects, 266–271
3D engine, 264–266
creating, 266–268
3D scenes
3D toolbox, described, 7
A
Add math function, explained, 102
alpha channel, viewing, 25
autosaves, timing, 28
B
background image example, 72. See also foreground image
beauty pass
building, 73
described, 65
transparent A input, 76
camera element
described, 260
using, 268–271
camera projection
using, 287–291
CameraTracker node
availability, 286
features, 288–291
properties, 290
using, 286–287
explained, 61
channel sets
Col, 65, 81
depth, 65
explained, 62
GI, 65
ID1-ID5, 65
Lgt, 65, 81
Mv, 65
Nrm, 65
Ref, 65
RGBA, 62, 65
Spc, 65
SSS, 65
channels, defined, 62
Clone brush
aligning, 328
color
8-bit, 100
32-bit, 100
float, 100
color correcting
images, 36
color correction
explained, 99
functions, 102
color matching
practicing, 115
starting, 18
compositing, defined, 23
Content menu, 3–6
cross-platform, explained, 6
Curve Editor
accessing, 171
locating, 2–3
navigating, 173
panel, 3
using, 171–174
curves
D
DAG (Directed Acyclic Graph). See Node Graph
Dope Sheet
locating, 2–3
described, 3
Dots
downloading Nuke, x
F
File Browser
using, 9–11
File menu, described, 4
frames, advancing, 54
G
gamma, explained, 101
GI channel set, 65
Gizmos
copying, 372
defined, 344
testing, 371–373
Grade node
using, 115–119
proxies, 238–242
hot keys
color sampling, 87
Merge node, 23
saving scripts, 63
using, 6
Viewer navigation, 13
HueKeyer, 199–204
I
IBK (Image Based Keyer), 199–200, 204–210
images
color correcting, 36
comparing, 58–60
compositing, 23
controlling premultiplication, 26
merging, 23–29
X and Y resolution, 62
K
keyboard shortcuts. See hot keys
Keyer nodes
dilates, 217–219
creating, 41
identifying, 54
markings in Timebar, 54
selecting, 171
keying terminology
chroma, 198
different, 198
luma, 198
Keylight node
described, 200
explained, 200
using, 210–215
L
LayerContactSheet, using, 66
layouts
restoring, 6
saving, 5
M
Mask input
Merge node
creating, 23, 48
foreground image, 25
matte image, 47
motion blur
adding, 94–95
Mv channel set, 65
Node Graph
locating, 2–3
panel, 3
nodes
arranging, 33
branching, 30–31
cloning, 331–333
connecting, 23, 32
ContactSheet, 65
creating, 7–8, 30–31
deleting, 33–34
deselecting, 32–33
downstream, 18
Flare, 142–144
Grade, 115–119
input, 6
inserting, 29–31
Keymix, 186–187
masking input, 6
naming, 6, 8
output, 6
Read, 8–9
replacing, 30–31
representing, 6
selecting, 32–33
Shuffle, 50–51
ShuffleCopy, 84–85
Tracker, 139–142
upstream, 18
Write, 42–43
described, 3
identifying, 2
toolboxes in, 7
Nuke
downloading, x
installing, ix–x
media files, xi
O
objects. See also 3D objects
panels, identifying, 3–4
pipes
defined, 11
described, 3
using, 122–125
premultiplication
controlling, 26–27
creating, 19–22
Dots, 77
example, 18–19
explained, 18
keeping organized, 33
node anatomy, 20
organizing, 53, 91
picking passes, 80
RGBA channel, 78
splitting, 77–83
unpremultiplying passes, 78
properties
adjusting, 38–39
Properties Bin
clearing, 38
displaying floating, 38
identifying, 2
locking, 38
panel, 3
undo/redo functionality, 38
Properties panel
loading, 34
proxies
creating, 245–248
using, 238–242
Python
R
Radial nodes
Read nodes
creating, 8–9
using, 20–22
render passes, 65
changing compression, 45
resolution
defining, 232–233
RotoPaint node
inserting, 327
Toolbar, 160–161
S
Saturation math function, explained, 102
ScanlineRender nodes
cloning, 332–333
buttons, 376
described, 4
saving, 28–29, 63
shapes
animating, 169–170
drawing, 166–169
editing, 166–169
sky texture
adding, 329–331
color correcting, 333–334
slap comp
example, 73
explained, 72
strokes
deleting, 166
editing, 164
erasing, 166
painting, 162–163
T
TCL scripting language, using, 78, 80
text editors
Time toolbox
described, 7
using, 248–251
TMI sliders
locating, 40
using, 133
toolboxes
in Nodes Toolbar, 7
tracking
explained, 137
scaling, 145
tracks
improving, 152
stopping, 148
UI (user interface), 2–6
user interface, 2–6
V
vectors, painting in, 164–165
Viewer
components, 12
fps field, 16
identifying, 2
inputs, 14–15
menu, 4
navigating, 13
using, 13–14
W
world position pass, using with 3D objects, 266–268
Write node, 42–43, 46