

Nuke 101
Professional Compositing and Visual Effects

Second Edition

Ron Ganbar


NUKE 101

Professional Compositing and Visual Effects, Second Edition

Ron Ganbar

Peachpit Press
Find us on the Web at www.peachpit.com
To report errors, please send a note to [email protected]
Peachpit Press is a division of Pearson Education

Copyright © 2014 by Ron Ganbar

Senior Editor: Karyn Johnson


Senior Production Editor: Lisa Brazieal
Technical Editor: Jonathan Mcfall
Copyeditor: Rebecca Rider
Proofreader: Emily K. Wolman
Indexer: Valerie Haynes Perry
Interior Design: Kim Scott, Bumpy Design
Composition: Danielle Foster
Cover Design: Aren Straiger
Cover images: Goose (2012) by Dor Shamir, Shai Halfon, and Oryan Medina

Notice of Rights

All rights reserved. No part of this book may be reproduced or transmitted in any form by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. For information on getting permission for reprints and excerpts, contact [email protected].

Footage from This Is Christmas and Grade Zero directed by Alex Norris, www.alexnorris.co.uk.

Goose footage (Chapter 3) by Dor Shamir, Shai Halfon, and Oryan Medina.

Keying footage (Chapter 7) from Aya by Divine Productions and Cassis Films. All rights reserved.

Chapter 9 footage by Or Kantor.

Pan and Tile panorama footage (Chapter 10) by Assaf Evron, www.assafevron.com.

Camera and geometry for Chapter 11 were solved by Michal Boico.

Notice of Liability

The information in this book is distributed on an “As Is” basis without warranty. While every precaution has been taken in the preparation of the book, neither the author nor Peachpit shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the instructions contained in this book or by the computer software and hardware products described in it.

Trademarks
Many of the designations used by manufacturers and sellers to distinguish
their products are claimed as trademarks. Where those designations
appear in this book, and Peachpit was aware of a trademark claim, the
designations appear as requested by the owner of the trademark. All other
product names and services identified throughout this book are used in
editorial fashion only and for the benefit of such companies with no
intention of infringement of the trademark. No such use, or the use of any
trade name, is intended to convey endorsement or other affiliation with
this book.

ISBN-13: 978-0-321-98412-8
ISBN-10: 0-321-98412-9

987654321

Printed and bound in the United States of America


To The Foundry, who updates its products so often,
it makes me wish I hadn’t started down this road.


Contents
Introduction

CHAPTER 1: Getting Started with Nuke


Components of the Graphic User Interface

Nodes

The Viewer

CHAPTER 2: Touring the Interface with a Basic Composite


Working with Process Trees

Creating a Simple Process Tree

Merging Images

Inserting and Manipulating Nodes in the Tree

Changing Properties

Rendering

Delving Deeper into the Merge Node

Creating Animation with Keyframes

CHAPTER 3: Compositing CGI with Bigger Node Trees


Working with Channels

Working with Contact Sheets

Using the Bounding Box to Speed Up Processing

Linking Properties with Expressions

Slapping Things Together: Foreground over Background

Building the Beauty Pass

Using the ShuffleCopy Node

Placing CGI over Live Background

Manipulating Passes

Using the Mask Input

Using Auxiliary Passes

CHAPTER 4: Color Correction


Understanding Nuke’s Approach to Color

Color Manipulation Building Blocks

Using an I/O Graph to Visualize Color Operations

Creating Curves with ColorLookup


Color Matching with the Grade Node

Achieving a “Look” with the ColorCorrect Node

CHAPTER 5: 2D Tracking
Tracker Node Basics

Stabilizing a Shot

Tracking Four Points

CHAPTER 6: RotoPaint
Introducing RotoPaint’s Interface

The Curve Editor

RotoPaint in Practice

Combining RotoPaint and Animation

Using the Dope Sheet

CHAPTER 7: Keying
Introducing Nuke’s Keying Nodes

HueKeyer

The Image Based Keyer

Keylight

Combining Keyer Nodes Using the Tree

CHAPTER 8: Compositing Hi-Res Stereo Images


Using the Project Settings Panel

Setting Up a High-Res Stereo Script

Compositing a Stereo Project

Rendering and Viewing Stereo Trees

CHAPTER 9: The Nuke 3D Engine


3D Scene Setups

Viewing a 2D Image as a 3D Object

Manipulating 3D Node Trees in 3D Space

Turning 3D Objects into 2D Pixels

Applying Materials to Objects

CHAPTER 10: Camera Tracking


Calculating Reflection Movement Using Camera Tracking

3D Tracking in Nuke

Loading a Pre-generated CameraTracker Node

Aligning the Scene

Creating the Reflection

CHAPTER 11: Camera Projection


Building a Camera Projection Scene

Animating the Camera

Tweaking the Texture

Using a SphericalTransform to Replace Sky

Compositing Outside the ScanlineRender Node

2D Compositing Inside 3D Scenes

Rendering the Scene



CHAPTER 12: Customizing Nuke with Gizmos
About Shape Creation Tools

Building the Gizmo’s Tree


Creating User Knobs

Using Text to Create a Radial

Using a Switch Node to Choose Branches

Wrapping in Groups

Manipulating the Nuke Script in a Text Editor

Turning a Group into a Gizmo

More about Gizmos

APPENDIX: Customizing Nuke with Python


Python Scripting Basics

Creating a Button with Python

Adding a Hot Key

Making Customization Stick with the menu.py File

Index




Introduction
The Foundry’s Nuke is fast becoming the industry leader in compositing
software for film and TV. Virtually all the leading visual effects studios—
ILM, Digital Domain, Weta Digital, MPC, Framestore, The Mill, and Sony
Pictures Imageworks—now use Nuke as their main compositing tool. This
is not surprising, as Nuke offers a flexible node-based approach to
compositing, has a native multichannel workflow, and boasts a powerful
integrated 3D compositing environment that delivers on the artist’s
needs.

Nuke was first developed as the in-house compositing tool at Digital Domain, the visual effects studio behind the Terminator series, The Fifth Element, Tron: Legacy, The Curious Case of Benjamin Button, and other major films. The software has been developed by artists for artists to meet the immediate needs of actual top-level productions. Nuke is now developed by The Foundry (www.thefoundry.co.uk), which remains committed to making Nuke the best tool for compositing artists working in the trenches.

ABOUT THIS BOOK


Learning Nuke is a must for visual effects artists who want to master
high-end compositing techniques and artistry. My goal with this book is to
get you up and running with the program and give you the skills you need
for doing your own compositing projects in Nuke.

Whom this book is for


This book is for anyone interested in learning Nuke. Whether you’re an
artist experienced in using Adobe After Effects, Autodesk Flame, Apple
Shake, or eyeon Fusion, or you have only a basic understanding of
compositing and image manipulation concepts, this book will guide you
through the necessary theory and practice you need to use Nuke—from a
basic level to Nuke’s more advanced toolset.

How this book is organized


This book was written as a series of lessons, each focusing on a part of the interface, a tool, or a technique. Chapters 1 through 3 discuss Nuke basics, which are important for understanding where things are and how to create simple composites. Chapters 4 through 7 cover important tools and techniques. In Chapter 8 and onward, advanced tools and techniques are explained.

What this book covers


This book teaches how to use Nuke from its very basic interface to its very
advanced toolsets, including the 3D engine, Camera Projection, and
Camera Tracking. Although the book teaches a fair amount of
compositing theory, there is not enough space here to cover that topic in
depth. Some of the theory discussed in this book may be new to you, but
my intention is to cover just enough so you understand how to use Nuke.
If you want to dive further into the theory, two of my favorite books are
Ron Brinkmann’s The Art and Science of Digital Compositing and Steve
Wright’s Digital Compositing for Film and Video.
How to use this book
As you advance through the chapters in this book, the later lessons rely on
knowledge you learned in the previous lessons. Chapter 2 relies on what
you learned in Chapter 1, and so on. Because of this, I recommend
completing the exercises in the chapters in order.

In the book you will find explanatory text and numbered steps. Ideally,
you should complete each numbered step exactly as it is written—
without doing anything else (such as adding your own steps). Following
the steps exactly as written will give you a smooth experience. Not going
through the steps as they are written might result in the next step not
working properly, and could well lead to a frustrating experience. Each
series of steps is also designed to introduce you to new concepts and
techniques. As you perform the steps, pay attention to why you are
clicking where you are clicking and doing what you are doing, as that will
truly make your experience a worthwhile one.

You can use this book on your own through self-study or in a classroom.

Using the book for self-study: If you’re reading this book at your
own pace, follow the instructions in the previous paragraph for your first
read-through of the chapters. However, as you are not limited by any time
frame, I recommend going through chapters a second time, and trying to
do as much of the work as possible without reading the steps. Doing so
can help you better understand the concepts and tools being taught. Also,
the book leaves a lot of room for further experimentation. Feel free to use
the tools you’re learning to take your compositions further the second
time you run through a chapter.

Using the book in a classroom setting: You can use this book to
teach Nuke in a classroom. As a course, the material is designed to run for
roughly 40 hours, or five eight-hour days. I suggest that the trainer run
through a chapter with the students listening and writing down notes; the
trainer should explain the steps to the class as they are shown on screen
while taking questions and expanding on the text where necessary. Once a
chapter has been presented from start to finish, the instructor should give
students time to run through the same chapter on their own in the
classroom in front of a computer, using the book to read the instructions
and follow the steps. This second pass will reiterate everything the trainer
has explained and, through actual experience, show the students how to
use the software while the trainer is still there to answer questions and
help when things go wrong.

INSTALLING NUKE
While this book was originally written for Nuke version 8.0v1, The
Foundry updates Nuke on a regular basis and the lessons can be followed
using more recent updates. Small interface and behavior updates might
slightly alter the Nuke interface from version to version, especially for so-
called “point” updates (for example, if Nuke version 8.1 was released). I
recommend using this book with Nuke version 8.0v1 if you haven’t
already downloaded the most current version and you want the exact
results that are shown in the book.

You can download Nuke in a variety of versions from The Foundry’s web site, www.thefoundry.co.uk, as discussed in the next sections.

Different flavors of Nuke


Nuke comes in three different flavors with different features at different
prices. There is only a single installation file for Nuke, but the license you
purchase determines which type of Nuke you will be running. The
Foundry offers a 15-day trial license, so you can try it before you purchase
it (see “Getting a trial license,” later in this section).

Here are the three flavors of Nuke.

1. Nuke PLE (Personal Learning Edition): This license (or lack of) is
free—as in, you pay nothing. You can install Nuke on your computer and
not purchase a license. With the PLE you can use Nuke as much as you
want, although certain limitations apply. These include the placement of a
watermark on the Viewer and on renders, and the disabling of WriteGeo,
Primatte, FrameCycler, and Monitor Output. Keep in mind that Nuke
project files saved with the PLE version can be opened only with the PLE
version.

2. Nuke: This is regular Nuke—the flavor this book covers. Nuke requires

a trial license or regular paid license, which should cost about $4,200.

3. NukeX: This license includes all the regular Nuke features with a few
additional high-end tools. These tools include, among other things, the
Camera Tracker, Particles System, PointCloudGenerator, LensDistortion,
DepthGenerator, FurnaceCore plug-ins, and PRman-Render (allowing for
RenderMan integration). NukeX costs around $8,150. Chapter 10 covers
the Camera Tracker and shows how to use it under the NukeX license;
however, the exercises in the chapter can also be done without a NukeX
license.

Downloading Nuke
To download Nuke, follow these steps.

1. Go to www.thefoundry.co.uk/products/nuke-product-family/nuke.

2. In the bar on the right, click Product Downloads.

3. Register with your email address and a password.

4. Select the latest copy of Nuke for your operating system (Mac,
Windows, or Linux). You can also download older versions of Nuke if
necessary.

5. Follow the instructions for installation on your specific operating system.

Getting a trial license


After successfully installing Nuke, when you double-click the Nuke icon it
explains that because you don’t have a license yet, you can use Nuke
under the PLE license. If you would like to use a fully licensed Nuke or
NukeX, you will have to buy Nuke, rent Nuke (both available on The
Foundry’s web site shown below), or get a free 15-day trial license.

As there is no functional difference between getting a Nuke trial license or a NukeX trial license, I recommend getting a NukeX trial license. To get your free 15-day trial NukeX license, do the following:

1. Go to www.thefoundry.co.uk/products/nuke-product-family/nuke/trial.

2. On the Free Trial page, fill in the form and click Continue.

The System ID, which is the first entry to fill in on this next page, is the
unique code of your computer—the free license will be locked to that
computer. The section below the entry field called Finding The System ID
explains where to find this number on your computer.

3. After you complete the form and click Continue, follow the rest of the
instructions on The Foundry’s web site for how to install the license on
your operating system.

ADDITIONAL TECHNICAL REQUIREMENTS


Nuke is a very powerful piece of software, even though its system
requirements are pretty low. If you bought your computer in the last
couple of years, you are probably OK. The requirements are listed on The
Foundry web site, but here are three things you should really check to
make sure you have:

Workstation-class graphics card, such as NVIDIA Quadro series, ATI FireGL series, R3D Rocket, or newer. Driver support for OpenGL 2.0.

Display with at least 1280×1024 pixel resolution and 24-bit color.

Three-button mouse. This kind of mouse is really a must as Nuke uses the middle mouse button extensively. A scroll wheel, by the way, can serve as the middle mouse button.

For a full list of Nuke’s system requirements, visit www.thefoundry.co.uk/products/nuke-product-family/nuke/sys-reqs.

ABOUT THE MEDIA FILES


You can follow along with the exercises throughout this book by using the
files that are available online. The files are a mix of real productions that
my colleagues or I created in recent years and some shot elements I
intended for use with this specific book.

To access the download and install the files on your computer, follow
these steps:

1. Register your book at www.peachpit.com/nuke1012E. If you don’t already have a Peachpit account, you will be prompted to create one.

2. Once you are registered at the Peachpit web site, click the Account link,
select the Registered Products tab, and click the Access Bonus Content
link that appears next to the Nuke 101 book image.

A new page opens with the download files listed as individual zip files
ordered by chapter number.

3. Download each zip file and copy it to your hard drive.

4. Unzip the lesson files for each chapter. Each chapter has its own
directory. Some chapters use files from other chapters, so you need to
unzip all the files.

5. Create a directory on your hard drive and name it NukeChapters.

6. Drag the chapter folders you unzipped into the NukeChapters directory (after you have done so, you can delete the zip files from your system).

ACKNOWLEDGMENTS
I’ve been teaching compositing since 2001. When Nuke started becoming
the tool of choice for a lot of the studios around me, I decided to write a
course that focused on it. I started writing the course in the spring of
2009 with help from The Foundry, whose staff was very kind and
forthcoming. I would specifically like to thank Vikki Hamilton, Ben
Minall, Lucy Cooper, John Wadelton, and Matt Plec.

I finished writing the original course in the autumn of 2009. I taught it several times at Soho Editors Training in London; they were kind enough to let me try out the new course at their training facility. The course was well received, so between sessions I updated, corrected, and expanded on the original course.

About a year after that, I approached Peachpit Press with the idea of
turning the course into a book. Karyn Johnson, the book’s senior editor,
took on the project and after a long digestion period I sat down and
started adapting the course into a book. Karyn made sure I had the best
support I could possibly have, and with the help of the wonderful team at
Peachpit, including Corbin Collins and Kelly Kordes Anton, I managed to
complete the book to the high standard that Peachpit expects of their
writers. Thanks also go out to the kind friends and colleagues who gave
me materials to use for the book: Alex Orrelle, Alex Norris, Hector
Berrebi, Dror Revach, Assaf Evron, Menashe Morobuse, and Michal
Boico.

For the second edition, started at the end of 2013, Karyn again stepped up
and pushed me to make the book even more than it was before—striving
for perfection. Rebecca Rider taught me a lot about how badly I write,
fixing my English as much as I would let her. I also got help from a lot of
friends and colleagues again (to add to the previous list): Oded Binnun,
Dor Shamir, Or Kantor, Jonathan Carmona, Itay Greenberg, Paul Wolf,
and Shani Hermoni. Wow. I owe a lot to a lot of people.

The book uses the following footage:

Alex Norris gave permission to use footage from two of his short film productions: This Is Christmas and Grade Zero. This is used in Chapters 1, 6, and 8.

The footage in Chapter 3 is taken from a personal short film called Goose by Dor Shamir (dorshamir.blogspot.com), Shai Halfon, and Oryan Medina (oryanmedina.com). Special thanks to Geronimo Post&Design. Jonathan Carmona did the 3D rendering specifically for this book.

The footage in Chapter 7 is taken from a short film called Aya directed by
Mihal Brezis and Oded Binnun (who was also DP), starring Sarah Adler
and Ulrich Thomsen, with thanks to production companies Divine
Productions and Cassis Films.

The bullet in Chapter 8 was rendered especially for this book by Dror
Revach.

The footage in Chapter 9 is taken from a yet unnamed short film by Or Kantor, who wrote, designed, modeled, animated, and composited the whole thing. Thanks again to Jonathan Carmona, who helped take Or’s files and render usable elements just for this book.

The footage for the panorama in Chapter 10 is by Assaf Evron.

The camera and geometry for Chapter 11 were solved by Michal Boico.

And finally: The second edition was a breeze to write compared to the first
one. Maybe that was because I knew what I was getting into—but more
important, my family knew what I was getting them into, and so things
were a lot more relaxed. So thank you to my wife, Maya, and my two sons,
Jonathan and Lenny, who had to bear with my long days and long nights
of writing, gave me the quiet and solitude I needed, and believed (and
prayed) that I would finish this second edition quickly so life could get
back to normal. And so I have.


1. Getting Started with Nuke


Nuke uses a node-based workflow to drive image manipulation.
Starting from a source image or images, you can add various
types of processors, called nodes, in succession until you
achieve your desired result. Each node performs a specific,
often very simple, function, and its output is passed on to the
next node’s input using a connector called a pipe. This series of
nodes is usually called a process tree or a flow.

A Nuke project usually starts when you bring in images from your hard drive. You then insert more nodes after the image nodes and connect them with pipes until you achieve a desired look. Render the process tree to disk for the final result; you can also save it in what’s called a Nuke script, which you can open later if you need to make changes.

In this lesson, I explain the nuts and bolts of the Nuke interface so you
will feel at ease clicking where you need to. It may seem boring, but it’s a
must. You need to know the components of the Nuke interface because
that is the foundation of all the cool stuff you’ll do later.

COMPONENTS OF THE GRAPHIC USER INTERFACE


Nuke’s interface consists of a large window that can be split in various
ways. By default the Nuke interface appears as it does in FIGURE 1.1.

FIGURE 1.1 The default Nuke interface is split into four panes.

The default layout is split into four key areas, called panes, populated by six panels. Yes, that’s right, panes are populated by panels. Confusing terminology, I agree. The first pane, the strip at the very left, is populated by the Nodes Toolbar panel. The black pane that takes up the top half of the screen is populated by the Viewer. Beneath that, there’s the pane populated by the Node Graph, which is also called the DAG (Directed Acyclic Graph), the Curve Editor, and the Dope Sheet. The large empty pane on the right is populated by the Properties Bin.

At the top left of every pane there’s a tab with the name of the panel on it (except for the Nodes Toolbar). Remember, the pane containing the Node Graph panel also contains the Curve Editor panel and the Dope Sheet panel. You can click their respective tabs to switch between the three.

The Content menu


The Nuke interface is completely customizable. You can split the interface into as many panes as you want and have as many tabs in each of them as you want, populated by whichever panels you prefer. You do all this with the Content menu, the gray box in the top-left corner of each pane (FIGURE 1.2).

FIGURE 1.2 Use the Content menu to control the Nuke window layout.

You should become familiar with the Content menu. It enables you to split
the current pane either vertically or horizontally, creating another pane in
the process. It also lets you detach the pane or tab from the rest of the
interface, which allows it to float above the interface (I cover several uses
for this technique later on). You can also use the Content menu to
populate the associated pane with any panel, whether it is a Curve Editor,
a Node Graph, a Script Editor, and so on.

When you hover your mouse pointer between the Node Graph and the
Viewer, the cursor changes to show that you can move the divide between
the two panes to make the Viewer bigger or the Node Graph bigger. You
can drag any separating line to change the size of the panes.

Hover your mouse pointer in the Node Graph and press the spacebar on your keyboard to turn the whole window into the Node Graph. Press the spacebar again to get the rest of the interface back. You can do this with any pane; simply hover your mouse pointer in that pane. This procedure is very useful if you want to look at only the Viewer.

A rundown of the various panels


The different Nuke panels are as follows:

Curve Editor enables you to edit animation curves.

Dope Sheet is a timeline representation of your clips and keyframes.

Nodes Toolbar contains all the different nodes you can use to drive
Nuke. These are split into several sections or toolboxes represented by
little icons.

Scope comprises three scopes (Histogram, Waveform, and Vector) designed to show the colors of the images in three different graphical ways.

Node Graph or DAG allows you to build the process tree.

Pixel Analyzer helps you pick out and analyze colors from the Viewer.

Properties Bin contains sliders and knobs to control your various nodes.
Progress Bars is the panel that tells you how long to wait for a process to finish, whether it is a render, a tracking process, or anything else that takes a significant amount of time. This panel pops up whenever it’s needed, but you can dock it in a pane, which some find convenient.

Script Editor is a text window where you can write Python scripts to
automate various features of Nuke.

Error Console is used as a display for the error log.

New Viewer opens a new Viewer where you can view, compare, and
analyze your images.

Using the Content menu, you can change the interface to fit the specific
needs and preferences of different users.

The menu bar


The menu bar at the very top of the Nuke interface holds more
functionality. Here’s an explanation of the various menus (FIGURE 1.3).

FIGURE 1.3 The menu bar (on Mac OS X)

File contains commands for disk operations, including loading, saving, and importing projects—but not images.

Edit contains editing functions, preferences, and project settings.

Layout facilitates restoring and saving layouts.

Viewer helps you add and connect Viewers.

Render is used to launch a render as well as perform various other related commands.

Cache handles saved data in various locations for later retrieval.

Help contains access to a list of hot keys, user documentation, training resources, tutorial files, and Nuke-related email lists.

Using the Content menu, you can customize the user interface. You can
then use the Layout menu to save and retrieve the layout configuration.

Let’s practice this in order to place the Progress Bars panel at the bottom
right of the interface.

1. Launch Nuke.

2. Click the Content menu next to the Properties tab near the top right of
the screen (top of the Properties panel).

3. From the Content menu, choose Split Vertical (FIGURE 1.4).

FIGURE 1.4 Using the Content menu to split a pane into two panes

You now have another pane, which holds no panel, at the bottom right of
the interface (FIGURE 1.5). Let’s populate it with the Progress Bars
panel (FIGURE 1.6).


FIGURE 1.5 The newly created pane holds no panel yet.

FIGURE 1.6 Creating a Progress Bars panel to populate the new pane

4. Click the Content menu in the newly created pane and choose Progress Bars.

The Progress Bars panel has been created and is populating the new pane
(Figure 1.6). You don’t need too much space for this panel, so you can
move the horizontal separating line above it down to give more space to
the Properties Bin.

5. Click the line separating the Properties Bin and the Progress Bars panel
and drag it down.

I like having the Progress Bars panel docked in a pane at the bottom right
of the interface. It means I always know where to look if I want to see a
progress report. And because it is docked, the Progress Bars panel doesn’t
jump up and interfere when I don’t need it. If you like this interface
configuration, you can save it. You can use the Layout menu bar item to
do that.

6. From the Layout menu, choose Save Layout 2 (FIGURE 1.7).

FIGURE 1.7 Saving the Layout using the menu option

The layout has now been saved. Let’s load the default layout and then reload this new one to make sure it has been saved.
7. From the Layout menu, choose Restore Layout 1.

You can now see the default window layout. Now let’s see if you can reload
the window layout you just created.
8. From the Layout menu, choose Restore Layout 2.

Presto! You now have full control over Nuke’s window layout.

The contextual menu


Using a two-button mouse, you can right-click an interface or image
element to quickly access appropriate editing options. To display a
context menu using a one-button mouse, Ctrl-click the interface or image
element. (Generally, you right-click on Windows and Ctrl-click on Mac.)
The contextual menus vary depending on what you click, so they are
covered within discussions throughout this book (FIGURE 1.8).

FIGURE 1.8 An example of a contextual menu

Hot keys
Like most other programs, Nuke uses keyboard shortcuts, or hot keys, to speed up your work. Instead of clicking somewhere in the interface, you can press a key or combination of keys on your keyboard. You are probably familiar with basic hot keys such as Ctrl/Cmd-S for saving.

NOTE

Nuke is cross-platform, meaning it works on Windows, Linux, and Mac OS X. Windows and Linux machines use the modifier key called Control (Ctrl), but on the Mac this is called Command (Cmd). I use the short form for these, but will always mention both options. Ctrl/Cmd-S means press Control or Command and the S key simultaneously.

Keyboard shortcuts in Nuke are location specific as well. Pressing R while hovering the mouse pointer in the Node Graph will produce a different result than pressing R while hovering the mouse pointer in the Viewer.

NODES
Nodes are the building blocks of the process tree. Everything that happens
to an image in Nuke happens by using nodes.

Let’s explore the node’s graphical representation (FIGURE 1.9).

FIGURE 1.9 A node’s anatomy

A node is usually represented by a rectangular shape with the name of the node in the middle. There’s usually an input (or more than one) at the top of the node, and a single output at the bottom. On some nodes there is a mask input on the side of the node, the functionality of which we discuss in more detail in Chapter 3.
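As a mental model, you can think of a node as a small object with named input slots, and of the tree as what you get by following those input pipes upstream. The following toy Python class is purely illustrative (it is not Nuke’s API):

```python
class Node:
    """Toy model of a node's anatomy: inputs at the top, one output at the bottom."""
    def __init__(self, name, n_inputs=1):
        self.name = name
        self.inputs = [None] * n_inputs  # each slot can hold one upstream node

    def set_input(self, index, upstream):
        # Connecting a pipe: this node's input now flows from 'upstream'.
        self.inputs[index] = upstream

    def upstream_names(self):
        # Walk every pipe flowing into this node, recursively.
        names = []
        for node in self.inputs:
            if node is not None:
                names.append(node.name)
                names.extend(node.upstream_names())
        return names

read = Node("Read1", n_inputs=0)    # a Read node starts a tree: no inputs
merge = Node("Merge1", n_inputs=2)  # a Merge node has A and B inputs
merge.set_input(0, read)            # pipe Read1 into Merge1's first input
print(merge.upstream_names())       # -> ['Read1']
```

The point of the sketch is only that a tree is nothing more than nodes holding references to the nodes upstream of them.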

Creating a node
There are several ways to create nodes. Probably the best way is to choose
a node from the Nodes Toolbar. The Nodes Toolbar is split into several
toolboxes, as shown in TABLE 1.1.

TABLE 1.1 The Various Toolboxes in the Nodes Toolbar

As mentioned, there are other ways to create nodes. You can choose a
node from the Node Graph’s contextual menu, which mirrors the Nodes
Toolbar (FIGURE 1.10).

FIGURE 1.10 The contextual menu in the Node Graph

The easiest way to create a node, if you remember its name and you’re
quick with your fingers, is to press the Tab key while hovering the mouse
pointer over the Node Graph. This opens a dialog box in which you can
type the name of the node. As you type, a “type-ahead” list appears with
matching node names, beginning with the letters you are typing. You can
then use the mouse, or the up and down arrows and the Enter key, to
create that node (FIGURE 1.11).


FIGURE 1.11 Pressing the Tab key in the Node Graph allows
you to create and name a new node.
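The type-ahead list is essentially case-insensitive prefix matching over the available node names. A rough sketch of the idea (plain Python, hypothetical helper name):

```python
def type_ahead(node_names, typed):
    """Return the node names that start with what the user has typed so far."""
    typed = typed.lower()
    return [name for name in node_names if name.lower().startswith(typed)]

toolbox = ["Read", "Reformat", "Roto", "Merge", "Premult", "Transform"]
print(type_ahead(toolbox, "re"))  # -> ['Read', 'Reformat']
```

Each extra letter you type narrows the list until you can confirm your choice with Enter.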

The Read node


Unlike in many other applications, you import footage into Nuke in the
same way you do everything else in Nuke—using a node.

Let’s practice creating a node by importing a bit of footage into Nuke. This
will also give you something to work with. Do either one of the following:

Click the Image toolbox in the Nodes Toolbar and then click the Read
node.

Hover the mouse pointer over the Node Graph, or DAG, and press the
R key (FIGURE 1.12).

FIGURE 1.12 The Image toolbox gives access to image creation nodes.

The Read node is a bit special: When creating it, you first get the File
Browser instead of just a node.

The File Browser


You use the File Browser whenever you are reading or writing images to
and from the disk drive. It is a representation of your file system, much
like the normal file browser found in Windows or Mac OS X.

Nuke doesn’t use the basic operating system’s file browser, as it requires
extra features such as video previews and it needs to be consistent across
all operating systems.

See FIGURE 1.13 for an explanation of the browser’s anatomy.

FIGURE 1.13 Anatomy of Nuke’s File Browser


NOTE

Please read the introduction of this book carefully to learn how to copy the files from the download onto your hard drive.

Let’s walk through the File Browser together by using the files you downloaded from www.peachpit.com/nuke1012E (http://www.peachpit.com/nuke1012E).

1. Browse to the Nuke101 folder that you copied onto your hard drive and
click the Chapter01 folder.

2. Click the little arrow at the top-right corner of the browser to bring up
the File Browser’s viewer.

3. Click once (don’t double-click) the file named backplate.####.jpg 1-72 (FIGURE 1.14).

FIGURE 1.14 Use the File Browser’s viewer to preview images before you import them.

4. Scrub along the clip by clicking and dragging the timeline at the bottom
of the image.

This Viewer makes it easy to see what you are importing before you
import it in case you are not sure what, for example, Untitled08 copy 3.tif
is.
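The #### in the file name is Nuke’s frame-padding notation: each # stands for one digit of the zero-padded frame number, so frame 7 of this 1-72 sequence lives on disk as backplate.0007.jpg. A rough sketch of how such a pattern expands (plain Python, for illustration only):

```python
def frame_filename(pattern, frame):
    """Expand Nuke-style '#' padding: '####' means a 4-digit,
    zero-padded frame number."""
    pad = pattern.count("#")
    return pattern.replace("#" * pad, str(frame).zfill(pad))

print(frame_filename("backplate.####.jpg", 7))   # -> backplate.0007.jpg
print(frame_filename("backplate.####.jpg", 72))  # -> backplate.0072.jpg
```

Because of this notation, the File Browser can show an entire 72-frame sequence as a single entry.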

5. Click the Open button.

There is now a node in your Node Graph called Read1 (FIGURE 1.15)!
Hurrah!

FIGURE 1.15 The new Read node is now in your DAG near
your Viewer1 node.

Another thing that happened is that a Properties panel appeared in your Properties Bin. This Properties panel represents all the properties that relate to the newly created Read1 node (FIGURE 1.16).

FIGURE 1.16 Using the Properties panel, you can change the functionality of your nodes. You’ll learn more about this in Chapter 2.

You will learn about editing properties in Chapter 2.

THE VIEWER
Another important part of the Nuke user interface is the Viewer. Without it, you will be lost as far as compositing goes. You use the Viewer to look at your work as you go, to receive feedback when you are editing nodes, to compare different images, and to manipulate properties on the image. You will explore each of these as you work through this book.

Notice that aside from the Read1 node you created, there’s also another
node in your Node Graph called Viewer1. Also notice that your Viewer is
called Viewer1. Every Viewer node represents an open Viewer panel. To
view an image in Nuke, simply connect the Viewer node’s input to the
output of the node you want to view. It will then appear in the Viewer
itself.

You can connect any node to a Viewer node in two ways:

You can click the Viewer node’s little input arrow and drag it from the
input of the Viewer node to the node you want to view.

You can do the reverse and drag the output of the node you want to
view to the Viewer node.

Either method connects the node to the Viewer node’s first input, called input 1.

The connecting line that appears between the two nodes is called a pipe. It
simply represents the connections between nodes in the Node Graph
(FIGURE 1.17).

FIGURE 1.17 A process tree is a series of pipes.

Another way to connect a node to the Viewer node’s input 1 is to select the
node you want to view and press the number 1 on the keyboard.

1. Select the Read node by clicking it once.

2. Keep hovering over the Node Graph and press 1 on the keyboard.

Viewer1 now shows the image you brought into Nuke—an image of a man
walking under an arch. FIGURE 1.18 illustrates the Viewer’s anatomy.
The buttons are explained in more detail in further chapters.

FIGURE 1.18 The Viewer’s anatomy

Navigating the Viewer


While compositing, you often need to view different parts of an image,
zoom in to closely inspect an aspect, and zoom out for an overall view.
Because you perform these tasks often, a few hot keys are worth
remembering; these are listed in TABLE 1.2.


TABLE 1.2 How to Navigate the Viewer

NOTE

All these hot keys will also let you navigate the DAG and
Curve Editor.

Using the Viewer


Let’s say you want to view the different color channels of the image you
just brought in. Here’s how you would go about it:

1. While hovering the mouse pointer over the Viewer, press the R key to
view the red channel.

The channel display box on the far left is now labeled R. You can also
change which channel you are viewing by clicking in the channel display
drop-down menu itself (FIGURE 1.19).

FIGURE 1.19 The channel display box shows R for the red
channel.

2. Press R again to display the three color channels again.

You’re now back to viewing all three color channels, and the channel
display box changes back to RGB (FIGURE 1.20).

FIGURE 1.20 The channel display box is back to showing RGB.

Now let’s load another image and view it.

3. Hover your mouse pointer over the Node Graph and press R on the
keyboard. The File Browser opens again.

Notice how pressing R while hovering the mouse pointer in the Viewer
and in the Node Graph produces different results.

4. Double-click the file named backplate_graded.####.jpg to bring it into Nuke.

You now have another Read node in your interface. If one Read node is
overlapping the other, you can click and drag one of them to make space.
Let’s view the new Read node as well.

5. Select the new Read node by clicking it once and pressing 1 on the
keyboard.

You now see the new image, which is a graded (color-corrected) version
of the previous image. Other than that, the two images are the same.


Viewer inputs
Every Viewer in Nuke has up to 10 inputs, and it’s very easy to connect nodes to them. Selecting a node and pressing a number on the keyboard (as you pressed 1 before) connects that node to the corresponding Viewer input. This lets you connect several different parts of the process tree to the same Viewer and then change what you’re looking at with ease.

Let’s connect the first Read node to the first Viewer input and the second
Read node to the second Viewer input.

1. Select the first Read node (Read1) and press 1 on the main part of the
keyboard.

2. Select the second Read node (Read2) and press 2 on the keyboard
(FIGURE 1.21).

FIGURE 1.21 Two nodes connected to two different inputs of Viewer1

To view the different inputs of a Viewer, hover the mouse pointer over the
Viewer itself and press the corresponding numbers on the keyboard.

3. Hover your mouse pointer over the Viewer and press 1 and 2 several
times. See how the images change from one input to another.

Using this method you can keep monitoring different stages of your
composite as you are working. This is also a good way to compare two
images.

Playing a clip
Playing a clip is pretty much mandatory in a compositing package. Let’s get this piece of moving image playing in realtime.

This notion of realtime is very subjective in relation to the specific footage you are using. For film, realtime is 24fps (frames per second). For PAL video, it’s 25fps. For some other purpose it might be 100fps. Some computers will never be able to play footage of certain resolutions and frame rates in realtime simply because they are too slow, no matter what tricks you try to apply to them.

This aside, you can strive to play clips in realtime. You will need to define
what realtime is for your footage. This footage is 25fps. Let’s set the
Viewer to 25fps. You do this using the fps Input field above the Timebar.
When playing, this Input field shows how fast playback really is instead of
how fast it is supposed to be playing.
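Realtime simply means Nuke must deliver each frame within a fixed time budget: at 25fps that budget is 1000 ÷ 25 = 40 milliseconds per frame. As a quick illustration (plain Python, not part of Nuke):

```python
def frame_budget_ms(fps):
    """Time available per frame, in milliseconds, for playback
    to count as realtime at the given frame rate."""
    return 1000.0 / fps

print(frame_budget_ms(25))  # -> 40.0 ms per frame for PAL video
print(frame_budget_ms(24))  # film: roughly 41.7 ms per frame
```

If loading and processing a frame takes longer than its budget, the fps field will report a lower, non-realtime rate.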

1. Set the Viewer fps field to 25 (FIGURE 1.22).

FIGURE 1.22 The Viewer fps field

Now when you click Play, Nuke loads each frame from the disk, applies any calculations to it, caches the result in RAM, and presents it in the Viewer. It then moves on to the next frame. Once Nuke has cached all the frames, it plays the cached frames in the Viewer instead of going back to the originals, which gives Nuke better playback speed. Nuke attempts to play at the given frame rate, and the fps field displays the actual frame rate that is playing, whether it’s realtime or not.

Frames that have been cached in RAM are represented as a green line under the playhead in the Timebar, as shown in Figure 1.23. Once the whole Timebar is green, you should be able to get smooth playback.

FIGURE 1.23 The green line represents RAM cached frames.
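The caching behavior described above can be pictured as a simple dictionary cache: the first playback loop pays the cost of disk reads and processing, and later loops reuse the stored results. This toy Python model only illustrates the idea and is not Nuke’s implementation:

```python
class PlaybackCache:
    """Toy sketch of Viewer RAM caching."""
    def __init__(self, load_frame):
        self.load_frame = load_frame  # stands in for disk read + node math
        self.cache = {}
        self.disk_reads = 0

    def frame(self, n):
        if n not in self.cache:        # first visit: load, process, cache
            self.disk_reads += 1
            self.cache[n] = self.load_frame(n)
        return self.cache[n]           # later visits play straight from RAM

cache = PlaybackCache(load_frame=lambda n: "pixels of frame %d" % n)
for _ in range(2):                     # loop a tiny three-frame clip twice
    for n in range(1, 4):
        cache.frame(n)
print(cache.disk_reads)                # -> 3; the second loop was all cached
```

This is why the first loop through a clip is slow and subsequent loops speed up once the Timebar turns green.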

TIP

The hot keys for playing in the Viewer are easy to use and
good to remember. They are exactly the same as in Avid
Media Composer and Final Cut Pro: L plays forward, K
pauses, and J plays backward. Pressing L and J one after the
other enables you to easily rock ’n’ roll your shot.

2. Click the Play Forward button on the Viewer controls. Let it loop a
couple of times to cache, and then see how fast it is actually playing.

3. Click Stop.

Chances are it played pretty well and close to, if not exactly, 25fps. Let’s
give it something more difficult to attempt.

4. Change the value in the fps field to 1000.

5. Click Play and again watch how fast Nuke is playing the footage.

Nuke probably isn’t reaching 1000fps, but it should be telling you what it
is reaching. How thoughtful. My system reaches around 150fps at this
resolution.

At the end of each chapter, quit Nuke to start the next chapter with a fresh
project. You can play around more if you like, but when you’re finished...

6. Quit Nuke.

That’s it for now. We will do actual compositing in the next lesson. Thanks
for participating. Go outside and have cookies.


2. Touring the Interface with a Basic Composite


Digital compositing is about taking several widely different
sources and combining them into one seamless whole. As you
will learn in this chapter, Nuke gives you the right toolset to do
just that. It helps you create a successful composite, which may
involve you using numerous tools—such as keying, color
correction, matte extraction, 2D or 3D tracking, and other
custom tools—in combination and in a specific order. The final
image will rely on your, the artist’s, talent; however, having an
easy-to-access toolset will drive your vision instead of slowing it
down.

WORKING WITH PROCESS TREES


Using a process tree is a very intuitive way to change the appearance of an image. Essentially, it represents, in graphical form, the process the image or images are going through. If you know what process you want your images to go through, the process tree directly represents the process you are thinking of.

Nuke has several kinds of image manipulation tools. By using them in a specific order, one after another, you can create your desired effect. This process is similar in any compositing application, but in Nuke, everything is open and under your control. Because each individual process is exposed, you can achieve a greater level of control, which enables you to keep tweaking your work to your or your client’s needs quickly and with relative ease.

Usually, you start a composite with one or more images you have brought
in from disk. You manipulate each separately, connect them together to
combine them, and finally render the desired result back to disk. By
taking these steps, you build a series of processors, or nodes, which when
viewed together, look like a tree, which is why it’s called a process tree.
Nuke uses a second analogy to describe this process: that of flowing water.
A tree can also be called a flow. As the image passes from one node to
another, it flows. This analogy uses terms such as downstream (for nodes after the current node) and upstream (for nodes before the current node). Nuke uses these two analogies interchangeably.

A Nuke process tree is shown in FIGURE 2.1.


FIGURE 2.1 This is what a basic Nuke tree looks like.

In this figure, you can see a relatively basic tree that is made up of two
images: the smiling doll on the top left and the orange image on the top
right. The images pass through several nodes—a resolution-changing
node called Reformat1, a color-correction node called Premult1, and a
transformation node called Transform1—until they merge together at the
bottom of the tree with another node called Merge1 to form a composite.
The lines connecting the nodes to each other are called pipes.

Trees usually flow in this way, from top to bottom, but that is not strictly
the case in Nuke. Trees can flow in any direction, though the tendency is
still to flow down.

FIGURE 2.2 shows another, much more complex, example of a Nuke process tree.

FIGURE 2.2 This is what a more complex Nuke tree looks like.

The black boxes are all images being processed and connected together
with a large variety of nodes. The flow of this tree is down and to the left.
At the very end of this tree is a yellow box, which is where this tree ends.
This makes for a pretty daunting image. However, when you are the one
building this flow of nodes, you know exactly what each part of the tree is
doing.

CREATING A SIMPLE PROCESS TREE


Let’s start building a composite so you better understand the process tree.
As you’ve seen, the tree is a collection of nodes connected via pipes. A
node’s anatomy looks like FIGURE 2.3.


FIGURE 2.3 The node’s anatomy

The tree flows from the output of one node to the input of the next. Not all
nodes have all the elements shown in Figure 2.3. A Read node, which you
will use again in a moment, has an output only because it is the beginning
of the tree and has no need for an input. Some nodes don’t have a Mask
input (explained in Chapter 3), and some nodes have more than one
input.

In Chapter 1 you learned how to read images from disk using a Read node,
so let’s do that again and learn a few more options along the way.

1. Launch Nuke.

2. While hovering the mouse pointer over the DAG (Directed Acyclic
Graph, also called the Node Graph), press the R key.

3. Navigate to the Chapters directory, which you selected when you copied the files from the download as described in the Introduction. Navigate into the Chapter02 directory.

In Chapter 1 you used a File Browser to bring in one sequence, then another to bring in another sequence. However, if you select more than one sequence, or image, more than one Read node will be created.

You can multiple-select files in the File Browser in several ways:

• Click and drag over several file names.

• Shift-click the first and last file to make a continuous selection.

• Ctrl/Cmd-click to select several files not immediately next to each other.

• Press Ctrl/Cmd-A to select all.

• You can also use Ctrl/Cmd-click to deselect files.

• Select one file name, then click the Next button at the bottom right,
select another file, and click the Next or Open button again. This is the
way you’ll multiple-select in this exercise.

4. Select the file called maya.png and click the Next button, then select
background.####.png and click the Open button or press the
Enter/Return key on your keyboard.

You now have two new Read nodes in the DAG (FIGURE 2.4). For some reason, Nuke reverses the order of the Read nodes. The second image you brought in is labeled Read1 and should also be labeled background.0001.png. The first, Read2, should also be labeled maya.png. This second line of the label is generated automatically by Nuke and shows the current image being loaded by the Read node.


FIGURE 2.4 After bringing in two files, your DAG should
look like this.

Let’s view the images to see what you brought in.

NOTE

If the file called maya.png is Read1, not Read2, you did not
complete the steps as presented. This is not necessarily a
problem, but you’ll have to remember that my Read2 is your
Read1.

5. Click Read2 (maya.png), hover the mouse pointer over the DAG, and
press 1 on the keyboard.

You’re now viewing Read2 (maya.png), which is an image of a little doll. If you look at the bottom-right or the top-right corners of the image in the Viewer (you might need to pan up or down using Alt/Option-click-drag or middle mouse button-drag), you will see two numbers: 518, 829. These numbers represent the resolution of this image—518 pixels×829 pixels (FIGURE 2.5). This odd resolution simply came about when I placed this doll on my flatbed scanner, scanned her, and then cropped the resulting image. Nuke can work with any image resolution and can mix different resolutions together.

FIGURE 2.5 The resolution of the image is shown at the bottom-right corner in the Viewer.

Let’s look at the other image.

6. Click Read1 (background.####.png) and press 1, like before.


This image is part of a file sequence. It’s a rather dark image of a painter’s
toolbox. I shot this image in HD and, indeed, if you look at the same
corners of the image in the Viewer, you can see that the resolution for this
sequence is 1920×1080. Since this is a defined format (more on this in
Chapter 8), the bottom-right corner displays the name of the format
rather than the resolution. In this image, the bottom-right corner shows
HD.
Your goal in this chapter is to place the doll image inside the artist’s
toolbox—and for it to look believable. Let’s start by placing the doll image
over the background image.

MERGING IMAGES
The definition of compositing is combining two or more images into a
seamless, single image. You definitely need to learn how to do this in
Nuke. In layer-based systems, such as After Effects and Flame, you simply
place one layer on top of another to achieve a composite. In Nuke, you
combine two images by using several different nodes—chief among which
is the Merge node.

The simplest way to create and connect a node to an existing node is to select the existing node (after which the new node will be inserted), and then call the node you want to create, either by clicking it in the Nodes Toolbar or pressing the Tab key. The new node is then created, and its input is connected to the selected node’s output. Let’s do this.

TIP

You can also use the hot key M to create a Merge node.

1. Select Read2 (maya.png) and choose Merge from the Merge toolbox.

The Merge node has a slightly different anatomy than a standard node. It
has two inputs rather than just the one (FIGURE 2.6).

FIGURE 2.6 The Merge node you created has a free, still
unconnected, input.

NOTE

To see all the different layering operations the Merge node is capable of, click the Operation property in the Merge node’s Properties panel. To see the math of the operations, hover your mouse pointer over the Operation drop-down menu until a tooltip appears. Tooltips are available on practically every button in Nuke.

The Merge node can connect two or more images together using various
layering operations such as Overlay, Screen, and Multiply. However, the
default operation called Over simply places a foreground image with an
alpha channel over a background image. The A input, already connected,
is the foreground input. The unconnected B input is the background
input. Let’s connect the B input.
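The Over operation itself is simple per-pixel math: result = A + B × (1 − a), where A is the premultiplied foreground, B is the background, and a is the foreground alpha. A minimal sketch in plain Python (illustrative math, not Nuke code):

```python
def over(fg_rgb, fg_alpha, bg_rgb):
    """The 'over' formula for a premultiplied foreground:
    result = A + B * (1 - alpha), computed per channel."""
    return tuple(a + b * (1.0 - fg_alpha) for a, b in zip(fg_rgb, bg_rgb))

# Where alpha is 1 the foreground wins; where it is 0 the background shows.
print(over((0.5, 0.2, 0.1), 1.0, (0.9, 0.9, 0.9)))  # -> (0.5, 0.2, 0.1)
print(over((0.0, 0.0, 0.0), 0.0, (0.9, 0.9, 0.9)))  # -> (0.9, 0.9, 0.9)
```

Note that the formula trusts the foreground RGB to already be multiplied by the alpha, which is exactly why premultiplication matters in the next section.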

2. Click and drag Merge1’s B input toward Read1 and release it over Read1
(FIGURE 2.7).


FIGURE 2.7 Both inputs are now connected, creating a
composite.

3. Select Merge1 and press 1 on the keyboard to view the result in the
Viewer (FIGURE 2.8).

FIGURE 2.8 All this light purple discoloration wasn’t here before.

The image, however, looks wrong—it is washed out in an odd light purple
color.

Maybe you don’t have an alpha channel? Let’s have a look.

4. Click Read2 and press 1 on the keyboard to see it in the Viewer.

5. While hovering your mouse pointer over the Viewer, press the A key to
view the alpha channel (FIGURE 2.9).


FIGURE 2.9 The doll’s alpha channel

You can clearly see an alpha image here. It represents the area where the
doll is. The black area represents parts of the doll image that should be
discarded when compositing. So why aren’t they being discarded?

6. Press the A key again to switch back to viewing the RGB channels.

Notice that the black areas in the alpha channel are a light purple color,
the same as the discoloration in your composite. Maybe this is the source
of your problems?

Normally the Merge node assumes the foreground input is a premultiplied image. What’s that you say? You’ve seen that term premultiplied before but you’ve never quite figured out what it means? Read on.

Merging premultiplied images


The Merge node expects the foreground image to have both RGB channels
and an alpha channel. The alpha is used to determine which pixels of the
foreground image will be used for the composite. Nuke also assumes that
the foreground RGB channels have been multiplied by the alpha channel.
This results in an image where all black areas in the alpha channel are
also black in the RGB channels. This multiplication is usually the result of
3D software renders or that of a composite part way through the tree. It is
also the user’s choice, in whatever software package, to multiply the RGB
channels with the alpha channel, thus producing a premultiplied image,
whether users are aware of this or not.

Nuke’s Merge node expects the foreground image to be a premultiplied one, so it is important to know whether your image has been premultiplied or not. An image that hasn’t been premultiplied is called either a straight image or an unpremultiplied image. How do you know whether an image is premultiplied? Generally speaking, ask the person who gave you the image with the incorporated alpha if he or she premultiplied it (that person may very well be you). But here are a few rules of thumb:

Most 3D software renders are premultiplied.

Images that don’t have incorporated alpha channels are straight.

If black areas in the alpha channel are not black in the RGB channels,
the image is straight.

If you are creating the alpha channel (using a key, a shape-creation
node, or another method), you should check whether your operation
created a premultiplied image or not. As you learn the alpha-channel
creation tools, I will cover premultiplication in more detail.
Another reason you need to know the premultiplication state of your
image is because of color correction. Why? Because you can’t color
correct premultiplied images. You are probably not aware of this because
other software packages hide it from you. In Nuke, however,
premultiplication is something you need to take care of on your own. To
get your image to a state where you can color correct it, you will need to
unpremultiply it. You do this by dividing the premultiplied RGB channels
with the alpha, thus reversing the multiplication. This produces a straight
image that you can then color correct; after you’ve done so, you can then
reapply the multiplication. You will learn how to do this later in the
chapter.
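The math behind these two operations is straightforward: premultiplying multiplies each RGB channel by the alpha, and unpremultiplying divides by it (skipping pixels where alpha is zero, since the division is undefined there). A plain-Python sketch of the round trip, not Nuke code:

```python
def premult(rgb, alpha):
    """Premultiply: scale each RGB channel by the alpha."""
    return tuple(c * alpha for c in rgb)

def unpremult(rgb, alpha):
    """Unpremultiply: divide by the alpha to recover a 'straight' image.
    Where alpha is 0 the division is undefined, so leave those pixels alone."""
    if alpha == 0.0:
        return rgb
    return tuple(c / alpha for c in rgb)

straight = (0.8, 0.4, 0.2)
pre = premult(straight, 0.5)   # -> (0.4, 0.2, 0.1)
print(unpremult(pre, 0.5))     # round-trips back to (0.8, 0.4, 0.2)
```

The round trip is why you can safely unpremultiply, color correct, and then reapply the multiplication.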

Here are some do’s and don’ts:

Make sure you supply the Merge node with a premultiplied image as
the foreground image for its Over (most basic and default) operation.

Never color correct premultiplied images. Chapter 4 is all about color correction.

Always transform (move around) and filter (blur and so forth) premultiplied images.

You control premultiplication with two nodes in the Merge toolbox: Premult and Unpremult.

The rule says that if an image’s RGB channels aren’t black where the alpha
channel is black, then it isn’t a premultiplied image. When you looked at
Read2 you noticed exactly this. The black areas in the alpha channel were
a light purple color in the RGB channels (FIGURE 2.10). This means
this image is a...what? A straight image!

FIGURE 2.10 This image shows both the alpha and RGB
channels.

Because the Merge node expects a premultiplied image as its foreground input, you need to premultiply the image first.

1. While Read2 is selected, press Tab and type pre. A drop-down menu
appears with the Premult option. Use the down arrow key to navigate to
it and press Enter/Return. Alternatively, you can choose the Premult node
from the Merge toolbox (FIGURE 2.11).


FIGURE 2.11 The Premult node sits between Read2 and
Merge1.

There’s now a Premult node called Premult1 connected to Read2. The area
outside the doll’s outline is now black (FIGURE 2.12), indicating that it
is now a premultiplied image. You can proceed to place this image over
the background.

FIGURE 2.12 The area that was light purple is now black.

2. Click Merge1 and press 1 on your keyboard.

All that purple nastiness has disappeared. You are now seeing a composite
of the foreground image over the background (FIGURE 2.13).

FIGURE 2.13 Your first composite should look like this.

Is it really the Premult node that fixed this? Let’s double-check by disabling it and enabling it.

3. Select Premult1 and press D on the keyboard to disable the node (FIGURE 2.14).

FIGURE 2.14 You have just disabled Premult1.

An X appears on Premult1, indicating that it is disabled. If you look at the image, you can see you are back to the wrong composite. This shows that, indeed, it is the Premult1 node that fixed things. Let’s enable it again.

NOTE

When you disable a node, the flow of the pipe runs through the node without processing it. You can disable any node, and doing so is a very handy way of seeing exactly what effect a node has.

4. With Premult1 selected, press D on the keyboard.

What you now see is a classic Nuke tree: two streams flow into one. The
foreground and background connect through the Merge node.

Incidentally, to combine more images, you simply create another Merge node.

Before you go further, and there’s much further to go, save your Nuke
script. What’s a script? You’re about to find out.

Saving Nuke scripts


Nuke saves its project files, or scripts, as ASCII files. An ASCII file is a text
file that you can open in a text editor. ASCII files stand in contrast to
binary files, which don’t make any sense when you open them in a text
editor and are readable only by specific applications. The ASCII file
format of the Nuke script is great, and you explore this feature in Chapter
12. Nuke scripts have the .nk file extension.
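If you open a saved script in a text editor, each node appears as a readable block of knob names and values. The snippet below is only an approximation of what a Read node might look like inside a .nk file; the exact knobs and values vary between Nuke versions:

```
Read {
 inputs 0
 file backplate.####.jpg
 first 1
 last 72
 name Read1
 xpos -150
 ypos -80
}
```

Because the format is plain text like this, scripts diff cleanly in version control and can even be generated or edited programmatically.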

All Nuke script-related functions are accessible through the File menu and
respond to standard hot keys for opening and saving files.

1. Press Ctrl/Cmd-S, or choose File > Save, to save the file.

The plan is to create a folder in your Nuke101 folder called student_files, in which you will save all scripts and images.

2. In the File Browser that opens, navigate to the Nuke101 directory (that
you copied onto your hard drive from the downloadable files that come
with this book).

3. Click the Create New Folder icon at the upper left of the browser.

4. Create a folder called student_files and click OK.

5. Name the script by adding the name chapter02_v01 to the end of the
path in the path Input field at the bottom of the File Browser. Nuke adds
the file extension automatically.

6. Press Enter/Return.

Nuke just saved your script. You can quit Nuke, go have a coffee, come back, open the script, and continue working.
NOTE

You can change the amount of time between autosaves in the Preferences pane.

By default Nuke autosaves your project every 30 seconds, or if you are not
moving your mouse, it will save once after 5 seconds. But it doesn’t
autosave if you haven’t saved your project yet. You have to save your
project the first time. That’s why it is important to save your project early
on.

Another great feature in Nuke is the Save New Version option in the File
menu. You will save different, updated versions of your script often when
compositing, and Nuke has a smart way of making this easy for you. If you
add these characters to your script name, “_v##”, where the # symbol is a
number, using the Save New Version option in the File menu adds 1 to
that number. So if you have a script called nuke_v01.nk and you click File
> Save New Version, your script will be saved as nuke_v02.nk. Very
handy. Let’s practice this with your script, which you already named
chapter02_v01.nk before.
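The “_v##” convention is easy to picture as code: find the version token in the name and add 1 to it, keeping the zero padding. A rough sketch in plain Python (the function name is mine, not Nuke’s):

```python
import re

def bump_version(script_name):
    """Sketch of the '_v##' convention: increment the first version
    token in the name, preserving its zero padding."""
    def inc(match):
        num = match.group(1)
        return "_v" + str(int(num) + 1).zfill(len(num))
    return re.sub(r"_v(\d+)", inc, script_name, count=1)

print(bump_version("chapter02_v01.nk"))  # -> chapter02_v02.nk
print(bump_version("nuke_v09.nk"))       # -> nuke_v10.nk
```

Keeping the padding (v01, v02, ... v10) means versions also sort correctly in a file browser.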

TIP

If you are not sure what a hot key does or you don’t
remember the hot key for something, you can choose Key
Assignment from the Help menu.

7. With nothing selected, press Q on your keyboard. This brings up the Current Info panel (FIGURE 2.15).

FIGURE 2.15 The Current Info panel is a quick way to see which script you are working on.

The Current Info panel shows that the name of your current script
contains “_v01”. Let’s save this as “_v02”.

8. Click OK to close the Current Info panel.

9. Choose File > Save New Version.

10. Press the Q key again. Notice that your script is now called chapter02_v02.nk.

This same treatment of versions, using “_v01”, is also available when you
are working with image files. For example, you can render new versions of
your images in the same way. You will try this at the end of this chapter.

INSERTING AND MANIPULATING NODES IN THE TREE
In the composite you are working on, the doll is obviously not in the right
position to look like it’s following basic laws of physics. You probably want
to move it and scale it down, right? To do that, you need to (what else?)
add another node to the doll stream—a Transform node this time. The
Transform node is a 2D axis that can reposition an image on the X and Y
axes as well as rotate, scale, and skew it (skew is sometimes called shear).

Because you want to reposition only the foreground image, you need to
connect the Transform node somewhere in the foreground (the doll)
stream. Remember from the explanation of premultiplication earlier in
this chapter that you should always strive to transform premultiplied
images. Now you’re going to insert the Transform node after the image
has been premultiplied. It is important, when compositing in the tree, to
think about the placement of your nodes, as in this example.
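As a quick refresher, premultiplying simply multiplies each color channel by the alpha channel, so transparent areas go to black. A minimal per-pixel sketch in Python (illustrative only; Nuke of course operates on whole images):

```python
def premult(r, g, b, a):
    """Multiply RGB by alpha, as the Premult node does for each pixel."""
    return (r * a, g * a, b * a, a)

# A half-transparent red pixel: the color channels are scaled by alpha,
# so semi-transparent edge pixels darken toward black instead of staying bright.
print(premult(1.0, 0.0, 0.0, 0.5))  # (0.5, 0.0, 0.0, 0.5)
```

Transforming a premultiplied image keeps its filtered, anti-aliased edges free of fringing, which is why the Transform node belongs after Premult1 here.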

The Transform node resides in the Transform toolbox in the Nodes Toolbar. It also has a hot key assigned to it (T).

You have already inserted a node—the Premult node—but let’s cover how
to do this properly.
Inserting, creating, branching, and replacing nodes
When you create a node with an existing node selected in the DAG, the
new node is inserted between the selected node and everything that
comes after it. This is called inserting a node. If there is nothing selected,
a node will be created at the center of the DAG, and will not be connected
to anything. This is called creating a node.

1. Click Premult1 once and then press the T key to insert a Transform node after it (FIGURE 2.16).

FIGURE 2.16 After adding the Transform node, your tree should look like this.

A new node called Transform1 is inserted between Premult1 and Merge1.

Sometimes what you want to do is to start a new branch from the output
of an existing node. That lets you manipulate it in a different way, and
then later, most likely, connect it back to the rest of the tree. This is called
branching. To branch, select the node you want to branch from and then
hold Shift while creating the new node. You don’t need a branch now, but
let’s practice it anyway.

2. Select Transform1, hold down Shift, and press T (FIGURE 2.17).

FIGURE 2.17 Branching to create a new stream

A new node called Transform2 is now branching away from Transform1.


If you want to create the node by clicking it in a toolbox or using the Tab
key, all you have to do is hold Shift and it will work in the same way.

But hold on, I was mistaken. I didn’t want you to create another
Transform node, I wanted you to create a Blur node. This happens a lot.
You create a node by mistake and you need to replace it with another
node. This is called replacing. To do this, select the node you want to
replace and then hold Ctrl/Cmd while creating a new node. Let’s try this.

3. Select Transform2 and, while holding down Ctrl/Cmd, click Blur in the Filter toolbox in the Nodes Toolbar (FIGURE 2.18).


FIGURE 2.18 You have now replaced Transform2 with Blur1.

Here’s a recap of the different ways to create nodes.

To insert a node, select a node in the DAG and create a new node.
(Note that creating a node means creating an unconnected node. You do
this by not having anything selected in the DAG before you create a node.)

To branch a node, hold Shift.

To replace a node, hold Ctrl/Cmd.

The next section covers a few more things you need to know about how to
manipulate nodes in the DAG.

Connecting nodes
If you have an unconnected node that you want to connect, or if you have
a node you want to reconnect in a different way, here’s how:

To connect an unconnected node, drag its input pipe to the center of the node to which you want to connect.

To move a pipe from one node to another, click the end you want to
connect (input or output) and drag it to the node you want to connect it to
(FIGURE 2.19).

FIGURE 2.19 Grabbing the output part of the pipe

To branch an existing output to an existing node, Shift-click the existing outgoing pipe and drag the newly created pipe to the existing node (FIGURE 2.20).

FIGURE 2.20 Holding Shift and dragging the input end of the pipe

Selecting nodes
You’ve been selecting nodes for a while now, but I never really explained
this properly. Well done for succeeding without detailed instructions.

To select one node, click once in the center of the node.

To select multiple nodes, click and drag around them to draw a marquee, or Shift-click to select more than one node.

To select all upstream nodes from a node, Ctrl/Cmd-click the bottom node and drag a little. Upstream nodes are all the nodes that feed into the current node from above. In FIGURE 2.21, notice that Merge1 isn't selected, as it's not upstream from Blur1.

FIGURE 2.21 Holding Ctrl/Cmd and dragging to select all upstream nodes

To deselect nodes, Shift-click a selected node.

Arranging nodes
It is also good practice to keep a tree organized. The definition of
organized is subjective, but the general idea is that looking at the tree
makes sense, and following the flow from one node to the next is easy.
Here are a few ways to keep an organized tree:

Nodes snap to position when they get close to other nodes both
horizontally and vertically. The snap makes keeping things in line easy.

To arrange a few nodes together, select them, and then press L on the
keyboard. Use this with caution—you can get unexpected results because
Nuke uses simple mathematical rules instead of common sense. You
might expect that the Read nodes would be on top, but sometimes they’re
on the side.

You can also create backdrops around part of your tree that remind you where you did certain things—a kind of group, if you will. Access these Backdrop nodes through the Other toolbox.

Disabling and deleting nodes


In addition to adding nodes, you need to be able to remove them when
you change your mind.
To disable a node, select it and press D on the keyboard. To enable it,
repeat.

To delete a node, select it and press Backspace/Delete on the keyboard.

Now that you know how to do all these things, let’s delete the unneeded
Blur1 node.

1. Click Blur1 to select it.

2. Press the Delete key to delete the node.

Your tree should now have only the added Transform1 node and should
look like FIGURE 2.22.

FIGURE 2.22 Your tree should look like this.

Now let’s get back to using the Transform node to place the doll in the
correct place in the artist’s toolbox.

CHANGING PROPERTIES
Notice that the Viewer now displays a new set of controls. Some nodes,
such as the Transform node, have on-screen controls, which mirror
properties in the node’s Properties panel but are more interactive and
easier to use. On-screen controls display when a node’s Properties panel is
loaded in the Properties Bin. When a node is created, its Properties panel
is loaded automatically into the Properties Bin. The controls that are now
on screen belong to the newly created Transform1 node. You can use these
controls to change the position, scale, rotation, skew, and pivot of the
image, as shown in FIGURE 2.23.

FIGURE 2.23 Transformation controls explained

1. Go ahead and play around with the Transform node's controls to familiarize yourself with their functionality.

When you are finished playing, reset the node's properties so you can start fresh. You do that in the node's Properties panel.
2. Right-click (Ctrl-click) an empty space in Transform1’s Properties
panel and then choose Set Knobs to Default from the contextual menu
(FIGURE 2.24).
FIGURE 2.24 This is how you reset a node’s properties.

Now that you have reset the node, you can proceed to position the doll at the front, bottom right of the artist's toolbox as shown in FIGURE 2.25.

FIGURE 2.25 This is where you want the doll image to end
up.

TIP

A good way to do the last fine bit of repositioning is to click the axis once to highlight it (a dot appears in the center of the axis to represent this) and then use the numeric keypad to nudge the image into place. It's very intuitive, as the 4 on most keyboards' number keypads is left, 6 is right, 2 is down, 8 is up, and numbers 1, 3, 7, and 9 are diagonals. Hold down Shift to move in bigger increments, and hold down Ctrl/Cmd to move in smaller increments. You need a keyboard with a numeric keypad for this.

3. Position the doll according to Figure 2.25 by using the on-screen controls, or simply type in the following values for the corresponding properties in Transform1's Properties panel:

• Translate.x = 1048

• Translate.y = –20

• Scale = 0.6

Color correcting the image


The color of the doll isn’t exactly right. It’s almost a match, but the doll is
too cold in color and too bright in relation to the background. You can fix
this using the Grade node in the Color toolbox.

A question arises here: How, or rather, where, are you going
to color correct the doll image? One way to color correct it is to unpremult
the image after the transformation, color correct it, and then premult it
again. However, there is really no reason to do that. The great thing about
the node/tree paradigm is that you have access to every part of the comp
all the time. You can simply color correct the doll image before you
premult it—right after the Read node.

1. Select Read2.

2. Press G on the keyboard to create a Grade node between Read2 and Premult1. Alternatively, you can pick a Grade node from the Color toolbox.
NOTE

Remember that you cannot color correct premultiplied images. Find the right place in the branch to insert the color correction node.

The Grade node gives you control over up to four channels using the
following properties: Blackpoint, Whitepoint, Lift, Gain, Multiply, Offset,
and Gamma. These properties are covered in Chapter 4.

You are now going to learn how to manipulate various types of properties
in a Properties panel. Grade1’s properties will come to your aid.

Using the Properties Bin


Let me quickly explain the Properties Bin so you will understand the
correct way to use properties. The Properties Bin holds nodes’ Properties
panels (FIGURE 2.26). The Properties panels hold a node’s properties,
controls, sliders, or whatever it is you called them until now. Nuke calls
them properties, and the user interface elements used to change them are
called knobs. So for example, you use a slider knob to change a color
property. Or you can use an Input field knob to change a position
property.

FIGURE 2.26 The panels in the Properties Bin

You should have Grade1’s and Transform1’s Properties panels loaded into
the Properties Bin. This happened when you first created the nodes.
However, if you closed it by mistake, or want to learn how to close it, keep
reading (FIGURE 2.27).

FIGURE 2.27 Explanations of the Properties Bin and Properties panel's buttons

To load the node’s Properties panel in the Properties Bin, all you need
to do is double-click the node. Newly created nodes’ Properties panels
load automatically.

To remove a node’s Properties panel from the Properties Bin, click the
Close Panel button. You do this for various reasons, chief of which is to
get rid of on-screen controls that are in the way.

NOTE

If you hover your mouse pointer over the Empty Properties Bin button, a tooltip will come up that reads "Remove all panels from the control panel bin." Don't let that confuse you. The button is called Empty Properties Bin. The tooltips don't always use the same terminology as the rest of the interface or the Nuke User Manual.
Sometimes you want to clear the Properties Bin for specific functions,
and to keep things organized. To do this, click the Empty Properties Bin
button.

You can open more than one Properties panel in the Properties Bin at a
time. How many depends on the Max Panels number box at the top left of
the Properties Bin.

The Lock Properties Bin button locks the Properties Bin so that no new
panels can open in it. If this icon is locked, double-clicking a node in the
DAG displays that node’s Properties panel as a floating window.

Another way to display a floating Properties panel is to click the Float Panel button.

The Node Help button provides information about the functionality of the node.

There is an interesting undo/redo functionality in Nuke. In addition to the general undo/redo function in the Edit menu, there is an individual undo/redo function for every node's Properties panel, as shown in Figure 2.27. These buttons undo only the last actions performed in that specific node; actions performed elsewhere in the interface and in other nodes are ignored.

Using these options, you can create a convenient way to edit properties in
the Properties Bin.

Adjusting properties, knobs, and sliders


You will now use Grade1’s properties to change the color of the doll image
to better fit that of the background image. To do this, you need to
understand how these properties are controlled.

You manipulate properties using interface elements called knobs. These can be represented as sliders, input fields, drop-down menus, and so on. Whatever they actually are, they are called knobs.

When you drag a slider, for example, you are using a knob to change the property's value. A good thing about Nuke is that all values are consistent. All transformation and filtering values are in pixels, and all color values are in a range of 0 for black to 1 for white.

Look at the Gamma property under Grade1's Properties panel in the Properties Bin. A color slider is one of the most complex slider knobs you can find in Nuke, so it's a good slider to learn (FIGURE 2.28).

FIGURE 2.28 The anatomy of the Gamma property’s knob

Here are some ways to manipulate properties:

Click in the Input field and type in a new number.

Place the cursor in the Input field and use the up and down arrows to
nudge digits up and down. The magnitude of change depends on the
initial position of your cursor. For example, to adjust the initial value of
20.51 by 1s, insert your cursor to the left of the 0.

Use the virtual slider by clicking and holding the middle mouse button
and dragging left and right in the numeric box. To increase the strength of
your drag, hold Shift. To decrease it, hold Alt/Option.

Hold down Alt/Option and drag up and down with the left mouse
button. In the same way as when you use the arrow keys, the magnitude
depends on where in the Input field the cursor was when you clicked it.

Use the scroll wheel on your mouse if you have one.

Use the slider (how original!).

The next two options refer only to color-related properties that have a
Color swatch and a Color Picker button.

Use the Color swatch to sample colors from the Viewer. Sampling colors changes the value of the Color property. To do so, click the Color swatch to turn it on, then hold Ctrl/Cmd and drag in the Viewer to pick a color. This changes the value of your property to mirror the value of the picked color. When you are finished picking colors, click the Color swatch to turn it off (to make sure you don't accidentally pick colors from the Viewer).
Use the Color Picker button to load the In-panel Color Picker.

Using the In-panel Color Picker


The In-panel Color Picker is a great tool for working with color. It is
invoked every time the Color Picker button is clicked in a color correction
node (FIGURE 2.29).

FIGURE 2.29 Use the In-panel Color Picker to choose a color.

NOTE

Prior to Nuke 8.0v1 this functionality was available only as a floating panel called the Floating Color Picker panel. This panel is still available and, if you prefer, you can still invoke it by Ctrl/Cmd-clicking the Color Picker button. In other areas of the interface, the old floating panel still pops up.

You can pick a color using three different methods: RGB (red, green, blue), HSV (hue, saturation, value), or TMI (temperature, magenta, intensity). Three areas of the panel represent the three methods. On the left, the color wheel represents HSV. Use it by manipulating these three elements:

Change hue by moving the triangle widget around the color wheel, or
by holding down Ctrl/Cmd and clicking and dragging left or right on the
color wheel.

Change saturation by moving the dot from the center of the color wheel
toward the edge, or by holding down Shift and clicking and dragging left
or right on the color wheel.

To change value, hold down Shift-Ctrl/Cmd and click and drag left or
right on the color wheel.

In the middle, you have the horizontal RGB sliders and Fine Tune buttons
that control RGB.

Use the RGB sliders and Input fields as explained earlier.

Use the Fine Tune buttons by left-clicking to reduce the value in increments of 0.01 and right-clicking to increase it by 0.01. Hold down Shift to increase or decrease by 0.1, and hold down Alt/Option for 0.001 increments.

The controls for TMI are on the right. You have the three sliders and
that’s it.

Take a few moments to familiarize yourself with the In-panel Color Picker.

1. Click the Color Picker button for Grade1’s Gamma property.

2. Go ahead and play with the controls. When you’re finished, close it by
clicking the Color Picker button again.

Notice you no longer have a slider. Instead, you have four numeric Input
fields (FIGURE 2.30).

FIGURE 2.30 Your single gamma slider turned into four numeric Input fields.

Where has your slider gone? Every property in the Grade node can control all four channels (R, G, B, and alpha) separately, but how can this happen with just one slider? As long as all four values remain the same, you don't need four different Input fields, so one Input field with a slider is enough. However, when the values for the different channels differ, the slider is replaced by four numeric Input fields—one for each channel.

You can switch between the one slider and the four fields using the
Individual Channels button to the right of the Color Picker button. If you
have four fields and click this button, the interface switches back to
having only a single field. The value you had in the first field is now the
value of the new single field; the other three values are lost.
Using the Animation menu
The Animation button/menu on the right side of any property deals with
animation-related options. Choose options from this menu to create a
keyframe, load a curve onto the curve editor, and set the value of a
property to its default state. The Animation menu controls all values
grouped under this property. So, in the following example, the Animation
menu for the Gamma property controls the values for R, G, B, and alpha.
If you right-click/Ctrl-click the Input field for each value, you get an
Animation menu for just that value (FIGURE 2.31).

FIGURE 2.31 The Animation menu controls a property’s


value curve and animation-related functions.

Color correction is covered in depth in Chapter 4, so for now, I just want


you to input some numbers in the Input fields. First, let’s reset what you
did.

1. Click the Animation menu at the far right of the Gamma Property’s
knob and choose Set to Default.

2. Click the Individual Channels button to the left of the Animation menu
to display the four separate Input fields.

3. Enter the following numbers into the Gamma numeric fields:

• R = 0.778

• G = 0.84

• B = 0.63

• A = 1

Entering these numbers into the Input fields makes the doll image darker
and a little more orange. This makes the doll look more connected to the
background, which is dark and has an orange tint to it.
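If you're wondering why these numbers warm the image up: gamma is commonly applied as output = input^(1/gamma) on 0-1 values (a sketch of the standard formula; the Grade node combines gamma with its other properties). A gamma below 1.0 darkens midtones, and since blue gets the lowest value here, blue darkens the most, pushing the image toward orange:

```python
def apply_gamma(value, gamma):
    """Standard gamma adjustment for a 0-1 value: gamma < 1 darkens midtones."""
    return value ** (1.0 / gamma)

# A mid-gray pixel (0.5 in each channel) run through the values used above.
r = apply_gamma(0.5, 0.778)
g = apply_gamma(0.5, 0.84)
b = apply_gamma(0.5, 0.63)
print(round(r, 3), round(g, 3), round(b, 3))  # all below 0.5, blue lowest
```
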

RENDERING
You should be happy with the look of your composite now, but you still have a way to go. To be safe, though, let's render a version of the Nuke tree to disk now. To render means to process all your image-processing instructions into images that incorporate these changes.

Using the Write node


To render your tree, you need to use a Write node, which serves several functions. The Write node defines which part of the tree to render, where to render to, what to name your file or files, and what type of file you want to create. You're now going to add a Write node at the end of your tree. The end of your tree is where you can see the final composite. In this case, it's Merge1.

1. Click Merge1 at the bottom of the tree and press the W key. You can
also choose Write from the Image toolbox in the Nodes Toolbar or use the
Tab key to create a Write node by starting to type the word write
(FIGURE 2.32).


FIGURE 2.32 Inserting a Write node at the end of your tree

You need to define the path you’re going to write to, the name of the file,
and the type of file you’re going to write to disk. You use the same
student_files folder you made earlier in your Nuke101 folder and render a
TIFF image sequence there called doll_v01.

2. In the Write1 Properties panel, look at the File property line. At the far
right is an icon of a folder (the top one; the bottom one belongs to
something else called Proxy). Click it (FIGURE 2.33).

FIGURE 2.33 The File Folder button is small, so make sure this is what you are clicking.

3. In the File Browser that opens, navigate to your student_files folder.

By navigating to this path, you have chosen the path to which you want to
render. The first step is complete.

In the field at the bottom, you need to add the name of the file you want to
create at the end of the path. When you do so, make sure you do not
overwrite the path.

You are going to render a file sequence instead of a movie file such as a
QuickTime file. File sequences are generally faster for Nuke to process
because Nuke doesn’t have to unpack the whole movie file to load in just
one image.

But what will you call this file sequence? How are you going to define the number of digits to use for frame numbers? The next section explains these issues.

Naming file sequences


You can render moving images in two ways. One is by rendering a single
file that holds all the frames, as is the case with QuickTime, MXF, and
WMV (Nuke only supports writing QuickTime files). The other, which is
preferred for various reasons, is the file sequence method. In this method,
you render a file per frame. You keep the files in order by giving every file
a number that corresponds to the number of the frame it represents. This
creates a file called something like myrender.0001.jpg, which represents
the first frame of a sequence. Following this example, frame 22 would be
myrender.0022.jpg, and so on.

When you tell Nuke to write a file sequence, you need to do four things:

Give the file a name. Anything will do.

Give the file a frame padding structure—for example, how many digits
will be used to count the frames? You can tell Nuke how to format
numbers in two ways. The first is by using one # symbol for each digit. So
#### means four digits and ####### means seven digits. The second
method is using %0xd, where 0x means the number of digits to use. If you
want two digits, for instance, you write %02d. I find this second method
easier to decipher. Just by looking at %07d, I can tell that I want seven
digits. If I use the other method, I actually have to count. Please note that
you have to add frame padding yourself; Nuke won’t do this for you. Nuke
uses the # symbol by default when it displays file sequence names.
Give your file an extension such as png, tif, jpg, cin, dpx, iff, or exr
(there are more to choose from).

Separate these three parts of the file name with dots (periods).

The first bit can be anything you like, though I recommend not having any
spaces in the name due to Python scripts, which have a tendency not to
like spaces (you can use an underscore if you need a space). The second
bit needs to be defined in one of the ways mentioned previously, either the
%0xd option or the #### option. The last bit is the extension to the file
type you want to render, such as jpg, exr, sgi, tif, and so on.

An example of a file name, then, is file_name.%04d.png, or file_name.####.jpg.

To render a single file, such as a QuickTime file, simply give the file a
name and the extension .mov. No frame padding is necessary.
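Both notations describe the same thing: a zero-padded frame number between the name and the extension. A small Python sketch of how such names expand (a hypothetical helper for illustration, not a Nuke function):

```python
def frame_name(base, frame, padding=4, ext="tif"):
    """Build one file name of a sequence, like base.0001.tif.
    Illustrative helper only; Nuke expands %0xd / #### itself at render time."""
    # %0*d takes the field width (padding) as an argument and zero-fills.
    return "%s.%0*d.%s" % (base, padding, frame, ext)

print(frame_name("doll_v01", 1))   # doll_v01.0001.tif
print(frame_name("doll_v01", 22))  # doll_v01.0022.tif
print(frame_name("myrender", 7, padding=7, ext="jpg"))  # myrender.0000007.jpg
```
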

1. In the field at the bottom, at the end of your path, add the name of your
file sequence. For this example, use doll_v01.####.tif. Don’t forget the
dots and the v01 (FIGURE 2.34).

FIGURE 2.34 Enter the name of your file sequence at the end of the file's path.

2. Click Save.

You now have the path and file name under the File property. You might
have noticed that the Write1 Properties panel changed a little. It now
accommodates a property called Compression, which is, at the moment,
set to a value called Deflate.

Let’s change the compression settings.

3. Choose LZW from the Compression drop-down menu (FIGURE 2.35).

FIGURE 2.35 Changing the compression property to LZW

LZW is a good, lossless compressor that's part of the TIFF architecture. Lossless means that, although the file size will be smaller, no quality is lost in the process. In contrast, JPG uses lossy compression.
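You can demonstrate the lossless round trip with Python's standard zlib module (a different lossless codec than LZW, used here only because it ships with Python): decompressing returns exactly the bytes you put in.

```python
import zlib

# Repetitive data, like flat areas of pixels, compresses well.
original = b"flat areas of pixels compress well " * 100

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

# Lossless: smaller on disk, yet the round trip is bit-for-bit identical.
print(len(original), len(compressed))
print(restored == original)  # True
```
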

You are now ready to render using the Render button on Write1’s
Properties panel (FIGURE 2.36). You can also use the Render All and
Render Selected commands in the Render menu in the menu bar.

FIGURE 2.36 The Render button is the quickest way to start a render.

Call me superstitious, but even with the autosave functionality, I still like
to save on occasion, and just before a render is one of those occasions.

4. Save your Nuke script by pressing Ctrl/Cmd-S.

5. Click the Render button.


6. In the Render panel that appears, click OK.

The Render panel lets you define a range to render. The default setting is
usually fine.

The render starts. A Progress Bar panel displays what frame is being
rendered and how many are left to go (FIGURE 2.37).

FIGURE 2.37 The Progress Bar shows the progress of each frame as it's being rendered.

TIP

Using the Content menu, you can place the Progress Bar
panel in a specific place in the interface. I prefer the bottom
right, where you placed it in Chapter 1. This way the progress
bar doesn’t pop up in the middle of the screen and hide the
image, which I find annoying.

You now need to wait until the render is finished. Once it is, you probably
want to look at the result of your render. Let’s do that.

Every Write node can double as a Read node. If the Read File check box is
selected, the Write node will load the file that’s written in its File property
(FIGURE 2.38).

FIGURE 2.38 The little Read File check box turns the Write
node into a Read node.

TIP

All nodes can have thumbnails. You can turn them on and off
by selecting a node and using the Alt/Option-P hot key.

7. In Write1’s Properties panel, click the Read File check box to turn it on.

Your Write node should now look like the one in FIGURE 2.39. It now
has a thumbnail like a Read node does.

FIGURE 2.39 The Write node now has a thumbnail.

8. Make sure you are viewing Write1 in the Viewer and then click the Play button to play forward.

Let the green line along the Timebar fill up; when it does, you should be able to see realtime playback.

9. When you're finished, remember to click Stop and deselect Read File.

So, not too shabby. However, you still have a little work to do. If you look
carefully, you might notice that the doll’s feet are actually on top of the
front edge of the artist’s toolbox instead of behind it, so the doll does not
yet appear to be inside the toolbox. Another problem is that around
halfway through the shot, the background darkens, something that you
should mirror in the doll. Let’s take care of the doll’s feet first.

DELVING DEEPER INTO THE MERGE NODE


To cut the doll’s feet so the doll appears to be behind the front edge of the
artist’s toolbox, you need another matte. You will learn to create mattes in
Chapter 6. Until then, you will read the matte as an image from the hard
drive.

1. While hovering over the DAG, press R on the keyboard to create another Read node.

2. In the File Browser that opens, navigate to the chapter02 folder and
double-click mask.tif to import that file.

3. Select Read3 and press 1 on the keyboard to view this image in the
Viewer.

What you see in the Viewer should be the same as what appears in
FIGURE 2.40. You now have a red shape at the bottom of the image.
Mattes are usually white, not red. How will you use this? Do you need to
key it, perhaps? Let’s take a better look at this image.

FIGURE 2.40 The matte image looks like this.

4. While hovering the mouse pointer over the Viewer, press the R key to view the red channel (FIGURE 2.41).

FIGURE 2.41 Viewing just the red channel

This is what I usually expect a matte to look like: white on black. Let’s see
what the other channels look like.

5. While hovering the mouse pointer over the Viewer, press the B key to view the blue channel, then the G key to view the green channel, then the A key to view the alpha channel, and finally the A key again to view the RGB channels.

Did you notice that all the other channels are black? This image was saved
like this to conserve space. There is information in only one channel, the
red channel, rather than the same information being in all four channels,
which would add nothing—it would just make a bigger file. Just
remember that your matte is in the red channel.

The Merge node’s default layering operation is Over, which places one
image over another. But Merge holds many more layering operations. You
look at a few throughout this book. ⬆
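For reference, Over on premultiplied images follows the standard Porter-Duff formula: result = A + B × (1 − alpha of A). A per-pixel sketch (Nuke, of course, applies this across whole images):

```python
def over(a_pixel, b_pixel):
    """Porter-Duff 'over' for premultiplied RGBA pixels: A + B * (1 - A.alpha)."""
    aa = a_pixel[3]
    return tuple(av + bv * (1.0 - aa) for av, bv in zip(a_pixel, b_pixel))

fg = (0.5, 0.0, 0.0, 0.5)  # premultiplied, half-transparent red
bg = (0.0, 0.0, 1.0, 1.0)  # opaque blue
print(over(fg, bg))  # (0.5, 0.0, 0.5, 1.0)
```

Where the foreground alpha is 1, the background vanishes; where it is 0, the background shows through untouched.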
Now you will use another Merge node to cut a hole in the doll’s branch
before it gets composited over the background. Because you want to cut
this hole after the doll has been repositioned—but before the composite
takes place—place the new Merge node between Transform1 and Merge1.
6. Select Read3 and press M on the keyboard to create another Merge
node (FIGURE 2.42).

FIGURE 2.42 The newly created Merge2 node's A input is connected to Read3.

Merge2 has been created with its A input connected to Read3. You need to
connect Merge2’s B input to Transform1 and Merge2’s output to Merge1’s
A input. You can do this in one step.

7. Drag Merge2 on top of the pipe in between Transform1 and Merge1, and when the pipe turns white, as in FIGURE 2.43, release the mouse button.

FIGURE 2.43 Using this method, you can insert a node into an existing pipe.

Look at Merge2's Properties panel (FIGURE 2.44). It shows that the Operation property is still set to Over. You need to change that to something that uses the A input to punch a hole in the B input.

FIGURE 2.44 The Operation drop-down menu

The Stencil operation does exactly that: It creates a hole in the B input
where the alpha channel of the A input is white. Let’s change the
operation to Stencil.
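In formula terms, Stencil keeps B only where A's alpha is black: result = B × (1 − alpha of A). A per-pixel sketch of the idea:

```python
def stencil(a_alpha, b_pixel):
    """Stencil: punch a hole in B wherever A's alpha is white, B * (1 - a)."""
    return tuple(bv * (1.0 - a_alpha) for bv in b_pixel)

doll = (0.8, 0.6, 0.4, 1.0)  # a premultiplied foreground pixel
print(stencil(1.0, doll))    # matte white here: pixel fully cut
print(stencil(0.0, doll))    # matte black here: pixel unchanged
```
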

8. Change Merge2’s Operation property from Over to Stencil.

9. Select Merge1 and press 1 on the keyboard to view it.

Look at the image in the Viewer (FIGURE 2.45). It appears unchanged. The doll's feet are still visible in front of the artist's toolbox. This is because the Stencil operation uses the A input's alpha channel, but your matte is in the red channel. To solve this, move the red channel into the alpha channel using a node called Shuffle.


FIGURE 2.45 The doll’s feet are still in front.

Using the Shuffle node


The Shuffle node is one of the most useful nodes in Nuke. A lot of times
you need to move channels around from one location to another—taking
the alpha channel and placing a copy of it in the three color channels, for
example. Think of this as if you are moving a piece of paper from one
location in a stack to another (FIGURE 2.46).

FIGURE 2.46 The Shuffle node’s anatomy

The Shuffle node’s Properties panel looks like a matrix, with the source
channels at the top and the destination channels on the right. To
understand the flow of the Shuffle node, follow the arrows from the In
property at the top until you reach the Out property on the right.

You need to move the red channel of the mask image, so you’ll have a copy
of it in the alpha channel. Using the check boxes, you can tell the Shuffle
node to output red into the alpha.

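What you are about to set up can be sketched as a tiny Python model,
treating each channel as a plain list of pixel values; the function and
data below are made up purely for illustration.

```python
# A toy model of this Shuffle setup: copy the red channel into all
# four output channels. Channels are plain lists of pixel values.

def shuffle_red_to_all(channels):
    red = channels["red"]
    # Each output channel gets its own copy of the red channel's values.
    return {name: list(red) for name in ("red", "green", "blue", "alpha")}

src = {
    "red":   [1.0, 0.0, 0.5],  # the matte lives here
    "green": [0.2, 0.2, 0.2],
    "blue":  [0.0, 0.0, 0.0],
    "alpha": [1.0, 1.0, 1.0],
}
out = shuffle_red_to_all(src)
print(out["alpha"])  # [1.0, 0.0, 0.5] -- the matte is now in the alpha too
```
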
1. Select Read3, and in the Channel toolbox, click Shuffle.

A new node called Shuffle1 has been inserted between Read3 and Merge2
(FIGURE 2.47). Shuffle1’s Properties panel was loaded automatically
into the Properties Bin.


FIGURE 2.47 Shuffle1 is now inserted after Read3.

2. In Shuffle1’s Properties panel, check all the red boxes in the leftmost
column, as shown in FIGURE 2.48. This places the R channel in the R,
G, B, and alpha channels.
FIGURE 2.48 Setting up the Shuffle node to copy the red
channel into all four channels

Now that you have copied the red channel into the alpha channel, Merge2
works and the doll’s feet now appear to be behind the wooden box
(FIGURE 2.49).

FIGURE 2.49 That’s more like it. The doll’s feet are now
behind the front of the toolbox.

Viewing a composite without rendering

Let’s look at your composite so far. To do that, all you need to do is
click Play. You don’t need to render every time.

1. Select Merge1 and press 1 on the keyboard to make sure you are viewing
it in Viewer1.

2. Press the Play button in the Viewer. Let the clip cache once, and then
enjoy your handiwork.

3. Click Stop in the Viewer.

Did you notice the dark flash that occurred during the playback? It starts
at frame 42. You will need to make the doll mimic this light fluctuation.

So far, all the values you set for various properties were constant values—
they did not change from frame to frame. Now you need to change those
values over time. For that purpose you have keyframes.

CREATING ANIMATION WITH KEYFRAMES


Keyframes specify that you want animation. When something is
animated, the property needs to change its value over time. So, if you
want your image to have a Blur value of 20 to begin with and then, at
frame 10, to have a value of 10, you need to specify these two pieces of
information: a value for frame 1 and a value for frame 10. The application
interpolates the in-between values.

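As an illustration only (Nuke’s default interpolation is a smooth curve,
not a straight line), linear interpolation between keyframes looks like
this in Python:

```python
def lerp_keyframes(keys, frame):
    """Linearly interpolate a value from (frame, value) keyframes."""
    keys = sorted(keys)
    if frame <= keys[0][0]:
        return keys[0][1]   # hold the first keyframe's value
    if frame >= keys[-1][0]:
        return keys[-1][1]  # hold the last keyframe's value
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # how far between the two keys
            return v0 + t * (v1 - v0)

# The Blur example from the text: 20 at frame 1, 10 at frame 10.
keys = [(1, 20.0), (10, 10.0)]
print(lerp_keyframes(keys, 1))    # 20.0
print(lerp_keyframes(keys, 5.5))  # 15.0 -- halfway in time, halfway in value
print(lerp_keyframes(keys, 10))   # 10.0
```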

NOTE

You could have used the existing Grade node instead of adding another
one, but using another Grade node gives you greater freedom. Each node
has a purpose. The first matches the color of the foreground to the
background, and the second takes care of the brightness change. You can
always delete or disable one without affecting the other, for example,
which is quite handy.

In Nuke, practically every property can be keyframed. Here, you are going
to create another Grade node and use that to change the brightness of the
doll branch to match the changes to the lighting in the background.

1. In the Viewer, go to frame 42 by clicking the Timebar (FIGURE 2.50).

FIGURE 2.50 Move to frame 42 using the Timebar.

2. Select Grade1 and press G on the keyboard to create another Grade
node. Create more space in the tree if you need to.

Your tree should now have a new Grade node in it called Grade2
(FIGURE 2.51).

FIGURE 2.51 Grade2 is inserted after Grade1.

TIP

If, unlike my tree, your tree is really messy, it will be hard for
you to understand the flow of information and read your tree.
Sure, this is a simple tree. But if it were bigger, and you saved
it and went back to it over the weekend, things might no
longer make sense. Keeping an organized tree is always a
good idea.

Frame 42, which you are on at the moment, is the last frame of bright
lighting you have before the background starts to darken. This will be the
location of your first keyframe, where you lock the brightness of the doll
to the current brightness level.

5. In Grade2’s Properties panel, click the Gain property’s Animation
menu and choose Set Key (FIGURE 2.52).


FIGURE 2.52 Set Key holds the current value at the current
frame in place.

This creates a keyframe for the four values (R, G, B, and Alpha) associated
with the Gain property. Notice that the Input field turns a blue color—this
is to show that a keyframe is present on this frame for this property
(FIGURE 2.53).

FIGURE 2.53 The blue Input field indicates this frame has
a keyframe.

The Timebar also displays a blue-colored little box to indicate on which
frame you have keyframes (FIGURE 2.54). The markings on the Timebar
are shown for all open Property panels that have keyframes. To see the
keyframes on the Timebar for a specific node, make sure only that
node’s Properties panel is open in the Properties Bin.

FIGURE 2.54 A blue box on the Timebar indicates a keyframe.

6. Advance one frame by hovering over the Viewer and pressing the right
arrow on your keyboard.

Notice that the color of the property’s Gain field is now showing a subtler
blue color (FIGURE 2.55). This indicates that there is animation for the
property, but there is no keyframe at this point in time.

FIGURE 2.55 A light blue Input field indicates that this property has
animation but no keyframe at the current frame.

7. Play with the Gain slider until you reach a result that matches the doll’s
brightness to that of the background. I stopped at 0.025.

A keyframe is now automatically added. Once the first keyframe is set,
every change in value to the property will result in a keyframe on the
frame you are currently on. Notice the color of the numeric box changed
from the light blue color to a bright blue color, indicating a keyframe
has been created for this frame.

8. Advance another frame forward to frame 44 and adjust the gain again.
I stopped at 0.0425.

TIP

Remember that one way of manipulating values is to click in the Input
field and then use the arrow keys to change each decimal digit by
placing the mouse cursor to the left of it. This makes fine adjustments
easy!


9. Repeat the process for frames 45 and 46 as well. I had 0.41 and then
1.0.

You have now created several keyframes for the Gain property, resulting
in animation. The animation can be drawn as a curve in a graph called an
Animation curve, in which the X axis represents time and the Y axis
represents value.

Let’s see the Animation curve for this property in the Curve Editor.

10. Choose Curve Editor from the Gain property’s Animation menu
(FIGURE 2.56).

FIGURE 2.56 Using the Animation menu to load a curve into the Curve
Editor

You can see the curve for the animation you just created (FIGURE 2.57).
The Curve Editor is explained in more detail in Chapter 6.

FIGURE 2.57 This is what an Animation curve looks like.

11. Click the Node Graph tab to go back to the DAG.

Look at Grade2. What’s that on the top right? Notice it has a little red
circle with the letter A in it (FIGURE 2.58). Do you wonder what that’s
for? It’s an indicator, which I explain in the next section.

FIGURE 2.58 Node indicators can be very useful at times.

Indicators on nodes
Several indicators may appear on nodes in the Node Graph, depending on
what you are doing. TABLE 2.1 describes what each indicator means.


TABLE 2.1 Node Indicators

Having little “tells” like these indicators on the nodes themselves really
helps you read a tree. The A indicator, for example, can help you figure
out which of your two Grade nodes is the one you added animation to.

You should now be happier with your comp (FIGURE 2.59). The doll
appears to be standing inside the artist’s toolbox, and the light change
is matched. Let’s save and render a new version of the composite.

FIGURE 2.59 The final tree should look like this.

Rendering a new version and comparing


Since you updated the Nuke project and the shot now looks so much
better, it is a good idea to create another version of your render. Sure, you
can always overwrite what you rendered before, but it would be a shame
not to have something to compare to.

Nuke’s versioning system, which you used at the beginning of this chapter
to save a new version of the script, also works with Read and Write nodes.

Remember from earlier in this chapter how you set up your Write node to
have a “_v01” in the file name? Well, that’s what’s going to change. Using
a hot key, change this to one version up, meaning “_v02”.


1. Select Write1 and press the Alt/Option-Arrow Up key to change the file
name to v02.

The whole file name should now be doll_v02.####.png. Going up a version
(and going down with Alt/Option-Arrow Down) is as easy as that.

2. To render a new version, press F5 on your keyboard (hot key for
Render All) and then click OK.
Comparing images
It would be great to compare the two versions. Any Viewer in Nuke has up
to 10 different Inputs, and it’s very easy to connect nodes to these Inputs
to view them as you did in Chapter 1. You can also split the screen in the
Viewer as long as the images are connected to two of the Viewer’s inputs.

When the Render is finished, compare the previous render to this one in
the Viewer. Let me walk you through this.

1. While hovering your mouse pointer over the DAG, press R on the
keyboard to display a Read File browser.

2. Navigate to the student_files directory and double-click
doll_v01.####.tif to load it in.

You now have Read4 with the previous render; Write1 can become the
new render by turning it into a Read node.

3. Double-click Write1 and, in the Properties panel that opened, click the
Read File check box.

To compare the two versions, you will now load each one to a different
buffer of the Viewer and then use the Viewer’s composite controls to
compare them.

4. Click Read4 and press 1 on the keyboard.

5. Click Write1 and press 2 on the keyboard (FIGURE 2.60).

FIGURE 2.60 Using two Viewer inputs

6. Go to frame 43 in the Timebar.

7. From the Composite button/menu at the center of the Viewer, choose
Wipe (FIGURE 2.61).

FIGURE 2.61 Turning on the Wipe option in the Viewer Composite
controls

With the Composite controls set in this way, there’s a new axis on the
Viewer—the image to the left of this axis is Write1 and the image to the
right of this axis is Read4.

You can move the axis using the controls shown in FIGURE 2.62.


FIGURE 2.62 The Viewer Composite control’s axis

8. Reposition the axis at the center of the doll (FIGURE 2.63).

FIGURE 2.63 Repositioning the axis at the center of the doll

You can clearly see that the half doll on the left has been darkened, while
the half doll on the right is still bright. You can even compare the two
halves while playing in the Viewer.

9. Click the Play button in the Viewer.

Look at them roll! The two streams are playing side by side, and you can
see that only the stream on the left shows the doll darkening when
needed. Also, only the doll on the left has its feet appearing behind the
artist’s toolbox. Well done! See how much you advanced? And it’s only
Chapter 2!

10. Use the hot key Ctrl/Cmd-S to save your script for future reference.

11. Quit Nuke.

This ends the practical introduction. You should now start to feel more
comfortable working with the interface to get results. In the next chapter,
you work with a much bigger tree. Get ready to be dropped into the pan.


3. Compositing CGI with Bigger Node Trees


These days, Computer Generated Imagery, or CGI, primarily
refers to images that are rendered out of 3D software such as
Maya, 3ds Max, and Houdini.

Rendering in 3D software can be a very long process. Calculating all the
physics required to produce a convincing
image is very processor intensive. A single frame can take
anywhere from a minute to eight hours or more to render.
Because of this, 3D artists go to great lengths to give
compositors as much material as possible so that they can
modify the 3D render without having to go back to the 3D
package and re-render.

Compositors find it very hard to change the animation or shape of a 3D
model. But color and quality are compositors’ strong suits, and those are
the main things they deal with when they have to composite a 3D render.
To facilitate the ease of changing color and quality in compositing, 3D
renders are usually rendered in passes. Each pass represents a part of
what makes up the final image. The amount of light that falls on the
object, for example, is rendered separately (represented as a color image
where bright pixels are strong light and dark pixels are low light) from its
color. Reflections, shadows, and specular highlights are all examples of
how an image might be split into different passes. The compositor’s role,
then, is to take in all these elements and rebuild what the 3D program
usually does—creating the beauty pass, the composite of all these passes
together.

Once the layers are composited, it is very easy for the compositor to
change the look of the beauty pass, because he or she has easy access to
anything that makes up the look of the render. For example, because the
light is separate from the color, it’s easy to color correct it so it is brighter
—meaning adding more light—or to change the reflection so it is a little
blurry, making the object look like its material is a little worn, for
example. Rendering just a single final image from the 3D software, on the
other hand, means that changing the look (such as adding blur) will be
more difficult.

WORKING WITH CHANNELS


Down deep, digital images are really an array of numbers representing
brightness levels. Each location in the X and Y resolution of the image has
a brightness value. A single location in X and Y is called a pixel. An array
of a single value for each pixel is called a channel. A color image generally
consists of the four standard channels: red, green, blue, and alpha. Nuke
allows you to create or import additional channels as masks, lighting
passes, and other types of image data. A Nuke script can include up to
1,023 uniquely named channels per compositing script.

All channels in a script must exist as part of a channel set (also called a
layer). You’re probably familiar with the default channel set—RGBA—
which includes the channels with pixel values for red, green, blue, and
also the alpha channel for transparency. Channel names always include
the channel set name as a prefix, like this: setName.channelName. So the
red channel is actually called rgba.red.

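A toy Python model of this naming convention, purely for illustration
(the channel data here is made up; real channels are full images):

```python
# Channels keyed by "setName.channelName", as in Nuke's convention.
channels = {
    "rgba.red":   [0.9, 0.1],
    "rgba.green": [0.4, 0.2],
    "rgba.blue":  [0.1, 0.3],
    "rgba.alpha": [1.0, 1.0],
    "Lgt.red":    [0.7, 0.0],  # a lighting pass lives in its own set
}

def channel_sets(channels):
    """List the unique channel-set (layer) names."""
    return sorted({name.split(".")[0] for name in channels})

print(channel_sets(channels))  # ['Lgt', 'rgba']
```
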
Most image file types can hold only one channel set—RGBA. The PNG and
JPEG file formats hold only RGBA; however, TIFF and PSD can hold
more channels. All the layers you create in Photoshop are actually other
channel sets. One of the better multilayer file formats out there is called
OpenEXR. It can support up to 1,023 channels, is a 32-bit float format
(meaning it doesn’t clip colors above white and below black), and can be
saved in a variety of compression types. 3D applications are using this file
format more and more. Luckily, Nuke handles everything that comes in
with OpenEXR very well. OpenEXR has the .exr extension and is simply
called EXR for short.

Bringing in a 3D render
To start the project, bring in a 3D render from your hard drive (files
downloaded per the instructions in this book’s introduction).

1. Using a Read node, bring in chapter03/goose.####.exr.

2. Connect Read1 to Viewer1 by selecting Read1 and pressing 1 on the
keyboard.

3. Click the Play button in the Viewer to look at the clip you brought in.
Let it cache by allowing it to play once; it will then play at normal speed.

4. Click Stop and use the Timebar to go to the last frame: 65 (FIGURE
3.1).

FIGURE 3.1 A 3D render of Goose

This shot is part of a short film called Goose by three talented people: Dor
Shamir, Shai Halfon, and Oryan Medina. Goose was created as a personal
project while the three were working at Geronimo Post&Design. You can
(and should!) view the whole thing here: www.vimeo.com/33400042
(http://www.vimeo.com/33400042).

5. By pressing Ctrl/Cmd-S, save your script (Nuke project files are called
scripts, remember) in your student_files folder. Name it
chapter03_v01.

Viewing channel sets with the Viewer


The 3D render displayed in the Viewer, which you brought in from your
hard drive, is an EXR file that holds the beauty pass (a combination of all
the other passes) in the RGBA channel set along with all the channel sets
that make up the image. To view these other channels, use the Viewer’s
Channel buttons (FIGURE 3.2).

FIGURE 3.2 Use these three Channel buttons to display different
channels in the Viewer.

The three buttons at the top left of the Viewer are the Channel buttons.
The button on the left shows the selected channel set; by default the
RGBA set is chosen. The button in the middle shows which channel to
display when viewing the alpha channel; by default it is rgba.alpha. The
button on the right shows which channel from the set you have currently
chosen to view; by default, it is RGB.

If you want to view the second channel of a channel set called Reflection
Pass, for example, you need to change the leftmost button to show the
Reflection Pass channel set, and then set the third button to the green
channel (G is the second letter in RGBA—hence, the second channel).

1. Click the Channel Set Viewer button (the one on the left) and view all
the channel sets you can now choose from (FIGURE 3.3).
FIGURE 3.3 The list of all the channel sets available in the
stream being viewed

This list shows the channels available for viewing for the current stream
loaded into the Viewer. In this case, Read1 has lots of extra channel sets
besides RGBA, so they are available for viewing.

As a side note, the Other Layers submenu at the bottom shows channels
that are available in the script, but not through the currently viewed node.
For example, if we had another Read node with a channel set called
Charlie, it would show up in the Other Layers submenu.

2. Switch to the Col channel set (short for Color) by choosing it from the
Viewer drop-down menu.

Figure 3.4 and your Viewer show this pass, simply called the color pass,
which represents the unlit texture as it’s wrapped around the 3D object.
Essentially it is the color of the object before it’s been lit.

FIGURE 3.4 The Col channel set represents the unlit texture.

There are many ways to render separate passes out of 3D software. Not all
of them include a color pass, or any other pass you will use here. It is up to
the people doing the 3D rendering and compositing to come up with a
way to render that makes sense to the production, whatever it may be.
Having a color pass makes sense for this production because we need to
be able to play with the lighting before we apply it to the texture.

3. Go over the different passes in the Viewer to familiarize yourself
with them. Refer to TABLE 3.1 to understand what each pass is supposed
to be.


TABLE 3.1 Lemming Render Passes

4. When you are finished, go back to viewing the RGBA set.

Table 3.1 shows a list of the different render passes incorporated in this
EXR file sequence. We won’t necessarily use all the passes available to us.

WORKING WITH CONTACT SHEETS


A good tool in Nuke is the ContactSheet node, which organizes inputs into
an array of rows and columns. Using the ContactSheet node results in a
handy display of all the inputs and helps you view everything together.
For example, you can connect all the shots you are color correcting to a
ContactSheet node and compare all of them in one sweep.

Another version of the ContactSheet node is called LayerContactSheet. It
works exactly the same way, but instead of several inputs, it is designed
to show you the different channel sets of a single input in an array.

You can use the LayerContactSheet node to look at all the passes you
have.

1. While viewing the RGBA channel set, select Read1 and attach a
LayerContactSheet node from the Merge toolbox.

2. Click the Show Layer Names check box at the bottom of
LayerContactSheet1’s Properties panel (FIGURE 3.5).

FIGURE 3.5 Clicking the Show Layer Names check box will
display the channel set names in the Viewer.

You can immediately see all the different channel sets laid out, with their
names. This makes life very easy. The LayerContactSheet node is a very
good display tool, and you can keep it in the Node Graph and refer to it
when you need to (FIGURE 3.6).

FIGURE 3.6 The output of LayerContactSheet1 in the Viewer

3. Delete LayerContactSheet1.

USING THE BOUNDING BOX TO SPEED UP
PROCESSING
The bounding box is an element that Nuke uses to define the area of the
image for processing. It is always there. Chapters 1 and 2 have bounding
boxes all over them—you just may not have noticed them. To understand
the bounding box, let’s first look at the image properly.

1. Make sure Read1 is loaded in Viewer1 by selecting it and pressing 1
on the keyboard; then go to frame 1.

2. Look at Read1’s alpha channel by pressing A while hovering the mouse
pointer over the Viewer. Now go back to view the color channels by
pressing A again.

Normally, as in most other compositing software, Nuke processes the
whole image. If you add a Blur to this image, every pixel is calculated. But
that would be wasteful, because blurring all that blackness at the top of
the image won’t change that part of the image at all. You can tell Nuke
which part of the image to process and which part to disregard without
changing the resolution. You do this with a bounding box, the rectangular
area that defines the area of the image that Nuke should process. Pixels
outside the bounding box should not be processed and should remain
black.

3. From the Transform toolbox, attach a Crop node after Read1.

4. Grab the top edge of the Crop controls and drag it to frame the pilot.

5. Close all Properties panels by clicking the Empty Properties Bin button
to hide the controls.

The dotted line that formed where you placed the top edge of the Crop
controls is the bounding box (FIGURE 3.7). The numbers at the top
right in Figure 3.7 indicate the top-right location of the bounding box, in
pixels, on the X and Y axes. The resolution of the image itself didn’t
change, but pixels outside the bounding box will not be considered for
processing.

FIGURE 3.7 The bounding box contains numbers to indicate its location
on the X and Y axes.

It takes a long time to animate the crop by changing the on-screen
controls and adding a keyframe on every frame, and it isn’t very precise,
so there’s a tool to automate the process: the CurveTool node.

6. Insert a CurveTool between Read1 and Crop1 by clicking Read1 and
then clicking CurveTool in the Image toolbox (FIGURE 3.8).

FIGURE 3.8 Inserting a CurveTool node after Read1


7. View the output of CurveTool1 by selecting it and pressing 1 on the
keyboard.

8. In CurveTool1’s Properties panel, choose Auto Crop from the Curve
Type drop-down menu.

9. Click Go! In the panel that opens, click OK.

The Auto Crop function is now looking for pixels that are black. Black
pixels surrounded by nonblack pixels will remain inside the new bounding
box. However, black pixels that don’t have any nonblack pixels between
them and the edge of the frame will be considered unimportant, because
they are adding nothing to the image, so the new bounding box will not
include them.

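A simplified sketch of that search in Python, for a single channel:
assuming row 0 is the bottom of the frame and the box edges follow the
X, Y, Right, Top convention used on the AutoCropData tab (this toy
function is an illustration, not Nuke's actual algorithm).

```python
def auto_crop_bbox(img):
    """Return the (x, y, r, t) bounding box of all nonzero pixels.

    img is a list of rows (row 0 at the bottom); r and t are
    exclusive, matching a crop box's right and top edges.
    """
    coords = [(x, y) for y, row in enumerate(img)
                     for x, v in enumerate(row) if v != 0]
    if not coords:
        return (0, 0, 0, 0)  # an entirely black frame
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

frame = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
print(auto_crop_bbox(frame))  # (1, 1, 3, 3)
```
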
10. When the process finishes (it may take some time), click the
AutoCropData tab.

Here you can see the four values (X, Y, Right, and Top) that are now
changing from frame to frame according to the location of nonblack
pixels. Keyframes were created on every frame, as you can tell by the
bright blue color of the Input fields (FIGURE 3.9).

FIGURE 3.9 The AutoCropData tab once the processing is complete

LINKING PROPERTIES WITH EXPRESSIONS


The CurveTool does not apply any operations; it only accumulates data.
To use the accumulated data, you need to use another node—in this case,
a Crop node. You have one already, you just need to tell the Crop node to
use the properties from the CurveTool’s AutoCropData tab. You will
create a link between CurveTool1’s AutoCropData property and Crop1’s
Box property using an expression.

Expressions are programmatic commands used to change a property’s
value. Expressions can be simple math functions (such as 2 + 4), more
complex math functions (sin (frame / 2)), or a set of available functions
that may be included as part of an expression, such as width for the
width of an image, or frame, which is the current frame number. A few of
these functions are explained throughout the book.

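To get a feel for how such an expression yields a different value on
every frame, here is a hypothetical Python evaluator. The names frame,
width, and sin mirror Nuke's expression variables, but the evaluator
itself is just for illustration; Nuke has its own expression engine.

```python
import math

def evaluate(expr, frame, width=2048):
    # Expose a few Nuke-like names to the expression, nothing else.
    scope = {"frame": frame, "width": width, "sin": math.sin}
    return eval(expr, {"__builtins__": {}}, scope)

print(evaluate("2 + 4", 1))            # 6
print(evaluate("width / 2", 1))        # 1024.0
print(evaluate("sin(frame / 2)", 10))  # changes as the frame changes
```
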
You can type an expression by hand (in fact, you will do this in Chapter 5)
or you can make one by clicking and dragging, as you will do now:

1. Double-click Crop1 and then double-click CurveTool1 to display both
their Properties panels, one on top of the other.

2. Ctrl/Cmd-click-drag down from CurveTool1’s AutoCropData animation
menu to Crop1’s Box Animation menu and release the mouse button
(FIGURE 3.10).

FIGURE 3.10 Dragging from one animation menu to another while holding
Ctrl/Cmd creates a linking expression.

Crop1’s Box property’s four Input fields turn light blue, which means
you have successfully linked Crop1’s property to CurveTool1’s property
(FIGURE 3.11).

FIGURE 3.11 The light blue color of the Input fields shows
there is animation.

Let’s learn a little bit more from this and see what this expression looks
like.

3. From Crop1’s Box property’s Animation menu, choose Edit Expression
to display the Expression panel (FIGURE 3.12).

FIGURE 3.12 Crop1’s Box property’s Expression panel

The Expression panel shows the four expressions for the four
subproperties of the Box property (X, Y, R, and T).

The expression for each of the four subproperties is
parent.CurveTool1.autocropdata. The three parts of this expression are
separated by dots.

• The first, parent, tells the current property to parent to another
property, meaning copy values from another property.

• The second part tells which node to look in for the property, in this
case CurveTool1.

• The third part tells which property to copy from.

You can see this is a very simple command, one that you can easily enter
yourself. Simply use the node name and then the property name you need
to copy values from.

Let’s see the result of this expression in the Viewer—but first close the
open Expression panel.

4. Click Cancel to close the open Expression panel.

5. Clear the Properties Bin by using the button at the top left of the
Properties Bin.

6. View Crop1’s output by selecting it and pressing 1 on the keyboard
(FIGURE 3.13).

FIGURE 3.13 The bounding box is now a lot tighter.

In Figure 3.13 you can see that the bounding box has changed
dramatically. If you move back and forth a couple of frames, you will see
that the bounding box changes to engulf only the areas where something
is happening. Having a bounding box that engulfs only the pilot, thereby
reducing the size of the image that is being processed, speeds up
processing.

SLAPPING THINGS TOGETHER: FOREGROUND OVER BACKGROUND
To achieve the desired results, the pilot needs to be composited over a
background. Let’s bring it in.

1. Using a Read node, read another sequence from your hard drive:
chapter03/sky.####.jpg.
2. Load Read2 into the Viewer, make sure you are viewing the RGB
channels, and click the Play button. When done, stop, and go to frame 1 in
the Timebar (FIGURE 3.14).

FIGURE 3.14 The background image

This is your background, some sky for the daredevilish flight of our little
pilot.

When working with 3D renders, a good first step is to start with what’s
technically called a slap comp. Slap comps are exactly what they sound
like—a quick comp slapped together to see if things are working correctly.
Slap comps tell you whether the animation is working with the
background, whether the light direction is correct, and so on.

3. Select Crop1 and press M on the keyboard to create a Merge node.

4. Connect Merge1’s B input to Read2, and view Merge1 in the Viewer.

5. While hovering the mouse pointer over the Viewer, press Ctrl/Cmd-P
to switch to Proxy mode.

Proxy mode, by default, shrinks all images to half resolution, reducing
render and playback times considerably. Proxies are discussed more in
Chapter 8.

6. Click Play in the Viewer to view your slap comp (FIGURE 3.15).

FIGURE 3.15 The slap comp in all its glory

You can see that the result is already not bad. The good thing is that the
pilot and the sky are moving in a similar way. However, the pilot still isn’t
looking as if he belongs in the scene. His color is all wrong for the
background and the scene simply doesn’t feel as if he’s there, shot on
location, which is what compositing magic is all about.

7. Click the Stop button in the Viewer and go back to frame 1 in the
Timebar.

8. You no longer need the slap comp, so select Merge1 and delete it.

BUILDING THE BEAUTY PASS


Next you’re going to take all the passes and start building the beauty pass
as it was built in the 3D renderer. When you’re finished doing that, you’ll
have full access to all the building blocks of the image, which means you’ll
be able to easily change some things, such as the color and brightness of
passes, to make the comp look better.

You build the passes like this: First combine the GI and Lgt passes using a
Merge node’s plus operation; then multiply the result with the Col pass
using another Merge node. Then merge with a plus operation in this
order: the Ref, Spc, and SSS passes. This is the way this image was put
together in the 3D software that created it, so this is how you recombine it
in Nuke.
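
Per pixel, that recipe can be sketched in Python. The pass values here
are made-up single-channel numbers for the demo; the real Merge nodes
operate on whole channel sets.

```python
# Rebuilding the beauty pass per pixel, following the recipe above:
# (GI plus Lgt), multiplied by Col, then plus Ref, Spc, and SSS.

def rebuild_beauty(gi, lgt, col, ref, spc, sss):
    light = gi + lgt       # Merge (plus): combine the light sources
    diffuse = light * col  # Merge (multiply): light times color
    return diffuse + ref + spc + sss  # Merge (plus) the remaining passes

# (0.2 + 0.5) * 0.6 + 0.1 + 0.05 + 0.0 = 0.57
print(rebuild_beauty(gi=0.2, lgt=0.5, col=0.6, ref=0.1, spc=0.05, sss=0.0))
```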

There are two ways to work with channel sets in Nuke. The first is by
working down the pipe, and the second is by splitting the tree.
Working down the pipe
To start layering passes, first combine the Lgt and GI passes with a plus
operation. Let’s connect a Merge node.

1. Select Crop1 and insert a Merge node after it by pressing the M key.

Merge1 has two inputs. The B input is not connected. You want to
combine two different channel sets that live inside the Read1 branch, so
you simply connect the B input to the same output that the A input is
connected to—the output of Crop1.

2. Move Merge1 below Crop1, connect Merge1’s B input to Crop1, and view
it in the Viewer (FIGURE 3.16).

FIGURE 3.16 Both the A and the B inputs are connected to Crop1.

So how are you going to tell Merge1 to use different channel sets for this
operation? So far you have been combining the RGBA of images, which is
the default state of the Merge node. However, you can change that using
the pull-down menus for each input (FIGURE 3.17).

FIGURE 3.17 The pull-down menus control which channel sets are
combined.

Let’s go ahead and use these pull-down menus to combine the GI and Lgt
passes.

3. Make sure Merge1’s Properties panel is loaded in the Properties Bin.

4. From the A channel’s pull-down menu, choose the Lgt channel set.

5. From the B channel’s pull-down menu, choose the GI channel set.

6. Since you need to add the two images, change the Operation property,
using the pull-down menu at the top, to Plus (FIGURE 3.18).

FIGURE 3.18 Merge1’s Properties panel should look like this now.
At this point, you have joined together all your light sources to a single
light pass (FIGURE 3.19).

FIGURE 3.19 Merge1’s output should look like this in the Viewer.

7. Insert another Merge node after Merge1.

8. Connect Merge2’s B input to Merge1 as well (FIGURE 3.20).

FIGURE 3.20 Inserting and connecting Merge2 should end up looking like this.

The output pull-down menu for Merge1 was set to RGBA (the default).
You can change that if you want the output to be placed elsewhere. The
combination of the Lgt and GI passes is now the RGBA. You want to
multiply that with the Col pass.

9. From the A channel’s pull-down menu, choose the Col channel set.

10. Since you want to multiply this pass with the previous passes, choose
Multiply from the Operation pull-down menu (FIGURE 3.21).

FIGURE 3.21 Merge2’s Properties panel should look like this.

What you just created is a diffuse pass. A diffuse pass is the combination
of the light and color of an object. To see what you did, use the mix slider
to mix the A input in and out (FIGURE 3.22).


FIGURE 3.22 Using the mix slider, you can make the A
input transparent.

11. Play with the mix slider to see the Col pass fade in and out, and to see
what it does to the image. When you’re finished, set the mix slider to 1.
You should now have something that looks similar to FIGURE 3.23.

FIGURE 3.23 The tree so far, going down the pipe

You can continue doing this until you combine all the passes. However, I
find working in this way restricts the advantage of the tree—having easy
access to every part of it. I prefer to have the Merge node just combine
two streams and then have the streams available outside the Merge node.
Shuffling the channel sets inside the Merge node restricts that. Having
everything out in the open, as it will be as you build this in the next
section, makes everything a lot more visual and apparent, and this gives
you easier access and better control.

12. If you want, save this script using File > Save As, and give it a new
name.

13. Click and drag to create a marquee to select Merge1 and Merge2, and
then press the Delete key to delete them.

Splitting the tree


This time around, you will make branches for each pass as you go. This gives you instant access to every pass in your tree, which is very handy, but it also creates a very big tree. To avoid losing your bearings, be very careful about how you build the tree and where you place nodes.

One of the interface elements you’re going to use a lot is the Dot. The Dot
is a circular icon that enables you to change the course of a pipe, making
for a more organized tree.

1. Select Crop1 and press the . (period) key on your keyboard.

2. Select the newly created Dot, and then insert a Shuffle node from the
Channel toolbox.

3. Make sure you are viewing Shuffle1 in the Viewer, and then change the
In 1 property to GI.

4. From the In 2 property’s drop-down menu, pick RGBA instead of None.

5. Select the In 2 alpha to alpha check box to direct the alpha from input 2
to the alpha output (FIGURE 3.24).

FIGURE 3.24 Using Shuffle you can mix channels from multiple channel sets.
What you did here was take the R, G, and B from the GI channel set and
the alpha from the original RGBA channel set, and output these four
channels into a new RGBA. You did this so that your GI pass, which
doesn’t come with an alpha channel, will have the correct alpha channel.
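Conceptually, the Shuffle node just routes named channels into a new RGBA. Here is a small Python sketch of that routing, with channel sets as plain dictionaries and made-up pixel values:

```python
# Two channel sets living in the same stream (hypothetical pixel values):
layers = {
    "GI":   {"red": 0.3, "green": 0.25, "blue": 0.2},             # no alpha rendered
    "rgba": {"red": 0.5, "green": 0.4, "blue": 0.3, "alpha": 1.0},
}

# Shuffle1: RGB from the GI set (In 1), alpha from the rgba set (In 2):
output_rgba = {
    "red":   layers["GI"]["red"],
    "green": layers["GI"]["green"],
    "blue":  layers["GI"]["blue"],
    "alpha": layers["rgba"]["alpha"],  # the "In 2 alpha to alpha" check box
}
print(output_rgba)
```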

Because you will do this a lot, it’s helpful to name each Shuffle node according to its input channel set. You could simply rename the node itself, but that is not advisable. Instead, each node has a label, accessed via the Node tab in its Properties panel.

NOTE

Text fields in Nuke generally use the scripting language TCL.


Nuke used to be strictly TCL but added Python as well.
However, TCL remains useful for Property panel
manipulation. Check out www.tcl.tk (http://www.tcl.tk) as a good
source for learning TCL scripting. Chapter 12 covers scripting
in more detail.

6. Switch to the Node tab in Shuffle1’s Properties panel.

Whatever you type in the Label Input field will appear on the node in the
DAG.

7. In the Label field, enter GI.

You can see that GI appears under Shuffle1 (FIGURE 3.25).

FIGURE 3.25 Whatever you type in the Label box appears on the node in the DAG.

You can simply type this for every pass. However, you can also use a little
scripting to automate this process.

8. In the Label property’s field, replace GI with [knob in].

Breaking down what you typed, the brackets mean you are writing a TCL
script. The word knob means you are looking for a knob (knob =
property). The word in is the name of the knob (the pull-down knob, in
this case).

The result of this script shows the value of the property called In.
Therefore, you will see that the node in the DAG still appears as GI.

9. To make this a little more readable, add a space and the word pass
after the script, so it reads like this: [knob in] pass (FIGURE 3.26).

FIGURE 3.26 The label should display this text.

The word pass is just a word—because it’s outside the TCL brackets, it will
simply appear as the word (it’s not part of the script). The node in the
DAG now shows the label GI pass (FIGURE 3.27).


FIGURE 3.27 Shuffle1 in the DAG will display the new label
with the TCL script resolved.

Now, just by looking at the Node Graph, you can see that this is your GI
pass branch. You will have a similar setup for your other passes.

Because the passes came in from the 3D software as premultiplied, and by multiplying and adding passes together you are actually doing color
correction operations, you need to unpremultiply each of the passes
before doing almost anything else with them. That’s why you need to
make sure you have an alpha channel for the pass by shuffling the
rgba.alpha channel to the new rgba.alpha channel. The node Unpremult
negates any premultiplication.
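The math behind this is worth seeing once: Unpremult divides RGB by alpha, Premult multiplies it back, and grading in between keeps the correction confined to the covered part of each pixel. A hedged Python sketch with made-up values (not Nuke code):

```python
# Grade a premultiplied pixel the right way: unpremult, grade, premult.
def grade_unpremultiplied(rgb, alpha, lift):
    un = rgb / alpha if alpha > 0 else 0.0  # Unpremult
    return (un + lift) * alpha              # grade (a simple lift), then Premult

rgb, alpha, lift = 0.4, 0.5, 0.1                # a half-transparent edge pixel
print(grade_unpremultiplied(rgb, alpha, lift))  # the lift scales with coverage
print(rgb + lift)                               # naive grade: lifts the black surround too
print(grade_unpremultiplied(0.0, 0.0, lift))    # fully transparent pixel stays black
```

With the naive version, a fully transparent pixel (rgb = 0, alpha = 0) would come out at 0.1 instead of staying black, which is exactly the edge pollution the Unpremult/Premult pair avoids.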

10. Insert an Unpremult node from the Merge toolbox after Shuffle1.

Use the GI pass as your background for all the other passes. It will serve
as the trunk of your tree. The rest of the passes will come in from the right
and connect themselves to the trunk of the tree. To do the next pass, you’ll
first create another Dot to keep the DAG organized.

11. While nothing is selected, create a Dot by pressing the . (period) key
on your keyboard.

12. Connect the newly created Dot’s input to the previous Dot.

13. Drag the Dot to the right to create some space (FIGURE 3.28).

FIGURE 3.28 Keeping a node organized with Dots

14. Hold down Ctrl/Cmd and drag the yellow Dot in the middle of the
pipe between the two Dots to create a third Dot. Drag it to the right and
up so it forms a right angle between the two Dots (FIGURE 3.29).

FIGURE 3.29 It’s easy to snap elements to right angles in the DAG.

Now you will create the content for this new branch by copying everything, changing a few properties, and rearranging a few nodes.

15. Select both Shuffle1 and Unpremult1, and press Ctrl/Cmd-C to copy
them.

16. Select the bottom-right Dot and press Ctrl/Cmd-V.


17. Arrange Shuffle2 and Unpremult2 so that Shuffle2 is to the left of the
Dot and Unpremult2 is to the left of Shuffle2 (FIGURE 3.30).

FIGURE 3.30 After arranging the nodes, the tree should look like this.

18. Double-click Shuffle2 to display its Properties panel and choose Lgt
from the drop-down menu (FIGURE 3.31).

FIGURE 3.31 Picking another pass by choosing Lgt from the drop-down menu in the Shuffle node

Notice how the label changed to reflect this in the DAG, thanks to our TCL
script (FIGURE 3.32).

FIGURE 3.32 The TCL script made the label change automatically.

19. Select Unpremult2 and press M on the keyboard.

20. Connect Merge1’s B input to Unpremult1.

21. Make sure Merge1’s Properties panel is open and change the
Operation property’s drop-down menu from Over to Plus (plus means to
add).

22. Move Merge1 so it’s directly underneath Unpremult1 and in a straight line from Unpremult2 (FIGURE 3.33).

FIGURE 3.33 Merge1 should be placed like so.

The two lights are added together much in the same way as light works in
real life (FIGURE 3.34).

FIGURE 3.34 The Lgt pass added. The highlighted line will
be duplicated for each additional pass.

Now for the next pass—the Col pass (short for color pass).

23. Select the horizontal line of nodes that starts with the Dot and ends
with Merge1; press Ctrl/Cmd-C to copy it (it’s highlighted in Figure 3.34).

NOTE

Holding Shift while pasting branches the copied group of nodes rather than inserting it into the selected node’s pipe.

24. Select the Dot at the bottom right of the tree and press Ctrl/Cmd-Shift-V to paste (note the added Shift).

25. Drag the newly pasted nodes down a little and connect Merge2’s B
input to Merge1.

26. Double-click Shuffle3. Click Shuffle3’s In property’s drop-down menu and choose Col.

27. Double-click Merge2. From the Operation property’s drop-down menu, choose Multiply.

You have just multiplied the Col pass with the composite. This resulted in
the diffuse pass, which is the lit textured object. FIGURE 3.35 shows the
tree making up the diffuse pass. Next you’ll finish adding the rest of the
passes.

FIGURE 3.35 With a few changes, each of these lines will represent a pass.

Repeat this process three more times to connect the reflection (Ref), specular (Spc), and subsurface scattering (SSS) passes. You will copy the same branching line of nodes, paste it back onto the last bottom-right Dot, connect the Merge node, and change the Shuffle node’s In property, and sometimes the Merge node’s Operation property.

28. Repeat the process. Select the last line starting from the bottom-right
Dot and ending in the bottom-most Merge node, copy, and then branch-
paste to the bottom-right Dot. Connect the new Merge’s B input to the
previous Merge node.

29. This time, change the Shuffle node’s In property to Ref. From
Merge3’s Operation property, choose Plus.

30. Repeat the process, this time for the Spc pass. The Merge operation
for the Spc pass needs to be Plus.
31. Go through the process a third time. Choose SSS for the Shuffle node’s
In property, and make sure the Merge operation is set to Plus.

32. Make sure you are viewing the output of the last Merge node, which
should be called Merge5 (FIGURE 3.36).

FIGURE 3.36 The main trunk of the tree is always the B input.

Notice the main trunk of your tree always follows the B input. This is
important, as this is the way the Merge node operates. Now try this:

33. Select Merge5 by clicking it.

34. Press Shift-X to swap your A and B inputs.

In the Viewer, you can see that the image doesn’t change; however, let’s
see what happens when we change the Mix property in Merge5.

35. Play with Merge5’s Mix property.

You’d expect the SSS pass to fade in and out. Instead, the entire image with the exception of the SSS pass fades in and out. This is because the Merge node treats the B input as the background image (the one that’s not changing) and the A input as the added material.

36. With Merge5 selected, press Shift-X again to swap back the inputs.

37. Play with Merge5’s Mix property again. When you’re done, leave it on
1.

The order of inputs—A and B—is important for another reason: the math
order of operations. For the Plus operation, there’s no difference as 1 + 3
is the same as 3 + 1. But for the Minus operation, the order means A
minus B, which will work on some occasions, but most of the time, you
want to subtract A from B, as B is your untouched background. The
operation From does exactly what Minus does, only the other way around:
It subtracts A from B.
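A quick sketch of the per-channel math for the operations used in this chapter makes the A/B distinction concrete (plain Python, not Nuke code):

```python
# Merge operations, per channel, for foreground A and background B:
def merge(operation, a, b):
    ops = {
        "plus": a + b,        # order doesn't matter
        "multiply": a * b,    # order doesn't matter either
        "minus": a - b,       # A minus B
        "from": b - a,        # subtracts A from B (Minus reversed)
    }
    return ops[operation]

print(merge("minus", 1.0, 3.0))  # -2.0
print(merge("from", 1.0, 3.0))   # 2.0
```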

Using these passes, you completed the basic building of the pilot’s beauty
pass. Your tree should look like FIGURE 3.37.

FIGURE 3.37 The tree at the end of the beauty pass build

USING THE SHUFFLECOPY NODE


Now that you have finished building the pilot’s beauty pass, you need to
premultiply your composite to get rid of those nasty edges (FIGURE
3.38) and return to a premultiplied image that can be composited with
other elements. To do this, you need to make sure you have the right
alpha channel.


FIGURE 3.38 Nasty edges

If you look carefully, you might notice that as you added and multiplied
various images together, you were also doing the same to the alpha
channel. This means the alpha channel you now have in the pipe is a
massively degraded one. You need to revert to the original unchanged
one. The original alpha exists elsewhere in the tree—on the right side of
the tree, where all the Dot nodes are, where you source all your branches.

To copy the channel from one branch to another, you use a node that’s
very similar to the Shuffle node you’ve been using. This one is called
ShuffleCopy, and it allows you to shuffle channels around from two inputs
instead of just one. You need to copy the RGBA pass’s alpha channel from
the branch on the right to the RGBA’s alpha channel to your trunk on the
left.

1. Select Merge5 (the one with the SSS pass in the A input) and insert a
ShuffleCopy node after it from the Channels toolbox.

2. Connect ShuffleCopy1’s 1 input to the last Dot on the right, then Ctrl/Cmd-click the yellow Dot on the pipe to create another Dot and create a right angle for the diagonal pipe (FIGURE 3.39).

FIGURE 3.39 The ShuffleCopy branch should look like this.

By default, the ShuffleCopy node is set to copy the alpha channel from the
selected channel set in input 1 and the RGB from the selected channel set
in input 2, which is what you’re after.

Since you started this tree by unpremultiplying all the passes, it’s time to
premultiply again now that you have a correct alpha channel.

3. Select ShuffleCopy1 and connect a Premult node from the Merge toolbox after it.

4. View Premult1 in the Viewer. Make sure to display the last frame.

You have now rebuilt the beauty pass with the correct alpha. You also
ensured that you have easy access to all the various passes for easy
manipulation (FIGURE 3.40).

FIGURE 3.40 The beauty pass, premultiplied

PLACING CGI OVER LIVE BACKGROUND


Now you’re going to composite the pilot tree over the background. Only
after you do this will you really be able to gauge your composite and use
your complex CGI tree to its full potential. You will then be able to make
the pilot feel more like he’s in the scene. Treat this whole tree as a single
element—the foreground.

1. Click Premult1 and insert another Merge node after it.



2. Bring Read2 (the background) down to the area at the bottom of the
tree and connect Merge6’s B input to it.

3. Make sure you are viewing Merge6 to see the whole tree’s output
(FIGURE 3.41).
FIGURE 3.41 The pilot over the background

The pilot doesn’t look like he was shot with the same camera, and at the same time, as the background. There’s a lot we can do to fix this. Let’s start with some basic tree manipulation.

MANIPULATING PASSES
Now, by playing with the various branches, you can change the colors of
elements that make up the pilot image—such as the amount of light falling
on the objects, how bright or sharp the specular highlights are, and so
forth.

The overall feeling here is that the pilot is too warm in relation to the sky.
Let’s color correct the Lgt pass and the Spc pass to produce cooler colors.

1. Click Unpremult2—this should be the one downstream from the Lgt Pass Shuffle node—and press the G key to insert a Grade node.

Because you’re after a cooler color that better matches the sky, it would be
a good idea to start by picking the color from the Viewer and then making
it a little brighter.

2. Click the Color swatch next to the gain slider (FIGURE 3.42).

FIGURE 3.42 Honestly, they should have thought of a shorter name for this little button.

3. Hold Ctrl/Cmd-Shift in the Viewer and drag a box around the clear
blue sky area between the clouds.

NOTE

Three modifier keys change the way sampling colors from the
Viewer works. The Ctrl/Cmd key activates sampling. The Shift
key enables creating a box rather than a point selection. The
resulting color is the average of the colors in the box. The
Alt/Option key picks the input image rather than the output
image—meaning, picking colors before the Grade node
changes them.

4. Click the Color swatch again to turn it off.

5. To make the color brighter, click the Color Picker button for the Gain
property to display the In-panel Color Picker (FIGURE 3.43).

FIGURE 3.43 The Color Picker button, in case you forgot what it is

6. Drag the far-right slider (the I from TMI) up so that the Green slider
reaches about 1.0.

7. Hold down Shift-Alt and click and drag in the color wheel on the left to
push saturation up a little, then close the In-panel Color Picker.

I ended up with R = 0.725, G = 0.8, and B = 1.05.

8. Ctrl/Cmd-click the Viewer to remove the red sampling box.



This makes the part of the light that’s lighting the object a little cooler.

Let’s do this to the specular pass as well. It contributes a lot to the color of the light falling on the object, as the object receives a lot of specular highlight. I also want to reduce the overall strength of the specular pass, so I’ll make it darker overall.

9. Click Unpremult5 to select it and insert another Grade node by pressing G.

10. Manipulate the Gain properties until you are happy with the result. I
have R = 0.8, G = 0.825, and B = 1.

Next, add a little softening to the specular pass. You’re going to use a blur,
so you need to do this before the Unpremult operation. Remember:
Always apply filters to premultiplied images.

11. Move Grade2 and Unpremult5 to the left to clear up some room
between them and the Spc pass Shuffle node (should be Shuffle5).

12. Insert a Blur node after Shuffle5 by pressing B on the keyboard.

13. Change Blur1’s Size property to 3.

To make the specular pass a little weaker, use Merge4’s Mix property to
mix it back a little.

14. Make sure Merge4’s (the Merge that’s adding the specular pass)
Properties panel is open by double-clicking it.

15. Bring down the Mix property to a value of around 0.8.

The specular pass is softer now and sits better with the other passes. The specular part of your tree should look like FIGURE 3.44.

FIGURE 3.44 The specular pass branch

Let’s soften the reflection pass as well.

16. Move Unpremult4 to the left a bit to make space.

17. Copy Blur1 by clicking it and pressing Ctrl/Cmd-C.

18. Click Shuffle4 and press Ctrl/Cmd-V to paste.

Your tree should now look like FIGURE 3.45.

FIGURE 3.45 The tree up to now

USING THE MASK INPUT


Another great way to use channels in Nuke is with the Mask input. The
Mask input is that extra input some nodes—including all the Merge,
Color, and Filter nodes—have on the right-hand side. You can see the
Mask input in FIGURE 3.46.


FIGURE 3.46 This is the Mask input. It says so when you
click and drag it.

The Mask input limits the area where the node operates. It receives a
single channel and makes the node operate only where that channel is
white. Gray areas in the mask image are going to make the node operate
in that percentage—a 50% gray means a 50% effect, and black means no
effect. This makes it really easy to make a node affect only a specific area.
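The way a mask scales an effect is a simple linear mix. A minimal Python sketch of the idea, with illustrative values:

```python
# A masked node blends the processed result back over the original:
def apply_with_mask(original, processed, mask):
    return original * (1.0 - mask) + processed * mask

print(apply_with_mask(0.2, 0.8, 1.0))  # white mask: full effect
print(apply_with_mask(0.2, 0.8, 0.5))  # 50% gray: half effect
print(apply_with_mask(0.2, 0.8, 0.0))  # black: no effect
```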

NOTE

The Mask input has nothing to do with alpha channels and merging streams together. Its only purpose is to limit the area a node operates in.

Using an external image with the Mask input


Because the light that’s coming from the right (the Lgt pass) represents
the sun, I would like to break it up a little and have the light change as if
it’s traveling through clouds. Let’s make the clouds first.

1. With nothing selected, create a Noise node from the Draw toolbox and
view it in the Viewer.

The Noise node creates noise that can be controlled. You can make lots of
different things with this node, from cloud-looking elements to TV noise
elements. Let’s use this node to make a moving cloud-like texture.

2. Let’s start with the scale of the clouds. I’d like them bigger and wider.
So click the 2 button on the right of the X/Ysize property in Noise1’s
Properties panel.

3. In the two input fields that are exposed, type 700 in the first and 350
in the second (FIGURE 3.47).

FIGURE 3.47 Turning the single X/Ysize slider to two input fields

Let’s have the clouds roll a bit by changing the Z property.

4. Click once inside the Z property’s Input field, then press = on the
keyboard to bring up the Expression panel.

5. In the panel, type frame/20 and click OK.

6. Click Play in the Viewer (FIGURE 3.48).


FIGURE 3.48 Your noise-generated clouds should look
something like this now.

This expression’s value grows by 1/20 every frame. How do I know? Because I used the term frame in the expression. frame means the current frame number, so on each new frame the numerator is one higher. The number 20 simply controls the speed: changing it to 2 will make for a much faster-moving noise.
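If tabulating the expression helps, here is a Python stand-in for Nuke’s evaluation of frame/20 (the frame numbers are just examples):

```python
# frame/20: the Z value Nuke computes at each frame of the Timebar.
def z_value(frame, speed=20):
    return frame / speed

for f in (1, 2, 3, 65):
    print(f, z_value(f))  # grows by 1/20 = 0.05 per frame; frame 65 gives 3.25
```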

7. Click Stop in the Viewer and go back to frame 1.

Finally I want the clouds to be moving from left to right. We’ll create
keyframes to create the animation. You need to switch to the Transform
tab.

8. In Noise1’s Properties panel, click the Transform tab.

9. While on frame 1, right-click the Translate.x Input field and choose Set
key.

10. Go to the last frame, frame 65, and type 2500 in the Translate.x
Input field to create another keyframe.

11. In the Viewer, click Play to watch the cloud element you just created.
When you’re done, click Stop and go back to frame 1.

What’s left now is to use this image to darken the sunlight. You use
another Grade node for that, of course.

12. Click to select Grade1. That’s the one we used to make the sunlight
cooler.

13. Press G on the keyboard to create another Grade node.

14. Drag the Mask input out from the right-hand side of Grade3 to Noise1
and let go.

15. In Grade3’s Properties panel, change the Gain property to 0.5.

Look toward the bottom of Grade3’s Properties panel. You can see the
Mask input properties there. At the moment, they show the Mask is active
and the channel used is rgba.alpha, which is fine by me. FIGURE 3.49
shows this as well.

FIGURE 3.49 The Mask input is being used according to these properties.

It’s a good idea to always keep your tree organized. Do that now.

16. Move Grade3 so it’s to the left of Grade1, and move Noise1 above
Grade3, much like in FIGURE 3.50.

FIGURE 3.50 The Zen of tidy trees

Time to see how this affects your whole image.

17. Click Merge6, then press 1 to view your whole tree in the Viewer. Click
Play.

Can you see how the light moves on the side of the plane? Nice.

18. When you’re done, click Stop and go back to frame 1.

Masking down the pipe


Another way to mask is by using all those channels that are traveling
down the tree. As you pass from node to node, you keep with you all the
other channels that you might not see, but they are still there.

Let’s brighten the eyes of the pilot—the whole area that’s visible through
his glasses. First let’s see if I have a matte for that as one of my ID
channels.

1. View Unpremult1 in the Viewer.



2. From the Channel Set drop-down menu in Viewer1, choose ID1 to view
it.

This is an ID pass. It represents three separate mattes—one held in each channel. Let’s see them (FIGURE 3.51).
FIGURE 3.51 An ID pass

3. With the mouse pointer over the Viewer, press R, then G, then B, and finally B again to return to full color.

The three mattes are real black-and-white mattes, just as an alpha channel is a matte; they’re simply held in the RGB channels instead.

4. Go through the five ID passes available and look for a channel that
represents everything that’s behind the glasses. Looking carefully, you
find what we’re looking for in ID3.blue, as shown in FIGURE 3.52.

FIGURE 3.52 The area behind the glasses is shown in white in ID3.blue.

5. Switch back to viewing the RGBA channel set’s RGB channels.

In order to brighten all the light that’s affecting the shot, you are going to
need to work on both lighting passes at the same time. The right place to
do this is right after where you joined the lighting passes: Merge1.

6. Select Merge1 by clicking it, then press G to create another Grade node.

7. View the output of Grade4 in the Viewer by pressing 1.

8. Change the Gain property for Grade4 to 2.

So now you’ve brightened everything, not just the eyes. You have that
Mask input, but instead of having to connect it, you can simply call up a
channel that already exists in your stream.

9. From the MaskChannelInput property—the long line next to the Mask check box—choose ID3.blue (FIGURE 3.53). Take care not to change the property under it, (Un)premult By.

FIGURE 3.53 Choosing a single channel from the stream to be a Mask input

10. Move the Mix slider up and down to see what Grade4 does now. You
can see it works only on the area where ID3.blue is white.

11. Leave Mix on 1.

12. View the whole tree by clicking Merge6 and pressing 1 on the
keyboard.

We still want to have a bit of contrast in those eyes, so let’s make some by
changing the Gamma value to a lower one.

13. In the Input field for Grade4’s Gamma property, type 0.65.
The glass now feels glossier, and the eyes pop out a little more—which is
great (FIGURE 3.54).

FIGURE 3.54 The eyes do get better looking after this treatment.

We’re done manipulating the look of the pilot for now. Only a few things
are left to do before the composition is ready. Let’s hit it.

USING AUXILIARY PASSES


So far, most of the passes used in the tree represent the look of the final
image. The passes are part of what makes up the physical nature (light
bouncing off a colored surface) of the image we see. Contrary to that, the
ID passes are not part of the actual look of the image. They can change
how the image looks, but they are not an actual part of the image. These
kind of passes are sometimes called auxiliary passes, and you can render
many of them in 3D software.

The ones we have with Read1 are the depth pass, ID passes, the normals
pass, and the motion vector pass. None of these actually makes up the
shader we built earlier, but they all aid the compositor in fixing problems
and reaching a better looking composite.

Adding motion blur with the motion vector pass


The motion vector pass defines the speed at which and the direction that
every pixel in the image is traveling. With that information it is possible to
create a blurring effect that mimics motion blur.
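The underlying idea is easy to sketch: for each pixel, average samples taken along that pixel’s motion vector. This toy one-dimensional version is only an illustration of the gather-style approach, not Nuke’s actual VectorBlur algorithm:

```python
# Toy 1-D vector blur: average a few samples along each pixel's motion vector.
def vector_blur(row, vectors, samples=4):
    out = []
    for x, v in enumerate(vectors):
        acc = 0.0
        for s in range(samples):
            t = (s / (samples - 1) - 0.5) * v           # step from -v/2 to +v/2
            xi = min(max(int(round(x + t)), 0), len(row) - 1)
            acc += row[xi]
        out.append(acc / samples)
    return out

row = [0.0, 0.0, 1.0, 0.0, 0.0]   # one bright pixel
mv  = [0, 0, 2, 0, 0]             # it is moving 2 pixels in X
print(vector_blur(row, mv))       # the moving pixel's energy is smeared
```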

Motion blur is very processor intensive for a 3D program to render, but for 2D software, and Nuke in particular, achieving similar results is a cinch. Let’s see how.

1. To view the MV channel set, make sure you are viewing Premult1 in the Viewer, then from the Channel Set Viewer button, choose MV.

This is the MV pass. The Red channel describes the movement in X, and
the Green channel describes the movement in Y. The Blue channel isn’t
used.

2. Switch back to viewing the RGB.

3. With Premult1 selected, create a VectorBlur node from the Filter toolbox (FIGURE 3.55).


FIGURE 3.55 Inserting a VectorBlur node after the
Premult1 node

4. In VectorBlur1’s Properties panel, click the UV Channel’s drop-down menu and choose MV.
UV is what the motion vectors are called in this node. This is why you
chose the MV channel set for the UV property.

5. The Multiply property defines how much motion blur the image will
have overall. Play with this value to see the image changing. I ended up
leaving it on 7 for this shot.

Your character now has motion blur. Aren’t you happy? Now for more blurring...

Adding depth of field with the depth pass


Using information about each pixel’s distance from the camera, a node called ZDefocus can create a depth-of-field effect that mimics a real camera’s limitation of keeping only a specific area in focus. Let’s look at the depth pass.

1. With VectorBlur1 selected, press 1 to view it.

2. From the Channel Set drop-down menu, choose Depth.

Look at that: a pretty red image. But what’s that? It’s cropped at the top?
Why’s that (FIGURE 3.56)?

FIGURE 3.56 The cropped depth pass

At the beginning of this chapter, you cropped the image according to the
auto crop calculation of CurveTool1. This calculation was performed on
the RGBA channel set and so it didn’t include areas that are not black in
other channels. If you call up the depth pass from the bottom point in the
stream, you get a cropped version of that element. Therefore, to get an
uncropped version, you have to go back up to a place before the crop and
pick a pass from there.

3. At the top of the tree, with nothing selected, create a Dot and connect
its input to Read1. Drag it to the right until it passes the rest of the tree.

4. Select the new Dot and insert another Dot, connected to it. Drag the
new Dot down until it passes the rest of the tree (FIGURE 3.57).

FIGURE 3.57 Using Dots to keep the tree organized

Now that we have a branch with uncropped versions of things, let’s ShuffleCopy the depth pass into our main branch; ZDefocus expects to have the depth pass in the same branch as the RGB image.

5. Click Merge6, then insert a ShuffleCopy node after it. Connect ShuffleCopy2’s second input, called 1 (confusing, huh?), to the Dot node you just dragged to the bottom of the tree.
6. In ShuffleCopy2’s Properties panel, choose Depth in both the 1 In drop-down menu and the Out drop-down menu.

7. Click Z to Z, like in FIGURE 3.58.

FIGURE 3.58 Follow this image to depth of field glory.

8. View ShuffleCopy2 in the viewer. You should still be viewing the depth
pass and can see that it is no longer cropped.

9. Switch the Viewer to view the RGB channels again.

10. Insert a ZDefocus node from the Filter toolbox after ShuffleCopy2.

11. In the on-screen controls, drag the focal_point control to the eye of
the pilot (FIGURE 3.59).

FIGURE 3.59 The focal_point on-screen control

Using the on-screen focal_point control actually changes the Focus Plane
property in the Properties panel. Let’s animate it so that the eye is always
in focus.

12. Create a keyframe by clicking Set Key in Focus Plane’s Animation menu.

13. Go to frame 65 in the Timebar.

14. Drag the focal_point control to the eye again. Notice how this created
a new keyframe for the Focus Plane’s property.

15. To really get a strong depth of field, change both the Size and the
Maximum properties to 50.

16. Click Play in the Viewer.

Not too shabby (FIGURE 3.60).

FIGURE 3.60 The final image



This almost concludes this chapter. But one last thing.

Now you can look at this composite and see the result of all the work you
did. You have the pilot in the sky. Those with more experienced eyes may
notice that there is a lot more work to do to get this composite to look as
good as it can—but that’s more of a compositing lesson than a Nuke
lesson.

What’s important now, though, is that you have access to every building
block of what makes up the way the composite looks. Having the separate
passes easily accessible means your work will be easier from here on out.

For now, though, leave this composite here. Hopefully this process helped
teach you the fundamental building blocks of channel use and how to
manipulate a bigger tree (FIGURE 3.61).

FIGURE 3.61 Your final tree should look like this.


4. Color Correction
Wow. This is a bit naive, calling a lesson “Color Correction.” It
should be a whole course on its own. But this book is about
more than that, and limited space reduces color correction to a
single chapter. So let me start by explaining what it means.

Color correction refers to any change to the perceived color of an image. Making an image lighter, more saturated, changing
the contrast, making it bluer—all of this is color correction. You
can make an image look different as the result of a stylistic
decision. But you can also color correct to combine two images
so they feel like part of the same scene. This task is performed
often in compositing when the foreground and the background
need to have colors that work well together. However, there are
plenty more reasons you may change the color of an image. An
image might be a mask or an alpha channel that needs to have a
different color—to lose softness and give it more contrast, for
example.

Whatever reason you have for color correcting an image, the process will
work according to the way Nuke handles color. Nuke is a very advanced
system that uses cutting-edge technology and theory to work with color.
Therefore, it is important to understand Nuke’s approach to color so you
understand color correcting within Nuke.

UNDERSTANDING NUKE’S APPROACH TO COLOR


Nuke is a 32-bit float linear color compositing application. This is a bit of a fancy description, with potentially new words, so let me explain it bit by bit.

32 bit: That’s the number of bits used to hold colors. Most compositing and image-manipulation programs are 8-bit, allowing for 256 variations of color per channel (resulting in what’s referred to as “millions of colors” when the three color channels are combined). 8-bit is normally fine for displaying color but is not good enough for some operations and may produce unwanted results such as banding—an inaccurate display of gradients where changes in color happen abruptly instead of smoothly. 32-bit allows for a whopping 4,294,967,296 variations per channel—a staggering number that results in much more accurate display of images and calculation of operations. 8- or 16-bit images brought into Nuke are bumped up to 32-bit; that doesn’t add any detail, but it does enable better calculations from that point onward.
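To see banding in numbers, here is a small Python sketch (plain math, not Nuke code) that quantizes a very gentle gradient to 8-bit steps and counts how many distinct values survive:

```python
# Quantize a smooth 0-1 gradient to 8-bit precision and count the
# distinct values that remain. Fewer distinct values = visible banding.
def quantize_8bit(value):
    # An 8-bit channel can only store the integers 0..255.
    return round(value * 255) / 255

# A very gentle gradient: 1000 samples spanning just 2% of the range.
gradient = [0.50 + 0.02 * i / 999 for i in range(1000)]

float_levels = len(set(gradient))                        # float keeps them all
banded_levels = len(set(quantize_8bit(v) for v in gradient))

print(float_levels)   # 1000 distinct float values
print(banded_levels)  # only a handful of 8-bit steps, hence banding
```

The thousand smoothly changing float values collapse into roughly half a dozen 8-bit steps, which is exactly the abrupt stepping you see as banding in a subtle gradient.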

Float: Normally the color of an image is represented between black and white. In 8-bit images, for example, the 256 color variations are split evenly between black and white—so the value 0 is black, the value 255 is white, and the value 128 is roughly a middle gray. But what about colors that are brighter than white? Surely the whiteness in the middle of a lit lightbulb is brighter than a white piece of paper? For that reason, colors that are brighter than white are called super-whites. Colors that are darker than black are called sub-blacks (though I can’t think of a real-world analogy for those, short of black holes). Using 8 bits to describe an image simply doesn’t leave room for colors beyond black and white; these colors get clipped and are represented as plain black or white. In 32-bit float color, however, there is plenty of room, and these colors become representable. As mentioned before, 8-bit color is normally enough to display images on screen, and the computer monitor can still display only white—nothing brighter. However, it is still very important to have access to those colors beyond white, especially when you’re color correcting. When you darken an image that contains both a piece of white paper and a lit lightbulb, you can leave the lightbulb white while darkening the paper to a darker gray, resulting in an image that mimics real-world behavior and looks believable. Doing the same with a nonfloating image turns both the white paper and the lightbulb the same gray—which is unconvincing.
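The paper-and-lightbulb example reduces to a few lines of plain Python (illustrative values, not Nuke code):

```python
# In normalized values, white paper sits at 1.0; the core of a lit
# lightbulb might really be 4.0 (a super-white), but a clipped format
# cannot store anything above 1.0.
paper, bulb = 1.0, 4.0

clipped = {"paper": min(paper, 1.0), "bulb": min(bulb, 1.0)}
floating = {"paper": paper, "bulb": bulb}

darken = lambda v: v * 0.5  # a simple Multiply by 0.5

clipped_result = {k: darken(v) for k, v in clipped.items()}
# Float keeps the super-white; only the final display is clamped.
float_result = {k: min(darken(v), 1.0) for k, v in floating.items()}

print(clipped_result)  # paper and bulb both end up the same gray
print(float_result)    # paper darkens, bulb still displays as white
```

In the clipped version both objects land at 0.5 gray; in the float version the paper darkens to 0.5 while the bulb’s value of 2.0 still displays as full white, mimicking real-world behavior.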

Linear: Linear can mean lots of things, but here, in terms of color, I
mean linear color space. A computer monitor doesn’t show an image as
the image appears in reality, because the monitor is not a linear display
device. It has a mathematical curve called gamma that it uses to display
images. Different monitors can have different curves, but most often, they
have a gamma curve called sRGB. Because the monitor is not showing the
image as it appears in reality, images need to be “corrected” for this. This
is usually done automatically because most image capture devices are
applying an sRGB curve too, in the opposite direction. Displaying a
middle gray pixel on a monitor shows you only middle gray as it’s being
affected by the gamma curve. Because your scanner, camera, and image
processing applications all know this, they color correct by applying the
reverse gamma curve on this gray pixel that negates the monitor’s effect.
This process represents basic color management. However, if your
image’s middle gray value isn’t middle gray because a gamma curve has
been applied to it, it will react differently to color correction and might
produce odd results. Most applications work in this way, and most people
dealing with color have become accustomed to this. This is primarily
because computer graphics is a relatively new industry that relies on
computers that, until recently, were very slow. The correct way to manipulate imagery—in whatever way—is before a gamma curve has been applied: take a linear image; color correct, composite, and transform it; and only then apply the viewing gamma curve so the image displays correctly (the monitor’s response negates the curve you just applied). Luckily, this is how Nuke works by default.
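You can sketch that round trip in Python. Note this uses the simple gamma-2.2 approximation of sRGB; the real sRGB transfer function adds a small linear toe near black, so treat this as an illustration only:

```python
# Encode: linear -> display values (what capture devices bake in).
def to_display(linear, gamma=2.2):
    return linear ** (1.0 / gamma)

# Decode: display -> linear values (what a linear workflow does
# before any color math happens).
def to_linear(display, gamma=2.2):
    return display ** gamma

scene_value = 0.5                  # middle gray in linear light
encoded = to_display(scene_value)  # roughly 0.73 as stored in a file
decoded = to_linear(encoded)       # back to 0.5 for correct math

print(encoded)
print(decoded)
```

The point of the exercise: math such as Multiply behaves like real light only when applied to the decoded (linear) value, which is why Nuke linearizes on read and re-encodes for the Viewer.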

NOTE

Nuke color values are displayed and calculated in what’s called normalized values. This means that instead of defining black at a value of 0 and white at a value of 255, black is still 0, but white is 1. It’s a very easy thing to remember that makes understanding the math easier.

Still confused? Here’s a recap: Nuke creates very accurate representations of color and can store colors that are brighter than white and darker than black. It also calculates all compositing operations in linear color space, resulting in more realistic and more mathematically correct results.

Nuke has many color correction nodes, but they are all built out of basic
mathematical building blocks, which are the same in every software
application. The next section looks at those building blocks.

COLOR MANIPULATION BUILDING BLOCKS


Color correction is a somewhat intuitive process. Often compositors just
try something until they get it right. Understanding the math behind color
correction can help you pick the right tool for the job when you’re
attempting to reach a specific result—which is better than trial and error.
TABLE 4.1 explains most of these building blocks.

TABLE 4.1 Basic Color Correction Functions


Dynamic range
When dealing with color correction, I usually talk about dynamic range
and its parts. Dynamic range means all the colors that exist in your
image, from the darkest to the brightest color. The dynamic range changes
from image to image, but usually you are working with an image that has
black and white and everything in between. The parts of the dynamic
range, as mentioned, are split according to their brightness value as
follows:

The shadows or lowlights, meaning the darkest colors in the image

The midtones, meaning the colors in the image that are neither dark
nor bright

The highlights, meaning the brightest colors

In Nuke, and in other applications that support colors beyond white and
black (float), there are two more potential parts to the dynamic range: the
super-whites and the sub-blacks.

Let’s look at these building blocks in several scenarios to really understand what they do and why you might choose one over another.

1. Launch Nuke.

2. Bring in a clip called car.png by pressing R and navigating to the chapter04 directory.

3. Click Read1, then press 1 on the keyboard to view it.

It’s an image of a car. Did that catch you by surprise?

4. With Read1 selected, go to the Color toolbox and click Add in the Math
folder.

You have now inserted a basic color-correcting node after the car image.
Let’s use it to change the color of the image and see its effect.

5. In Add1’s Properties panel, click the Color Picker button to display the
In-panel Color Picker. Play with the R, G, and B colors to see the changes
(FIGURE 4.1).

FIGURE 4.1 Using the In-panel Color Picker

You can see that everything changes when you’re playing with an Add
node—the highlights, midtones, and even blacks (FIGURE 4.2). An Add
operation adds color to everything uniformly—the whole dynamic range.
Every part of the image gets brighter or darker.

FIGURE 4.2 The whole image is becoming brighter.

6. When you’re finished, close the In-panel Color Picker.

7. Select Read1 again and branch out by holding the Shift key and clicking
a Multiply node from the Math folder in the Color toolbox.

8. While Multiply1 is selected, press 1 on the keyboard to view it.


9. In Multiply1’s Properties panel, click the Color Picker button to display
the In-panel Color Picker and experiment with the colors (FIGURE 4.3).
FIGURE 4.3 The changes affect the highlights more than
the rest of the image.

You can see very different results here. The highlights get a strong boost
very quickly whereas the blacks are virtually untouched.

10. Repeat the previous process for the Gamma node. Remember to
branch from Read1 (FIGURE 4.4).

FIGURE 4.4 The midtones change the most when you’re changing gamma.

You can see that gamma deals mainly with midtones. The bright areas
remain untouched and so do the dark areas.
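The three experiments boil down to three one-line formulas. This plain-Python sketch (not the Nuke API) applies each one to a shadow, a midtone, and a highlight so you can see which part of the dynamic range moves:

```python
def add(v, amount):      return v + amount       # uniform shift
def multiply(v, factor): return v * factor       # scales away from 0
def gamma(v, g):         return v ** (1.0 / g)   # bends the midtones

pixels = {"shadow": 0.0, "midtone": 0.5, "highlight": 1.0}

for name, v in pixels.items():
    print(name, add(v, 0.1), multiply(v, 1.5), gamma(v, 2.0))

# Add moves everything by 0.1; Multiply leaves 0.0 alone and pushes
# 1.0 the hardest; Gamma leaves both 0.0 and 1.0 untouched and only
# moves the 0.5 midtone.
```

Running this reproduces exactly what the Viewer showed: Add shifts the whole range, Multiply pivots around black, and Gamma pivots around both black and white.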

You should now have three different, basic, math-based color correctors
in your Node Graph that produce three very different results, as shown in
FIGURE 4.5.

FIGURE 4.5 The results from changing Add, Multiply, and Gamma

Your DAG should look a little like FIGURE 4.6.

FIGURE 4.6 Branching three color correctors from a node

Let’s try some more color correction nodes.



NOTE

I find it really annoying that they chose to call the Contrast node RolloffContrast, especially since it makes opening it via the Tab key so much harder because typing “contrast” won’t display this node.

11. Select Read1 and then Shift-click RolloffContrast in the Color toolbox
to create another branch.

12. While viewing RolloffContrast1, open its Properties panel and play
with the Contrast value (FIGURE 4.7).

FIGURE 4.7 A high contrast value produces a high contrast image.

You can see how, when you increase the contrast above 1, the lowlights get
pushed down and the highlights are pushed up.

13. Keep the Contrast property above 1 and bring the Center value down
to 0.

The Center property changes what is considered to be a highlight or a lowlight. Colors above the Center value are considered bright and are pushed up, and colors below the Center value are considered dark and are pushed down.

Now you can see that the result of the RolloffContrast operation is very
similar to that of the Multiply node. In fact, they are virtually identical.
When setting the center value at 0, you lock that value in place. The value
0 is locked in place when you’re multiplying as well.

14. Bring the Center value up to 1.

You haven’t gone through an operation called Lift yet, but the
RolloffContrast operation is virtually the same as that operation. With
Lift, the value 1 is locked in place, and the further the values are away
from 1, the bigger the effect. You will go through Lift when you learn
about the Grade node later in this chapter.
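A basic pivot-style contrast can be written in one line, which makes the relationship to Multiply and Lift obvious. This is an assumption about the underlying math rather than Nuke’s exact RolloffContrast implementation:

```python
def contrast(v, amount, center):
    # Push values away from the center point by the contrast amount.
    return (v - center) * amount + center

def multiply(v, factor):
    return v * factor

# With center at 0, contrast collapses to a multiply: 0 stays locked.
for v in (0.0, 0.25, 0.5, 1.0):
    assert contrast(v, 1.5, 0.0) == multiply(v, 1.5)

# With center at 1, the white point is locked instead (a Lift-like move).
print(contrast(1.0, 1.5, 1.0))  # white stays put
print(contrast(0.0, 1.5, 1.0))  # blacks move the furthest
```

Sliding the center between 0 and 1 simply moves the locked pivot, which is exactly what you just observed in steps 13 and 14.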

To wrap up this part of the color introduction, here’s an overall explanation:

When dealing with color, usually you need to control the lowlights,
midtones, and highlights separately.

The Add operation adds the same amount of color to every part of the
dynamic range.

The Multiply operation multiplies the dynamic range by a value. This means that a perfect black doesn’t change, lowlights are barely touched, midtones are affected by some degree, and highlights are affected the most. It is worth mentioning that a Multiply operation is virtually the same as changing the exposure in a camera or increasing light. It is the most commonly used color operation.

The Gamma control is a specific curve designed to manipulate the part of the dynamic range between 0 and 1 (black and white, remember?), without touching 0 or 1.

Contrast is actually very similar to a Multiply, but has a center control. If you place the center point at 0, you get a Multiply node.


USING AN I/O GRAPH TO VISUALIZE COLOR OPERATIONS
Studying an I/O graph (input versus output graph) is a great way to
understand color operations. The X axis represents the color coming in,
and the Y axis represents the color going out. Therefore a perfectly
diagonal line represents no color correction. The graph shows what the
color operation is doing and the changes to the dynamic range.

To view an I/O graph like this, you can bring in a premade script I made.

1. Choose File > Import Script to load another script from the disk and
merge it with the script you have been building.

2. In the File Browser that opens, navigate to chapter04 and click IO_graph.nk to import it into your current script.

Notice that when you imported the script (which is only four nodes), all of
its nodes were selected. This is very convenient as you can immediately
move the newly imported tree to a suitable place in your Node Graph.

3. Make sure the imported tree is not sitting on top of your existing tree.
Move it aside to somewhere suitable, as in FIGURE 4.8.

FIGURE 4.8 You now have two trees in your DAG.

4. Make sure you are viewing the output of Expression1.

Here is a quick explanation of the script you imported, node by node:

• The first node is a Reformat node, which defines the resolution of your
image—in this case, 256×256 pixels. Notice that its input isn’t connected
to anything. This is a good way to set a resolution for your tree.

• The second node is a Ramp. This can be created from the Draw toolbox.
This node generates ramps—in this case, a black to white horizontal ramp
from edge to edge.

• The third node is a Backdrop node used to highlight areas in the tree.
You can find it in the toolbox called Other. It indicates where to add your
color correction nodes in the next step.

• The fourth and last node is an Expression node, a very powerful node found in the Color > Math toolbox. It lets the user write an expression with which to draw or manipulate an image. You can do a lot with this node, from simple color operations (such as adding or multiplying, though that would be wasteful) to complex warps or redrawing images altogether. In this case, it plots the values of the horizontal black-to-white ramp above it as white pixels at the corresponding height in the image. A gray value of 0.5 in the ramp generates a white pixel halfway up the Y resolution in the output of the Expression node. The leftmost pixel of the ramp is black and shows as a white pixel at the bottom of the screen; the middle pixel has a value of 0.5 and shows as a white pixel in the middle of the screen; the rightmost pixel has a value of 1 and draws a white pixel at the top of the screen. Together, these white pixels form a diagonal line (FIGURE 4.9). Changing the color of the ramp changes the line. This happens on each of the three color channels individually.
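The Expression node’s plotting trick can be mimicked in plain Python. This is only a conceptual sketch on a tiny 8×8 stand-in image, not the actual expression used in the script:

```python
SIZE = 8  # a tiny 8x8 stand-in for the 256x256 Reformat

# The horizontal ramp: column x holds the value x / (SIZE - 1).
ramp = [x / (SIZE - 1) for x in range(SIZE)]

# For each column, light up the row whose height matches the ramp
# value. Row 0 is the bottom of the image, as in the Viewer.
def io_graph(ramp, size=SIZE):
    graph = [[0] * size for _ in range(size)]
    for x, value in enumerate(ramp):
        y = round(value * (size - 1))
        graph[y][x] = 1
    return graph

graph = io_graph(ramp)
for row in reversed(graph):  # print the top row first, like a screen
    print("".join("#" if p else "." for p in row))
# An unmodified ramp draws a diagonal from bottom left to top right.
```

Insert any color operation between the ramp and the plot (as you are about to do in the Node Graph) and the diagonal bends into that operation’s curve.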


FIGURE 4.9 The I/O graph in its default state

Let’s start using this I/O Graph tree. You will insert a Color node in
between Ramp1 and Expression1 and look at the resulting I/O graph.

5. Insert an Add node from the Color > Math toolbox after Ramp1, as
shown in FIGURE 4.10.

FIGURE 4.10 Add2 has been inserted after Ramp1 and will
change your I/O graph.

6. Bring Add2’s value property to around 0.1.

You can see, as in FIGURE 4.11, that the Add operation changes the whole dynamic range of your graph and, therefore, of any image.

FIGURE 4.11 The whole graph is raised or lowered as a unit.

Let’s replace your Add node with a Multiply node. You’ve never done this before, so pay attention.

7. With Add2 selected, Ctrl/Cmd-click the Multiply node in the Color >
Math toolbox to replace the selected node with the newly created one.

8. Increase and decrease Multiply2’s value.


9. You can also open the In-panel Color Picker and change the RGB
channels individually (FIGURE 4.12).

FIGURE 4.12 The graph changes more the further away it is from 0.

The Multiply operation has more effect on the highlights than the
lowlights. When you are moving the slider, you can see that the 0 point
stays put, and the further away you go from 0, the stronger the effect
becomes.

Let’s try gamma. Maybe you don’t know what a gamma curve looks like.
Well, here’s your chance to learn.

10. Replace Multiply2 with a Gamma node by holding down Ctrl/Cmd and clicking Gamma in the Color > Math toolbox.

11. Load Gamma2’s In-panel Color Picker and play with the sliders for R,
G, and B.

You should now get a similar result to FIGURE 4.13.

FIGURE 4.13 Notice that only the middle part of the graph
moves.

The Gamma operation changes the midtones without changing the blacks
or whites. You can tell that the points at the furthest left and at the
furthest right are not moving.

Contrast is next.

12. Replace Gamma2 with a RolloffContrast node in the Color toolbox.

13. Bring RolloffContrast2’s contrast value to 1.5.

The contrast operation pushes the two parts of the dynamic range away
from one another (FIGURE 4.14).
FIGURE 4.14 A basic contrast curve. Though it’s not curvy,
it’s still called a curve.

14. Play around with RolloffContrast2’s center property. When you are
finished, set the value to 0.

Here you can see what actually happens when you play with the center
slider. It moves the point that defines where the lowlights and highlights
are. When leaving the center at 0, you can see that the curve is identical to
a Multiply curve (FIGURE 4.15).

FIGURE 4.15 A center value of 0 makes Contrast behave like Multiply.

15. Move the Center slider up to 1 (FIGURE 4.16).


FIGURE 4.16 Moving the slider up to 1 is actually a Lift
operation.

This is a Lift operation, which is covered later in this chapter. Your white
point is locked, while everything else changes—the opposite of Multiply.
RolloffContrast has one other property you can see in the I/O graph. This property, Soft Clip, is what gives the node its name: it smooths out the edges of the curve so that colors don’t suddenly clip to black or white with a harsh transition.

16. Move the center slider to 0.5 and start to increase the Soft Clip slider.
I stopped at 0.55.

FIGURE 4.17 shows what happens when you increase the soft clip. This
creates a much more appealing result, which is unique to this node.

FIGURE 4.17 This smooth edge to the curve is what gives RolloffContrast its name.

If you have a fair amount of experience, you must have noticed that the
I/O graph looks a lot like a tool you may have used in the past—something
applications such as Adobe After Effects call Curves. In Nuke, this is
called ColorLookup, and it is discussed in the next section.

CREATING CURVES WITH COLORLOOKUP


The ColorLookup node mentioned at the beginning of this lesson is actually an I/O graph you can control directly. This makes it the operation with the finest degree of control. However, it’s also the hardest to adjust and keyframe due to its more complicated user interface. After all, it’s easier to set and keyframe a slider than to move points on a graph.

Let’s try this node on both the image and the I/O graph itself.

1. Replace RolloffContrast2 with a ColorLookup node in the Color toolbox (FIGURE 4.18).

FIGURE 4.18 The ColorLookup interface

The interface for this node has the narrow curves list on the left and the
curve area on the right. Choosing a curve at left displays that curve at
right, which enables you to manipulate it. There are five curves. The first
controls all the channels, and the next four control the R, G, B, and alpha
separately. You can have more than one curve appear in the graph
window on the right by Shift-clicking or Ctrl/Cmd-clicking them in the
list.

2. Click the Master curve in the list at left.



In the graph (Figure 4.18) you can now see a curve (a linear one at the
moment). It has two points that define it, one at the bottom left and one at
the top right. Moving them will change the color. For example, moving the
top one will create a Multiply operation.
The ColorLookup’s strength lies in making curves that you can’t create
using regular math functions. However, to do this, you need to create
more points.

3. To create more points on the curve, Ctrl/Cmd-Alt/Option-click the curve in the graph window. It doesn’t matter where on the curve you click.

You’ve just created another point. You can move it around and play with
its handles. If you look at the I/O graph on the Viewer, you can see that it
mimics what you did in the ColorLookup node. They are exactly the same
(FIGURE 4.19).

FIGURE 4.19 Changing the curve is just like working with an I/O graph.

Now let’s use ColorLookup on the car image.

4. Select Read1 and Shift-click the ColorLookup node in the Color toolbox
to branch another output.

5. Click ColorLookup2 and press the 1 key to view it in the Viewer.

6. Play around with ColorLookup2’s curves. You can play with the
separate RGB curves as well.

I ended up with FIGURE 4.20—pretty drastic. But that’s the level of control you have with ColorLookup. The Reset button at bottom left allows me to reset this mess.

FIGURE 4.20 Extreme color correction courtesy of ColorLookup

COLOR MATCHING WITH THE GRADE NODE


The Grade node is built specifically to make some color correction
operations easier. One of these operations is matching colors from one
image to another.

When matching colors, the normal operation is to match black and white
points between the foreground and background (only changing the
foreground), then match the level of the gray midtones, and finally match
the midtone hue and saturation.

Using the Grade Node


The Grade node is made out of a few of the building blocks mentioned
earlier. TABLE 4.2 shows a list of its seven properties.


TABLE 4.2 Grade Node Properties

By using Blackpoint and Whitepoint to set a perfect black and a perfect white, you can stretch the image to a full dynamic range. When you have a full dynamic range, you can easily set the black point and white point to match those of the background using Lift and Gain. You then have Multiply, Offset, and Gamma to match midtones and for final tweaking.
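Nuke’s Grade node help describes the operation as a linear remap (A*x + B) followed by a gamma, with A and B derived from the seven properties. The sketch below follows that published formula, but treat it as a study aid rather than a byte-exact reimplementation:

```python
def grade(v, blackpoint=0.0, whitepoint=1.0, lift=0.0, gain=1.0,
          multiply=1.0, offset=0.0, gamma=1.0):
    # First remap blackpoint->lift and whitepoint->gain linearly...
    a = multiply * (gain - lift) / (whitepoint - blackpoint)
    b = offset + lift - a * blackpoint
    out = a * v + b
    # ...then apply gamma to shape the midtones (guard negatives).
    return out ** (1.0 / gamma) if out > 0 else out

# Stretch an image whose darkest pixel is 0.1 and brightest is 0.8
# to a full 0..1 dynamic range, as in the CurveTool exercise below:
print(grade(0.1, blackpoint=0.1, whitepoint=0.8))  # darkest -> 0.0
print(grade(0.8, blackpoint=0.1, whitepoint=0.8))  # brightest -> 1.0 (up to float rounding)
```

With all properties at their defaults the function is an identity, which is why an untouched Grade node passes the image through unchanged.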

Let’s practice color matching, starting with a fresh script.

NOTE

If Nuke quits altogether, just start it again.

1. If you want, you can save your script. When you are finished, press
Ctrl/Cmd-W to close the script and leave Nuke open with an empty script.

2. From your chapter04 folder, bring in two images: CarAlpha.png and IcyRoad.png.

3. Make sure that CarAlpha.png is called Read1 and IcyRoad.png is Read2. You can change the name of a node in the topmost property.

You will quickly composite these images together and then take your time
in color matching the foreground image to the background.

4. Select Read1 and press the M key to insert a Merge node after it.

5. Connect Merge1’s B input to Read2 and view Merge1 in the Viewer (FIGURE 4.21).

FIGURE 4.21 The car is over the dashboard—this is wrong.

The composite is almost ready. You just need to punch a hole in the foreground car so it appears to be behind the snow that’s piling up on the windshield. For that, you’ll bring in another image (you will learn how to create mattes yourself in Chapter 6).

6. From your chapter04 folder, bring in Windshield.png and display it in the Viewer.

Here you can see this is a matte of the snow. It is a four-channel image
with the same image in the R, G, B, and alpha. You need to use this image
to punch a hole in your foreground branch. To do this, you need another
Merge node.

7. Select Read3 and insert a Merge node after it.

8. Drag Merge2 on the pipe between Read1 and Merge1 until the pipe
highlights. When it does, release the mouse button to insert Merge2 on that pipe (FIGURE 4.22).
FIGURE 4.22 Inserting a node on an existing pipe

9. View Merge1 (FIGURE 4.23).

FIGURE 4.23 All that white on the dashboard shouldn’t be there.

You can see here that this is not the desired result. You still need to change the Merge2 operation to something that will cut the B image with the A image. This operation is called Stencil, and it is often used to combine mattes in just this way. The reverse operation, which is just as important, is called Mask: Mask holds image B inside the alpha channel of image A, whereas Stencil holds image B outside it.
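Stencil, Mask, and the default Over are all simple per-pixel formulas on premultiplied images (standard Porter-Duff style math; `a` below is the A input’s alpha):

```python
# Each function combines one premultiplied channel value from the
# A and B inputs; a is the A input's alpha at that pixel.
def over(A, B, a):  return A + B * (1 - a)   # normal composite
def stencil(B, a):  return B * (1 - a)       # keep B outside A's alpha
def mask(B, a):     return B * a             # keep B inside A's alpha

# The windshield matte (a = 1 where snow covers the glass) punches a
# hole in the car layer before the final over:
car, snow_alpha = 0.8, 1.0
print(stencil(car, snow_alpha))  # the car is removed where snow sits
print(mask(car, snow_alpha))     # the opposite: car kept only there
```

Reading the formulas, Stencil is just “multiply B by the inverse of A’s alpha,” which is exactly the hole-punching you need here.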

10. In Merge2’s Properties panel, choose Stencil from the Operation drop-down menu (you can see the result in FIGURE 4.24).

FIGURE 4.24 The car is now correctly located behind the dashboard.

Looking at your comp, you can see that it now works—aside from a color difference between the foreground and background. Let’s use a Grade node to fix this shift.

11. Select Read1 and press the G key to insert a Grade node after it.

As you know from Chapter 2, you are not allowed to color correct
premultiplied images. It is often hard to tell if an image is premultiplied
or not, but in this case it is. You can also look at the RGB versus the alpha
channels and see that the areas that are black in the alpha are also black
in the RGB.
Since you can’t color correct premultiplied images, you have to unpremult
them. You can do this in one of two ways: using an Unpremult node
before the color correction (in this case, Grade1) and then a Premult node
after it, or using the (Un)premult By Switch in your Color nodes. Let’s
practice both.
12. Bring Grade1’s Offset property up to around 0.4 (FIGURE 4.25).

FIGURE 4.25 The whole image turned brighter.

You can see that the whole image, except the dashboard area, turned
brighter, even though you are correcting only the car image. This is due to
the lack of proper premultiplication. Let’s do the two-node method first.
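A quick aside on why the whole image brightened: a premultiplied pixel stores rgb multiplied by alpha, so adding an offset raises even the areas where alpha is 0. A plain-Python sketch (not Nuke’s internals):

```python
def offset_premultiplied(rgb, alpha, amount):
    # Wrong: the offset also lands where alpha is 0 (the black surround).
    return rgb + amount

def offset_unpremultiplied(rgb, alpha, amount):
    # Right: divide by alpha, color correct, then multiply back.
    if alpha == 0:
        return 0.0                      # nothing to correct here
    return ((rgb / alpha) + amount) * alpha

rgb, alpha = 0.0, 0.0                   # an empty pixel around the car
print(offset_premultiplied(rgb, alpha, 0.4))    # the black area lifts!
print(offset_unpremultiplied(rgb, alpha, 0.4))  # the area stays empty
```

The divide/correct/multiply cycle is exactly what the Unpremult and Premult nodes (or the (Un)premult By switch) perform around the Grade.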

13. Click Read1 and, from the Merge toolbox, add an Unpremult node.

14. Click Grade1 and, from the Merge toolbox, add a Premult node and
look at the Viewer (FIGURE 4.26).

FIGURE 4.26 The proper premultiplication fixed the problem.

The problem has been fixed. This is one way to use proper
premultiplication. Let’s look at another.

15. Select Unpremult1 and Premult1, and press the Delete key.

16. In Grade1’s Properties panel, choose rgba.alpha from the (Un)premult By menu; this automatically selects the associated check box (FIGURE 4.27).

FIGURE 4.27 Using the (Un)premult By property does the same thing as the Unpremult and Premult nodes workflow.

The resulting image looks exactly as it did before (in Figure 4.26). This technique does exactly the same thing as the first method, just without using extra nodes. I usually prefer the first method, as it shows clearly in the DAG that the premultiplication issues are handled. However, if you look at Grade1 in the DAG now, you will see that, although the change is less noticeable, Grade1 shows that it is dividing the RGB channels by the alpha channel: the label now says “rgb/alpha” (FIGURE 4.28).


FIGURE 4.28 The node’s label changes to show the
Unpremult and Premult operations are happening inside
the node.

Let’s use the second method you have set up already. You will now be
color correcting an unpremultiplied image but outputting a premultiplied
image.

After a little rearranging, the tree should look like the one in FIGURE
4.29.

FIGURE 4.29 Your tree should look like this at this point.

17. Bring the Offset property back to 0.

Using CurveTool and Pixel Analyzer to match black and white points

Think back to the introduction of this section: how are you going to find the darkest and lightest points in these two images so you can match them together?

One way, which is valid and happens often, is by using your eyes to gauge
which are the darkest and brightest pixels. However, the computer is so
much better at these kinds of things, and it doesn’t have to contend with
light reflections on the screen and other such distractions.
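Conceptually, what the computer does is a straightforward scan over every pixel. Here is a minimal Python sketch of the idea, using a toy pixel list and a plain average for luminance (the real node’s luma weighting may differ):

```python
# Toy "image": a flat list of (r, g, b) tuples in normalized values.
pixels = [
    (0.10, 0.10, 0.10),
    (0.90, 0.80, 0.70),
    (0.00, 0.05, 0.02),
    (0.50, 0.50, 0.50),
]

def luma(p):
    # Simple average; real tools may weight the channels differently.
    return sum(p) / 3.0

brightest = max(pixels, key=luma)
darkest = min(pixels, key=luma)

print(brightest)  # the candidate white point
print(darkest)    # the candidate black point
```

Note how a solid black border would win the “darkest” search outright, which is exactly why the premultiplied black around the car has to be masked out first.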

The node to use for this is the CurveTool node, which you used in Chapter
3 to find the edges of the pilot element. You can also use this node to find
out other color-related stuff about your image. Let’s bring in a CurveTool
node to gauge the darkest and brightest point in the foreground and use
that data to stretch the foreground image to a full dynamic range.

1. Select Read1 and branch out by Shift-clicking a CurveTool node in the Image toolbox.

This time you are going to use the Max Luma Pixel Curve Type. This finds
the brightest and darkest pixels in the image.

2. In CurveTool1’s Properties panel, switch the Curve Type drop-down menu to Max Luma Pixel.

3. While viewing CurveTool1 in the Viewer, click the Go! button.


4. In the dialog box that opens, click OK since you want to process only
one frame.

5. Switch to the MaxLumaData tab and view CurveTool1 in the Viewer (FIGURE 4.30).

FIGURE 4.30 The MaxLumaData tab’s two sections

The purpose of this operation is to find the darkest and lightest pixels in the image. When switching to this tab you see two sections: one showing the lightest pixel (Maximum) and one showing the darkest pixel (Minimum). For each, the X and Y location and the RGB values are displayed.

Looking closely, you can see that the value of the minimum pixel is 0 in
every property. This is because this image is a premultiplied image, and as
far as CurveTool is concerned, all that black in the image is as much a part
of the image as any other part of it. You need to find a way to disregard
that black area. Let’s do the following.

6. From the Image toolbox, create a Constant node.

A Constant node creates a solid color with a chosen resolution.

7. Change Constant1’s Color value to 0.5.

8. Select Read1 and branch a Merge node from it by pressing Shift-M.

9. Connect Merge3’s B input to Constant1, and then view Merge3 in the Viewer (FIGURE 4.31).

FIGURE 4.31 The car is on a gray background.

What you did here was momentarily replace the black background with a middle gray one. This way, you get rid of the black and replace it with a color that is neither the darkest nor the lightest in the image. This new image is the one you want to gauge using the CurveTool. You’ll need to move the pipe coming in to CurveTool1 (FIGURE 4.32).

FIGURE 4.32 Moving the pipe from Read1’s output to Merge3’s output
10. Click the top half of the pipe going into CurveTool1, which will enable
you to move it to the output of Merge3.

11. Double-click CurveTool1 to display its Properties panel in the Properties Bin. Switch to the CurveTool tab (the first one), click Go! again, and click OK.

12. Switch to the MaxLumaData tab again and have a look (FIGURE 4.33).

FIGURE 4.33 The updated CurveTool1’s MaxLumaData tab

Now you can see that the minimum values are far from being all 0. You
are getting a true result that shows the lightest and darkest pixels. Let’s
make use of them.

13. Close all Properties panels in the Properties Bin to clear some room.

14. Double-click CurveTool1 and then double-click Grade1.

15. View Merge1 in the Viewer.

16. Click the 4 icon next to Grade1’s Blackpoint, Whitepoint, Lift, and
Gain to enable the four fields.

17. Ctrl/Cmd-drag from CurveTool1’s Minimum Luminance Pixel value’s Animation menu to Grade1’s Blackpoint Animation menu and release the mouse button to create an Expression link between them.

18. Do the same from Maximum Luminance Pixel value to Whitepoint (FIGURE 4.34).

FIGURE 4.34 The green arrow shows the Expression link between the two nodes.

The foreground image’s dynamic range now spans from a perfect black to
a perfect white. This enables you to push those colors to new black and
white points to match these points to the background image. You can use
another CurveTool to find those points in the background image, but just
for fun, let’s use the Pixel Analyzer for that this time.
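The black-point/white-point match you just wired up is, at its core, a linear remap. Here is a minimal sketch of that math in plain Python; the sample values are made up, and Nuke's Grade node layers multiply, offset, and gamma on top of this when those properties aren't at their defaults:

```python
def grade_remap(v, blackpoint, whitepoint, lift, gain):
    """Core linear remap of a Grade-style correction (multiply, offset, and
    gamma assumed at defaults). A value at the source blackpoint maps to
    lift; a value at the source whitepoint maps to gain; everything in
    between scales linearly."""
    return (v - blackpoint) / (whitepoint - blackpoint) * (gain - lift) + lift

# Hypothetical numbers: the foreground's darkest pixel (0.02) should land on
# the background's darkest value (0.05), and its brightest (0.9) on 0.95.
out_dark = grade_remap(0.02, blackpoint=0.02, whitepoint=0.9, lift=0.05, gain=0.95)
out_bright = grade_remap(0.9, blackpoint=0.02, whitepoint=0.9, lift=0.05, gain=0.95)
```

This is why linking the CurveTool's measured minimum and maximum into Blackpoint and Whitepoint works: it pins the foreground's measured extremes to the lift and gain targets.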

The Pixel Analyzer is a new panel in Nuke 8.0. It helps you analyze the
pixel values in your image.

19. From the Properties Bin’s Content menu, choose “Split Vertical”.

20. From the newly created pane’s Content menu, choose Pixel Analyzer
(FIGURE 4.35).


FIGURE 4.35 The Pixel Analyzer now lives at the bottom
right of the interface.

This is the aforementioned Pixel Analyzer.

21. While holding Ctrl/Cmd, drag on the screen (FIGURE 4.36).

FIGURE 4.36 Dragging on the screen creates this dotted line if the Pixel Analyzer is open.

Notice that this time a line of red dots appears on the screen. All those sampled points accumulate to fill the five color boxes in the Pixel Analyzer with values.

The five boxes represent:

• Current: the last pixel dragged

• Min: the lowest or darkest value dragged

• Max: the highest or brightest value dragged

• Average: the average pixel value of all pixels dragged

• Median: out of all values dragged, the middle value

Clicking any of the boxes shows the values of that color below—RGBA and
HSVL (Hue, Saturation, Value, Luminance).
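The five statistics can be sketched in a few lines of plain Python; the sample values here are invented, standing in for the pixels you dragged over:

```python
from statistics import median

# Hypothetical luminance samples picked up while dragging across the image.
samples = [0.12, 0.45, 0.33, 0.91, 0.45]

current = samples[-1]                  # Current: the last pixel dragged
minimum = min(samples)                 # Min: the darkest value dragged
maximum = max(samples)                 # Max: the brightest value dragged
average = sum(samples) / len(samples)  # Average: mean of all pixels dragged
middle = median(samples)               # Median: the middle value when sorted
```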

Dragging on the screen is all well and good, but the whole frame is what
you need to know about. There’s a feature for that too.

22. View Read2 in the Viewer.

23. From the Mode menu, choose Full Frame.


This option checks every pixel of the frame that’s currently in the Viewer
and returns the corresponding values in the five boxes. This is a really
quick way to find your black point and white point in a given frame.

You now have two sets of data to match to: new black points and white points. Let’s copy them to your Grade node.

24. Close all Properties panels in the Properties Bin to clear some room.

25. Double-click Grade1.

Because the Pixel Analyzer is a panel and not a node, you can’t link to it,
but you can very easily copy the values across from the Pixel Analyzer to
the property where the values are needed by dragging.

26. Drag from the Pixel Analyzer’s Min box to Grade1’s Lift Color swatch
to copy the values across (FIGURE 4.37).

FIGURE 4.37 Dragging from the Pixel Analyzer

27. Do the same from the Max box to the Gain Color swatch.

28. You don’t need the Pixel Analyzer anymore, so from its Content menu
choose Close Pane.

29. View Merge1 in the Viewer.

You have now matched the foreground’s shadows and highlights to those
of the background (FIGURE 4.38).

FIGURE 4.38 Shadows and highlights now match.

As you can see from the image, the shadows and highlights are matched,
but the image is far from looking matched. The midtones, in this case,
make a lot of difference.

Matching midtones by eye


You now need to match the midtones, which is a much more difficult task. You’ll start by matching their luma level by eye. Because it is hard to tell exactly what the midtones are, you are going to view the luminance of the image in the Viewer.
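As a hedged aside on what the Y view shows: luminance is a weighted mix of the three channels. The Rec. 709 weights below are the common choice; the exact weighting depends on the Viewer's settings:

```python
def rec709_luma(r, g, b):
    # Rec. 709 luminance weights: green dominates, blue contributes least.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure white carries full luminance; a saturated blue is surprisingly dark.
white = rec709_luma(1.0, 1.0, 1.0)
blue = rec709_luma(0.0, 0.0, 1.0)
```

This is why matching the single luminance channel first is easier than juggling three channels at once: one number per pixel.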

1. Hover your mouse pointer in the Viewer and press the Y key to view the luminance.

To change the midtones now, use the Gamma property. You can see that
the whitish snow on the right is a darker gray than the whitish car. Let’s
bring down the whitish car to that level.

2. Start dragging the Gamma slider down. I stopped at around 0.6.


Notice that the midtones don’t match well with a higher Gamma value. With Gamma this low, however, the lower midtones aren’t matching well either. You need to use the Multiply property as well to produce a good match.

3. Bring the Gamma slider up to 0.85 and bring the Multiply slider down
a bit to 0.9 (FIGURE 4.39).

FIGURE 4.39 The midtones match better now.
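Roughly what Gamma and Multiply do to a value can be sketched in plain Python. This mirrors a Grade-style correction with the other properties at their defaults (the sample values are arbitrary):

```python
def grade_mid(v, gamma=1.0, multiply=1.0):
    """Multiply is a straight linear scale; Gamma then reshapes the curve:
    output = (v * multiply) ** (1 / gamma). A gamma below 1 darkens the
    midtones while leaving black and white nearly untouched."""
    return (v * multiply) ** (1.0 / gamma)

lo = grade_mid(0.0, gamma=0.85, multiply=0.9)   # black stays black
mid = grade_mid(0.5, gamma=0.85, multiply=0.9)  # mid-gray drops noticeably
hi = grade_mid(1.0, gamma=0.85, multiply=0.9)   # white barely moves
```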

4. Hover your mouse pointer in the Viewer and press the Y key to view the
RGB channels (FIGURE 4.40).

FIGURE 4.40 You still have work to do on the color of the midtones.

OK, so the midtones’ brightness is better now, but you need to change the color of the car’s midtones. At the moment, the car is too warm for this winter’s day. Matching color is a lot more difficult because you always have three components to juggle: red, green, and blue. Matching gray is a lot easier because you need to decide only whether to brighten or darken it. However, because each color image is made up of three grayscale channels, you can use the individual channels to match color too. Here’s how.

5. Hover your mouse pointer in the Viewer and press the R key to view
the red channel (FIGURE 4.41).

FIGURE 4.41 Viewing the red channel

Now you are looking only at levels of gray. If you change the red sliders,
you will get a better color match while still looking only at gray.

6. Display the In-panel Color Picker for the Gamma property by clicking the Color Picker button.

You also want to change the Multiply and Offset values to achieve a
perfect result. This is because, even though you matched the black point
and white point, the distance of the car from the camera means the black
point will be higher and the white point lower. At the end of the day, it will
look right only when it does match, math aside.

Let’s display those extra color wheels.

7. Click the Color Picker button for the Multiply and Offset properties.
Your screen should look like FIGURE 4.42.

FIGURE 4.42 Opening three color wheels to easily control three properties

8. Since you are looking at the red channel in the Viewer, change the red
sliders for Gamma, Multiply, and Offset until you are happy with the
result; little changes go a long way. I left mine at Gamma: 0.8, Multiply:
0.82, and Offset: 0.02.

9. Display the green channel in the Viewer, and then move the green
sliders to change the level of green in your image. My settings are
Gamma: 0.85, Multiply: 0.89, and Offset: 0.025.

10. Do the same for the blue channel. My settings are Gamma: 1.05,
Multiply: 1, and Offset: 0.065.

11. Switch back to viewing the RGB channels (FIGURE 4.43).

FIGURE 4.43 Not a bad result at the end of it all.

This is as far as I will take this comp. Of course, you can use your already
somewhat-developed skills to make this a better comp, but I’ll leave that
to you.

Save your script if you wish, and we will move on.

ACHIEVING A “LOOK” WITH THE COLORCORRECT NODE

Giving an image a “look” is a very different practice from matching color. While matching color has a very specific purpose and methodology, giving an image a look refers to an artistic practice that gives an image a different feel from how it was shot. For example, you might want it to look brighter, warmer, or colder, depending on the feeling you want to create.

Using the ColorCorrect node


The ColorCorrect node is a very good tool for this, as it offers a lot of control over the different parts of the image—even more control than the Grade node. But as with everything else, it is still built from the basic mathematical building blocks covered at the beginning of this chapter.

Let’s bring in an image and give it a look.


1. Press Ctrl/Cmd-W to close the color matching script and start a new
one.

2. Press the R key and bring in, from the chapter04 folder, the car.png
image again.

3. While the newly imported Read1 node is selected, press the C key to
create a ColorCorrect node. You can also find the ColorCorrect node in the
Color toolbox.

4. View ColorCorrect1 in the Viewer (FIGURE 4.44).

FIGURE 4.44 The ColorCorrect node’s Properties panel

As you can see in ColorCorrect1’s Properties panel, the ColorCorrect node includes controls for Saturation, Contrast, Gamma, Gain (Multiply), and Offset (Add). These properties are available over either the whole dynamic range—called Master—or parts of the dynamic range called shadows, midtones, and highlights. Having individual control over the different areas of the dynamic range makes creating a look somewhat easier.

This idea of midtones, highlights, and shadows changes from image to image. An image of a dark room will have no whites, but in that darkness, you can still pick out the brighter areas that are the image’s highlights, the slightly lighter blacks that are the midtones, and the darkest colors that are the shadows. You can also define these in the ColorCorrect node’s Ranges tab.

5. Click the Ranges tab in ColorCorrect1’s Properties panel.

In this tab (it’s similar to ColorLookup, isn’t it?) you have three graphs, all
selected. One represents the shadows, another the midtones, and a third
the highlights (FIGURE 4.45).

FIGURE 4.45 ColorCorrect’s Ranges is a lookup curve that defines the brightness ranges.

6. Click the Test check box at the top of the graph (FIGURE 4.46).


FIGURE 4.46 The test shows the parts of the dynamic
range in the Viewer.

This shows a representation in the Viewer of what parts of the image are
shadow, midtone, and highlight. Highlights are represented by white,
midtones as gray, and shadows as black. Green and magenta represent
areas that are a mix of two ranges.
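You can think of the Ranges curves as three weighting functions that split every brightness value into shadow, midtone, and highlight portions. The linear ramps below are a hypothetical stand-in for the editable curves, chosen only so the three weights always sum to 1:

```python
def range_weights(v):
    """Split a brightness value into (shadows, midtones, highlights) weights.
    The real ColorCorrect Ranges are user-editable lookup curves; these
    ramps are invented defaults that illustrate the idea."""
    v = min(max(v, 0.0), 1.0)
    shadows = max(0.0, 1.0 - v * 2.0)     # full at black, gone by mid-gray
    highlights = max(0.0, v * 2.0 - 1.0)  # starts at mid-gray, full at white
    midtones = 1.0 - shadows - highlights # whatever remains
    return shadows, midtones, highlights
```

A correction applied to "midtones" is scaled by the midtone weight at each pixel, which is why the ranges blend into each other rather than cutting hard, exactly what the green and magenta mix zones in the Test view show.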

7. Click the Test check box at the top of the graph again to turn it off.

The ranges are fine for this image, so we won’t change anything and will
continue working.

8. Switch back to the ColorCorrect tab.

You will now give this image a dreamy, car commercial look—all soft
pseudo blues and bright highlights. If you don’t define the look you are
after in the beginning, you can lose yourself very quickly.

Before changing the color of this image, I’ll show you my preferred
interface setup for color correcting.

9. In ColorCorrect1’s Properties panel, click the Float Controls button. This will float the Properties panel instead of docking it in the Properties Bin (FIGURE 4.47).

FIGURE 4.47 Click the Float Controls button to float the Properties panel.

10. Hover your mouse pointer in the Viewer and press the spacebar to
maximize the Viewer to the size of the whole interface (FIGURE 4.48).

FIGURE 4.48 This is a good way to set up the interface for color correction.

Since the Properties panel is floating, it is still there. This way, you can
look at the image at its maximum size without wasting space on things
like the DAG, yet you are still able to manipulate the ColorCorrect node.

What I am aiming for is something like FIGURE 4.49. You can try to reach this look yourself, or you can follow my steps.


FIGURE 4.49 This is the image look I am referring to.

11. Let’s start by desaturating the whole image a little, so in the Master set
of properties, set the Saturation property to 0.5.

Now for the shadows. I would like to color the shadows a little bluer than
normal.

Remember, in addition to the In-panel Color Picker, you can also use the
Floating Color Picker. To use this, Ctrl/Cmd-click the Color Picker button.
The benefit of using the Floating Color Picker is that all sliders also have input fields, so you can enter values numerically.

12. Ctrl/Cmd-click the Color Picker button for shadows.gamma (FIGURE 4.50).

FIGURE 4.50 The Floating Color Picker

13. From the Hue slider, choose a blue hue. I selected 0.6. Now change
the Saturation for the shadows.gamma color. I set it to 0.31. Finally,
adjust the brightness, or Value slider in the Floating Color Picker. I have it
at 1.22 (FIGURE 4.51).


FIGURE 4.51 Setting the shadow’s Gamma properties using
the Floating Color Picker

This results in RGB values of 0.8418, 0.993, and 1.22, respectively. It gives the image a nice-looking blue shadow tint. Notice that there are actually no hue and saturation sliders in the real properties. The hue and saturation sliders in the Floating Color Picker are there only so it will be easier to set the RGB sliders.

14. Close this Floating Color Picker.

15. You have a lot more work to do in the midtones. First, set the Saturation to 0 so that the midtones turn black and white.

16. To create a flatter palette to work on, set the Contrast for the midtones to 0.9.

17. To darken the midtones, set the Gamma to 0.69.

18. Use the Gain property to tint the midtones by Ctrl/Cmd-clicking the
Color Picker button for Midtones/Gain.

19. In the Floating Color Picker that opens, click the TMI button at the
top to enable the TMI sliders (FIGURE 4.52).

FIGURE 4.52 Turning on the TMI sliders

If you need to make the Floating Color Picker bigger, drag the bottom-
right corner of the panel.

20. Now, for a cooler looking shot, drag the T (temperature) slider up
toward the blues. I stopped at 0.72.

21. To correct the hue of the blue, use the M (magenta) slider to make this
blue either have more magenta or more green. I went toward the green
and left it at –0.11.

22. Close the Floating Color Picker (FIGURE 4.53).

FIGURE 4.53 The values are always in RGB.

As always, only the RGB values affect the image. You just used TMI sliders
to set the RGB values in an easier way.

23. Now increase the highlights a little: start by setting the highlights’ Contrast to 1.5.

24. To color correct the highlights, first click the 4 icon to enable the individual Gain input fields.

25. Click in the right side of Gain’s first input field (for the red channel) and use the up and down arrow keys on your keyboard to change the red value. I left it on 0.75 (FIGURE 4.54).

FIGURE 4.54 The arrow keys make it easy to nudge values in input fields.

26. Leave the next field (green) where it is, but use the arrow keys in the
blue field to increase blue. Because I want everything to be a little bluer, I
left mine at 1.5.

The first stage of the color correction is finished. Let’s bring back the rest
of the interface.

27. Close ColorCorrect1’s Properties panel (FIGURE 4.55).

FIGURE 4.55 This is how to close a floating Properties panel.

28. Press the spacebar to bring back all your panes.

Using the mask input to color correct a portion of the image


Let’s say that a commercial’s director asks for the wheels to pop out of the image and have high contrast. To do this secondary color correction, first you need to define an area to apply the color correction to; then you need to use another color node with this area feeding its mask input.

You haven’t learned to create complex mattes yet, but in this case, you
really need only two radial mattes. You can create those easily using the
Radial node in the Draw toolbox.

First, to brighten up the wheels, use the Grade node.

1. Select ColorCorrect1 and insert a Grade node after it.

If you use the Grade node as it is, the whole image gets brighter. You’ll
need to use Grade1’s mask input to define the area in which to work.

2. With nothing selected, create a Radial node from the Draw toolbox
(FIGURE 4.56).

FIGURE 4.56 Creating an unattached Radial node

3. View Radial1.

It creates a radial, see? I told you. By moving the edges of the radial box,
you can change its shape and location.

4. View Grade1.

5. Drag Radial1’s edges until it encompasses the back wheel (FIGURE 4.57).

FIGURE 4.57 Radial1 encompasses the back wheel.

You’ll need another Radial node to define the second wheel. (You can add
as many Radial nodes as you need. Everything in Nuke is a node,
remember?)

6. With Radial1 selected, insert another Radial node after it.

7. Adjust Radial2 to engulf the front wheel (FIGURE 4.58).

FIGURE 4.58 Using Radial nodes to create masks for color correction

8. To make use of the radials, take the mask input for Grade1 and attach it
to the output of Radial2, as in FIGURE 4.59.

FIGURE 4.59 Attaching the mask input to the mask image

This means whatever you now do in Grade1 affects only where the radial’s
branch is white.

9. Increase the whites by bringing the Whitepoint property for Grade1 down to around 0.51.

10. Some of the deep blacks have become a little too gray, so decrease the
Blackpoint property a bit. I left mine at 0.022.

At this point, the grading is finished. Mask inputs can be very important in color correction because a lot of the time you want to color correct only an area of the image. But remember not to confuse mask inputs with mattes or alpha channels. The use of the mask input is solely to limit an effect—not to composite one image over another or to copy an alpha channel across.
5. 2D Tracking
Tracking makes it possible to gauge how much movement is
taking place from frame to frame. You can use this movement
information to either cancel the movement by negating it—
called stabilizing—or transfer the movement to another
element, called match-moving. Tracking is done via the
Tracker node, which got a big overhaul with Nuke 7.

I refer to 2D tracking as the kind of data you can extract from the image. Because you are following a 2D image, an image
projected on the flat screen, you can really follow only the
position of pixels as they move around it. The screen and the
image have no depth—remember, they are flat. This situation
results in nothing but 2D data and 2D transformations, much
like the kind of transformation you can create using the
Transform node. So you can move in X and Y, but Z doesn’t
exist. You can scale up and down, and you can rotate around
the screen—but not into the screen. These are 2D
transformations.

3D tracking, also called Camera Tracking, is the process of extracting the camera information from the image, resulting in 3D transformations.
Although 3D tracking uses some of the same initial concepts as 2D
tracking, 3D tracking is used in a very different way. Chapter 10 covers 3D
tracking.

TRACKER NODE BASICS

Nuke’s Tracker node is designed to perform the two parts of tracking:
accumulating position data and applying position data. The Tracker node
is open enough, though, so you can use the accumulated data in many
other nodes by using a linking expression.

In most applications, including Nuke, a tracker point is built out of a tracking anchor, a pattern box, and a search area box. The tracking
anchor is a single point that collects the position data for the pattern box.
The pattern box defines the group of pixels the Tracker is to follow from
frame to frame. The search area box defines where to look for those pixels
in the next frame. The tracker point’s controls that define these three
elements are shown in FIGURE 5.1.


FIGURE 5.1 Tracker point anatomy

To track, you need to choose a well-defined pattern—one that has different colored pixels that don’t look like other things close to them in
the frame—and set the pattern box to engulf that pattern. Then you gauge
how far the pattern is moving from frame to frame and set the search area
box to cover that part of the image. Normally the tracking anchor is set
automatically.
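The pattern-box/search-box relationship can be sketched as a tiny template match: slide the pattern over every position in the search area and keep the best-matching offset. Nuke's matcher is far more sophisticated (sub-pixel refinement, pattern updating), so treat this only as the principle, with made-up pixel values:

```python
def best_match(search, pattern):
    """Return the (x, y) offset inside the search area where the pattern
    fits best, using sum-of-squared-differences as the match score."""
    ph, pw = len(pattern), len(pattern[0])
    best, best_pos = float("inf"), (0, 0)
    for y in range(len(search) - ph + 1):
        for x in range(len(search[0]) - pw + 1):
            ssd = sum((search[y + j][x + i] - pattern[j][i]) ** 2
                      for j in range(ph) for i in range(pw))
            if ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos

# Invented brightness values: the pattern sits at offset (1, 1).
search_area = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.8, 0.1],
    [0.1, 0.7, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
pattern_box = [[0.9, 0.8],
               [0.7, 0.9]]
```

This also shows why a distinctive pattern matters: if the pattern looks like other regions in the search area, several offsets score nearly the same and the track slips.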

You start with a simple exercise in the next section.

STABILIZING A SHOT

In this first exercise you track a shot in order to stabilize it, meaning that
you stop it from moving. To give you something to stabilize, bring in a
sequence from the files you’ve copied to your hard drive.

1. Using a Read node, load a sequence called stabilize.####.tif from the chapter05 directory and open it in the Viewer.

2. Click Play in the Viewer.

3. Zoom in close to the area in the image where the spoon is, place your
mouse pointer over the edge of the spoon handle, and don’t move it
(FIGURE 5.2).

FIGURE 5.2 Placing the mouse pointer and not moving it is a good way to gauge movement.

Notice how much film weave there is in this plate. Film weave is the
result of the celluloid moving a little inside the camera when it is
shooting. You will now fix that weave and add some flares to the candle
flames.

4. Click Stop in the Viewer and go back to frame 1.

5. Select Read1 and attach a Tracker node to it from the Transform toolbox.

6. Make sure you’re viewing the output of Tracker1 in the Viewer and that you’re on frame 1.

The Tracker node’s Properties panel loads into the Properties Bin, but
we’ll leave that for now. More importantly, a new toolbar that looks like
FIGURE 5.3 appears in the Viewer. Using these controls, you can track a
feature in the image. You’ll start by defining the pattern box.


FIGURE 5.3 The Tracker node’s Toolbar

7. In the Tracker Toolbar in the Viewer, click the Add Track button. It
turns red to show that it’s selected.

8. Click anywhere in the Viewer to create the tracker point.


9. Click the Add Track button again to disable it.

If you leave this button on, every click in the Viewer will create another
tracker point. However, there’s a quicker way to create a tracker point.
Ctrl/Cmd-Alt/Option-clicking in the screen simply creates a tracker point
in that location.

10. In the Viewer, click the center of the tracker point, where it says
track1, and move it to the edge of the spoon handle (FIGURE 5.4).

FIGURE 5.4 Placing the tracker point on a good tracking location

11. In Viewer1’s Tracker Toolbar, click the forward-play–looking button, which is the Track Forward button (FIGURE 5.5).

FIGURE 5.5 Press the Track Forward button to start tracking to the end of the clip.

The Tracker node starts to follow the pixels inside the pattern box from frame to frame. A progress bar appears, showing how long it will be until the Tracker (shorthand for Tracker node) finishes. When the Tracker finishes processing, the tracking part of your work is actually done. Anything beyond this is not really tracking—it’s applying the Tracker’s result.

You can see the Tracker-accumulated tracking data in the track_x and
track_y columns in the Properties panel as keyframes (FIGURE 5.6).
The first tab of the Tracker Properties panel is used mainly for viewing the
accumulated data, not using it.

FIGURE 5.6 The Tracker’s accumulated data is held in these input fields as animation keyframes.

12. Move back in the Timebar using the left arrow key.

Look at the track_x and track_y input fields and how they change to
reflect the position of the pattern box in each frame.

If you subtract the X and Y values in frame 1 from the X and Y values in
frame 2, the result is the movement you need to match the tracked
movement. If you take that number and invert it (5 becomes –5), you
negate the movement and stabilize the shot. You can do this for any
frame. The frame you are subtracting—in this example, it is frame 1—is
called the reference frame.
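The subtraction described above, sketched in plain Python with invented track positions:

```python
# Hypothetical tracked (x, y) positions per frame; frame 1 is the reference.
track = {1: (100.0, 60.0), 2: (103.0, 58.5), 3: (97.5, 61.0)}
ref = track[1]

def stabilize_offset(frame):
    """The correction that cancels the tracked movement for a frame:
    the reference position minus the tracked position."""
    x, y = track[frame]
    return (ref[0] - x, ref[1] - y)
```

Applying each frame's offset as a translation pins the tracked feature to where it sat on the reference frame, which is exactly what the Stabilize mode does for you.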

Now that you’ve successfully tracked the position of the spoon from frame
to frame, you probably want to use this to stabilize the shot. This is done
in another tab in the Tracker Properties panel.

13. In Tracker1’s Property panel, click the Transform tab.

The Transform tab holds the properties with which you can turn the tracked data into transformations.

14. Choose Stabilize from the Transform drop-down menu (FIGURE 5.7).

FIGURE 5.7 Using the tracking data to stabilize

This is all you need to do to stabilize the shot. To make sure the shot is
now stabilized, compare it in the Viewer to the unstabilized shot.

15. Select Read1 in the DAG and press 2 on your keyboard.

16. In the Viewer, change the center Viewer composite control drop-down
menu from – to Wipe.

17. Reposition the axis from the center of the frame to just above the
spoon handle.

Notice that Tracker1’s transform controls are in the way. You need to get
rid of these to see the Viewer properly.

18. Close all Properties panels by clicking the Empty Properties Bin
button at the top of the Properties Bin.

19. Press Play in the Viewer and look at both sides of the wipe (FIGURE
5.8).

FIGURE 5.8 Wiping between the stabilized and jittery versions of the sequence

You can clearly see the stabilization working.

20. Click Stop and switch the center Viewer composite control drop-down
menu back to –.

21. Click Tracker1 and press the 1 key to make sure you’re viewing that
part of the branch.

Wanting to create an image that is absolutely still is one reason to stabilize a shot. Another reason is to make things easier when you are compositing. Instead of match moving each element (as you do in the next exercise), you can stabilize the shot, comp it, and then bring back the motion using the tracked data you have already accumulated. Not only does this make compositing a lot easier, it also keeps your shot from standing out from the rest of the movie for being absolutely steady.

For this shot, the director asked you to “bloom” each candle flame a little.
To do so, you will add a flare to each candle flame using a node called, you
guessed it, Flare from the Draw toolbox.

NOTE

The Flare node is a great, very broad tool for creating various
types of flares and lighting artifacts.


22. Select Tracker1 and, from the Draw toolbox, add a Flare node (FIGURE 5.9).

FIGURE 5.9 A new node, Flare1, is inserted after Tracker1.

23. Drag the center of the flare onto the center of the rightmost candle
flame using the on-screen controls, as shown in FIGURE 5.10.

FIGURE 5.10 Placing the center of the flare on the center of the flame

I don’t have room to explain every property of the Flare node. It’s a very
involved and very artistic—and hence, subjective—tool, which makes it
difficult to teach. And let’s face it, it has a lot of properties. I encourage
you to play with it and learn its capabilities, but in this case, I ask you to
copy some numbers from the list in the next step.

24. Copy the following values into the corresponding properties one by
one and see what each does. When you’re done copying, if you want, you
can change them to suit your taste:

• Radius = 0, 0, 50

• Ring Color = 1, 0.8, 0.3

• Outer Falloff = 2.3, 2.65, 2.65

• Chroma Spread = 0.1

• Corners = 12

• Edge Flattening = 6

The result of the Flare node should look a lot like FIGURE 5.11.

FIGURE 5.11 The flare after treatment

Now use this Flare node to place three more flares in the shot.

25. Copy Flare1 by clicking it and pressing Ctrl/Cmd-C.

26. Make sure Flare1 is still selected and paste another flare by pressing
Ctrl/Cmd-V.

You want to move the location of Flare2 to the second candle from the
right. This can get confusing because when you paste a node, its
Properties panel doesn’t load into the Properties Bin automatically, so
moving the on-screen control now moves the position for Flare1. You have
to be careful which on-screen control you’re using.

27. Close Flare1’s Properties panel.

28. Double-click Flare2 to load its Properties panel.

29. Move Flare2’s position to the second candle.

30. Repeat the process twice more for the other two candles.

Your image should have four flares on it—one on each candle. Your tree
should look like FIGURE 5.12.


FIGURE 5.12 Four Flare nodes inserted one after the other

What’s left to do now is to bring back the film weave you removed in the
first step. You already have all the tracking data in Tracker1. You can now
copy it and insert it at the end of the tree to bring back the transformation
that you removed before.

31. Copy Tracker1, click Flare4, and paste.

Tracker2 is inserted after Flare4.

When you return a composited shot, it’s important to return it exactly as it was provided—except for making the changes requested. Your shot,
remember, is often just one in a sequence.

NOTE

Keep in mind that every time you move an image around, some filtering occurs. Filtering is the small amount of blur that
degrades the image a little. Moving an image twice like this
creates two levels of filtering. If you do this a lot, you will, in
effect, soften your image. On occasion this is a must, but in
general, you should avoid it. Here it served as a good way to
show you the effects of both stabilizing and match-moving
together.

32. Double-click Tracker2 to open its Properties panel.

33. Click the Transform tab and choose Match-move from the Transform
drop-down menu.

34. Close all Properties panels.

35. Make sure you’re viewing Tracker2 in the Viewer.

36. Click Play in the Viewer.

You can see that the motion data was returned as it was before, except
now it has flares, which are weaving just like the rest of the picture.

37. Save your project in your student_files directory with a name you find
fitting.

38. Press Ctrl/Cmd-W to close the Nuke script and create a fresh one.

Next you will go deeper into tracking.

TRACKING FOUR POINTS

In this section you track more than a simple 2D position. What do I mean by that? Read on.

Understanding tracking points

In the preceding section, you dealt only with movement in the horizontal
and vertical position axes. There was no rotation and no scale. This means
that every pixel in the image was moving in the same way as its neighbor,
which doesn’t happen very often. Usually elements on screen move more
freely and tend to at least rotate (meaning around the screen’s axis) and
move toward or away from the camera—thus scaling. So aside from
position, we also have rotation and scale. All these are strictly 2D
movements (movements you can find in Nuke’s Transform node, for
example) because the element is always facing the camera. However,
some movements include changing the angle an element has to the
camera; I call this type perspective movement.

To track rotational value, scaling, or perspective movement, you need more than one tracking point. Using just one tracking point, as you did in
the preceding section, can produce only 2D position data. However, by
using more tracking points (or more trackers, though I don’t want to
confuse this term with the Tracker node) and by calculating the
relationship between their positions, you can also figure out rotation,
scale, and even limited perspective movement. Traditionally there are
three types of tracking:

One point tracking: This can produce movement in the horizontal and
vertical positional axes only.

Two point tracking: This can produce movement in the same way as
one point tracking, but it can also produce 2D rotation and scale.

Four point tracking: This kind of track combination produces perspective movement—movement in the pseudo third dimension (not really 3D, but it looks like 3D). In other words, you can produce things that are turning away from the camera.
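How two tracked points yield rotation and scale can be sketched with a little vector math: compare the vector between the two points on the reference frame with the same vector on the current frame. The coordinates below are invented for illustration:

```python
import math

def rotation_and_scale(ref_a, ref_b, cur_a, cur_b):
    """Recover 2D rotation (degrees) and uniform scale from two tracked
    points, by comparing the inter-point vector across frames."""
    rx, ry = ref_b[0] - ref_a[0], ref_b[1] - ref_a[1]
    cx, cy = cur_b[0] - cur_a[0], cur_b[1] - cur_a[1]
    rotation = math.degrees(math.atan2(cy, cx) - math.atan2(ry, rx))
    scale = math.hypot(cx, cy) / math.hypot(rx, ry)
    return rotation, scale

# Two points 100 px apart horizontally, later 200 px apart vertically:
# the pair has rotated 90 degrees and doubled in size.
rot, scl = rotation_and_scale((0, 0), (100, 0), (50, 50), (50, 250))
```

Four points carry enough extra information to solve for a perspective warp as well, which is what the four-corner exercise below relies on.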

By tracking the right number of points, you can tell Nuke to create the right kind of movement using the Tracker node—whether it stabilizes or match-moves that movement. You do this by adding more tracks, tracking each one in turn, and then using the T, R, and S check boxes in the Properties panel, shown in FIGURE 5.13, to tell Nuke to use a track to create T (transform), R (rotation), and S (scale) movement.

FIGURE 5.13 These three check boxes tell Nuke what kind
of movement to create.

Another way to produce complicated movement is to use the Tracker node to accumulate position data on several points and then copy the data to
other nodes and use them however you need to. The Tracker node
(introduced in Nuke 7) has a feature that helps you create other
transformation nodes based on the tracked data.

It’s important to understand that, first and foremost, tracking data is simply positional data, and you can use it in whatever creative way you
can think of.

Tracking a picture in the frame

You are now going to track four points and then take the tracking
information outside of the Tracker node. For that, you need to start with
something to track.

1. Using a Read node, browse to the chapter05 directory and double-click frame.####.png.

2. Save this Nuke script to your student_files folder and call it frame_v01.

3. Make sure you are viewing Read1 in the Viewer and press the Play
button to watch the clip (FIGURE 5.14). Remember to stop playback
when you’re done and to return to frame 1.

FIGURE 5.14 A picture frame on a table

What you are seeing is a picture frame on a table with a handheld camera
rotating around it. You need to replace the picture in the frame. The
movement happens in more than two dimensions (though, of course, as
far as the screen is concerned, everything is moving in just two
dimensions). The picture changes its perspective throughout the shot.
That means you have to track four individual points, one for each corner
of the frame, to be able to mimic the movement.

The Tracker node has the ability to track as many tracking points as you
would like. Simply keep adding more tracking points using the same
button we used before.

4. Insert a Tracker node after Read1.

5. Make sure you are viewing Tracker1 and that you are on frame 1 in the
Timebar.

6. Ctrl/Cmd-Alt/Option-click the bottom-left corner of the picture frame to create a tracking point there.

7. Next, Ctrl/Cmd-Alt/Option-click the bottom-right corner of the picture frame to create a second tracking point.

8. Repeat this twice more for the top-right and top-left corners of the
picture frame (in that order please).

You now have four tracking points on the four corners of the picture frame, as shown in FIGURE 5.15.
FIGURE 5.15 Creating all four tracking points

Take a look at the Properties panel for Tracker1; you can see it is now
populated with the four trackers you created. You can select the tracking
points either here or in the Viewer and use this list to delete tracks, or as
another way to add new tracks using the buttons at the bottom (FIGURE
5.16).

FIGURE 5.16 The list of tracking points in the Properties panel

Let’s adjust the reference pattern and search box for each tracker.

9. Using the on-screen controls, resize the pattern boxes so they’re a lot
smaller and the search area box so it’s bigger, similar to what you see in
FIGURE 5.17.


FIGURE 5.17 Adjusting the reference and search areas

10. Select all tracking points by hovering the mouse pointer over the Viewer and pressing Ctrl/Cmd-A.
11. Click the Track Forward button in the Toolbar and hope for the best.

Chances are the track went well, but in case it didn’t, here’s a list of things
to do to improve your tracking:

Don’t track more than one tracking point at a time, as you just
attempted. It makes things more difficult. Select a single tracker, focus on
getting that one right, then move on.

It’s all about the pattern and search areas. Select good ones and you’re
home free. A good pattern has a lot of contrast in it, is unique in that it
doesn’t look like anything around it, and doesn’t change much in its
appearance except its location from frame to frame.

If you try a track and it doesn’t work, go back to the last good frame and click the Clear Fwd button (short for forward) to delete the keyframes already generated (FIGURE 5.18). You can click Clear All to clear everything, and Clear Bkwd (short for backward) if you are tracking backward.

FIGURE 5.18 The clear buttons

Use the Traffic Lights and user-generated keyframes to better your track. Want to know what these are? Read on.

TIP

To stop a track, click the Stop button in the Tracker node’s Toolbar in the Viewer, press Esc, or, when it comes up, click the Cancel button on the progress bar.

Using user-generated keyframes to improve your track


First let’s understand the quality of your track to see where it can become
better. In the Tracker Toolbar in the Viewer there’s a button called
Show_error_on_track (FIGURE 5.19), but everyone calls it Traffic
Lights, which is how it will be referred to in this book.

FIGURE 5.19 The show_error_on_track button will be referred to as Traffic Lights.

This button turns the path the keyframes made in the Viewer into a clear
gauge of the quality of the track.

1. Click the Traffic Lights button in the Toolbar (FIGURE 5.20).


FIGURE 5.20 Clicking the Traffic Lights button makes the
trackers’ paths colorful.
NOTE

The fact that there are red areas doesn’t make your track a
bad one. Rarely does a reference pattern stay as it is. It might
just indicate this is the best match, and it will be correct. The
pattern you are tracking simply changed.

Look at those colors. Green dots show areas where the current reference
pattern matches the original reference pattern. Red dots show areas that
have the most amount of deviation from the original reference pattern.
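Nuke doesn’t expose its exact error metric, but you can picture it as a per-frame patch comparison—something like this mean-squared-difference sketch (pixel values as flat lists; `patch_error` is a hypothetical name, not a Nuke function):

```python
def patch_error(reference, current):
    """Mean squared difference between the original reference pattern and
    the patch found on the current frame: 0 is a perfect match (a "green"
    point), and larger values mean more deviation (toward "red")."""
    assert len(reference) == len(current)
    return sum((a - b) ** 2 for a, b in zip(reference, current)) / len(reference)
```

A patch identical to the reference scores 0; the more the tracked pattern has changed since frame 1, the higher the score, even when the match position itself is still correct.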

Let’s focus on track point 1. On my track, halfway through the clip, I have
a red section. Let’s pretend that the tracker actually went off course there
and we want to fix it.

The following instructions rely on the track I have. If you want to follow
my instructions exactly, then do the following:

2. Delete your Tracker1 node.

3. From the File menu, choose Import Script.

4. Navigate to the chapter05 directory and double-click good_track.nk.

5. Connect the newly imported Tracker1 to Read1 and then view it in the
Viewer. Double-click it to open its Properties panel.

6. Make sure Track 1 (the actual tracking point, not the node) is selected
by clicking it in the Properties panel (FIGURE 5.21).

FIGURE 5.21 Selecting a single tracker displays these boxes at the top left of the Viewer.

At the top left of the Viewer, you see two boxes. The leftmost one, entitled Track 1, displays the zoomed-in reference pattern on the current frame. The next box is the keyframe box for the keyframe you created on frame 1; it is labeled Frame 1 at the bottom of the box. You will use these boxes, among other things, to create more keyframes and to make sure each new keyframe matches the existing ones.

Let’s start by locking some good tracking results in place. The first frame,
for example, is good, but so is the last frame—as you can tell, because it’s
green also.

7. Go to the last frame, frame 100, and click the Add Keyframe button in
the Toolbar (FIGURE 5.22).

FIGURE 5.22 Adding more keyframes using this button creates more keyframe boxes.

Now you have a new keyframe box to the right of the keyframe box for frame 1. This one is for frame 100.

8. Click the keyframe box for frame 1, then the one for frame 100.

The reference pattern box on the left flicks between showing frame 1 and
100. This is because clicking the keyframe boxes in the Viewer changes
the frame you are on.

The reference pattern box is not just a pretty box to look at; it’s useful too. Dragging in this box actually changes the location of your tracker at that frame.

9. Click between frame 1 and frame 100 using the keyframe boxes.

10. Adjust frame 100 so it looks closer to what frame 1 looks like.

Now let’s make another keyframe by finding the worst area and correcting
that.

11. Drag the Timebar until you reach the reddest part of the tracker’s
path, according to the colors produced by having the Traffic Lights button
switched on. I ended up at frame 68.

12. Click Add Keyframe in the Toolbar.

13. Adjust the new keyframe by dragging in the reference box. Switch
between the three keyframes to see what you’re matching to.

I reached something that’s better than what I had before. So that’s good—for this frame. But what about all the rest of the frames? This is where keyframe tracking comes in. Keyframe tracking means tracking not only to find the reference pattern in the next frame, but also to converge toward the next keyframe. Some smart algorithms inside the tracker make this possible.

14. Click the Keyframe Track All button in the Toolbar shown in
FIGURE 5.23.

FIGURE 5.23 This button will start a keyframe track.

You probably ended up with something close to what I have in FIGURE 5.24. Green appears around where the keyframes were, and it goes toward red as we go farther away from the keyframes. That’s natural and will always be the case. But here’s the killer feature: You can make this better with a single quick click.

FIGURE 5.24 The colored path shows where there’s a less than perfect match.

15. Go to frame 86 in the Timebar. This is a rather red frame for me.

16. Drag in the reference box to better match this frame as well, and
release the mouse button.

Look at the tracker go! It starts tracking immediately after you change
that reference pattern. What actually happened is that you created
another keyframe on frame 86 by changing the location of the tracker.
Because this is a keyframe-based track, it updated automatically to allow
for the newly added keyframe. You should now have less red in your
colored path (FIGURE 5.25).

FIGURE 5.25 It is very easy to improve your track with
keyframe-based tracking.

You can carry on fixing Track 1, move on to the other three trackers, or
just leave it all as it is. I’m sure it’s good enough already.

You can’t use the Tracker node’s Transform options to create perspective
transformations. That functionality is reserved for the CornerPin node.
Here’s what is next on the to-do list: how to use the accumulated tracking
data outside the Tracker node.

Replacing the picture


You need a picture to place in the frame.

1. From the Chapter05 directory, bring in the file called statue.jpg with a
Read node and view it in the Viewer (FIGURE 5.26).

FIGURE 5.26 One little Buddha soon to be placed in a frame

You will insert part of this Buddha image into the frame.

2. Select Read2 and, from the Draw toolbox, insert a Rectangle node.

This node creates, as you might guess, rectangular shapes. In this case,
you want to create the shape only in the alpha channel.

3. From Rectangle1’s Output drop-down menu, choose Alpha. Make sure you are viewing Rectangle1 in the Viewer.

To create a nice framing for the statue, drag the rectangle until you are
happy with how it looks.

4. Drag the rectangle or enter values in the Properties panel to frame the
image. The Area property will show values similar to 500, 790, 1580, and
2230 (FIGURE 5.27).


FIGURE 5.27 The edges of Rectangle1’s on-screen controls
will end up being the edge of your picture in the frame.

5. You now need to multiply the RGB with the alpha to create a
premultiplied image. Attach a Premult node from the Merge toolbox after
Rectangle1.
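Premultiplication itself is simple per-pixel arithmetic. A quick Python sketch of what the Premult node does for each pixel (assuming float pixel values):

```python
def premultiply(pixel):
    """Multiply RGB by alpha, as the Premult node does per pixel."""
    r, g, b, a = pixel
    return (r * a, g * a, b * a, a)

print(premultiply((1.0, 0.5, 0.25, 0.5)))  # (0.5, 0.25, 0.125, 0.5)
```

Outside the rectangle, where alpha is 0, the RGB becomes black—which is exactly what a Merge operation expects from a premultiplied foreground.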

Because you’re not going to use the Tracker node to move the picture around, you need another transformation tool. CornerPin is designed specifically for this kind of perspective operation, so you will use it in this exercise. CornerPin moves the four corners of the image into new positions—exactly the data the Tracker node accumulates.

The developers at The Foundry know this. Smart people. So they made it
easy for you to create a CornerPin node right from the Tracker node with
all the correct expressions already in place to tie the CornerPin node to
the Tracker node.

6. Double-click Tracker1 to load its Properties panel to the Properties Bin.

7. Click track 1’s row in Tracker1’s Properties panel, then Shift-click track
4’s row to select all four trackers.

Because you can have as many tracking points as you want in the list, you
must choose four trackers to match the four corners of an image.

8. At the bottom of the Tracker tab, under the Export separator, make
sure CornerPin2D (Use Current Frame) is selected, and click the Create
button to the right of it.

9. Drag the newly created CornerPin2D1 node under Premult1 and connect its input pipe to the output of Premult1. Make sure you are viewing CornerPin2D1 in the Viewer (FIGURE 5.28).


FIGURE 5.28 Your tree should look like this now.

What actually happened when you clicked the Create button is that a
script (some clever bit of programming, that is) automatically created a
CornerPin2D node and connected its properties to Tracker1’s properties
with expressions. Expressions might sound scary to some, but many
others already use them. You already used expressions in Chapters 3 and 4, and you’ll write some of your own in the next section.

The green line that appeared in the DAG (shown in Figure 5.28) when you
created the CornerPin2D node shows that a property in CornerPin2D1 is
following a property in Tracker1.

Adjusting the source pins


In the previous section, it seemed as if the whole idea was to align the corners of the cropped image with the trackers, but that’s not what happened. This is because the CornerPin2D node can’t guess where the corners of your cropped image are; you have to tell it. To fix this, you need to specify source corners, which the CornerPin node allows you to do.

The From pins, available in the From tab, tell the CornerPin node where
the original corners are. To make this happen, follow these steps:

1. In the CornerPin2D1’s Properties panel, switch to the From tab.

TIP

If you change the source pins while viewing CornerPin2D1, you will find adjusting the pins becomes impossible because you’ll keep changing what you are viewing, which creates a kind of feedback. This is why it is important to view the preceding node—Premult1.

2. Close Tracker1’s Properties panel and view Premult1 in the Viewer.

The location of the four pins should be at the corners of Rectangle1’s Area
property. So let’s type an expression link between the two.

3. Double-click Rectangle1 to load its Properties panel into the Properties Bin.

The corner called From1 needs to be at the X and Y locations of the rectangle, whereas From3 should be at the R and T (Right and Top) locations of the rectangle. We can extrapolate the other two corners from these assumptions.

4. From From1’s Animation menu, choose Edit Expression.

Surprise, surprise. There’s an expression already there, automatically created by the Tracker node. You need to replace it with your own that links to Rectangle1.area.

5. In the X field, type Rectangle1.area.x, and in the Y field, type Rectangle1.area.y (FIGURE 5.29).

FIGURE 5.29 You’re typing your own expressions.

What you actually did here is very similar to what was done automatically in the previous lesson. You pointed the current property at another by providing its address as NodeName.KnobName.SubKnobName. Aren’t you proud? Now let’s do the rest.

6. Click the three other From Animation menus and edit their expressions
in sequence according to this list (x for the first and y for the second):

• From2.x: Rectangle1.area.r | From2.y: Rectangle1.area.y

• From3.x: Rectangle1.area.r | From3.y: Rectangle1.area.t

• From4.x: Rectangle1.area.x | From4.y: Rectangle1.area.t



The expressions you typed should have snapped the From pins to the four
corners of the image, just like in FIGURE 5.30.
FIGURE 5.30 The expressions should have made the four
pins snap to the corners.
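For the curious: a corner pin is a planar perspective warp—a homography—solved from the four from→to corner pairs. Here’s a self-contained Python sketch of that math (illustrative only; Nuke’s implementation will differ):

```python
def solve8(A, b):
    # Gaussian elimination with partial pivoting for the 8x8 system A h = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return h

def corner_pin(src, dst):
    # Build the 8x8 system from four (from, to) corner pairs.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve8(A, b)
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]

def warp(H, x, y):
    # Apply the homography to one point (the perspective divide by w).
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Feed it the four From pins as `src` and the four tracked To points as `dst`, and `warp` maps any pixel of the picture into its place in the frame—which is why exactly four corner pairs are needed, no more and no fewer.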

7. View CornerPin2d1’s output in the Viewer (make sure you are viewing
RGB) and switch back to the CornerPin2D tab in CornerPin2D1’s
Properties panel.

You can see here that the To points are sitting on the edges of the image,
which is what you were after (FIGURE 5.31).

FIGURE 5.31 Adjusting the From pins made the To pins move to the correct place.

Now you need to place the new, moving image on top of the frame
background.

8. Select CornerPin2D1 and press the M key to insert a Merge node after
it.

9. Attach Merge1’s B input to Read1 (FIGURE 5.32).


FIGURE 5.32 Merging the foreground and background
branches

10. View Merge1 in the Viewer (FIGURE 5.33).

FIGURE 5.33 The picture now sits in the frame, but it could
use some more work.

The picture sits beautifully and snugly in the frame, doesn’t it? But maybe it’s too snug.

11. In the Rectangle1 Properties panel, set the Softness value to 20.

You also need to blur the image a little—it was scaled down so much by
the CornerPin node that it has become too crisp and looks foreign to the
background.

12. Attach a Blur node to CornerPin2D1 and set its size value to 2.

You have now added motion to an image that didn’t have it. When that
happens outside the computer, the image gets blurred according to the
movement. You should make this effect happen here as well.

13. Double-click CornerPin2D1 to load its Properties panel into the Properties Bin.

14. Change the MotionBlur property from 0 to 1.

This sets motion blur at 100% quality. It is that easy to add motion blur to
motion generated inside Nuke. Let’s look at the final result.

15. Click Play in the Viewer and have a look (FIGURE 5.34).

FIGURE 5.34 This is what the final result should look like.

One thing is missing: some kind of reflection. You will take care of that in Chapter 10.

16. Click Stop.

17. Save your script. This is very important, because you will need it again
for Chapter 10 (FIGURE 5.35).
FIGURE 5.35 The final Nuke tree should look like this.

6. RotoPaint
Roto and paint are two tools I thought would be long gone by
now. In fact, I thought that back in 1995 when I started
compositing. Nowadays I realize these tools are always going to
be with us—at least until computers can think the way we do.
Fortunately, the tools for roto and paint are getting better and
better all the time and are being used more and more often.
Learning how to use these two techniques is even more
important today than it used to be.

Roto, short for rotoscoping, is a technique in which you follow the outline and movement of an object to extract it from the shot, usually with some kind of shape-generation tool. Normally that tool is called a spline, which generates a matte that you use to extract the object from the scene. Because a spline is designed to generate shapes, nothing is stopping you from using it to generate any kind of shape for whatever purpose.

NOTE

A little after releasing the RotoPaint node, at around Nuke version 6.1, the Roto node was added as well. It is simply a pared-down version of the RotoPaint node and, as such, will not be discussed here.

The paint technique is simply a way to paint on the frame using brushes
and similar tools we’ve all grown accustomed to, such as a clone tool
(sometimes called a stamp tool). Tools used for this technique are rarely
available in compositing systems as ways to paint beautiful pictures, but
instead they are used to fix specific problems, such as holes in mattes or
spots on pretty faces, and to draw simple, but sometimes necessary,
doodles (but I can’t really call them paintings).

In Nuke, roto and paint are combined in one tool, RotoPaint, that can
generate both roto shapes and paint strokes as well as handle a few other
tasks. Therefore sometimes I refer to this single node as a system (the
RotoPaint system), because it is more than just a simple node, as you will
soon find out.

INTRODUCING ROTOPAINT’S INTERFACE


Nuke’s RotoPaint node is a full-blown manual matte extraction and
touchup tool designed to do everything from rotoscoping, fixing little
problems in mattes, and cleaning hairs from dirty film plates to making
shapes, creating simple drawings, and so on. Keep in mind that although
RotoPaint is versatile, it’s not a replacement for Adobe Photoshop or
Corel Painter. You won’t draw your next masterpiece on it. Not that I
haven’t tried.
Go ahead and load a RotoPaint node; it makes it a lot easier to learn its
interface.

1. With a new script open in Nuke, create a RotoPaint node by clicking it in the Draw toolbox (or by pressing the P key).

2. View RotoPaint1 in the Viewer.

In the Viewer, you will see a third line of buttons (the Tool Settings bar) at
the top and a new line of buttons (the Toolbar) on the left (FIGURE 6.1).

FIGURE 6.1 RotoPaint’s two sets of on-screen controls

In the Toolbar on the left, you can choose from the various tools that are
displayed and click each icon to display a menu of more tools (FIGURE
6.2).

FIGURE 6.2 RotoPaint’s Toolbar

The Tool Settings bar at the top lets you define settings for the selected
tools. This bar changes depending on the tool you choose (FIGURE 6.3).

FIGURE 6.3 These are the Selection tool’s settings.

The first two tools in the Toolbar on the left are the Select and Point tools,
which enable you to select and manipulate shapes and strokes. The rest of
the tools are the actual drawing tools and are split into two sections:
shape-drawing tools (the roto part) and stroke-drawing tools (the paint
part). The shape-drawing tools are listed in TABLE 6.1.

TABLE 6.1 Shape-Drawing Tools

The stroke-drawing tools are listed in TABLE 6.2.


TABLE 6.2 Stroke-Drawing Tools

Painting strokes
Try drawing something to get a feel for how this node works. Start by
drawing some paint strokes.

TIP

In addition to using the Tool Settings bar to resize the brush, you can also Shift-click-drag on the Viewer to scale the size of the brush.

1. Make sure you’re viewing RotoPaint1 by clicking it and pressing the 1 key. (So far the Properties panel has been open, so you have been seeing the on-screen controls in the Viewer, but you haven’t actually been viewing the output of RotoPaint1 in the Viewer.)

2. Select the Brush tool at the left of the Viewer.

At the top of the Viewer, you will see the Tool Settings bar change to
reflect your selection. You now have controls for the Brush tool, including
opacity, size, color, and more (FIGURE 6.4).

FIGURE 6.4 The Tool Settings bar reflecting the Brush settings

3. With the Brush tool selected, start painting on screen. Create a few
strokes. Change the brush size, color, and opacity. (Use the Color Picker
button to change the color.)

The settings from the Tool Settings bar are mirrored in the Properties
panel. You can find generic controls, such as color and opacity, for all
shapes and strokes for RotoPaint at the center of the Properties panel
(FIGURE 6.5).

FIGURE 6.5 General properties

4. Switch to the Stroke tab (FIGURE 6.6).

FIGURE 6.6 Stroke-specific properties

Here you can find other controls applicable only to strokes, such as brush
size, hardness, spacing, and more.

You can play with all those controls for as long as you like (FIGURE 6.7).


FIGURE 6.7 Can you tell I didn’t even use a tablet to draw
this?

5. Switch back to the RotoPaint tab.

6. Move one frame forward in the Timebar.

All the strokes you drew have disappeared. The Tool Settings bar allows
you to specify the length of strokes using the last unlabeled drop-down
menu on the right. The menu shows Single, meaning just a single frame
stroke. The good thing is, though, that you can change stroke time lengths
after you draw them.

Editing strokes
In a tree-based compositing application—and in Nuke specifically—
creating paint strokes is done with timing in mind. Sometimes you need
to create a stroke that will last only a frame, sometimes it’s an infinite
stroke (one that lasts throughout the whole length of the composition and
beyond). Sometimes you need a stroke to start at a specific frame and go
all the way to the end of the composition, and sometimes you need it to
appear just in a specific range.

You can change the length of strokes using the Lifetime Type property in
the Lifetime tab (FIGURE 6.8), the drop-down menu, or the buttons
(the functionality is the same). The options are

All Frames: All frames—beyond your composition, to infinity.

Start to Frame: From the beginning of the sequence to the current frame.

Single Frame: Just the current frame.

Frame to End: From the current frame to the sequence end.

Frame Range: A defined start and end.

FIGURE 6.8 Lifetime properties

Now click the All Frames button. Your last stroke now exists throughout
the clip and beyond—infinitely. Even if you make your comp longer now,
no matter how long you make it, the stroke will be there.
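The five options boil down to a per-stroke frame-membership test. A Python sketch of that logic (the function and its parameter names are mine, not Nuke’s API; `key_frame` is the frame the stroke was drawn on):

```python
def stroke_visible(frame, lifetime, key_frame=1, comp_first=1, comp_last=100,
                   range_start=None, range_end=None):
    """Is a stroke visible on `frame`, given its Lifetime Type?"""
    if lifetime == "all frames":
        return True                              # infinite, in both directions
    if lifetime == "start to frame":
        return comp_first <= frame <= key_frame
    if lifetime == "single frame":
        return frame == key_frame
    if lifetime == "frame to end":
        return key_frame <= frame <= comp_last
    if lifetime == "frame range":
        return range_start <= frame <= range_end
    raise ValueError(lifetime)
```

This is why your strokes vanished one frame later: they defaulted to "single frame", and the test fails everywhere except the frame you drew on.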

Painting in vectors
You can make changes on the fly in Nuke because the RotoPaint system is
vector based. In other words, pixels aren’t really being drawn by your
mouse strokes. When you “draw” a stroke, a path is created called a
vector, which mathematically describes the shape of the stroke. You can
then use this vector to apply a paint stroke in any number of shapes and
sizes and, for that matter, functionality. This means you can change your
paint strokes after you draw them, which is very powerful and saves a lot
of time.
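A vector-based stroke, in miniature: what’s stored is the path plus its render settings, not pixels. A conceptual Python sketch (not RotoPaint’s actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class Stroke:
    """A paint stroke stored as a vector: the path plus render settings.
    Pixels are only produced at render time, so everything stays editable."""
    points: list = field(default_factory=list)  # [(x, y), ...] control points
    size: float = 10.0                          # brush size
    opacity: float = 1.0

s = Stroke(points=[(0, 0), (50, 20), (100, 0)])
s.size = 25.0        # change the brush after the fact; the path is untouched
s.opacity = 0.5      # the next render uses the new settings on the same vector
```

Because only the description changes, re-rendering with new settings is cheap—which is exactly why you could retime and restyle strokes after drawing them.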

To be able to change stroke settings after they have been created, you
need a way to select strokes. You do that in the Stroke/Shape List window,
also called the Curves window (FIGURE 6.9).


FIGURE 6.9 The Stroke/Shape List window

TIP

You can make your Stroke/Shape List window bigger by dragging the bottom-right corner of the Properties panel.

NOTE

The Stroke/Shape List window also allows you to change the order of your strokes. Simply select a stroke, drag it up or down, and let go.

NOTE

You can also edit strokes by changing the vector that draws
them.

1. Click the second-to-last stroke you created in the Stroke/Shape List window.

2. Click the All Frames button again.

Your second-to-last stroke now exists infinitely. Now for the rest of the
brush strokes.

3. Select all the brush strokes by clicking the topmost one, scrolling down,
and Shift-clicking the last one.

4. Click the All Frames button again.

Now your whole picture is back, and it exists infinitely. How exciting!

5. Select a stroke in the list and switch to the Strokes tab.

6. Practice changing the stroke’s properties, such as hardness and size.

7. Click the Select tool (the arrow-shaped tool at the top of the RotoPaint
Toolbar).

8. To select stroke points, you need to enable the Show Paint Selection
button on the Tool Settings bar (FIGURE 6.10).

FIGURE 6.10 Turn on Show Paint Selection to change a stroke’s vector.

9. Now select a stroke on screen by marqueeing some points together and moving them around. The box that appears around the selected points allows for scaling and rotation (FIGURE 6.11).


FIGURE 6.11 The box around selected vector points allows
for greater manipulation.

Erasing and deleting strokes


Say I drew a lovely picture but made a mistake. What if I want to erase or
delete something?

1. Double-click the Paint tool to access the Eraser tool, and then draw on
screen where you drew before.

The Eraser erases previous paint strokes. It does not erase the image (if
you have one connected in RotoPaint’s input) in any way, and it’s a paint
stroke just like any other.

2. Switch back to the Select tool and click to select a stroke to delete.

3. Notice that the selected stroke is highlighted in the Stroke/Shape List window. You can click it there as well.

NOTE

If you happened to click the actual name of the stroke, then pressing the Backspace/Delete key will delete the name. You need to click the icon to the left of the name.

4. Press Backspace/Delete to delete the stroke.

OK, that’s enough editing of brush strokes for now.

Drawing and editing shapes


It’s time to start practicing drawing shapes. For that you will need a new
RotoPaint node.

1. Clear the Properties Bin.

2. With nothing selected in the DAG, press the P key to create another
RotoPaint node, RotoPaint2.

3. View RotoPaint2 in the Viewer. You’re going to focus on drawing Béziers.

4. Choose the Bézier tool by clicking it on the RotoPaint Toolbar, or by hovering in the Viewer and pressing V on the keyboard.

5. Start drawing a curve by clicking in the Viewer. Make whatever shape you like. If you simply click, you will get a linear key for the shape, whereas if you click and drag, you will create a smooth key. It is important that you finish drawing the curve by clicking the first point again. You can continue adding points after you draw the shape.

6. To add points after the curve is finished, Ctrl-Alt/Cmd-Option-click.

7. After you’ve closed the shape, you should automatically be set back to
the Select tool. If you are not, switch back to the Select tool by clicking it
in the Toolbar.

You can now keep changing and editing the shape. You can move points
around, change what kind of point you have, delete points, or add points.

You can choose a point and right-click/Ctrl-click to display a contextual menu (FIGURE 6.12) with all sorts of editing options, or you can use hot keys. TABLE 6.3 lists some hot keys to use with selected Bézier points.

FIGURE 6.12 Right-clicking/Ctrl-clicking a point on a shape opens the contextual menu.

TABLE 6.3 Bézier Hot Keys

8. Go ahead and play with the points and tangents using the hot keys to
move your shape around.

9. You can also click and drag points together to create a marquee and
then use the box that pops up to move points around as a group
(FIGURE 6.13).

FIGURE 6.13 Transforming a group of points together

You can blur the whole spline from the Properties panel.

10. Click the Shape tab, then drag the Feather property’s slider to blur
outside the spline. You can also reduce the slider to below 0 to blur within
the spline.

The Feather Falloff property lets you choose how hard or soft the feathering will be (FIGURE 6.14).
FIGURE 6.14 A high Feather setting with a low Feather
Falloff setting

The default Feather Falloff value of 1 is a linear gradient. Values above 1 are a harder falloff, and values below 1 are a softer falloff.
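One way to model that falloff is as an exponent on a 0-to-1 ramp across the feather region. A Python sketch (an illustrative curve only—Nuke’s exact profile may differ):

```python
def feather(t, falloff=1.0):
    """Matte value across the feather region. t runs from 0 at the shape
    edge to 1 at the outer feather edge. falloff=1 is a linear ramp;
    higher values fall off harder, lower values softer."""
    t = min(max(t, 0.0), 1.0)   # clamp to the feather region
    return (1.0 - t) ** falloff
```

At the default falloff of 1 the matte fades linearly from 1 to 0; raising the falloff value pulls the curve down faster near the edge.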

11. Reset the Feather Falloff property to 1 and the Feather property to 0.

You can also create specific feathered areas instead of feathering the
whole shape. I sometimes call these soft edges.

12. Pull out a soft edge from the Bézier itself by holding Ctrl/Cmd and
pulling on a point or several points (FIGURE 6.15).

FIGURE 6.15 Pull out a soft edge by holding Ctrl/Cmd.

NOTE

You don’t have to remember all these hot keys. You can see
all of them in the context menu by right-clicking/Ctrl-clicking a
point.

13. To remove a soft edge (and bring back the secondary curve to its
origin), select the point and press Shift-E, or right-click/Ctrl-click a point
and choose Reset Feather from the drop-down menu.

Animating a shape
Now you will try animating a shape.

1. Advance to frame 11 in the Timebar and, with the Select tool, click a point to move your Bézier around a little and change the shape.

2. Move the Timebar from the first keyframe to the second keyframe and
see the shape animating from one to the other.

Keyframes are added automatically in RotoPaint because the Autokey check box is selected in the Tool Settings bar (FIGURE 6.16).

FIGURE 6.16 With the Autokey check box selected, changing the shape results in a keyframe.

The Autokey check box is selected by default, meaning the moment you start drawing a shape, a keyframe for that shape is created and gets updated on that frame as you are creating the shape. Moving to another frame and changing the shape creates another keyframe and thus creates animation. If, before you start drawing, you deselect the Autokey check box, drawing will not create a keyframe; if you then move to another frame and change the shape, no keyframe—and hence no animation—will be created either. If you have a keyframe, deselect the Autokey check box, and then change the shape, the change is only temporary—if you move to the next frame, the shape immediately snaps to the last available keyframe. You have to turn on the Autokey check box if you want to keep the new shape after you change it.
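The Autokey behavior just described can be summed up in a few lines of Python (a conceptual model, not Nuke code):

```python
def edit_shape(keys, frame, new_shape, autokey):
    """With Autokey on, an edit writes (or updates) a keyframe on the
    current frame; with it off, the edit is temporary—nothing is stored."""
    if autokey:
        keys[frame] = new_shape   # creates or updates a keyframe
    return keys

keys = {}
edit_shape(keys, 1, "shape A", autokey=True)    # drawing creates a key
edit_shape(keys, 11, "shape B", autokey=True)   # a second key -> animation
edit_shape(keys, 4, "shape C", autokey=False)   # temporary: no key stored
```

After the third call, only frames 1 and 11 hold keyframes; the frame-4 edit evaporates as soon as you move the Timebar, just as described above.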

3. Go to frame 4 in the Timebar and move the spline again to create another keyframe.

4. Move the Timebar to see the shape animating between the three
keyframes.

You can also animate other things for the shape, such as its overall
feather.

5. Go to frame 1. In the Shape tab, click the feather’s Animation menu and
choose Set Key (FIGURE 6.17).

FIGURE 6.17 Setting a keyframe for a Shapes property

The color of the Feather field changes to bright blue, indicating a keyframe.

6. Go to frame 11.

The color of the numeric box is now light blue to indicate that some
animation is associated with this property, but no keyframe is at this
frame.

7. Change the Feather property to 100.

The color of the box changes to a bright blue.

8. Go to frame 4 and change the Feather value to 70.

Using the Animation menu, you can add and delete keyframes. But you
need greater control over animation. Sure, you can add keyframes as you
have been doing, but what about the interpolation between these
keyframes? What kind of interpolation are you getting? And what about
editing keyframes? You know how to create them, but what about moving
them around in a convenient graphical interface? This is what the Curve
Editor is for.

THE CURVE EDITOR


As you add keyframes to a property, you create a curve. The curve
represents the value of the property over time. You can see this curve in
the Curve Editor panel, where each value (the Y axis) is plotted as it
changes over time (the X axis). You can add keyframes, delete keyframes,
and even adjust the interpolation between keyframes without ever looking
at this curve. However, as the animation grows more complex, you may
find it easier to edit the animation by manipulating this curve directly.

In most nodes in Nuke, right-clicking/Ctrl-clicking the Animation menu gives you an option called Curve Editor, as seen in FIGURE 6.18, but
this functionality has been removed from the RotoPaint node. In
RotoPaint, because all properties are linked to a stroke or a shape, all
curves are associated with a stroke or a shape as well. Simply selecting a
shape or stroke and switching to the Curve Editor panel shows their
associated properties.


FIGURE 6.18 There is usually a Curve Editor option, but
this is not the case with RotoPaint.

1. Click the Curve Editor tab in the Node Graph pane to switch to it
(FIGURE 6.19).

FIGURE 6.19 The Curve Editor as you first see it

In the Curve Editor shown in Figure 6.19, the window on the left shows
the list of properties (I call it the Properties List window), whereas the
window on the right shows the actual curves for the properties as they are
being selected. I call this the Graph window.

You can now see somewhat of a hierarchy in the Properties List window.
The first item we’ll talk about is called Bezier1. If you have already drawn
more than one shape or stroke and you select them in the Stroke/Shape
List window in the Properties panel, they are displayed here instead of
Bezier1. Under Bezier1, you have the name of the property that has
animation in it—Feather, and under that you have W and H, for Width
and Height. True, you operated only the Feather property with one slider,
but this slider is actually two properties grouped together, one for Width
and another for Height.

These curves are already loaded into the Graph window on the right.

2. Select both the W and H Feather properties by clicking the first and
then Shift-clicking the second. Click the Graph window on the right to
select that window.

3. Press Ctrl/Cmd-A to select all keyframes and then press F to fit the
curves to the window (FIGURE 6.20).

FIGURE 6.20 Two identical curves in the Curve Editor

The Curve Editor now shows two curves, and all the keyframes for both
curves are selected. You have selected two curves that are exactly alike,
which is why it appears as if you are seeing only one curve.

4. Click an empty space in the Curve Editor to deselect the keyframes.

One thing you can do in the Curve Editor is change the interpolation of a
keyframe. You can switch between a smooth keyframe and a horizontal
one, for example.

5. Select the middle keyframe on the curve and press the H key to make
the point a horizontal one (FIGURE 6.21).
FIGURE 6.21 Making a point on a curve horizontal

TABLE 6.4 lists some hot keys for different types of interpolation on
curves.

TABLE 6.4 Curve Interpolation

You can move around in the Curve Editor in the same way that you move
around in other parts of the application. You use Alt/Option-click-drag to
pan around. Use + and – to zoom in or out, or use the scroll wheel to do
the same. You can also use Alt/Option-middle button-drag to zoom in a
nonuniform way. Pressing the F key frames the current curve to the size of
the window.

You can create more points on the curve by Ctrl-Alt/Cmd-Option-clicking, just as you do when drawing shapes in the RotoPaint node.

6. Select the middle keyframe by clicking it (marqueeing won’t do).

Three numbers are displayed: an X, a Y, and a ø. The X and Y are obvious, but the ø is the angle of the tangents. At the moment it's set to 0 because you told this keyframe to be horizontal.

7. Double-click X to activate its Input field, increase or decrease the value by 1, then press Enter/Return (FIGURE 6.22).

FIGURE 6.22 Numerically changing the location of points

This makes editing the location of points on a curve very easy. Sometimes it is simpler to edit point locations numerically, and it's nice to have that option available.

Many other functions are available in the Curve Editor, but I won’t cover
them all now. Here’s one last function that enables you to edit the whole
curve using simple math.

8. Drag to select the whole curve (or press Ctrl/Cmd-A to select all keyframes).

9. Right-click/Ctrl-click and then choose Edit/Move (FIGURE 6.23).


FIGURE 6.23 You can access more features through the
contextual menu.

10. In the Move Animation Keys dialog box that opens, enter x+5 in the X
field to move your curve 5 frames forward (FIGURE 6.24).

FIGURE 6.24 Manipulating the curve function

11. Click OK.

Watch your curve move five frames forward. What happened here is that you set every X value to its current value plus five, so the whole curve moved five frames forward.

Let’s try another function.

12. Select your curve again, right-click/Ctrl-click, and choose Edit/Move again.

13. This time enter x/2 in the X field and press Return/Enter.

Watch your curve shrink. This is because you asked all X values to be half
their current value. Because X is time, your animation will now be twice
as fast.
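The Edit/Move fields work because each keyframe's X value is fed through the formula you type. A tiny hypothetical Python sketch of that behavior (the function name and data layout are made up for illustration, not taken from Nuke):

```python
# Hypothetical sketch of what Edit/Move does: run each keyframe's
# X (time) through the expression you typed.

keys = [(1.0, 0.0), (4.0, 70.0), (11.0, 100.0)]  # (frame, value)

def move_keys(keys, expr):
    """Apply an expression such as 'x+5' or 'x/2' to every key's time."""
    return [(eval(expr, {"x": x}), y) for (x, y) in keys]

shifted = move_keys(keys, "x+5")  # whole curve moves 5 frames later
faster = move_keys(keys, "x/2")   # timeline halves: animation twice as fast
```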

These examples and the other features in the contextual menu enable you
to manipulate your curve in many ways.

14. If you want to save this project, do so. Otherwise, press Ctrl/Cmd-W
to close this project and create another one. If Nuke quits altogether for
some reason, just start it again.

This concludes the introduction to both the Curve Editor and the
RotoPaint node. That’s enough playtime! Let’s move on to practical uses.

ROTOPAINT IN PRACTICE
The exercise you are about to perform deals with a variety of issues that
RotoPaint can fix. You will paint single frames, clone with the Clone
brush, and create shapes.

This exercise also uses techniques you have already learned: matching
colors and tracking. To save space, I will not explain how to do these
steps, and as a result, you start this comp halfway through. If you’re
feeling confident enough and want to challenge yourself, try and get to
this stage on your own after you look at the starting point provided.

Let’s begin.

1. From the File menu, choose Open, navigate to the chapter06 directory,
and double-click RotoPaint01_start.nk (FIGURE 6.25).
FIGURE 6.25 This script’s starting point

2. Click to select Read1 and press 1 to view it in the Viewer.

This shot is taken from a new short film by Alex Norris called Grade Zero.
You should check it out on his website, www.alexnorris.tv
(http://www.alexnorris.tv).

What you see here is a foot that shouldn’t be there—yes, that one at the
top. It doesn’t work in that cut and the director (that would be Alex—and
you do what Alex tells you to) says to take it off (FIGURE 6.26).

FIGURE 6.26 This foot needs to go.

3. Click Play in the Viewer.

There’s a fair amount of movement in this shot—not just the camera, but
the actors are moving and the lights change too. All of these need to be
dealt with.

4. Click Read2 and press 1 to view it. Switch between viewing the RGB
and the alpha. When you’re done, stay on RGB.

Read2 is a patch. I made it using just frame 1, some roto, corner pinning,
and paint work. But it’s just a single frame with an alpha channel.

The rest of the nodes in the tree are

• Premult1: The alpha channel for Read2 shows a lot of black, but the RGB
doesn’t. This means it is a straight or unpremultiplied image, and so it
needs premultiplying before it can be filtered or merged.

• Tracker1: You just learned how to use one of these in Chapter 5. I already used Tracker1 and matched the motion of Read2 to that of Read1.

• Merge1: I’ll let you figure out this one on your own.

• CurveTool1: I showed you how to execute this node to figure out the light
changes in Chapter 4. Here it’s used to gauge the lighting change of the
background.

• Grade1: This node already links to CurveTool1's Intensity Data knobs.

Grade1 is disabled. Let’s enable it.

5. Click Grade1, view it in the Viewer, then press D to enable it.

The picture went very dark. It shouldn't be; in fact, on frame 1, it should not change at all.

CurveTool1's output is the overall brightness of the image, which in our case is relatively dark. It doesn't know what a starting point is, so copying the data from CurveTool1 takes an image that already has a certain brightness and darkens it further. You need only the changes in brightness, however, much like in the Tracker: there, you don't need to know the location of the reference in frame 1; you want to know how much it changed in frame 2, frame 3, and so on. What you need to do is apply the inverse of the color correction according to frame 1's data alone. This leaves frame 1 unchanged and returns only the changes in relation to frame 1.
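The arithmetic behind this setup can be sketched in a few lines of Python (the intensity numbers here are invented; this only illustrates the math, not Nuke's internals):

```python
# CurveTool measured an absolute brightness per frame. Applying it
# directly as Gain darkens frame 1; applying it through Grade1 and
# then reversing frame 1's value in Grade2 leaves frame 1 untouched
# and keeps only the change relative to frame 1.

intensity = {1: 0.42, 2: 0.45, 3: 0.38}  # made-up CurveTool samples

def relative_gain(frame):
    # Grade1 gain = intensity at this frame;
    # Grade2 (reversed) divides by frame 1's intensity.
    return intensity[frame] / intensity[1]

# Frame 1 gets a net gain of 1.0 (no change); other frames carry
# only the brightness change relative to frame 1.
```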

6. Click Grade1 and press G to create another Grade node.

7. Double-click CurveTool1 to load it to the Properties Bin, then click the IntensityData tab.

8. Drag from CurveTool1's IntensityData property's Animation menu (without holding any keyboard key) to Grade2's Gain property's Animation menu and let go.

You have now copied over just frame 1’s values. But remember, you
wanted to invert the color correction. The Grade node has just the
solution for that.

9. In Grade2, click the Reverse check box (FIGURE 6.27).

FIGURE 6.27 This check box reverses the Grade's operation.

The color of the image is now back to normal as you are color correcting
with Grade1 and then inverting the operation with Grade2. On the rest of
the frames you will see the change in color only in relation to frame 1.

Let’s look at the result so far.

10. View Merge1 and click Play in the Viewer.

A lot of things are working already. The patch is moving with the
background and the colors match throughout the shot. The main problem,
seen in FIGURE 6.28, is that the knife and the fingers holding it are
hidden behind the patch where the shoe used to be.

FIGURE 6.28 There’s a knife behind that patch, you know.

Using Roto to create a moving matte


To make the knife look like it is being held above the floor patch, you need
a matte. Use a RotoPaint node to create that matte.

1. With Read1 selected, press Shift-P to branch out a RotoPaint node.

2. Press 1 to view RotoPaint1.

3. In RotoPaint1’s Properties panel, choose Alpha from the Output menu.

4. While on frame 1, press V while hovering the mouse pointer in the Viewer to start drawing a Bézier shape.

5. Draw a Bézier shape around the knife and the fingers holding it. You
can see my shape in FIGURE 6.29.


FIGURE 6.29 Draw around the knife and fingers.

You immediately have a keyframe on frame 1. Now you need to follow this
hand over time and keep adding keyframes so the shape stays around the
knife throughout the shot. Here are some tips:

• Rotoscoping is just like animation. You should add keyframes where the
motion changes. Adding keyframes arbitrarily produces bad roto.

• Scroll through your clip and write down the frames you should add
keyframes in. The first and last frames are obvious choices; the other
choices depend on what’s mentioned in the previous tip.

• Once you are done adding keyframes in the frames you thought appropriate, look for the places between those keyframes where the difference between the object and the shape is biggest. Adjust these.

• You don’t need the rest of the interface when you’re rotoscoping, so just
press the spacebar while hovering the mouse pointer in the Viewer to
maximize it.

• Move points together. Marquee-select as many points as makes sense, then translate, rotate, or scale them in unison. This creates less wobble in the resulting shape.

TIP

If the shape is ever blocking you from seeing where you want
to draw, you can turn it off by hovering the mouse pointer over
the Viewer and pressing the O key. Press O again to redisplay
the overlays.

I checked and it seems like good frames for adding keyframes are these: 1,
8, 12, 18, 19, 23, 28, 32, 42, 47, 55, and 58. Go ahead and check yourself,
and see if your list and mine are similar.

6. Follow and adjust the shape throughout the shot until you have
matched the shape to the knife.

It should take you anywhere between 10 and 30 minutes to complete this activity. When you're done, pat yourself on the back. You've done well. Now to connect the roto you just performed to the rest of the tree.

7. With RotoPaint1 selected, press M on the keyboard to insert a Merge node.

8. Drag Merge2 between Tracker1 and Merge1 (FIGURE 6.30).


FIGURE 6.30 Using another Merge node to cut a hole in
our patch

9. In Merge2’s Properties panel, choose Stencil from the Operation menu.

10. View Merge1 in the Viewer and click Play.
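Under the hood, the Stencil operation you chose in step 9 is simple merge math. A small Python sketch of the per-pixel formulas, using premultiplied values and the standard compositing definitions:

```python
# Per-pixel merge math on premultiplied values (a sketch of the
# standard formulas, not Nuke source code).

def over(a, b, alpha_a):
    """'over': A sits on top of B."""
    return a + b * (1.0 - alpha_a)

def stencil(b, alpha_a):
    """'stencil': A's alpha cuts a hole in B."""
    return b * (1.0 - alpha_a)

# Where the knife matte's alpha is 1.0, the patch pixel is removed
# entirely, letting the knife from the original plate show through
# in the final 'over' at Merge1.
```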

Mine looks pretty good and I hope yours does as well. If not, fix it until it
does. There’s just one thing on mine, and I’m sure it’s on yours too, that’s
bothering me: The matte is too sharp. It is too sharp for two reasons:
First, this area is not 100% in focus, and second, there’s motion blur that
needs to be matched.

11. To fix this, with RotoPaint1’s Properties panel open, select your Bézier
shape and switch to the Shape tab.

12. Enable motion blur by clicking the On check box to the right of the
Motion Blur slider (FIGURE 6.31).

FIGURE 6.31 Enabling motion blur for each shape separately

13. Select RotoPaint1 in the DAG and press B to insert a Blur node.

14. Set the Blur Size value to about 6.

All is well and looks good, but if your roto is a little like mine, you will see
a little bit of a dark edge around the knife. You can solve this with the
Feather property in RotoPaint1.

15. In RotoPaint1's Properties panel, with your Bézier selected, go to the Shape tab.

16. Bring the Feather slider down to about –3 or so—until you lose the
dark edge.

This just about concludes this part of the exercise. But what’s that there?
There’s a remnant of a hand on the patch we used. See it here, in
FIGURE 6.32?

FIGURE 6.32 This patch needs fixing.


Using the Clone brush
The person who made the patch didn’t do their job properly. They (me, it
was me, I admit it) forgot to paint out the hand. Let’s not make too much
of a fuss and fix that now.

For this fix, you will use a brush you haven’t practiced with before: the
Clone brush. This brush uses pixels from one location in the frame and
copies them to another, basically allowing you to copy areas.

1. Make sure Read2 is selected and attach a RotoPaint node to it. Make
sure you are viewing RotoPaint2.

2. From RotoPaint2’s toolset, choose the Clone brush.

3. Change the Output channels menu to RGB (not RGBA as it is originally) because you don't want to change the alpha channel.

NOTE

It doesn’t matter which frame you are on when you’re painting


because you are actually painting on all frames.

4. This is a single frame at this point, and because you want the hand to
be gone for the duration, change the Lifetime Type drop-down menu from
Single to All in the Tool Settings bar (FIGURE 6.33).

FIGURE 6.33 Changing the Lifetime Type setting before drawing a stroke

The Clone brush copies from one location in the image to another. You set
it up by Ctrl/Cmd-clicking the point you want to copy from and then,
without letting go, dragging to the point where you want to copy to.

In this case, you need to keep aligning your brush with the diagonal lines
of the checkered floor.

5. Make sure you’re zoomed in nicely (by using the + and – keys) to the
area where you want to work and then Ctrl/Cmd-click the area you want
to copy from. Then, without letting go, drag toward the hand—where you
want to copy to—and then release (FIGURE 6.34).

FIGURE 6.34 Setting up the Clone brush

6. You can now paint in one direction along the diagonal line. Notice that
your paint strokes are now copying grain and all the other texture from
your source area.

Remember to change the size of your brush and the direction and distance of your clone source once in a while so you don't get a repetitive pattern.
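Conceptually, the Clone brush copies every painted pixel from a fixed offset away in the same frame. A toy Python sketch (the image representation and function names are invented for illustration):

```python
# Toy model of a clone stroke: each painted pixel is copied from a
# fixed offset away in the same image.

def clone_stroke(image, painted_pixels, offset):
    """Copy pixels from (x+dx, y+dy) onto each painted (x, y)."""
    dx, dy = offset
    out = dict(image)
    for (x, y) in painted_pixels:
        out[(x, y)] = image[(x + dx, y + dy)]
    return out

# A checkered floor: cloning along the diagonal (equal dx and dy)
# keeps the checker pattern aligned, which is why you keep aligning
# the brush with the floor's diagonal lines.
floor = {(x, y): (x + y) % 2 for x in range(8) for y in range(8)}
fixed = clone_stroke(floor, [(3, 3), (4, 4)], offset=(2, 2))
```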

As you can see in FIGURE 6.35, I went a bit far with my painting. The
black checker should have stopped earlier. Let’s erase this mistake.


FIGURE 6.35 Good thing there’s an Eraser tool!

7. Select the Eraser and paint out any problem areas.

8. Repeat the process until you are happy with the results. Remember
that you can change properties as well—such as Size and Opacity—that
might help.

I ended up with the image shown in FIGURE 6.36.

FIGURE 6.36 The patch patched

Let’s move on. We have only one little thing left to do...

Dust removal with Paint


Now that you cloned out the hand, it’s time to remove the dust. I’m not
sure you noticed while looking at the clip that there is dust on it—but
there is (FIGURE 6.37). You need to clean this up.

FIGURE 6.37 That's dust right there. And there's more where that came from.

In this case, even though you already have two RotoPaint nodes in the
tree, you still need a third one, and it needs to be in a different location in
the tree. But even if you could use one of the existing RotoPaint nodes, it’s
better to keep things separate. After all, that’s the good thing about the
node tree. So, go ahead and use another RotoPaint node for painting out
the dust. You can also use another little trick to clean up the dust quickly
and easily:

1. With nothing selected, create another RotoPaint node by pressing the P key.

2. Insert RotoPaint3 between Read1 and Merge1.


NOTE

On-screen controls appear for each node that has controls associated with it and for which its Properties panel is loaded in the Properties Bin. This can get very confusing and clutter the screen, so it's good practice to close Properties panels when you're not using them.

3. Clear the Properties Bin and then double-click RotoPaint3 to open just
that node’s Properties panel.

To make it easy to paint out the dust, you are going to reveal back to the next frame using the Reveal brush, in the hope that the speckle of dust won't appear in the same location frame after frame. This is faster than cloning on each frame, but it works only if there is little to no movement in the clip, or if the movement in the clip has been stabilized beforehand.

First, let’s see which frames have dust.

4. Click RotoPaint3 and press 1 to load it into the Viewer.

5. Take a pen and paper and start writing down which frames have dust.
Here’s the list I made for the first 20 frames: 1, 2, 5, 9, 10, 13, 19. You
should find all the rest of the frames that have dust in them and write
them down.

As mentioned earlier, you will use the Reveal brush in this example. Now
let’s set up the Reveal brush and reveal one frame forward from the
current frame.

6. Select the Reveal brush from RotoPaint3’s toolbar in the Viewer. The
Reveal tool is the second tool in the Clone tool’s menu (FIGURE 6.38).

FIGURE 6.38 The Reveal brush is hidden under the Clone tool's menu.

7. Once you’ve selected the Reveal brush, make sure Single is chosen from
the Lifetime Type drop-down menu in the Tool Settings bar.

The magic of this setup happens in the Reveal's Source Time Offset property (FIGURE 6.39), which determines the offset given to the clip from which you reveal. You are revealing from the background—the same source you are painting on—but offset one frame forward.

FIGURE 6.39 The Source Time Offset field

8. Change the Source Time Offset field at the top of the Viewer from 0 to
1. This is the field represented by the Δt label.

9. Change the drop-down menu to the left of Δt from Bg1 to Bg to reveal back to the original image (FIGURE 6.40).


FIGURE 6.40 Choosing a reveal source from the drop-down menu

You are now ready to start painting.

10. This kind of work is more easily done with a big screen, so hover your
mouse pointer over the Viewer and press the spacebar briefly to maximize
the Viewer. In case you need to go back to the full interface, press the
spacebar again.

11. Zoom in so the image fills the frame by pressing the H key.

12. Go to frame 1 and paint over the dust speckle to reveal back to the
next frame and, in effect, remove the speckle.

13. Repeat this process for every frame on your list.

14. When you’re finished, click Play in the Viewer to make sure you didn’t
miss anything. If you did, repeat.
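The trick you just used can be summarized in a few lines of Python (purely conceptual, not Nuke code): painted pixels are replaced by the same pixels one frame later, which works because dust rarely lands in the same spot on consecutive frames.

```python
# Conceptual model of the Reveal setup with a Source Time Offset
# (Δt) of 1: painting replaces pixels with those from frame t + 1.

clip = {
    1: {"pixel": "dust"},   # dust speckle on frame 1
    2: {"pixel": "floor"},  # clean on frame 2
}

def reveal(clip, frame, dt=1):
    """Return the pixel revealed from the offset frame."""
    return clip[frame + dt]["pixel"]

# Painting over the speckle on frame 1 reveals frame 2's clean pixel.
```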

15. Save your Nuke project in the student_files folder and restart Nuke
with a new empty project.

You used RotoPaint three times here for three different purposes. There’s
a lot more you can do with this versatile tool; another example is coming
next.

COMBINING ROTOPAINT AND ANIMATION


The following little exercise is simple enough, but I will overcomplicate it
a little so I can teach you more stuff. How’s that for overachieving?

NOTE

Please make sure CarWindow.png is Read1 and not the reverse so you can follow the steps accurately.

1. In a new project, bring in two images, called CarWindow.png and CarWindow_dirty, from the chapter06 folder with Read nodes (FIGURE 6.41).

FIGURE 6.41 These are the two images this production provided to you.

For this shot, you have been asked to dirty up the windshield and write
CLEAN ME on it, as some people do when they walk past a dirty car. The
magic is going to be that, in this case, the message needs to appear as if
it’s being written by an invisible hand.

Let’s create a matte for the windshield.

2. Select Read1 and press P to insert a RotoPaint node.

3. Since you want to create the Bézier only on the alpha channel, deselect
the Red, Green, and Blue boxes at the top of RotoPaint1’s Properties panel
(FIGURE 6.42).

FIGURE 6.42 You can create images in any combination of channels.

4. Make sure you are viewing RotoPaint1. Select the Bézier tool and start
drawing to trace the edges of the windshield (FIGURE 6.43).
FIGURE 6.43 My Bézier shape for the windshield

TIP

For fine-tuning, select a point and use the number pad on your keyboard to move the point around. For example, 2 moves it down and 7 moves it diagonally up and left. Holding Ctrl/Cmd reduces the move to a 10th, and holding Shift multiplies it by 10.

5. Refine your shape by changing the Bézier handles and moving things
around until you’ve created a matte you’re happy with.

6. Hover your mouse pointer over the Viewer and press the A key to view
the alpha channel.

You have now created your first matte! I kept promising in previous
chapters you would. Now the moment has finally arrived.

7. Press the A key again to view the RGB channels.

8. Empty the Properties Bin.

Compositing with the Keymix node


Here you will use a different layering node from the usual Merge node, just to spice things up. The Keymix node has three inputs: a foreground, a background, and a matte. Unlike the Merge node, Keymix accepts an unpremultiplied image as its foreground, which makes it very convenient for mixing two images that have no matte of their own. It also saves nodes, because you don't have to add a ShuffleCopy or a Premult node.
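On a single pixel, Keymix can be sketched like this in Python. This is an illustrative guess at the math, and I'm assuming here that the Mix knob simply scales the matte's effect:

```python
# Sketch of Keymix on one pixel: the matte selects between B
# (background) and A (unpremultiplied foreground).
# Assumption: the Mix knob scales the matte before blending.

def keymix(a, b, matte, mix=1.0):
    m = matte * mix
    return a * m + b * (1.0 - m)

# With mix=0.2, only 20% of the foreground (the dirt) shows through
# inside the matte, which is why lowering Mix dims the noise.
```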

1. Select Read2 and, from the Merge toolbox, click Keymix.

2. Connect Keymix1's B input to Read1 and the Mask input to RotoPaint1 (FIGURE 6.44).

FIGURE 6.44 Connecting Keymix’s three inputs

3. View Keymix1.

4. I don’t want this much noise, so change Keymix1’s Mix property to 0.2.

5. The matte you created is a bit sharp, so select RotoPaint1 and press the
B key to insert a Blur between RotoPaint1 and Keymix1.

6. Change Blur2’s Size value to 2.

7. Close all Properties panels by clicking the Clear All Properties Panels button at the top right of the Properties Bin (FIGURE 6.45).


FIGURE 6.45 A dirty car window

You finally have a dirty car window. Now let’s move on to punch a hole in
the matte by writing CLEAN ME on it. You’ll use RotoPaint1 to create the
hand-drawn text CLEAN ME.

8. Double-click RotoPaint1 to load its Properties panel.

Working with the Stroke/Shape List window


To make sure things are tidy, you will create a folder in RotoPaint1’s
Stroke/Shape List window. You can use folders there to group things
together and separate different elements. Also, the group has settings of
its own, including an axis, which you can use to move the content of the
folder around. This really comes in handy sometimes.

1. Create a new folder in the Stroke/Shape List window by clicking the + button at the bottom left of the window (FIGURE 6.46). If the button is greyed out, click anywhere in the window to activate it.

FIGURE 6.46 Use the + button at the bottom of the Stroke/Shape List window to add folders.

2. Rename the new folder (Layer1) by double-clicking the name and typing CLEAN ME (an underscore will be added automatically) (FIGURE 6.47).

FIGURE 6.47 Renaming the folder to keep things even tidier

3. To see what you’re doing, temporarily hide Bezier1 by clicking the Eye
icon to the right of the name Bezier1 (FIGURE 6.48).

FIGURE 6.48 The Eye icon controls visibility.

4. Select the Brush tool and hover your mouse pointer over the car
window.

5. Make sure your newly created folder is selected in the Stroke/Shape List window.

6. Make sure you're drawing on all frames by choosing All from the drop-down menu in the Tool Settings bar.

7. Write CLEAN ME on the window. It took me 16 strokes (FIGURE 6.49).

FIGURE 6.49 I wrote this with a trackpad. Don’t blame me.

TIP

The Stroke/Shape List window can become very cramped very quickly. You can make the whole panel bigger by dragging the bottom-right corner. This will make the Stroke/Shape List window grow in size, too.

8. Turn Bezier1 back on by clicking to bring back the Eye icon in the Stroke/Shape List window to the right of the name Bezier1.

You won’t see the writing anymore. This is because both the shape and the
strokes draw in white. However, RotoPaint is a mini-compositing system
in its own right. You can tell all the strokes to punch holes in the shape
just as you can with a Merge node.

9. Select all the strokes called Brush# by clicking the first and then Shift-clicking the last in the Stroke/Shape List window (FIGURE 6.50).

FIGURE 6.50 Selecting in the Stroke/Shape List window

10. Change the Blending Mode drop-down menu from Over to Minus
(FIGURE 6.51).

FIGURE 6.51 Changing the Blending Mode property

You can now see your writing (FIGURE 6.52)!


FIGURE 6.52 Your car should look something like this.

The only thing left to do is animate the writing of the words. For that, you
use the Write On End property in the Stroke tab and the Dope Sheet.

USING THE DOPE SHEET


The Dope Sheet is another panel that helps control timing in your tree. It’s
easier to change the timing of keyframes with the Dope Sheet than with
the Curve Editor, and it can also change the timing of Read nodes in your
tree.

To get started with the Dope Sheet, the first thing to do is create two keyframes for all the brush strokes on the same frames. You then stagger the keyframes using the Dope Sheet so the letters appear to get drawn one after the other.

1. All the brush strokes that draw the text should already be selected in
the Stroke/Shape List window. If they aren’t, select them again.

2. Go to frame 1 in the Timebar. This is where the animation will start.

3. Click the Stroke tab in RotoPaint1’s Properties panel.

4. Change the Write On End property to 0.

5. Right-click/Ctrl-click the field and choose Set Key from the contextual
menu (FIGURE 6.53). This sets keyframes for all the selected brush
strokes.

FIGURE 6.53 Setting a keyframe using the Write On End field

Now for the second keyframe.

6. Go to frame 11 in the Timebar.

7. Bring the Write On End property to 1.

8. Click Play in the Viewer.

At the moment, all the letters are being written at once. You can stagger
this so that the first letter is written first and so on.

9. In the bottom-left pane, switch from the Node Graph tab to the Dope
Sheet tab.

What you should be seeing is something similar to FIGURE 6.54.

FIGURE 6.54 The Dope Sheet


This is the Dope Sheet. It is very similar to the Curve Editor, but it focuses
on the timing of keyframes rather than their value and interpolation. You
can change the timing of keyframes in the Curve Editor as well, but it is a
lot more convenient to do so here. Another function the Dope Sheet serves
is changing the timing of Read nodes—meaning clips in the timeline. (You
will learn this in Chapter 8.)

The window on the left, which I call the Properties List window, shows the
list of properties that are open in the Properties Bin. The window on the
right, which I call the Keyframe window, shows the actual keyframes for
each property.

The Properties List window is hierarchical, meaning it shows the tree as submenus within submenus. This lets you move groups of keyframes together, which is very convenient when you're doing animation-based work.

At the moment, the Properties List window shows only RotoPaint1—the only node that is loaded into the Properties Bin. Under that, you can see the curves submenu, representing all the animation curves available, presented in the same way as in the Curve Editor. Let's see what else is in there.

10. Click the little – symbol to the left of the CLEAN_ME submenu in the Dope Sheet's Properties List window, then click it again to open it back up.

You now see this holds all the keyframes for the brush strokes you added
to the CLEAN_ME folder (FIGURE 6.55).

FIGURE 6.55 The list of brushes appears in the left window inside the CLEAN_ME folder.

Each of these strokes has keyframes associated with it—in your case,
keyframes for the actual shape that was drawn and two keyframes for the
Write On End property you animated.

I really want to only stagger the Write On End property. If I move the
stroke’s drawing keyframe, it won’t have any effect on the animation or
the shape. As it’s easier to move both of them as one entity, that’s what
we’ll do.

11. Click the – symbol to the left of all the Brush# properties.

It is a little hard to see the keyframes in the Keyframe window because you are zoomed out quite a bit. The Keyframe window is currently set to show frames –5 to 105. You can navigate the Keyframe window in the same way that you navigate the Viewer and the Curve Editor. However, you can also zoom it using two fields at the bottom of the window. Zoom in to view frames 0 to 20.

12. Using the Range fields at the bottom right of the Dope Sheet, frame
the Keyframe window between 0 and 20 (FIGURE 6.56).

FIGURE 6.56 This is a very convenient way to frame the


Keyframe window.

You can now see the keyframes more clearly (FIGURE 6.57). Each
vertical box represents a keyframe.

FIGURE 6.57 The Keyframe window shows the keyframes more clearly now.

13. Click Brush2 in the Property List window.

Dragging the box that appears around the keyframes, you can move them
in unison. You can also use the box to scale the animation by dragging on
its sides. Of course, you can also move each keyframe individually
(FIGURE 6.58).

FIGURE 6.58 The box allows you to control keyframes


together.

14. Click the center of the box and drag the keyframes until the first one is
at frame 6. You can see the frame numbers at the top and bottom
(FIGURE 6.59).

FIGURE 6.59 Using the box to offset your animation

So you are starting to do what you set out to do: stagger the animation. You need to keep doing this for every subsequent brush stroke, offsetting each by another five frames. Here's another way to do this.

15. Select Brush3 through the last brush stroke by clicking Brush3 and then Shift-clicking the last one—in my case, Brush10 (FIGURE 6.60).

FIGURE 6.60 You can also change a whole bunch of properties’ keyframes in one step.

At the bottom of the Dope Sheet window there is a Move button and next
to it is a field (FIGURE 6.61). You can use this button to change the
location of keyframes without needing to drag.

FIGURE 6.61 The Move button

16. Enter 5 in the field.

17. Click the Move button twice to move all the keyframes selected by 10
frames (FIGURE 6.62).

FIGURE 6.62 Moving all the selected keyframes by 10 frames

Next time you click Move, you don’t need to move Brush3 anymore, so
you need to deselect it before clicking. Do so again without Brush4, then
again without Brush5, and so on.
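
The arithmetic behind this repeated move-and-deselect routine is simple: each stroke ends up starting five frames after the one before it. Here is a plain-Python sketch of the staircase you are building (just the math, not Nuke’s scripting API; the function name is mine):

```python
def stagger_starts(brush_count, base_frame=1, step=5):
    """First keyframe for each brush after staggering: each stroke
    begins `step` frames after the previous one."""
    return [base_frame + i * step for i in range(brush_count)]

print(stagger_starts(10))  # [1, 6, 11, 16, 21, 26, 31, 36, 41, 46]
```

Brush2 landing on frame 6 and each later stroke five frames after the previous one produces exactly this staircase.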

18. Deselect Brush3 by Ctrl/Cmd-clicking it.

19. Click the Move button.

It is probably getting hard to see what you are doing because as you are
pushing keyframes forward, they are going off screen. You need to
reframe your Dope Sheet again.

20. Set the Range fields to run from 0 to 100.



21. You now need to keep offsetting the keyframes five frames at a time,
while deselecting property after property.

When you are finished, the Dope Sheet should look like FIGURE 6.63.

FIGURE 6.63 The staggered staircase of keyframes in the Dope Sheet

This concludes the animation creation stage. All you have to do now is sit
back and enjoy this writing-on effect you have made.

22. Switch back to the Node Graph and, with Keymix1 selected, press 1 to
view it. Then click Play.

I hope you are enjoying the fruits of your labor (FIGURE 6.64).

FIGURE 6.64 This is how your tree should appear at the end of this exercise.

The RotoPaint node is indeed very powerful and you should become good
friends with it. I hope going through these examples helped.

7. Keying
Keying is the process of creating a matte (an image that defines
a foreground area) by asking the compositing system to look for
a range of colors in the image. This is also sometimes called
extracting a key. It’s a procedural method, which makes keying
a lot faster than rotoscoping, for example. However, it has its
own problems.

You have to shoot specifically for keying because you have to place the foreground object in front of a single-color screen. The color of the screen can’t include any of the colors of the foreground object because the computer will be asked to remove this color. Usually this is done with either a blue or green backing—called a bluescreen or greenscreen, respectively. The color you choose is usually dictated by the colors in the foreground object. If it’s an actor wearing blue jeans, a greenscreen is used. If it’s a green car, go with a bluescreen.

Because you want the computer to remove a color from the image (blue or
green, normally) you want the screen to be lit as evenly as possible to
produce something close to a single color. This is, of course, hard to do
and rarely successful. Usually what you get is an uneven screen—a screen
that has many different shades of the screen color.

Because you have to shoot especially for keying and you can’t shoot on location, you have to do a lot of extra work to make a shot like this work. Extracting a key is not an easy process, and problems—for example, holes in the matte and fine edges such as hairs—are always an issue. Also, standing an actor in the middle of a green-painted studio means the actor will have a strong green discoloration (called spill) that you will have to remove somehow—a process called spill suppression, or despill for short.
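
To make the idea of spill suppression concrete, here is a generic sketch of one common despill formula for a greenscreen (one of several approaches; this is not the exact math of any particular Nuke node): simply limit the green channel so it cannot exceed the average of red and blue.

```python
def despill_green(r, g, b):
    """Clamp green to the red/blue average so a green cast on the
    foreground is pulled back toward neutral (values in 0-1 range)."""
    limit = (r + b) / 2.0
    return (r, min(g, limit), b)

# A greenish skin-tone pixel: excess green is pulled down.
print(despill_green(0.8, 0.7, 0.5))
# A pixel with no excess green passes through unchanged.
print(despill_green(0.8, 0.6, 0.5))
```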

But still, keying is very often the better method of working, and is used
extensively in the VFX (visual effects) industry.

Most applications try to create a magic keying tool that gets rid of the
screen with a couple of clicks. However, this hardly ever works. Most
likely, you have to create a whole big composition just for extracting the
matte. Nuke’s tree-based approach makes it easy to combine keys, mattes,
and color correction together to reach a better overall matte and corrected
(spill suppressed) foreground.


BASIC KEYING TERMS

There are four basic types of keying techniques. Without going into a
complete explanation of the theory of keying, here are the four
techniques:

Luma-keying uses the black and white (called luminance in video terms) value of the image to extract a matte.

Chroma-keying uses the hue and saturation (called chrominance in video terms) values of the image to extract a matte.

Difference-keying uses the difference between an image with a foreground element and another image with that foreground element taken away (also called a clean plate).

Color difference–keying utilizes the difference in color between the three color channels to extract a matte. Spill suppression is a side effect of this technique.
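
As a rough illustration only (real keyers are far more sophisticated, and chroma-keying works similarly on hue and saturation), the core idea of each technique can be written as a small per-pixel formula. Values are in the 0–1 range, and the returned value is the foreground matte (1 = keep):

```python
def luma_key(r, g, b, threshold=0.5):
    """Luma-keying: matte from the luminance (Rec. 709 weights)."""
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return 1.0 if luma > threshold else 0.0

def difference_key(pixel, clean_pixel, threshold=0.1):
    """Difference-keying: compare a pixel against its clean-plate pixel."""
    diff = sum(abs(a - b) for a, b in zip(pixel, clean_pixel))
    return 1.0 if diff > threshold else 0.0

def color_difference_key(r, g, b):
    """Color difference-keying for a greenscreen: how much green
    exceeds the other channels, inverted so 1 = foreground."""
    screen_amount = g - max(r, b)
    return 1.0 - max(0.0, min(1.0, screen_amount))

print(color_difference_key(0.0, 1.0, 0.0))  # pure screen pixel -> 0.0
print(color_difference_key(0.5, 0.5, 0.5))  # neutral foreground -> 1.0
```

Note how the color difference formula suppresses green as a side effect: whatever green exceeds the other channels is exactly the part treated as screen.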

INTRODUCING NUKE’S KEYING NODES

Nuke has many keying nodes, but I have room to cover only a few of them in detail. Here is a rundown of the options in Nuke’s Keyer toolbox:

Difference: This undocumented node is rarely used. It is a simple difference-key node that creates a matte for the difference between two images. It has very little control over the matte and no control for spill (FIGURE 7.1).

FIGURE 7.1 The Difference node’s Properties panel

HueKeyer: This is a simple, yet very easy-to-use, chroma keyer. It lets you choose a range of hues and saturation to extract (key out) in a very handy user interface (FIGURE 7.2).

FIGURE 7.2 The HueKeyer’s Properties panel

IBK: The Image Based Keyer (IBK) was first developed at Digital
Domain, which also originally developed Nuke. This keyer is designed to
work with uneven green- and bluescreen elements. So instead of having
one blue or green shade, you have many. The IBK, which consists of two
nodes—the IBKColor and IBKGizmo—creates a color image representing
all those different screen colors (but without the foreground element) and
then uses that to key instead of a single color (FIGURE 7.3).


FIGURE 7.3 A basic IBK tree that includes the two IBK
nodes

Keyer: This is a basic keyer, used mainly for luminance-based keying, or luma keying (FIGURE 7.4).

FIGURE 7.4 The Keyer node’s Properties panel

Primatte: This plug-in from Imagica Corp. is bundled with Nuke. It is one of three industry-leading keying plug-ins available. (Primatte, Keylight, and Ultimatte all come with Nuke.) This is a great keyer that can key any color and reach very high levels of detail and control. It has control over matte, edge, transparency, and fill. Primatte is not available in the PLE (Personal Learning Edition) version of Nuke.

Keylight: This plug-in from The Foundry is included free with Nuke.
It keys only blue- and greenscreens—and it does the job very well,
producing results I find unprecedented.

Ultimatte: This plug-in from Ultimatte is also bundled with Nuke. Yet
another great keyer, it has control over the matte, edge, transparency, and
spill.

Now, let’s jump right in and try a few of these keying nodes.

1. Go to your chapter07 directory and bring in bg.####.png and gs.####.png.

2. Select Read2 (this should be the greenscreen element) and view it in the Viewer (FIGURE 7.5).

FIGURE 7.5 The greenscreen element to key, courtesy of Divine Productions and Cassis Films

This shot is taken from a wonderful short (not that short actually, at 40 minutes) called Aya by Mihal Brezis and Oded Binnun. The actress is Sarah Adler. I was very lucky to get approval to use the shot in the book. It’s a small drama between two characters driving, and the whole thing is shot on greenscreen, though you would never guess it watching the film.


This is a pretty flat greenscreen element. Even so, as always, it will still pose all sorts of problems: a lot of wispy hairs that we can hopefully retain, and a fair amount of green spill on the white areas of the woman’s shirt and the dark midtones of her skin tone that needs to be fixed.
HUEKEYER
The HueKeyer is a straightforward chroma keyer. It has one input for the
image you want to key. HueKeyer produces an alpha channel by default,
and does not premultiply the image.

Let’s connect one HueKeyer node.

1. Select Read2 and insert a HueKeyer node from the Keyer toolbox after
it (FIGURE 7.6).

FIGURE 7.6 The HueKeyer’s Properties panel

HueKeyer’s interface consists of a graph editor where the X axis represents hue, and the Y axis represents the amount to be keyed. By creating a point on the graph at a certain hue and pulling the graph up to a value of 1, you tell HueKeyer that this hue should be 100% gone.

2. Make sure you are viewing the output of HueKeyer1. Switch to viewing
the alpha channel (FIGURE 7.7).

FIGURE 7.7 HueKeyer’s default alpha channel

You can see that already there’s a very promising alpha in there. This is
because, by default, HueKeyer’s designed to key out a range of greens and
cyans. This greenscreen is, surprisingly, very good. Let’s see where it is on
the graph.

3. Switch back to viewing the color channels.

4. Hover your mouse pointer over the greenscreen area and look at
HueKeyer1’s graph (FIGURE 7.8).

FIGURE 7.8 The yellow line in the graph represents the hue of the color that the mouse pointer is hovering over.

You can see by the yellow line that’s moving about that the greenscreen
sits somewhere around 3.1 on the hue, or X axis. The dark areas of the car
interior are somewhere around 2.3, by the way.
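
If you want to predict where a color will land on a 0–6 hue axis like this, a useful mental model is HSV hue scaled by 6 (this is an assumption for illustration; the exact origin and direction of HueKeyer’s axis may differ, so treat it only as a way to compare relative hues). Python’s standard colorsys module makes this a one-liner:

```python
import colorsys

def hue6(r, g, b):
    """HSV hue scaled to a 0-6 range, so 0 = red, 2 = green,
    3 = cyan, 4 = blue (values in 0-1 range)."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h * 6.0

print(hue6(0.0, 1.0, 0.0))  # pure green -> 2.0
print(hue6(0.0, 1.0, 1.0))  # cyan -> 3.0
```
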
Now you are going to edit the curve in the graph by moving some of the
points that are already there. You can do it by hand, but I’ll start you off
by typing numbers instead.

5. View the alpha channel.

6. In the Curve List window, click the Amount curve so you see only that curve in the graph (FIGURE 7.9).

FIGURE 7.9 Clicking a curve in the list selects only that curve.

The other curve controls the amount of saturation, which you can edit the
same way. However, doing so has little effect in this case, so you don’t
need it.

7. Click the point in the graph that’s at X = 2, Y = 0 (FIGURE 7.10).

FIGURE 7.10 Clicking a point in the graph allows you to numerically edit it.

8. Double-click the X value next to the point itself. This displays an input
field.

9. Change the value from 2 to 2.6, which tells the keyer not to key out the
dark areas of the car interior. Press Enter/Return (FIGURE 7.11).

FIGURE 7.11 Changing a point on a graph numerically using the input field

10. Select the point at X = 3, Y = 1 and drag it to the right until it reaches
somewhere around 3.1 on the X axis. This gives a little bit more softness
to the hair.

Notice that when you start to drag in one axis, the movement is already
locked to that axis. This is very convenient because it allows you to change
only what you want to change without having to hold any modifier keys
(FIGURE 7.12).

FIGURE 7.12 HueKeyer’s results in the Viewer

Surprisingly, this is not a bad key for such a simple keyer. Although some keying nodes can extract only green- or bluescreens, you can use the HueKeyer to extract a key from any range of colors. The downside to the HueKeyer is that it can’t key different amounts of luminance (although the sat_thrsh curve, or saturation threshold, does allow you to control saturation). In other words, it doesn’t have fine-tuning capabilities.
Put the HueKeyer aside for now and we’ll move on to another keying
node.

THE IMAGE BASED KEYER

The Image Based Keyer (IBK) consists of two nodes: the IBKColour node and the IBKGizmo node. IBKColour is designed to turn the screen element into a complete screen image—removing the foreground element and essentially producing what’s called a clean plate. (Plate is another word for image, and clean plate means an image without the foreground object.) By this, I mean it gets rid of the foreground elements and produces only the screen colors, green or blue.

The IBKGizmo takes this clean plate, by connecting the IBKColour’s output to the IBKGizmo’s C input, and uses this information to create a key, including fine detail adjustments and, if you connect a background image as well, some smart spill suppression. The great thing about this method is that for each pixel in the screen element, you have a corresponding clean plate pixel. So it doesn’t matter how uneven the screen is, as you are giving the IBKGizmo a different color for every part of the image using the clean plate.

Incidentally, if you have a clean plate that was shot—as in, you asked the
actors to clear out of the frame and you took a picture without them—you
can connect the IBKGizmo’s C input to it instead of the IBKColour’s
output. This way of doing things is even better because it really gives you
the best source to work with.

The screen element you have is pretty flat, but still, the IBK can turn out a
great result here as well, especially on fine hair detail.
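
The principle is easy to sketch. This is only a cartoon of the idea, not the IBKGizmo’s actual math: for every pixel, the keyer compares the source against the clean plate’s screen color at that same position, so an uneven screen costs nothing.

```python
def key_against_clean_plate(src, clean, tolerance=0.15):
    """Per-pixel matte from a source frame and a clean plate.

    src and clean are same-length lists of (r, g, b) tuples.
    Where the source matches the local screen color, alpha is 0;
    the further it differs, the more foreground (up to 1)."""
    matte = []
    for (sr, sg, sb), (cr, cg, cb) in zip(src, clean):
        diff = abs(sr - cr) + abs(sg - cg) + abs(sb - cb)
        matte.append(min(1.0, diff / tolerance))
    return matte

# Uneven screen: two different greens, but each pixel is judged
# against its own clean-plate color, so both still key to 0.
clean = [(0.1, 0.8, 0.1), (0.2, 0.6, 0.2), (0.1, 0.8, 0.1)]
src   = [(0.1, 0.8, 0.1), (0.2, 0.6, 0.2), (0.9, 0.7, 0.6)]
print(key_against_clean_plate(src, clean))  # [0.0, 0.0, 1.0]
```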

1. Select Read2 and hold Shift to branch an IBKColour node from the
Keyer toolbox.

2. Select Read2 again and branch an IBKGizmo node from the Keyer
toolbox.

3. Connect IBKGizmoV3_1’s c input to the output of IBKColourV3_1.

4. Connect IBKGizmoV3_1’s bg input to the output of Read1.

You are connecting the background image to the Keyer so that the spill
suppression factors in the background colors.

Your tree should now look like FIGURE 7.13.

FIGURE 7.13 Setting up an IBK tree

The first thing you need to do is set up the IBKColour node to produce a
clean plate. When that’s done you set up the IBKGizmo node to produce a
key.

5. View IBKColourV3_1 in the Viewer. Also make sure that its Properties
panel is open in the Properties Bin.

You should see black. When adjusting the IBKColour node, the first thing
to do is state which kind of screen you have: green or blue.

6. Change the first Property, Screen Type, to Green.

You now see the greenscreen image with a black hole where the woman and car used to be (FIGURE 7.14).

FIGURE 7.14 This is the first step in producing the clean plate.

The main things left to remove are the fine hair details around the edges
of the black patch. If you leave them there, you are actually instructing the
IBKGizmo to get rid of these colors, which is not what you want to do.

The next step adjusts the Darks and Lights properties. For a greenscreen
element, you adjust the G (green) property. For a bluescreen, adjust the B
(blue) property.

7. Click to the right of the 0 in the Darks G property (FIGURE 7.15).

FIGURE 7.15 Place the cursor to the right of the available digits.

To change really fine values, you need another decimal digit.

8. Enter .0. That’s a dot and then a 0 (FIGURE 7.16).

FIGURE 7.16 Using the arrow keys to nudge values is a convenient way to make fine adjustments.

Your cursor is now to the right of the 0 you just entered. Using the up and
down arrow keys, you can now nudge the value in hundredths.

9. Decrease the value by pressing the down arrow slowly. You are looking
for the magic value that does not bring in too much black into the green
area but still reduces the number of hairs that are visible on the edge of
the black patch. Nudge the value down to –0.12 (FIGURE 7.17).

FIGURE 7.17 The mirror filling with black is not good.

You have actually gone too far. A lot of green areas (especially the mirror)
were filled in black. You need to go back a little.

10. I ended up moving the cursor again to the right and adding another digit (the thousandth), and my final value is –0.085. Anything else started eating into the mirror. Find your magic value.

Now you should do the same with the Lights property, slowly bringing it down. But I tried it already: moving it doesn’t change the green area in any way that contributes, so leave it as it is. You can always change it later.

The next property to adjust is the Erode property. This will eat in further
from the black patch and reduce the amount of hair detail you have. It
might also introduce some unwanted black patches, so be careful.
11. Start dragging the slider to increase the Erode property value. Watch
out for the mirror and the green area behind the woman’s neck. I left
mine at 0.2 (FIGURE 7.18).

FIGURE 7.18 The result so far

Finally you need to adjust the Patch Black property until all your black
patches are filled. There’s a lot of non-greenscreen image area here, so the
value is large.

12. Bring up the Patch Black property until all your black areas are filled
with green. The slider goes up to 5, but you need 15 to get the result you’re
after. To do this, you can enter 15 in the field (FIGURE 7.19).

FIGURE 7.19 A complete green image—the clean plate

Now that you have a clean plate, you need to adjust the IBKGizmo settings
to get a nice-looking key. Let’s move over to the IBKGizmo.

13. Double-click IBKGizmoV3_1 to load its Properties panel into the Properties Bin.

14. View IBKGizmoV3_1 and its alpha channel in the Viewer.

Again, the first thing to take care of here is setting the screen color.

15. Change the Screen Type property to C-green.

You can already see that the alpha has some really good things happening
in it. First, the whole foreground (where the woman is) is mostly white.
Also, all the fine hair detail is preserved beautifully. There is just a little
noise in the black and white areas that needs to be cleaned up.

The other properties (FIGURE 7.20) worth mentioning include these:

• Red Weight: This property changes the density of the generated matte by adding or subtracting the red areas of the image.

• Blue/Green Weight: This property changes the density of the generated matte by adding or subtracting the blue or green areas of the image. If you’re keying out blue, then it’s the green channel that is being used. If you’re keying out green, then it’s the blue channel.

• Luminance Match: This property first needs to be turned on with the check box. Once it is on, the Screen Range slider will affect light areas and firm up the matte there.

• Luminance Level: This property no longer has any effect and will be
removed in the next version of the software.

FIGURE 7.20 The IBKGizmo’s properties

To properly see the adjustments as you change property values, you should actually look at a composite. This means you need to place the keyed foreground on top of your background image.

16. Select IBKGizmoV3_1 and insert a Merge node after it by pressing the
M key.
17. Connect Merge1’s B input to Read1’s output (FIGURE 7.21).

FIGURE 7.21 Your tree should look like this after adding
the Merge node.

18. Make sure you are viewing the RGB channels of Merge1 in the Viewer.

Notice that the foreground is a little transparent—you will fix this. You
can make the matte whiter by using the Red Weight and Blue/Green
Weight properties (FIGURE 7.22).

FIGURE 7.22 The road is showing through the car.

19. While looking at your screen, adjust the Red Weight and Blue/Green
Weight properties until the car isn’t transparent. Don’t go too far or you
will compromise the density of the edges of your matte. I ended up with
0.79 for the Red Weight and 0.425 for the Blue/Green Weight.

To get a little more hair detail, select the Luminance Match check box and
then move the slider a little.

20. View the alpha channel in the Viewer.

21. Select the Luminance Match check box.

You can see that the hairs are a little softer and the edges of the mirror are
tighter as well.

22. Move the Screen Range property a little so that you reduce the
amount of noise on the background a little—without changing the
foreground. I left mine at 0.83.

If you don’t think this property did any good to the overall key, turn it off.
I’m leaving it on, though.

23. If you want to, deselect Luminance Match.

This is as far as you can get the matte. It is hardly perfect, but for this
greenscreen element, it is the best you can do with just the IBK. You will
get to use this matte later, using another technique. For now, you can still
adjust the spill a bit more.

The IBKGizmo has some controls remaining at the very bottom for edge correction. These properties are Screen Subtraction, Use Bkg Luminance, and Use Bkg Chroma (FIGURE 7.23). Let’s see what these do.

FIGURE 7.23 The bottom three properties for the IBKGizmo node

24. Switch to viewing the RGB channels of Merge1.



25. Screen Subtraction is selected by default in IBKGizmoV3_1. Deselect and select it again to see what it does. Leave it on.
This property subtracts the original greenscreen element from the result.
This reduces spill on the edges of the matte, and it does a very good job.

26. Use Bkg Luminance is deselected by default. Select it to see its effect,
and leave it selected when you’re finished.

This property uses the background’s luminance to color correct the foreground around the edges of the matte.

27. Use Bkg Chroma is deselected by default. Select it to see its effect, and
leave it on when you’re finished.

This property uses the background’s chroma to color correct the foreground around the edges of the matte. In this case both luma and chroma corrections create a better composite, so leave them on (FIGURE 7.24).

FIGURE 7.24 The final composite as it stands at this stage

28. In the Viewer, click Play. When you’re done viewing, click Stop and go
back to frame 1.

Even though the matte is noisy, the final composite still looks pretty good.
You know it is not perfect, but it holds pretty well. The noisy black areas
of the matte are getting color corrected in the RGB channels through the
last two check boxes you turned on. This makes all that noise pretty much
invisible.

You will learn to make this key look even better later in this chapter. For
now, move on to the third, and last, keying node: Keylight.

KEYLIGHT
Keylight, like the IBK earlier in this chapter, is a blue- and greenscreen
keyer. It is not designed to key out any color, just green- and bluescreens.
It does that job very well, and many times, all you have to do is choose the
screen color and that’s it. Keylight also tackles transparencies and spill
exceptionally well.

Let’s start by branching it out from the greenscreen element.

1. Select Read2 and Shift-click the Keylight icon in the Keyer toolbox on
the left (FIGURE 7.25).

FIGURE 7.25 Branching out a Keylight

2. Make sure you are viewing the output of Keylight1, and viewing the
RGB channels.

Keylight has four inputs (FIGURE 7.26):

• Source: The first and main input—often the only input you will use.
This is where the element to key should go in. This input should already
be connected to your greenscreen element.

• Bg: You can connect a background image here. Because Keylight also
suppresses spill, it can use the color of the background for that
suppression (and it does so by default if the input is filled). Keylight can
also actually composite over the background, although this is rarely done.

• InM: Stands for inside matte (also called holdout matte). If you have a black and white image (roto or other rough key), you can use it with this input to tell Keylight not to key out this area. This can also be called a core matte.

• OutM: Stands for output matte (also called garbage matte). If you have
a black and white image (again through a roto or a rough key), you can
use it with this input to tell Keylight to make all this area black.

FIGURE 7.26 Keylight’s four inputs

When using Keylight, the first thing to do is connect the background if you
have it.

3. Connect Read1 to Keylight1’s Bg input.

Now you can begin keying by choosing the screen’s main green pigment.

4. Click the Color Swatch for the Screen Colour property to activate it (FIGURE 7.27).

FIGURE 7.27 Turning on the Screen Colour property’s Color Swatch

NOTE

Remember, when capturing colors from the screen, hold down Ctrl/Cmd to activate the actual selection. Add Alt/Option to choose colors from the input of the node, rather than the output. And add Shift to pick several pixels and then average between them.

5. Now, while holding down Ctrl-Alt-Shift/Cmd-Option-Shift, click and drag on screen to find the best screen color. Look at FIGURE 7.28—the red box shows where I ended up dragging. If you drag exactly like I did, the rest of the exercises in the chapter will be easier for you to follow.

FIGURE 7.28 The red box signifies where to drag.

This is what I have set for the Screen Colour property: 0.075, 0.24, 0.1.

This drag action is a very important one. It is the basis for all other
operations that come later. Try several different colors first, and then
choose the best one. Every time you release the mouse button and click
again, you are changing the selected color.

6. Deselect the Color Swatch to turn off Viewer sampling; then do a quick
Ctrl/Cmd-click in the Viewer to get rid of the red box.
TIP

It is very important to remember to deselect the Color Swatch. If you leave it on and then click on screen, you will change the selected color.

7. You can look at the alpha channel now by hovering the mouse pointer
over the Viewer and pressing the A key. The matte won’t be perfect, but it
should look something like FIGURE 7.29.

FIGURE 7.29 The starting matte for Keylight1

You can now start to tweak the matte. Keylight has a lot of different
properties, but the main one is Screen Gain. This is a multiplier, which
hardens the matte. By harden I mean that it pushes the contrast up—the
dark grays toward black and the light grays toward white.

8. Change the Screen Gain property to 2 (FIGURE 7.30).

FIGURE 7.30 Fine hair detail is lost when Screen Gain is pushed too hard.

As you can see, the background part of the matte is completely black now.
However, I’ve achieved this at the expense of the white areas, which are
getting grayer, not to mention the fine detail in the hair that’s been lost.
Be very cautious when you use this value; it should rarely if ever reach
these high values.

You can find controls similar to Screen Gain, but a little finer, under the
Tuning submenu below Screen Gain.

9. Bring back the Screen Gain parameter to 1.1 or so.

10. Click to twirl down the triangular arrow in order to display the Tuning
submenu (FIGURE 7.31).

FIGURE 7.31 The Tuning submenu

Here you see four properties. The bottom three are Gain controls for the
three dynamic ranges: shadows, midtones, and highlights. The first
property defines where the midtones are—if this was a generally dark
image, the midtones would be lower.

11. Bring down the Shadow Gain property to about 0.81. This should fill
the grays in the dark background area.

12. To fill in the white areas of the matte with more white, adjust the
Highlights Gain down. I stopped at 0.19.

NOTE

If your matte looks different than mine, that means you picked
different colors for the Screen Colour property. That’s OK. But
you’ll have to play around with the Shadow Gain, Midtones
Gain, and Highlights Gain properties to make the foreground
areas white and the background areas black. The Screen
Matte properties, which I explain in the next step, also need to
be adjusted in a different way.

13. To fill in more, adjust the Midtones Gain down a little, so you do not
add gray to the background. I reached 0.97.

Keylight offers a lot of additional controls. Under the Screen Matte submenu you find properties that further process the matte after the keying process has created it. I call these the post key operations, because they happen on the matte produced by the keying process rather than being part of the keying process itself.

14. Click the arrow next to Tuning to hide these controls, and then click
the arrow next to Screen Matte to display those options instead (FIGURE
7.32).

FIGURE 7.32 The Screen Matte options

You can adjust the Clip Black and Clip White properties to remove any
remaining gray pixels in the matte. If you lose too much detail, you can
use the Clip Rollback property to bring back fine detail.

15. Adjust Clip Black and Clip White to get rid of all the gray pixels in the
matte. I ended up with 0.03 in the Clip Black property and 0.78 in Clip
White.
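
Clip Black and Clip White behave much like a levels operation on the matte. A hedged sketch of the idea (Keylight’s exact implementation may differ): values at or below the black clip become 0, values at or above the white clip become 1, and the grays in between are stretched linearly.

```python
def clip_matte(alpha, clip_black=0.03, clip_white=0.78):
    """Remap a matte value: crush stray grays to solid black or
    white, stretching whatever remains between the two clips."""
    if alpha <= clip_black:
        return 0.0
    if alpha >= clip_white:
        return 1.0
    return (alpha - clip_black) / (clip_white - clip_black)

print(clip_matte(0.02))  # stray background noise -> 0.0
print(clip_matte(0.90))  # slightly gray foreground -> 1.0
```

This is also why pushing the clips too hard eats fine detail: edge values that used to sit between the clips get forced to solid black or white.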

The View drop-down menu at the top of the Properties panel changes the
output of the Keylight node to show one of several options. Here are the
important ones to note:

• Source: Shows the original source image.

• Final Result: Shows a premultiplied foreground image with its matte. This is normally how you would output the result of this node.

• Composite: Creates a composite over whatever is in the Bg input of the Keylight node. Normally you will not use the Keylight node to composite the foreground over the background, but while keying, it is convenient to see your final composite and what other work you have left to do.

• Status: Shows a black, gray, and white representation of the keying process, where white represents complete foreground pixels, black represents complete background pixels, and a 50% gray represents transparent or spill suppressed pixels. You can use this handy tool to see the status of your matte and pinpoint problematic areas (FIGURE 7.33).


FIGURE 7.33 Choosing output mode in the View property

16. To see the final result with the background, change the View property
to Composite.
Here you can see your greenscreen element composited over the
background. It is not bad at all (FIGURE 7.34).

FIGURE 7.34 The final output of the Keylight node

17. Change the View property back to Final Result.

Usually, using post key operations, as you did before using the properties in the Screen Matte submenu, significantly degrades the matte. Edges start to boil and wiggle (two terms used to describe a noisy matte that changes in an unnatural way from frame to frame), and the level of detail starts to deteriorate. A good way to get a better key is to use the tree itself to build up a series of keys together that produce the best result.

This is really where node-based compositing systems, and Nuke in particular, shine. Because you can easily combine mattes using the tree, you can get a very good key and can use several keyer nodes together, taking the best from each.
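
In tree form that combination is just channel math. A sketch of the strategy this section builds (the function and parameter names are mine, not Nuke’s): a hard, dilated key acts as a garbage matte that zeroes out the noisy background of a detailed key, while an optional core matte fills the center with solid white.

```python
def combine_mattes(detail_alpha, garbage_alpha, core_alpha=0.0):
    """Per-pixel combine: the garbage matte (1 = keep region)
    multiplies away background noise in the detailed matte, and a
    core matte can force inner areas to solid white."""
    kept = detail_alpha * garbage_alpha   # black outside the garbage area
    return max(kept, core_alpha)          # core guarantees a solid center

# Noisy IBK background pixel (0.2) killed by the garbage matte:
print(combine_mattes(0.2, 0.0))   # 0.0
# Fine hair edge (0.6) inside the dilated garbage area survives intact:
print(combine_mattes(0.6, 1.0))   # 0.6
```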

COMBINING KEYER NODES USING THE TREE


In my opinion, the IBK produced a far superior result when compared to
the other two keyer nodes we discussed in the previous section. The hair
detail is really fine, and the edges are great. The main problem, however,
is that the area that’s supposed to be background isn’t black in the matte.
After all, edges are really what keying is all about. It is very easy to draw a
shape with the RotoPaint node and fill in things that are supposed to be
white in the center (or core) of the matte, and it is very easy to do the
same for areas that are supposed to be black (also called garbage). But
the actual edge—that’s what you are looking for when you’re keying.

The result of the Keylight node is pretty good too, but not great. You can
use the Screen Matte properties to create a very hard matte, but it has
well-defined white and black areas. Maybe you can use the Keylight node
and the IBK nodes together to create a perfect key.

First let’s use Keylight1 to create a garbage matte—a matte that defines
unwanted areas.

1. Keylight1’s Properties panel should still be loaded in the Properties Bin. If it’s not, load it. Also, make sure you are viewing Keylight1 in the Viewer.

2. Make sure Keylight1’s matte doesn’t have any grays in it (aside from the edges). Do this by adjusting the Clip Black and Clip White properties further. I ended with 0.05 for Clip Black and 0.65 for Clip White (FIGURE 7.35).

FIGURE 7.35 A crunchy matte made up of mainly black and white

Now what you have in front of you is a hard matte, or a crunchy matte,
and you can use it as two things: as a garbage matte and as a core matte.
In this case, however, you don’t need a core matte, because the core of the
IBK is fine. You do need a garbage matte because the outside of the IBK
matte has a fair amount of gray noise. However, if you use the matte as it
is now, you will lose a lot of the fine detail the IBK produced. You need to
make the matte you have here bigger, and then you can use it to make the
outside area black. For that you need to use one of three tools: Erode,
Dilate, and Erode (yes, really, keep reading).


Erode, Dilate, and Erode
Three tools in Nuke can both dilate (expand) and erode (contract) a
matte. Their names are confusing and, since they all have two sets of
names, it gets even more confusing. They are called one thing in the Filter
toolbox, where you can find them, but once you create them, they are
called something else in the interface. Yes, it is that crazy. I do wish they
would simplify this little source of confusion. Here’s the rundown:

Erode (Fast): A simple algorithm that allows only for integer dilates
or erodes. By integer I mean it can dilate or erode by whole pixel values. It
can do 1, 2, 3, and similar sizes, but it can’t do a 1.5 pixel width, for
example. This makes it very fast, but if you animate it, you see it jumping
from one size to another. Once created, the control name becomes Dilate.

Erode (Filter): A more complex algorithm that uses one of four filter
types to give more precise control over the width of the Dilate or Erode
operation. This allows for subinteger (or subpixel) widths. Its name
changes to FilterErode when created.

Erode (Blur): Another type of erode algorithm. This one creates a harsh black and white matte and then provides blurring control. Called Erode when created.
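Conceptually, all three nodes do the same per-pixel work: a dilate replaces each pixel with the maximum of its neighborhood, and an erode with the minimum. Here is a minimal Python sketch (for illustration only, not Nuke code) of an integer dilate on a one-dimensional matte:

```python
def dilate(matte, size):
    """Grow a 1D matte by `size` whole pixels using a max filter."""
    out = []
    for i in range(len(matte)):
        lo = max(0, i - size)             # window clamped at the edges
        hi = min(len(matte), i + size + 1)
        out.append(max(matte[lo:hi]))     # white regions expand
    return out

matte = [0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
print(dilate(matte, 1))  # the white region grows by one pixel on each side
```

Because the window can only grow by whole pixels, animating the size makes the matte jump in steps, which is exactly the limitation of Erode (Fast) described above; the filter-based variants trade speed for subpixel widths.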

Right now, let’s start by using the Erode (Fast) node.

1. Select Keylight1 and insert an Erode (Fast) from the Filter toolbox after
it (FIGURE 7.36).

FIGURE 7.36 The node is named Erode (Fast) in the Filter toolbox.

2. You also need a Blur node, so insert one after Dilate1 (the Erode you
just created).

3. View Blur1 in the Viewer and make sure you are viewing the alpha
channel (FIGURE 7.37).


FIGURE 7.37 In the Node Graph, the new Erode node is
labeled Dilate1.

Now you can see what you are doing and have full control over the size of
your matte. You can increase the Dilate Size property to expand the matte.
You can then increase the Blur Size property to soften the transition.

4. Increase Dilate1’s Size property to 10.

5. Increase Blur1’s Size property to 15 (FIGURE 7.38).

FIGURE 7.38 An expanded matte

The matte defines areas that need to remain as part of the foreground
rather than be thrown out; this is because usually it’s the white areas that
define a shape, not the black areas. If you invert this image, it defines the
areas that are garbage.

6. From the Color toolbox, insert an Invert node after Blur1.

Now you need to combine the garbage matte and the matte coming out of
the IBK; use a regular Merge node for that.

7. Click Invert1 and then Shift-click IBKGizmoV3_1. The order is important as it defines which ends up being the A input and which ends up being the B input.

8. Press the M key to insert a Merge node (FIGURE 7.39).

FIGURE 7.39 The Merge node is supposed to be connected like this.

9. In Merge2, choose Stencil from the Operation drop-down menu (FIGURE 7.40).

FIGURE 7.40 Changing the operation from Over to Stencil


This removed all the gray noise in the background. It did not, however, deal with the gray noise that surrounds the foreground; that won’t be noticeable in the final result, and you can tweak it by changing the properties in the Dilate and Blur nodes (FIGURE 7.41).
FIGURE 7.41 By using the garbage matte, you have cleaned
up most of the black area in the matte.
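Under the hood, the Stencil operation is simple per-pixel math: it keeps B only where A’s alpha is empty, that is, B multiplied by (1 − a). A small Python sketch (illustrative, not Nuke code), assuming premultiplied inputs:

```python
def stencil(A_alpha, B):
    """Merge 'stencil': keep B only where A's alpha is empty, B * (1 - a)."""
    return [b * (1.0 - a) for a, b in zip(A_alpha, B)]

# The inverted garbage matte (A) punches background noise out of the IBK matte (B):
garbage = [1.0, 1.0, 0.0, 0.0]   # white where the frame is garbage
ibk     = [0.2, 0.1, 1.0, 0.9]   # gray noise in the first two pixels
print(stencil(garbage, ibk))     # the noise pixels are forced to black
```

This is why the order of the A and B inputs mattered in step 7: swap them and you would instead punch the IBK matte out of the garbage matte.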

Spill suppressing with HueCorrect


So you built a little tree to create a better matte. In general, you need to
make another tree to create a better fill as well. The fill consists of the
RGB channels, so you need to remove any kind of spill you can find in the
RGB. Some keyers take care of this, some don’t—but even the ones that do
handle it don’t always give you the same control as a tree you build in Nuke, where every operation is one you chose. In the tree you just created, you got a pretty good fill from the IBKGizmo. Still, you’ll now replace it with a fill tree you build yourself. You won’t learn otherwise.

A good node for removing spill is the HueCorrect node. HueCorrect is another very useful tool for color correcting in Nuke. This node allows you to make changes according to the hue of a pixel, mixing together the color weights for that hue.

1. Select Read2 and Shift-click to branch out a HueCorrect node from the
Color toolbox (FIGURE 7.42).

FIGURE 7.42 The HueCorrect’s interface is very similar to that of the HueKeyer.

HueCorrect has several functions controlled by curves on a graph; all of the functions have to do with selecting a hue and moving a point on the graph. The curves you can manipulate include saturation, luminance, red, green, blue, red suppression, green suppression, and blue suppression. The difference between changing green and suppressing green is that changing it simply multiplies the amount of green; suppressing it reduces the amount of green so that it’s not higher than that of the other two channels.
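One common way to express that suppression per pixel (a sketch of the idea, not necessarily Nuke’s exact formula) is to clamp green to the larger of the other two channels:

```python
def suppress_green(r, g, b):
    """Green suppression: clamp green so it never exceeds the other channels."""
    return r, min(g, max(r, b)), b

# A spill pixel loses its excess green; a pixel that isn't green-heavy is untouched:
print(suppress_green(0.4, 0.7, 0.3))  # green clamped down to 0.4
print(suppress_green(0.5, 0.4, 0.3))  # unchanged
```

Multiplying green down, by contrast, would darken every green value, including legitimate ones, which is why suppression is the gentler tool for despill.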

To reduce the amount of green in the brown color of the woman’s hair and
the beige car interior, you first need to find out where that color is in the
graph. This works in a similar way to the way it does in the HueKeyer
node.

2. While viewing the RGB channels of HueCorrect1 in the Viewer, hover your mouse pointer around the hair at the back of the woman’s neck and the beige car interior.

The resulting yellow line in the graph shows a hue at around 1.9. This is
the area where you need to suppress green. To suppress the green color,
use the G_sup curve in the Curves List window on the left.


3. Click G_sup in the Curves List window.

4. Ctrl-Alt-click/Cmd-Option-click the curve at about 1.9 to create a new point there.

5. Bring down the point you just made to somewhere closer to 0.5 on the
Y axis.
6. To get rid of more spill, also bring down the point at the X value of 2 to
something closer to 0.3 on the Y axis. Then click it again and drag it to the
right so it’s at about 2.8 on the X axis.

7. Grab the point at the X value of 3 and bring that lower, to about 0.1,
then drag to the right to about 3.5.

The resulting curve should look something like FIGURE 7.43.

FIGURE 7.43 Suppressing green from various hues in the image using the G_sup curve

This has taken care of the spill in the image. Now you have a branch for
the matte, ending with Merge2, and a branch for the fill, ending with
HueCorrect1. You need to connect them.

8. Click HueCorrect1 and insert a ShuffleCopy node after it.

9. Connect ShuffleCopy1’s 1 input to the output of Merge2 (FIGURE 7.44).

FIGURE 7.44 Connecting the two branches with a ShuffleCopy node

This now copies the alpha channel from the matte branch to the RGB
channels from the HueCorrect branch, which is exactly what you need.

The result of the ShuffleCopy node, though, is an unpremultiplied image. You need this image to be premultiplied before you can composite it over the background.
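Premultiplication itself is just a per-channel multiply by alpha, and Merge’s default Over operation expects premultiplied input so it can compute A + B(1 − a). A quick Python sketch of both (illustrative, not Nuke code):

```python
def premult(rgb, a):
    """Premultiply: scale each color channel by alpha so edges blend correctly."""
    return [c * a for c in rgb]

def over(A_rgb, A_alpha, B_rgb):
    """Merge 'over' for a premultiplied A: A + B * (1 - a)."""
    return [a_c + b_c * (1.0 - A_alpha) for a_c, b_c in zip(A_rgb, B_rgb)]

fg = premult([1.0, 0.8, 0.6], 0.5)     # a half-transparent edge pixel
print(over(fg, 0.5, [0.2, 0.2, 0.2]))  # composited over a gray background
```

If you skip the Premult node, the Over math double-counts the foreground color in semi-transparent edge pixels, which shows up as bright fringing.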

10. Click ShuffleCopy1 and insert a Premult node after it.

Now you can use the output of Premult1 as the A input for a new Merge
node.

11. With Premult1 selected, click M to insert a Merge node after it.

12. Connect Merge3’s B input to Read1 (FIGURE 7.45).


FIGURE 7.45 The current tree

13. View the output of Merge3 (FIGURE 7.46).

FIGURE 7.46 The final result of the keying tree

14. View the output of Merge1 and then Merge3.

You can see the difference in the result between Merge1 and Merge3. One
is a little better than the other as far as spill goes.

Now you have a tree consisting of three keyers, but you use only two of
them for output. You have one branch for the matte and another for the
fill. Though no one keyer gave you a perfect result, the tree as a whole did
manage that. If you need to further tweak the matte, do so in the matte
branch using eroding nodes, blurs, roto, and merges. If you need to
further correct the color of the foreground image, do so in the fill branch
and add color-correcting nodes there. If you need to move the foreground
element, do so after the keying process finishes, between Premult1 and
Merge1 (or Merge3). The same goes for filtering, such as blur.

NOTE

Opening other people’s scripts is a great way to learn how to use Nuke. Picking apart a tree is a very important skill; start practicing it right now.

There’s still plenty to do on this composite to reach an acceptable result. Keep in mind that working this way gives you a great amount of control over your matte and fill. Now that you see your final composite, it is very easy to change it and make it better. It is all accessible and open to you for manipulation.

Try to take this composite to the next level. You need only the tools that
you’re already familiar with. I’ve also created a version of this Nuke script,
called Chapter07_end.nk, in this chapter’s folder. You can open it and see
what I did.

8. Compositing Hi-Res Stereo Images


In this chapter you will complete a stereoscopic, film resolution
composite. A lot of problems are associated with stereo work
and hi-res images, and this chapter explains how to deal with
those problems in Nuke. This chapter also covers parts of the
interface that we haven’t yet discussed.

So far you’ve been using Nuke easily enough, but you might
have noticed some things were missing—things you might be
accustomed to from using other applications. For example, you
haven’t once set the length, size, or fps (frames per second)
speed of a project. In many other applications, these are some
of the first things you set.

In Nuke, doing that stuff is not necessary—but it is still possible. In the following sections, you will learn why.

USING THE PROJECT SETTINGS PANEL


Each Nuke project has settings associated with it. You aren’t required to
set any of them, but sometimes they make things easier. You haven’t
touched these settings so far, but now let’s take a look at them.

1. Make sure you have a fresh script open in Nuke.

2. Hover over the DAG and press the S key.

The Project Settings panel appears in the Properties Bin. The bottom part
of the Root tab is filled with goodies (FIGURE 8.1).

FIGURE 8.1 The Project Settings panel


The Root tab
Two Frame Range fields control the frame range of your Viewer and the
Frame Range fields in the panels that appear when you launch flipbooks
or click Render. Nuke automatically sets this range to the length of the
first Read node you create, which is why you haven’t had to set it until
now.

When the Lock Range box is selected, the Frame Range property stays as
it is. If it isn’t selected, when you bring in a longer Read node, Nuke
updates the Frame Range fields to accommodate this longer node.

The fps field determines the speed at which the project is running: 24 is
for film, 25 is for PAL (video standard in Europe), and 30 is for NTSC
(video standard in the United States). These numbers have very little
meaning in Nuke—Nuke cares only about individual frames. It doesn’t
care if you later decide to play these frames at 25fps or 1500fps. Setting
the fps field just sets the default fps for newly created Viewers and for
rendering video files such as QuickTime.

The Full Size Format drop-down menu sets the default resolution for
creating images, such as Constant, Radial, and RotoPaint nodes, from
scratch. You can always set the resolution in other ways as well, including
using Format drop-down menus in the node’s Properties panel or
connecting the input to something that has resolution (such as a Read
node). You have done this several times before, but when you set the
resolution in the Project Settings panel, you no longer have to worry about
it.

The next four properties—the Proxy Mode check box, the Proxy Mode
drop-down menu, Proxy Scale, and Read Proxy File—control proxy
settings that are discussed later in this chapter.

Nonlinear images and lookup tables (LUTs)


More settings are available in the LUT tab of the Project Settings panel.

1. Click the LUT tab at the top of the Project Settings panel (FIGURE
8.2).

FIGURE 8.2 The Project Settings panel’s LUT tab

This area is where Nuke’s color management happens. Color management is a topic that can—and does—fill entire books. Here, I have room only to explain how Nuke deals with it.

As mentioned in Chapter 4, Nuke is a 32-bit float linear compositing system. However, most of the images you will work with won’t be linear images. Most images have color correction built into them automatically, a form of color management that compensates for external factors that make them nonlinear.

Let’s review the two main nonlinear color spaces, sRGB and log, and understand why they exist.

• sRGB shows the reverse of what the nonlinear monitor you’re using is displaying (yes, that means all your monitors, even that really good new Apple LED cinema monitor—yes, even that one). sRGB is applied to show you the real image on the nonlinear monitor. You can click the sRGB curve in the Curves List window on the left to see the color correction that will be applied to an sRGB image. This curve will be used just like in a ColorLookup node.
• Log (sometimes called Cineon) is there to compress and better mimic
that large abundance of colors found on a celluloid negative. It’s used
when scanning film to digital files. You can click the Cineon curve in the
Curves List window on the left to see the color correction that will be
applied to a log image. With the arrival of the new film digital cameras
such as ARRI Alexa and RED, there is now more than just one type of log.
There’s the Cineon log type that I just mentioned, but other logs, such as
AlexaV3LogC and REDLog, match those cameras respectively.

When you bring an image into Nuke, Nuke needs to convert it to linear so
that all images that come in are in the same color space and so that
mathematical operations give you the results you are looking for (a blur
on a log image gives very different results than a blur on a linear image).
To convert all images to linear, Nuke uses lookup tables (LUTs). LUTs are
lists of color-correcting operations similar to curves in the ColorLookup
node. Nuke uses these LUTs to correct an image and make it linear, and
then convert it back to whatever color space it came from or needs to be
for display or render.
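As a concrete example of what such a LUT does, here is the standard sRGB decode (the IEC sRGB transfer function) in Python. This is a general-purpose sketch for illustration; Nuke applies the equivalent curve per channel when it linearizes an sRGB Read node:

```python
def srgb_to_linear(s):
    """Decode an sRGB value in [0, 1] to linear light (IEC sRGB transfer)."""
    if s <= 0.04045:
        return s / 12.92                      # linear toe near black
    return ((s + 0.055) / 1.055) ** 2.4       # power-law segment

print(srgb_to_linear(0.5))  # sRGB mid-gray sits at roughly 0.21 in linear light
```

That gap between 0.5 encoded and roughly 0.21 linear is exactly why a blur or merge on an unconverted image gives different, and wrong, results.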

The LUT tab is split into two areas, the top and the bottom. The top area
is where you create and edit lookup tables. The bottom part sets default
LUTs for different image types.

2. Click Cineon at the top-left list of available LUTs to see the Log
Colorspace graph (FIGURE 8.3).

FIGURE 8.3 Select a graph in the list at left to display it at right.

What you see now is a standard Cineon curve. Studios that have their own
color pipelines can create other LUTs and bring them in here to better
customize Nuke and make it part of their larger-scale pipeline.

NOTE

The next tab of the Project Settings panel—OCIO—gives you access to the Open Color IO set of LUTs that Sony Pictures Imageworks developed. This book does not cover this tab, but if you’re ever in a position in which you need to use it, you will no doubt be told how to.

By default, as you can see set out in FIGURE 8.4, Nuke assumes that all
images that are 8 and 16 bit were made (or captured) on a computer and
have the sRGB color space embedded. It assumes that log files (files with
an extension of .cin or .dpx) are log/Cineon, and that float files (files that
were rendered on a computer at 32-bit float) are already linear. Also it
rightly assumes that your monitor is sRGB and sets all Viewers to sRGB as
well.

FIGURE 8.4 Nuke’s default LUT settings


These settings are fine for most purposes, but you can always change a
specific Read node’s color space independently. In fact, you’ll do that later
in this chapter.
The Views tab
Click the Views tab at the top of the Project Settings panel (FIGURE
8.5).

FIGURE 8.5 The Views tab

This tab manages Views, which let you have more than one screen per
image. This functionality can be used for multiscreen projects, like big
event multiscreen films, but it is usually used for stereoscopic projects.
Stereoscopic projects are made with two views, one for each eye—you’ve
seen this effect in practically every blockbuster since James Cameron’s
Avatar. In regular speak, they are called 3D films; professionals call them stereoscopic.

The Views tab lets you set up as many views as you like. The Set Up Views
For Stereo button at the bottom allows you to quickly set up two views
called Left and Right for a stereo workflow (FIGURE 8.6).

FIGURE 8.6 The Set Up Views For Stereo button creates Left and Right views.

Throughout this chapter you use various controls in the Project Settings
panel, so get used to pressing the S key to open and close it.

SETTING UP A HIGH-RES STEREO SCRIPT


OK, it’s time for a little project. Start as you normally do—by bringing in
images from your hard drive.

1. Press the R key and navigate to the chapter08 directory.

You should see the BulletCG and BulletBG directories.

2. Navigate inside BulletBG to the full directory where there are a pair of
stereo image sequences: bulletBG_left.####.dpx and
bulletBG_right.####.dpx. One represents what the left eye will see and
the other what the right eye will see.

3. Bring in both sequences.

You now have two Read nodes. Read1 is the left sequence, and Read2 is
the right sequence.

4. Click Read1 (left) and press the 1 key to view it in Viewer input 1. Then
click Read2 (right) and press the 2 key to view it in Viewer input 2.

5. Hover your mouse pointer over the Viewer and press the 1 and 2 keys
repeatedly to switch between the two Views.

NOTE

This shot is taken from a film called This Is Christmas by Alex Norris, with support from the North London Film Fund.

You should see a woman holding a gun in front of her (FIGURE 8.7).
Switching between the two inputs is like shutting one eye and then
opening the first and shutting the other—one is supposed to look like it
was shot from the direction of the left eye and the other like it was shot
from the direction of the right eye.

FIGURE 8.7 A woman with a gun

6. Stay on Viewer input 1. Have a look at the bottom right of the image
(FIGURE 8.8).

FIGURE 8.8 The image’s resolution

What you see is the resolution of the image you are viewing. This is a log-
scanned plate from film with a nonstandard resolution of 2048×1240.
Normally a 2K scanned plate has a resolution of 2048×1556. (Plate is
another word for image, used mainly in film.)

Setting formats
Now you need to define the resolution you are working in so that when
you create image nodes, they will conform to the resolution of the back
plate. You will give this resolution a name, making it easier to access.

1. Double-click Read1 to make sure its Properties panel is at the top of the
Properties Bin.

2. Click Read1’s Format property and review the options in the drop-down menu (FIGURE 8.9).


FIGURE 8.9 The Format drop-down menu

The Format drop-down menu lists all the defined formats available. The
image you brought in doesn’t have a name, just a resolution: 2048×1240.
You can add a name to it, and by that, define it. Because it is already
selected, you can choose to edit it.

3. Choose Edit from the Format drop-down menu, which brings up the
Edit Format dialog box (FIGURE 8.10).

FIGURE 8.10 The Edit Format dialog box

4. Enter Christmas2k in the Name field. (This is an arbitrary name; you could have called it anything.)

The Full Size W and Full Size H fields represent the resolution of the image, which should already be set for you. The pixel aspect at the bottom is for non-square pixel images, such as PAL widescreen and anamorphic film.

NOTE

For some formats, a value called pixel aspect ratio is needed for the display to show the image in the correct width. This is simply a way for narrower formats to present widescreen images. Using this value in the Format panel makes the Viewer show the correct image. Nuke can’t correctly compensate for pixel aspect ratio in a composite. If you need to combine images with different pixel aspect ratios, you need to use the Reformat node (covered in Chapter 10) to change the resolution and pixel aspect ratio of one of the images.
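The arithmetic the Viewer performs is a single multiply: storage width times pixel aspect ratio gives the width the image should be displayed at. A quick sketch (the 1828-pixel anamorphic width below is just a hypothetical example, not from this project):

```python
def display_width(storage_width, pixel_aspect):
    """Width an image should be displayed at, given non-square pixels."""
    return storage_width * pixel_aspect

# e.g. a hypothetical 2:1 anamorphic scan stored 1828 pixels wide:
print(display_width(1828, 2.0))  # shown twice as wide as it is stored
```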

5. Click OK (FIGURE 8.11).


FIGURE 8.11 The format has been defined and is now
presented in the Viewer as well.

If you look at the bottom-right corner of the image in the Viewer, you will
see that it now says Christmas2k instead of 2048×1240 (you might need
to press 1 and 2 on the keyboard while hovering in the Viewer to refresh).
You have now defined a format. You can set the Full Size Format property
in the Project Settings panel to the newly defined format as well.

6. Hover over the DAG and press the S key to display the Project Settings
panel.

7. Make sure the Root tab is displayed.

8. Choose Christmas2k from the Full Size Format drop-down menu.

Now when you create a Constant node, for example, it will default to the
Christmas2k format and resolution.

Working with LUTs


As mentioned earlier, Read1 and Read2 are Cineon log-scanned files from
film. This means these images were shot on film, scanned using a film
scanner, and saved to hard drive. During the conversion from film, the
colors of these images were changed to preserve more dark colors than
bright colors (due to our eyes’ increased sensitivity to darker colors). This
was done using a logarithmic mathematical curve, hence it is called a log
image, or rather it has a log color space. Let’s look at one of the images as
it was scanned.

1. Double-click Read1 to load its Properties panel into the Properties Bin.

2. At the bottom right of the Properties panel, select the Raw Data check
box (FIGURE 8.12).

FIGURE 8.12 The Raw Data check box

This Property asks the Read node to show the image as it is, without any
color management applied. The image looks washed out and is lacking
any contrast (FIGURE 8.13).

FIGURE 8.13 The raw log image

So why did you see it looking better before? This is where Nuke’s color
management comes in. Every Read node has a Colorspace property that
defines what LUT to use with an image to convert it to a linear image for
correct processing (FIGURE 8.14).


FIGURE 8.14 The Colorspace drop-down menu

Even after you choose which LUT to use and apply this change, this still
isn’t the image you saw in the Viewer before. It was automatically
corrected a second time in the Viewer so you could see the correct image
for your sRGB screen. The setting to make this second correction is in a
drop-down menu in the Viewer, and unless you are fully aware of what
you are doing, you should leave it set as it is (FIGURE 8.15).

FIGURE 8.15 The Viewer Colorspace drop-down menu set to sRGB

The log image you now see is also corrected by the Viewer—if it weren’t, you wouldn’t see a real linear image.

3. Press the S key to display the Project Settings panel.

4. Click the LUT tab.

I mentioned the default LUT settings before, and here they are again at
the bottom of this tab. When you imported this file, Nuke knew (because
of its .dpx extension) that this was a log Cineon file, so it automatically
assigned it the Cineon LUT. Now that Raw Data is checked, Nuke is no
longer using this LUT to do the color conversion.

Another way to convert the image from log to linear is to perform the
correction yourself rather than using Nuke’s color management.

Before you change the Color Management setup, save a copy of the color-
managed image so you have something to compare to.

5. Select Read1 and press the 2 key to load it into Viewer1’s second buffer.

6. In Read1 properties, deselect the Raw Data box.

7. Click the Pause button at the top right of the Viewer to disable any
updating on Viewer1’s 2nd buffer (FIGURE 8.16).

FIGURE 8.16 Pausing the Viewer means it won’t update unless you click the Refresh button on its left.

8. While hovering over the Viewer, press the 1 key to view the first buffer.

9. In Read1’s Properties panel, check the Raw Data box.

Now that you have the uncorrected, unmanaged log image, let’s see the
alternative method for applying color space conversion.

10. Make sure Read1 is selected in the DAG and, from the Color toolbox,
insert a Colorspace node. This node converts between different color
spaces (such as sRGB, Log, etc.).

11. As you are bringing in a Cineon image, choose Cineon from the In
drop-down menu (FIGURE 8.17).


FIGURE 8.17 The In drop-down menu in Colorspace1

12. Hover over the Viewer and press the 1 and 2 keys to compare between
the two types of color management you used.

The images look the same. These two different ways to color manage
produce the same result. The first method is quicker and uniform. The
second method is more customizable, but it means more work because
you need to apply it to every Read node.
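For the curious, the curve both methods apply is well defined. This Python sketch converts a 10-bit Cineon code value to linear light, assuming Kodak’s standard reference points (black at 95, white at 685) and 0.002 density per code value; Nuke’s own LUT may differ in the fine details:

```python
def cineon_to_linear(cv, black=95, white=685, gamma=0.6):
    """10-bit Cineon code value -> linear light, standard Kodak parameters."""
    offset = 10 ** ((black - white) * 0.002 / gamma)  # value at the black point
    gain = 1.0 / (1.0 - offset)                       # normalize white to 1.0
    return gain * (10 ** ((cv - white) * 0.002 / gamma) - offset)

print(cineon_to_linear(95))    # black reference maps to 0.0
print(cineon_to_linear(685))   # white reference maps to 1.0 (within rounding)
print(cineon_to_linear(1023))  # values above 685 carry super-white headroom
```

That headroom above 1.0 is the “large abundance of colors” a negative holds, and it is why log footage looks washed out until the LUT is applied.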

13. Delete Colorspace1.

14. Deselect the Raw Data box for Read1.

15. Switch to Viewer1’s second buffer and deselect the Pause button.

Stereo views
You have two Read nodes in the DAG, both representing the same image,
but through a different eye. For the most part, everything you do to one of
these images you also do to the other. For the illusion to work, the two
images need to feel like they are indeed images seen from the audience’s
left and right eyes. Hence color correction applied to one eye should also
be applied to the other eye.

Doing this seems like it would be very annoying, though, because you’d
have to keep maintaining two trees and copying nodes from one to the
other—or it would be this way if Nuke didn’t have its Views system.

Using Views, you can connect two Read nodes into one multiview stream
and from then on build and manipulate only one tree. If you do need to
work on just one View, you will be able to do so per node—or if needed,
split the tree in a specific point to its two Views and join them again later.

Let’s connect the two Read nodes into one stream. But before we do that,
you need to define this project as a Stereo Multiview project.

1. Press the S key to display the Project Settings panel.

2. Click the View tab (FIGURE 8.18).

FIGURE 8.18 The current list of available Views

At the moment, only one View appears in the View list: Main. That’s
normally the case, but now let’s replace that with two Views called Left
and Right. You can do this manually using the + (plus) and – (minus)
buttons at the top of the list. However, since Left and Right views are the ⬆
norm, a button at the bottom of the list enables this as well.

3. Click the Set Up Views For Stereo button (FIGURE 8.19).


FIGURE 8.19 The Main view was replaced with Left and
Right views.

After clicking this button, your Views list should change to display Left
and Right instead of Main. At the top of the Viewer you should also see
two new buttons that allow you to switch between the two Views you
created, as shown in FIGURE 8.20.

FIGURE 8.20 The Views buttons in the Viewer

4. Select the Use Colours in UI? check box at the bottom of the Views tab.

In the Project Settings’ Views tab, notice the red-colored box next to the Left view and a green-colored box next to the Right view. Selecting the Use Colours in UI? check box makes these colors appear in the Views buttons in the Viewer, and they will be used to color the pipes connecting the left- and right-specific parts of your trees.

Now that you’ve set up the multiview project, you can proceed to connect
your two Read nodes together into one multiview stream.

All View-specific nodes are held in the Views toolbox in the Node toolbar.
You use the node called JoinViews to connect separate Views into one
stream.

5. With nothing selected in the DAG, click JoinViews from the Views
toolbox to create one.

6. Connect JoinViews1’s left input to Read1 and the right input to Read2
(FIGURE 8.21).

FIGURE 8.21 Connecting two streams into the beginnings of a multiview tree

7. Make sure you are viewing JoinViews1 in the Viewer and use the Left
and Right buttons to switch between the views.

You can see the two Read nodes’ output in the Left and Right views now instead of through separate Viewer inputs. This is the beginning of working with Views. Later in this chapter you will do more.

For now, you have just one more thing to set up so that you can work
quickly with such large-scale images.
Using proxies
Working in 2K (over 2000 pixels wide, that is), which is becoming the
norm, can become very slow, very quickly. Because compositing software
always calculates each and every pixel, giving it more pixels to work with
dramatically increases processing times, both for interactivity and
rendering.

For example, a PAL image of 720×576 has 414,720 pixels, which is seven
times fewer than a normal 2K frame of 2048×1556 with 3,186,688 pixels!
So, as you might guess, the 2K frame is that much slower to work with.
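You can verify those numbers with a couple of lines of Python:

```python
# Why 2K is so much slower: compare the raw pixel counts quoted above.
pal = 720 * 576        # PAL frame: 414,720 pixels
two_k = 2048 * 1556    # full 2K frame: 3,186,688 pixels
print(pal, two_k, round(two_k / pal, 1))  # the 2K frame has over 7x the pixels
```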

Nuke has a few ways to make working with hi-res images faster. First, you
can use the Viewer Downscale Resolution drop-down menu. This menu
lets you scale down the display resolution. Input images are scaled down
by the selected factor, and then they are scaled up in the Viewer by the
same factor. This creates a speedier workflow with just a quality
difference in the Viewer (FIGURE 8.22).

FIGURE 8.22 The Viewer’s Downscale Resolution drop-down menu

1. Choose 32 from the Viewer’s Downscale Resolution drop-down menu.

2. Move the Timebar one frame forward.

You can see the apparent change in quality. The first time you load a frame it will still take a little longer because it still needs to access the full-resolution image.

3. Move back one frame.

This time around, Nuke took no time at all to show this frame. From now
on, working with both frames 1 and 2 will be very quick because you will
be working with 1/32nd of the resolution. Note that if you are using a very
fast system with a fast hard drive, this change in speed might be
negligible.

4. Switch the Viewer Downscale Resolution drop-down menu back to 1.

This is a useful function to quickly switch to a faster way of working. This also offers a good way to have just one Viewer show a lower-resolution image while another shows a full-res image.

But this is just the tip of the proverbial iceberg. Nuke has a full-blown
Proxy System that handles the switch between low-res and hi-res images
for the whole project.

5. Press the S key again while hovering over the DAG to make sure your
Project Settings panel is at the top, and click the Root tab.

The way Nuke’s Proxy System works is by taking care of everything related to changes of resolution. The Proxy System is split into three areas.

• The first area is the Read node. All images coming in through a Read
node are scaled down by a scale ratio. The Proxy System takes care of that.
A Proxy Scale of 0.5, for example, halves the resolution of all images.

• The second area concerns all pixel-based properties. Blurring a 500-pixel-wide image by a value of 50 pixels is a 10% blur, whereas blurring a 50-pixel-wide image by a value of 50 pixels is a 100% blur—to keep the same level of blur (10%), you need to blur by only 5 pixels. The same goes for transforms and every other pixel-based property. When using the Proxy System, Nuke takes care of that. And if your proxy scale is half, asking for a 100-pixel blur actually shows you a half-resolution image with a half, or 50-pixel, blur in the Viewer.

• The third area is the Viewer. It’s inconvenient that anytime a proxy scale
changes, the Viewer shows images at a different size. Because of this, the
Proxy System scales up the Viewer to compensate for the change in
resolution. All this is done automatically and is controlled using one of
two ways to define the change of scale (FIGURE 8.23).

FIGURE 8.23 The two types of proxy modes

The drop-down menu for the Proxy Mode property shows two types of
proxy: Format and Scale. Format lets you choose another defined format
for the size of the proxy. On the other hand, Scale allows you to choose a
ratio by which to scale the image down, as a derivative of the Full Size
Format dimensions.
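The scale arithmetic described above can be sketched in a few lines of plain Python (this is an illustration, not Nuke’s implementation; the 2048×1240 full-resolution size is an assumption chosen so the numbers match the 682×413 proxy format this lesson arrives at with a 0.333 scale):

```python
# Sketch of the proxy-scale arithmetic: at a given proxy scale, both
# the image dimensions and all pixel-based property values shrink by
# the same ratio.

def proxy_resolution(width, height, scale):
    # Scaled dimensions, rounded to whole pixels
    return round(width * scale), round(height * scale)

def proxy_value(pixels, scale):
    # A pixel-based property (blur size, translate, and so on)
    return pixels * scale

# The text's example: a 100-pixel blur at a 0.5 proxy scale is
# applied as a 50-pixel blur on the half-resolution image.
print(proxy_value(100, 0.5))  # 50.0

# A 0.333 scale on an assumed 2048x1240 plate gives the 682x413
# proxy format mentioned later in this lesson.
print(proxy_resolution(2048, 1240, 0.333))  # (682, 413)
```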

By default, the Scale option is selected and under it the Proxy Scale
property lets you choose the ratio by which to scale everything (FIGURE
8.24).

FIGURE 8.24 The Proxy Scale property

6. Under the Root tab in Project Settings, choose Format from the Proxy
Mode drop-down menu.

Now that Format is selected, the Proxy Scale property is replaced by a
Proxy Format property. You can now use that to define the format to
which proxy images will be scaled (FIGURE 8.25).

FIGURE 8.25 The Proxy Scale property is replaced with the Proxy Format property.

Let’s use the Proxy System to make working with this project faster. Use
the Scale property rather than Format, and scale down to a third of the
original resolution.

7. Choose Scale from the Proxy Mode drop-down menu.

8. Change Proxy Scale to 0.333.

9. To activate the Proxy System, select the box next to the Proxy Mode
property to turn it on (FIGURE 8.26).

FIGURE 8.26 Turning on the Proxy System using the Proxy Mode check box

You don’t have to load the Project Settings panel every time you want to
turn on proxy mode. To toggle proxy mode on and off, you can also either
use the Proxy Mode button in the Viewer, as shown in FIGURE 8.27, or
press the hot key Ctrl/Cmd-P.

FIGURE 8.27 The Proxy Mode button in the Viewer is set to on.

Now that proxy mode is on, much like with the Viewer Downscale
Resolution you used before, images are read in and scaled down, the
Viewer scales them back up, and all pixel-based values are changed to
reflect the smaller resolution.
As shown earlier when we used Downscale Resolution, some processing
still needs to be done on the full-resolution image. Nuke has to read in the
whole image to scale it down. It does this on the fly; you do not need to do
anything else. However, this stage, too, can be removed.

Instead of using on-the-fly proxies as you are using now, you can read
smaller-resolution images directly from a specified location on the hard
drive. This way, the images are already there and will always be there, and
you never need to process them—which results in a quicker workflow
throughout.

Let’s generate some proxy images. These are sometimes called
prerendered proxies because they are not rendered on the fly.

10. Create an unattached Write node and connect its input to Read1.

11. Click the little folder button beside the Proxy property (FIGURE
8.28).

FIGURE 8.28 This time you should use the Proxy property
instead of the File property.

12. Navigate to the BulletBG directory inside the chapter08 directory.
Create a new directory here called third (if it doesn’t already exist).

13. Name your sequence the same as the full-resolution one (add it to
the end of the path at the bottom), bulletBG_left.####.dpx, and press
Enter/Return.

This is a pretty standard way to work with prerendered proxies—have a
directory with the name of the element. Inside it, have a directory for each
resolution and then keep the sequence name the same.
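That convention is easy to express as a small path-building helper; here is a sketch using the names from this exercise (the full directory name is an assumption matching the pattern used elsewhere in the chapter):

```python
from pathlib import Path

# Sketch of the convention described above: a directory named after
# the element, a subdirectory per resolution, and the same sequence
# name inside each.
def sequence_path(element_dir, resolution_dir, sequence_name):
    return Path(element_dir) / resolution_dir / sequence_name

full = sequence_path("BulletBG", "full", "bulletBG_left.####.dpx")
proxy = sequence_path("BulletBG", "third", "bulletBG_left.####.dpx")
print(full)
print(proxy)
```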

Just like in the Read node, there is a Colorspace property here as well.
Because the tree itself is in linear color space, the image needs to be
reconverted to Cineon so it will be like the image that came in. All this is
set by default in the Views tab in the Project Settings panel.

14. Make sure the Proxy Mode check box is selected in the Viewer.

15. Click the Render button in Write1’s Properties panel.

Since you are rendering Read1 and not a multiview tree (only after
JoinViews1 does your tree become a multiview tree), you need to use the
Views drop-down menu in the Render panel to select only the Left view.

16. In the Render panel that opens, use the Views drop-down menu to
deselect the Right check box (FIGURE 8.29) and then click OK to start
the render.

FIGURE 8.29 Deselecting the Right check box

Because proxy mode is on, your image is now being scaled to a third of its
size. Nuke is now using the Proxy property in the Write node instead of
the File property it usually uses, and it’s actually rendering third-
resolution images.

When the render finishes, tell your Read1 node that there are prerendered
proxies on the disk.

17. Copy and paste the path from the Write1 Proxy field to the Read1
Proxy field.

Copying the file path like you just did doesn’t update the Read node. You
can easily fix that.

18. To update the Read node, click the folder icon to the right of the Proxy
property field, select the sequence that’s in there, and then press
Enter/Return. If the Viewer still doesn’t update, switch between having
the proxy mode off and on using the button in the Viewer.

Make sure the Proxy format has updated to 682×413.

19. Switch proxy mode on and off. Now, when you enable and disable
proxy mode, you actually switch between two file sequences on the disk.

Now you need to do the same for the right view, or Read2. You can use the
same Write node—just move its input and change some settings.

A node is just text


I use this point to show you a little trick that has nothing to do with this
comp but can make things easier sometimes. A Nuke node is just some
text, and having it as text means all sorts of things: You can email it easily,
you can save it in some note-taking software you use, or you can
manipulate it easily as text.

Now you need to change Write1’s proxy path from left to right. You
can go overboard now and do it by using this unique feature.

1. Click to select Write1 and press Ctrl/Cmd-X to cut it from the tree.

2. Using a simple text-editing application (Notepad on Windows, and
TextEdit on Mac), create a new document.

3. Paste (Ctrl/Cmd-V) the node into your empty document.

The node appears as some lines of text in the document (FIGURE 8.30)
that start with set cut_paste_input.

FIGURE 8.30 A node as text

Look further down in this text. A line starts with the word Proxy and
immediately after the word is a file path.

4. In the file path, find the word left and change it to right without
changing anything else or adding any spaces.

This is an easy way to make changes to a node. Maybe it’s a bit
unnecessary here, but hey, you’re learning.
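Step 4’s edit is just a string replacement, which you could equally do in a script. Here’s a minimal sketch; the node text below is a shortened, hypothetical version of what Nuke actually puts on the clipboard (real node scripts contain more knobs and a cut_paste_input line):

```python
# A shortened, hypothetical example of a Write node's text.
node_text = """Write {
 proxy /BulletBG/third/bulletBG_left.####.dpx
 name Write1
}"""

# Step 4 as code: swap the view name in the path, nothing else.
edited = node_text.replace("left", "right")
print(edited)
```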

5. Select the whole text block that you pasted and copy it by pressing
Ctrl/Cmd-C.

6. Go back to Nuke and with Read2 selected, branch-paste the changed
Write node by pressing Ctrl/Cmd-Shift-V (FIGURE 8.31).

FIGURE 8.31 Pasting a node from text

Perhaps this is a little bit of overkill, but you now have the Write node you
need, it is inserted where you want it, and the path has changed for the
right eye.

7. Make sure proxy mode is on because you now want to render the lo-res
version of the image.

8. Double-click Write1 to bring up its Properties panel, then click the
Render button.
In the Render panel that appears, only the Left view should be selected.
This is fine, because it doesn’t matter which view you are rendering at this
point in the tree, just that you are rendering only one. When working with
a multiview project, you always have two streams; however, this Read
node represents only one of them and will show the same image in both
views. As there is no point in rendering the same image twice, checking
only one view is enough.

9. Click OK.

When this render finishes, you have prerendered proxy files for both
views. You need to tell Read2 to use the files you just generated.

10. Copy and paste the path and filename from Write1’s Proxy field to
Read2’s Proxy field.

11. To update the Read node, click the folder icon to the right of the Proxy
field, select the sequence that’s in there, and press Enter/Return.

12. You don’t need Write1 anymore; you can delete it.

That’s that. Both views are now ready.

Creating stereo-view proxies efficiently


You need to bring in one more element and perform the same processes
on it. It will be a lot quicker now that you have everything already set up.

Also, notice you had to run through the proxy-making stages twice, once
for the left eye and once for the right eye. This is not strictly necessary.
There are other ways to work with Read and Write nodes when it comes to
multiview projects and trees.

Instead of bringing in two separate Read nodes, one for each eye, you can
bring in a single Read node and use a variable to tell Nuke that both a Left
and a Right view exist. A variable is simply a placeholder that tells Nuke
to look for something else. In this case, the variable will be %V, and it tells
Nuke that it needs to replace this variable with whatever is in the Views
list. In this case, it looks for Left and Right.
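The %V substitution itself is simple enough to sketch in plain Python (the view names here match this project’s Views list):

```python
# Sketch of how a %V path expands to one concrete path per view.
def expand_views(path, views):
    return {view: path.replace("%V", view) for view in views}

paths = expand_views("bulletCG_%V.####.exr", ["left", "right"])
print(paths["left"])   # bulletCG_left.####.exr
print(paths["right"])  # bulletCG_right.####.exr
```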

1. Create a new Read node and navigate the File Browser to the
chapter08/BulletCG/full/ directory.

Here you can see there are two sequences. One is called
bulletCG_left.####.exr and the other bulletCG_right.####.exr.
Replacing Left or Right with %V will enable you to use only one Read
node.

2. Click the first sequence in the list, the one indicating the left eye.

3. In the path bar at the bottom, replace the word “left” with %V and
press Enter/Return (FIGURE 8.32).

FIGURE 8.32 Replacing “left” with %V in the File Browser

You now have a single Read node in the DAG: Read3. If you look at it
carefully, you can see there is a little green icon with a V in it at the top-
left corner (FIGURE 8.33). This indicates that this node has multiple
views available. Let’s see this in the Viewer.

FIGURE 8.33 The green V icon indicates a multiview node.


4. Look at the newly created Read3 in the Viewer.

5. Go to frame 50 in the Timebar.

6. Switch between viewing the Left and Right views using the Viewer
buttons (FIGURE 8.34).

FIGURE 8.34 Watch out! It’s coming right at you.

You can see the difference between the two eyes. This bullet is coming
right at you. You can see more of its left side on the right of the screen
with the left eye, and more of its right side on the left of the screen with
the right eye. (Take a pen, hold it close to your face, and shut each eye and
you can see what I mean.)

7. View the alpha channel in the Viewer, then switch back to the RGB
channels. Notice there is an alpha channel here.

Having an alpha channel affects some of what you are doing here. As a
rule, you shouldn’t color correct premultiplied images without
unpremultiplying them first. Having an alpha channel is supposed to raise
the question of whether this is a premultiplied image. You can see in the
Viewer that all areas that are black in the alpha are also black in the RGB.
Black indicates that this indeed is a premultiplied image. Just to make
sure, I asked the 3D artist who made this to render out premultiplied
images, so I know for a fact that’s what they are.
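The eyeball test described here (wherever the alpha is black, the RGB must be black too) can be sketched as a quick per-pixel check:

```python
# Sketch of the premultiplication check described above: in a
# premultiplied image, any pixel with zero alpha must also be black.
def looks_premultiplied(pixels):
    # pixels: iterable of (r, g, b, a) tuples
    return all(
        (r, g, b) == (0.0, 0.0, 0.0)
        for r, g, b, a in pixels
        if a == 0.0
    )

premult = [(0.0, 0.0, 0.0, 0.0), (0.2, 0.1, 0.05, 0.5)]
straight = [(0.4, 0.2, 0.1, 0.0), (0.4, 0.2, 0.1, 0.5)]
print(looks_premultiplied(premult))   # True
print(looks_premultiplied(straight))  # False
```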

Applying color management on Read3 applies color correction on the
image. Because it is premultiplied, you need to unpremult it first. To do
that, use the Premultiplied check box next to the Colorspace property
(FIGURE 8.35) in the Read node’s Properties panel. However, because
this image is actually a 32-bit linear image, there won’t be any color
correction applied to it, so this step is not really necessary.

FIGURE 8.35 Premultiplied needs to be selected for premultiplied images that are not linear.

Now that you’ve brought in this stereo pair of images, you can make
proxies for them in one go.

8. Connect a new Write node to Read3 by clicking it and pressing the W
key.

9. Set the path under the (new) Write1 Proxy field to
chapter08/BulletCG/third/bulletCG_%V.####.exr. If the third directory
doesn’t exist, create it.

Notice that again you are using the %V variable. This means you can
render the two views, and the names of the views—Left and Right—will be
placed instead of the variable.

NOTE

Write nodes default to rendering the RGB channels only
because usually this is the final render and there’s no reason
to retain the alpha channel. Keep this in mind—forgetting it
will waste render time when you do need an alpha channel.

10. Because you want to render the alpha channel as well, change the
channel set for Write1 from RGB to All to be on the safe side.
This is an EXR file, which has a lot of properties you can change. The
original image is 32 bit and you will keep it that way.

11. Choose 32-bit Float from the Datatype drop-down menu.


Again, because there is an alpha channel in these images, color
management should normally be considered here, and the Premultiplied
check box should be selected. However, because this is a linear image, no
color management is applied, and so no change is needed.

Notice that the render range here is only 1–50, whereas the background
sequence is 1–60. You’ll deal with this later.

12. Click the Render button.

13. In the Render panel, change the Views drop-down menu so that both
Left and Right are turned on (FIGURE 8.36).

FIGURE 8.36 Turning on both views for rendering

14. Make sure proxy mode is on.

15. Click OK to start the render.

16. When the render is complete, copy and paste the proxy path from
Write1 to Read3. Again click the folder icon to the right of the Proxy field,
select one of the file sequences in there, replace the view name with %V,
and press Enter/Return.

17. Delete Write1.

The proxy setup is ready. You have prerendered proxies for all files,
whether individual or dual-view streams. Now you can go ahead and create the
composite, which is very simple.

COMPOSITING A STEREO PROJECT


With this composite, the first thing you need to do is deal with the change
in length between the two elements. The background is a 60-frame
element, whereas the bullet is a 50-frame element. You have several ways
of dealing with this, both creatively and technically. Let’s look at the
options.

Retiming elements
The first creative way to deal with the lack of correlation between the
lengths of these two elements is to stretch (slow down) the bullet element
so it’s 60 frames long. Slowing down clips means that in-between frames
need to be invented. You can do that in three ways:

By copying adjacent frames

By blending adjacent frames

By using a technology called Optical Flow to really gauge the
movement in adjacent frames and create in-between frames by moving
and warping groups of pixels

TIP

If you have a NukeX license, you can use a slightly better
version of this node called Kronos. It’s available from the Time
toolbox as well. The main difference between OFlow and
Kronos is that Kronos can use the GPU—that is to say, the
power of your graphics card—for rendering, making it a lot
faster than its counterpart.

Several different nodes in the Time toolbox deal with timing. You’re going
to look at two different options for slowing down elements.
1. Select Read3 and insert an OFlow node after it from the Time toolbox.

The OFlow (optical flow) node generates high-quality retiming operations
by analyzing the movement of all pixels in the frames and then rendering
new in-between images based on that analysis. This node is slow.

You can use this node in two different ways—either by stating the speed
you want to change to or by keyframing frame locations, which you will do
now.

2. In OFlow1’s Properties panel, choose Source Frame from the Timing
drop-down menu (FIGURE 8.37).

FIGURE 8.37 Retiming an element with OFlow

3. Go to frame 1 in the Timebar, right-click/Ctrl-click the Frame field, and
choose Set Key from the contextual menu.

4. Go to frame 60 in the Timebar and set the Frame field to 50 (the last
frame of the original element you brought in).

5. While viewing OFlow1 in the Viewer, click Play.

If you look at the sequence frame by frame, you can clearly see that there
are some original frames and some newly created frames. Change that by
increasing the ShutterTime property in the OFlow node; doing so
increases the blending that’s happening in between frames.
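The two keyframes you set define a mapping from output frames to source frames; assuming linear interpolation between the keys (a simplification of the animation curve), it can be sketched as:

```python
# The keys from steps 3 and 4: source frame 1 at output frame 1,
# source frame 50 at output frame 60.
def source_frame(out_frame, key0=(1, 1.0), key1=(60, 50.0)):
    (o0, s0), (o1, s1) = key0, key1
    return s0 + (out_frame - o0) * (s1 - s0) / (o1 - o0)

print(source_frame(1))   # 1.0 -- an original frame
print(source_frame(60))  # 50.0 -- the last original frame
# Non-integer results are the newly created in-between frames:
print(source_frame(2))
```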

OFlow is a very good tool. However, its downside is that it is also very
slow. You are working in proxy mode at the moment, so things are quick
and easy. But when you go back to the full-res image, you will feel how
slow it is. Let’s try another retiming node.

6. Click Stop when you’re done watching.

7. Delete OFlow1.

The next retiming node is simply called Retime.

8. Click Read3 and, from the Time toolbox, insert a Retime node.

Retime is a handy tool for stretching, compressing, or changing the
timing location of clips. Retime is fast because it doesn’t use optical flow
technology; it either blends between frames or freezes frames to
accommodate speed changes. There are many ways to play around with
this tool. It can really work in whatever way fits your needs (FIGURE
8.38).

FIGURE 8.38 The Retime node’s interface

You can specify speed using a combination of the Speed, Input Range, and
Output Range properties.

9. Check the boxes next to the Input Range and Output Range properties.

10. Enter 60 in the second field for the Output Range property.

The original 50 frames now stretch between frames 1 and 60.
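A rough sketch of what the ranges and the Filter setting do (this is the idea, not Nuke’s exact resampling): map each output frame back into the input range, then either freeze the nearest input frame or blend the two neighbors.

```python
import math

# Input range 1-50 stretched to output range 1-60, as set above.
def to_input(out_frame, in_range=(1, 50), out_range=(1, 60)):
    i0, i1 = in_range
    o0, o1 = out_range
    return i0 + (out_frame - o0) * (i1 - i0) / (o1 - o0)

def nearest(out_frame):
    # Filter = Nearest: freeze the closest input frame
    return round(to_input(out_frame))

def box(out_frame):
    # Filter = Box: blend the two surrounding input frames
    src = to_input(out_frame)
    lo = math.floor(src)
    return lo, lo + 1, src - lo  # frames to blend, weight toward lo+1

print(nearest(30))  # 25
print(box(30))
```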

11. While viewing Retime1 in the Viewer, click Play.

Again, you can see that movement isn’t exactly smooth. Retime is set to
blend frames at the moment. Setting it to blend or freeze is done in the
Filter property. Box means blending, while Nearest means freezing. None
means slowing down animation done inside Nuke—meaning keyframes—
and will create smooth slow motion for those. None of these three options
will produce appealing results, however. Slowing down crispy, clean
movements like the one you have here for the bullet is always a bit
difficult and creates jerky movement. Let’s delete this retiming node.

12. Click Stop.

13. Delete Retime1.

Notice that the element you were working on was part of a multiview
branch. You didn’t even notice. You were viewing only one view—either
Left or Right, depending on what your Viewer was set to—but the other
view was being manipulated as well. That’s the beauty of working in this
way.

Now you’re going to solve this problem in a different way, by simply
shortening the background element.

First, let’s make the project itself only 50 frames long.

14. Display the Project Settings panel.

15. Enter 1 and 50 in the two Frame Range fields (FIGURE 8.39).

FIGURE 8.39 Changing the Frame Range in the Project Settings panel

The longer length of the background element doesn’t matter that much
anymore. Now you will use the end of the element and trim the beginning,
rather than the other way around. You need to shift the position of the two
Read nodes in time.

16. Clear the Properties Bin.

17. Double-click Read1 and then double-click Read2.

Using the Dope Sheet, you can shift the position of Read nodes in time.

18. Click the Dope Sheet tab in the same pane as the DAG (FIGURE
8.40).

FIGURE 8.40 The Dope Sheet also shows the timing of elements.

Using the bars on the right panel of the Dope Sheet, one for the first Read
node and one for the second Read node, you can shift the timing of the
Read nodes.

19. In the panel on the right, click the first file property and move it back
(to the left) until your out point on the right reaches frame 50 (FIGURE
8.41).

FIGURE 8.41 The numbers on the left and right indicate the new in and out points.

20. Do the same for the second file property.

21. Go back to viewing the Node Graph.

That’s it for retiming. Now you can proceed to placing the foreground over
the background.

Compositing the two elements together


Now to place the foreground over the background—simple stuff you’ve
done several times already. This time you are actually doing it once, but
you’re creating two of everything, one for the Left view and another for
the Right view.

1. Click Read3 and press the M key to insert a Merge node.

2. Connect Merge1’s B input to the output of JoinViews1.

3. View Merge1 in the Viewer.

4. Click Play in the Viewer (FIGURE 8.42).


FIGURE 8.42 The slap comp (a quick, basic comp). You can
see it needs more work.

At this point, you should be getting pretty good playback speed because of
the small proxy you are using. What you see now is the bullet leaving the
gun’s barrel (the trigger was pulled in a previous shot). The color of the
bullet is wrong, and it should start darker, as if it’s inside the barrel. Let’s
make this happen.

5. Insert a Grade node after Read3.

Because this is a premultiplied image, you need to unpremultiply it. You
can do that as you did in Chapter 2. However, you can also do it inside the
Grade node (or indeed any color correction node).

6. At the bottom of Grade1’s Properties panel, choose Rgba.alpha from
the (Un)premult By drop-down menu.

This ensures that the RGB channels are divided by the alpha and then
multiplied again after the color correction.
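What (Un)premult By does around the grade can be written out as a small sketch; a gamma-style curve is the kind of correction where the order genuinely matters:

```python
# Sketch of the (un)premult-wrapped correction described above:
# divide RGB by alpha, color correct, multiply back by alpha.
def graded_premult(rgb, alpha, correct):
    if alpha == 0:
        return rgb  # nothing to recover where alpha is zero
    straight = [c / alpha for c in rgb]          # unpremult
    corrected = [correct(c) for c in straight]   # color correct
    return [c * alpha for c in corrected]        # premult again

# A half-transparent premultiplied gray, graded with a gamma-style
# curve (c ** 0.5). Correcting without unpremultiplying first would
# give a different, wrong result on the semitransparent edges.
print(graded_premult([0.25, 0.25, 0.25], 0.5, lambda c: c ** 0.5))
```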

7. Go to frame 20 in the Timebar. It’s a good frame to see the color
correction we need (FIGURE 8.43).

FIGURE 8.43 The difference in color is apparent.

You can see here that the metal of the gun and the metal of the bullet have
very different colors. Although they aren’t necessarily supposed to be the
same kind of metal, at the moment, it looks as if there is different-colored
light shining on them, and their contrast is very different as well. You use
the Gamma property to bring down the midtones and create more
contrast with the bright highlights that are already there; then use the Lift
property to color the shadows to the same kind of red you can find in the
background image. Finally, use Gain to tweak the highlights to match.

8. Adjust the Gamma, Lift, and Gain to color correct the bullet so it looks
better. I ended up with Lift: 0.035, 0.005, 0.005; Gamma: 0.54, 0.72,
0.54; and Gain: 0.95, 1, 1.35.
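As a rough sketch of how those three properties combine (a simplification of Nuke’s Grade math, assuming the default blackpoint of 0 and whitepoint of 1):

```python
# Simplified lift/gamma/gain sketch: lift sets where black lands,
# gain sets where white lands, and gamma bends the midtones.
def grade(x, lift=0.0, gamma=1.0, gain=1.0):
    linear = lift + x * (gain - lift)
    return max(linear, 0.0) ** (1.0 / gamma)

# With the red-channel values used above, black is lifted toward red
# and the midtones are pulled down by the sub-1.0 gamma:
print(grade(0.0, lift=0.035, gamma=0.54, gain=0.95))
print(grade(0.5, lift=0.035, gamma=0.54, gain=0.95))
```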

Take a look; the bullet should look better. Now for the animation—but,
before you get started, first make sure you learn what to do if you want to
change the color of just one of the views.

Changing properties for a single view


You have two primary options for manipulating just one of the views. The
first is to split the whole tree up into two again, and then merge it back
into one using a SplitAndJoin node.

1. Select Grade1 and, in the Views toolbox, click SplitAndJoin (FIGURE
8.44).
FIGURE 8.44 The SplitAndJoin node is actually several
nodes.

You now have three new nodes in your DAG, not just one. SplitAndJoin
actually creates as many nodes as it needs to split the number of views out
to individual branches—usually two views, with a OneView node—and
then it connects them up again using a JoinViews node. You can now
insert whatever node you want in whatever branch you want. Here’s one
method.

2. Select OneView1, OneView2, and JoinViews2 and delete them.

The other method is to split just specific properties inside nodes, as
you need them.

3. Insert another Grade node after Grade1.

Look at the Gain property. There’s a new button on the right that lets you
display the View menu (FIGURE 8.45).

FIGURE 8.45 The new View menu

4. Click the View button to display the View drop-down menu, which
allows you to split off one view to be controlled separately.

5. Choose Split Off Left from the View drop-down menu for the Gain
property (FIGURE 8.46).

FIGURE 8.46 This is how to split a property to control two views independently.

You now have a little arrow next to the Gain property.

6. Click the little arrow next to the Gain property (FIGURE 8.47).

FIGURE 8.47 You now have two properties for Gain, one
for each view.

This reveals the two separate View subproperties.

This method lets you manipulate each view separately for this property
and still have overall control over the rest of the properties as well.

You can also reverse this using the View drop-down menu again.

7. Choose Unsplit Left from the View drop-down menu for any one of the
Gain properties.

You are back where you were before—controlling the two views together.

Now it’s time to change the color of the bullet as it’s leaving the barrel.

8. Go to frame 10 in the Timebar.

9. Create a keyframe for both the Gain property and the Gamma property.
10. Go to frame 1 in the Timebar.

11. Change these two properties so the bullet looks dark enough, as if it’s
still inside the barrel. I set the Gain to 0.004 and the Gamma to 1.55, 1.5,
1.35.

This image is still premultiplied, remember? Using the (Un)premult By
drop-down menu in Grade1 earlier only unpremultiplied the image where Grade1
was applied. The output of Grade1 is still a premultiplied image. You need
to do the same here, in Grade2.

12. Change the (Un)premult By drop-down menu to Rgba.alpha.

This is all you are going to do at this stage to this composite. It’s been a
long lesson already.

RENDERING AND VIEWING STEREO TREES


Before you finish this lesson, let’s take a look at what you’ve done so far;
you’ll learn a couple more things as we do.

1. Insert a Write node after Merge1.

2. Click the File property’s Folder button to load a File Browser.

3. Navigate to your student_files directory and create another directory
named bullet_full.

4. Give the file to render the name of bullet_%V.####.png and press
Enter/Return (FIGURE 8.48).

FIGURE 8.48 Notice the Colorspace property.

You now have a Write node set up to render to the hard drive. You haven’t
set up a proxy filename, just the full-resolution file name. If you render
now, with proxy mode turned on, the render will fail. You can, of course,
create another folder and call it bullet_third if you want to render a
proxy. Right now, however, render the full-resolution image.

Notice that as you render a PNG file sequence, which is an 8-bit file
format, you are rendering it to the sRGB color space. Nuke is taking one of
your elements, which is in Cineon color space, and another element, which
is in linear color space, working with both of them in linear, and then
rendering out an sRGB color space PNG sequence. It’s all done
automatically and clearly presented in front of you.
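The linear-to-sRGB conversion happening on the way to that 8-bit PNG follows the standard sRGB transfer function; a sketch:

```python
# The standard sRGB encoding curve (per the sRGB specification),
# applied when a linear value is written out to an sRGB file.
def linear_to_srgb(x):
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

# Mid-gray in linear light lands at roughly 0.735 in sRGB, which is
# why linear images look dark when viewed without this conversion.
print(round(linear_to_srgb(0.5), 3))  # 0.735
```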

5. Click the Render button. In the Render panel that appears, make sure
the Frame Range property is set correctly by choosing Global from the
drop-down menu, and make sure the Proxy setting is off.

This render might take a little while because of the hi-res images you are
using. Good thing you deleted that OFlow node—it would have taken a lot
longer with that one.

When the render is complete, bring in the files so you can watch them.
You can do it in the Write node.

6. Select the Read File box at the bottom of Write1’s Properties panel.

You can now view your handiwork. The thing is, however, that this is a
stereo pair of views you have here, and you can watch only one eye at a
time.

If you have anaglyph glasses (red and cyan glasses, not the fancy kind you
find in cinemas), you can use them to watch the true image you are
producing.

7. In case you don’t have anaglyph glasses, click SideBySide from the
Views/Stereo toolbox to see your two views side-by-side (FIGURE
8.49).


FIGURE 8.49 Choose between the Anaglyph node or the
SideBySide node to view both views.

To view stereo with anaglyph glasses, you need your image to be an
anaglyph image in the first place. You can make sure this is the case with
the Anaglyph node, found in the Views/Stereo toolbox.

8. Select Write1 and insert an Anaglyph node after it.

If you use the Anaglyph node, your image now looks gray with red and
cyan shifts of color on the left and right. This shift in color makes viewing
the image with anaglyph glasses correct in each eye (FIGURE 8.50).

FIGURE 8.50 An anaglyph image
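At its simplest, the red/cyan split works by taking the red channel from one eye and the green and blue channels from the other; a per-pixel sketch (Nuke’s Anaglyph node adds controls on top of this basic idea):

```python
# Simplest-form anaglyph: red from the left view, green and blue
# from the right view, so each filtered lens sees one eye's image.
def anaglyph_pixel(left_rgb, right_rgb):
    return (left_rgb[0], right_rgb[1], right_rgb[2])

left = (0.8, 0.5, 0.3)
right = (0.6, 0.4, 0.2)
print(anaglyph_pixel(left, right))  # (0.8, 0.4, 0.2)
```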

If you use the SideBySide node, you see both eyes, one next to the other
(FIGURE 8.51).

FIGURE 8.51 SideBySide node does just what it says on the box.

9. Click Play and enjoy the show.

Anaglyph glasses don’t give a spectacular stereoscopic experience, I must
admit. Other solutions are out there, but they are more professional and a
lot more expensive. (If you have one of those solutions, that means you
know what you are doing and you don’t need instructions from me!)


9. The Nuke 3D Engine


One of the most powerful features in Nuke is its 3D engine.
Nuke has an almost fully featured 3D system that enables the
compositor to import cameras, create simple objects, edit
and import objects, perform camera projections, extract
various types of 2D data from 3D data, and much more.

An important thing to note is that Nuke’s 3D engine isn’t there
to replace your 3D department. You won’t see character
animation done in Nuke. Nor will you see serious lighting and
rendering done in Nuke. Although Nuke’s 3D capabilities are
remarkable for a compositing system, they are still far from
what you’d need to replace full-blown 3D applications.

Nuke’s 3D capabilities are there to enhance the compositor’s abilities and
make it easier for the compositor to converse with the 3D department. In
addition, when you have scenes for which you have 3D data available—
whether it is from a live shot or generated in a 3D application—sometimes
you can get other 3D data, such as 3D tracking, and use it in novel ways.

You can use Nuke’s 3D engine to more easily generate 2D tracking
information, give still frames more life, simplify roto and paint work, and
do much, much more.

3D SCENE SETUPS
In Nuke, 3D scenes are built out of four main elements: a camera, a piece
of geometry, a Scene node (optional), and a ScanlineRender node to
render the 3D data into a 2D image.

NOTE

Nuke works well with external 3D applications and can
exchange data with them using file formats commonly used in
3D applications: Alembic, obj, and fbx. The .obj extension is
for files that hold geometry. The Alembic and fbx file types can
hold practically anything generated in a 3D scene.

Let’s look at the four elements:

Camera: Through the camera element, the rendering node views the
scene. You can also import camera properties from fbx files.

Geometry: The geometry element can be a primitive (simple
geometry) created in Nuke, such as a card or a sphere, or it can be
imported from another application as an obj file, an obj sequence of files
(for animating objects), or an fbx file.

Scene: The Scene node connects all the pieces that make up the scene
you want to render; this includes all pieces of geometry, lights, and
cameras. This means if you have only one element, you don’t need this
node. By using this node you are saying that all these elements live in the
same space.

ScanlineRender: The ScanlineRender node renders the scene. It
takes the 3D data and makes a 2D image out of it. Without it, you won’t
see a 2D image.

Using these elements together, and some other nodes that are not part of
the basic 3D setup, you can do remarkable things. I cover a few
techniques in this chapter and the next two chapters.

FIGURE 9.1 shows the basic 3D elements connected together.

FIGURE 9.1 The basic 3D setup

You’ll find all the tools that deal with the 3D engine in the 3D toolbox. I’m
not going to go over everything here because it’s just too expansive, and I
need to leave something for the advanced book, don’t I? But never fear;
you will still get to do some pretty advanced stuff.

First let’s practice a little with a few simple nodes.

Setting up a Nuke 3D scene


Let’s create some simple nodes and step into the 3D engine. First, let’s create a camera.

1. With nothing selected in the DAG, create a Camera node from the 3D
toolbox.

2. With nothing selected, create a Scene node from the 3D toolbox.

3. Select Camera1 and Scene1 and insert a ScanlineRender node from the
3D toolbox.

The ScanlineRender node connects itself to the Camera and Scene nodes
in the correct inputs (FIGURE 9.2). Eventually you will need some kind
of geometry or this setup won’t mean much, but there’s still time.


FIGURE 9.2 The beginning of a 3D setup

4. Select ScanlineRender1 and press the 1 key to view it.

At the moment you don’t see much in the Viewer, and you shouldn’t.
Although you have a Camera node and a ScanlineRender node, you have
nothing to shoot through the camera. You don’t see anything in the
resulting 2D image. However, in the 3D world, you should be seeing at
least a camera. Nuke’s Viewer functions as both a 2D and a 3D Viewer.
You can easily switch between 2D and 3D viewing modes by hovering the
mouse pointer over the Viewer and pressing the Tab key. Alternatively,
you can choose 2D or 3D (or other views such as Top and Front) from the
View Selection drop-down menu at the top right of the Viewer, as seen in
FIGURE 9.3.

FIGURE 9.3 Changing the View Selection in the Viewer

5. Hover the mouse pointer over the Viewer and press the Tab key.

You should see something similar to FIGURE 9.4.

FIGURE 9.4 Viewing Nuke’s 3D world and the camera in it

You are now in the 3D view, as you can see if you look at the View
Selection drop-down menu. There’s your camera; it sits in the middle of
the virtual world at 0, 0, 0. All 3D elements are generated at this position.
You are seeing the camera from Nuke’s perspective camera, which you can
use to view your 3D scene. You navigate the perspective camera, and
hence the 3D Viewer, using a combination of magic and, well, mainly hot
keys.

Navigating the 3D world


The following hot keys and techniques are useful for navigating in the 3D
view:

Ctrl/Cmd-click and drag to rotate around the 3D world.

Alt/Option-click and drag to pan across the 3D world.

Use the scroll wheel (if you have one) or the + and – keys to zoom in
and out.

Now that you know how to move around the 3D world, you can use this
knowledge to move around a little.

1. Use the combination of hot keys to move around the 3D Viewer.


Challenge yourself by always keeping the camera you created in the center
of your Viewer.

It will be good for you to know how to move the camera—and, by extension, everything else in the 3D world.

You can, of course, use the Properties panel to move the camera, but often this is unintuitive. Instead, you want to use the camera’s on-screen axis controls. To do this, first select the Camera element in the Node Graph and make sure that it’s loaded in the Properties Bin. Only nodes that are loaded in the Properties Bin are available for manipulating with the on-screen controls.

2. Double-click Camera1 in the DAG to select it and load its Properties panel.
See how the camera lights up in green in the Viewer. The axes, drawn in red, green, and blue, are displayed in the center of the camera and let you move it around (FIGURE 9.5).

FIGURE 9.5 After you select Camera1 in the DAG, the camera’s axis appears.

Now that you have the axes, you can click them and move the camera
around. You can also use an individual axis to move in one direction. The
reason the axes are in three colors is to help you figure out which is which.
The order of colors is usually red, green, blue; the order of directions is
usually X, Y, Z. In other words, the red-colored axis controls the X
direction, the green one controls the Y direction, and the blue controls the
Z—very convenient and easy to remember.

The same happens with rotation.

3. In the Viewer, hold down Ctrl/Cmd and watch the axes change to show
the rotation controls (FIGURE 9.6).

FIGURE 9.6 Holding Ctrl/Cmd lets you control rotation.

The circles that appear on the axis when you hold down Ctrl/Cmd allow you to rotate the camera around one of three axes. This works the same way as the translation axes: red for X rotation, green for Y rotation, and blue for Z rotation.

These are the basic techniques for moving things around in the 3D viewer.
Feel free to play around with the camera and get a feel for it. You will reset
it in a minute anyway, so you can really go to town here.

Using Nuke’s ToolSets to save groups of tools


Earlier I mentioned the basic building blocks of Nuke’s 3D engine. It’s not just one node, it’s several, and it can become tiresome to create each of these separately and then connect them every time. Instead, let’s create a ToolSet that takes care of this task.

Nuke’s ToolSets is a Toolbar toolbox just like any other, with one major difference: You populate this toolbox yourself by selecting groups of nodes in the DAG and clicking a few buttons. Everything is saved—not just the nodes, but the connections between them and every property in each node (as well as animation, expressions, and anything else) (FIGURE 9.7).
FIGURE 9.7 The ToolSets toolbox

As all animation is saved with the ToolSet, let’s reset the camera.

1. Right-click in Camera1’s Properties panel and choose Set Knobs To Default to reset the camera.

Now let’s turn these three nodes into a ToolSet.

2. By creating a marquee with the mouse, select Camera1, Scene1, and ScanlineRender1 (FIGURE 9.8).

FIGURE 9.8 Make sure not to select the Viewer node.

3. Choose Create from the ToolSets toolbox.

You are presented with the panel shown in FIGURE 9.9. In it you need to give your ToolSet a name. You can also create submenus and later choose to place more ToolSets in them. Create a submenu called 3D, and in it call this ToolSet Setup. Let’s see how.

FIGURE 9.9 The Create ToolSet panel

4. In the Menu item field, type 3D/Setup.

5. Click Create.

6. Click the ToolSets toolbox again and have a look (FIGURE 9.10).

FIGURE 9.10 Your new, very own ToolSet inside your new,
very own submenu

By separating 3D from Setup with a /, you told Nuke to create a submenu called 3D, and then call the actual ToolSet Setup.

To delete a ToolSet, simply use the Delete submenu shown in FIGURE 9.11. But don’t do it now, as you’ll use it later in this chapter.


FIGURE 9.11 This is how to delete a ToolSet.

Now that you have this ToolSet ready, and creating these three nodes is
just a click away, you won’t feel bad about starting a new Nuke script.

7. From the File menu, choose Clear.

The Clear command simply starts a new empty script without asking any
questions. It doesn’t even start you off with a Viewer. But don’t worry.
We’ll create one.

VIEWING A 2D IMAGE AS A 3D OBJECT


The project you’ll work on now has been pretty much composited. It’s a
shot from a short film by Or Kantor. The film still doesn’t have a name,
but if you follow Or at his Vimeo page, vimeo.com/user1126707
(http://vimeo.com/user1126707), you might get to see the piece when it is indeed
finished. Or already composited this shot, but I want us to add more
elements to it, and in order to do that, it will be a lot more convenient if
we know more about it.

Let’s start by bringing in the shot so you can see what I’m referring to.

1. From the chapter09 directory, bring in creature_beauty.####.exr.

2. Select Read1 and press 1 to watch the sequence in the Viewer.

By pressing 1 to view, you also created a Viewer node.

3. Now that you know how to use the Project Settings panel, set up the
script by opening the Project Settings panel and changing Full Size
Format to HD and fps to 25. Close the Project Settings panel.

When this creature inserts its head between the bars of the grill, I would
like some magical butterflies to pop out from underneath said grill. You
use a particle system for this effect. A particle system is a technique that
reproduces chaotic systems by emitting a lot of objects and controlling
them with physics-derived forces. Not only are particle systems pretty
advanced stuff, but they are available only for NukeX owners; as a
consequence, this book doesn’t cover them. Instead, you’ll be loading
them as a script later on.
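Even without NukeX, the idea behind a particle system is easy to see in a few lines of code. This is a conceptual Python sketch of an emitter plus a gravity force, not Nuke’s implementation:

```python
import random

# Conceptual sketch of a particle system: many objects are emitted
# from one point, then advanced frame by frame under physics-derived
# forces such as gravity.
def simulate(num_particles, frames, gravity=-0.05, seed=1):
    rng = random.Random(seed)
    # Emit every particle from the world center with a random velocity.
    particles = [
        {"pos": [0.0, 0.0, 0.0],
         "vel": [rng.uniform(-0.5, 0.5),
                 rng.uniform(0.5, 1.0),
                 rng.uniform(-0.5, 0.5)]}
        for _ in range(num_particles)
    ]
    for _ in range(frames):
        for p in particles:
            p["vel"][1] += gravity                # the force updates velocity
            for axis in range(3):
                p["pos"][axis] += p["vel"][axis]  # velocity updates position
    return particles
```

Randomized emission plus simple per-frame forces is what makes a cloud of butterflies look chaotic rather than choreographed.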

You’ll know where to place the particle system in the 3D world by turning that 2D image into a 3D object using a world position pass. Bit by bit, I’ll walk you through this process. Fear not. Let’s start with that last bit: world position pass? What’s that?

Using a world position pass to create a 3D object


A world position pass is a pass rendered out of 3D software that doesn’t represent the beauty part of the image; instead, it’s a utility pass. If you think back to Chapter 3, you connected a lot of passes to make up the beauty pass, but you also used a motion vector pass to create motion blur. The motion vector pass is a utility pass, as is the world position pass. The world position pass stores the position of each pixel in world space (in relation to the center of the world at 0, 0, 0). Using this pass and a node called PositionToPoints, you can take every pixel of the beauty render and place it at the correct position in the 3D world.
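In other words, the pass is an image whose channels are read as coordinates. A conceptual Python sketch of the idea (not Nuke’s actual code):

```python
# A sketch of the idea behind PositionToPoints: each pixel's red,
# green, and blue values in the WorldPos pass are read as an X, Y, Z
# position in world space.
def position_pass_to_points(world_pos_rows):
    """world_pos_rows: rows of (r, g, b) tuples from the WorldPos pass."""
    points = []
    for row in world_pos_rows:
        for r, g, b in row:
            points.append((r, g, b))  # R = X, G = Y, B = Z
    return points
```

So a pixel whose channel values happen to be 15.17, 7.67, 1.79 becomes a point at X = 15.17, Y = 7.67, Z = 1.79.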

1. In the Viewer, switch to viewing the WorldPos channel set (FIGURE 9.12).


FIGURE 9.12 Viewing other channel sets in the Viewer

2. Hover your mouse pointer in the Viewer and look at the bottom right to
see the values of pixels there (FIGURE 9.13).

FIGURE 9.13 Looking at pixel values in the Viewer

The position pass produces a very colorful image, but the colors you see have little to do with what this pass actually is. What matters here are the values, as they represent a position in 3D space. For example, take a look at the pixel my mouse pointer is hovering over in Figure 9.13; it is located at X = 15.17, Y = 7.67, and Z = 1.79.

Another pass, a little less important in this case but worth mentioning, is called normals. The normals pass is very similar to the world position pass in that it describes the pixels in space, only instead of position it stores the angle each pixel is facing, in world coordinates. Let’s use this valuable information.

3. Switch back to viewing the RGBA channels in the Viewer.

4. With Read1 selected in the DAG, insert a PositionToPoints node from the 3D/Geometry toolbox.

The Viewer pops over to the 3D view since you are no longer looking at an
image. The image you had is now a 3D object, or it will be, once you
change a couple of properties.

5. In the PositionToPoints Properties panel, choose WorldPos from the Surface Point drop-down menu. For Surface Normals, choose Normals.

You can see immediately that some pixels appeared in the Viewer. These
are not really pixels but points—small 2D planes, like pieces of paper,
floating in 3D space (FIGURE 9.14).

FIGURE 9.14 The position of the pixels is laid out in 3D space.

6. Move around your 3D viewer a little to see what you make of these
points.

If you’re already comfortable controlling the 3D space, you can probably see the whole image pretty well. But in any case, this image is supposed to be seen from a specific camera.
Importing a camera
For this exercise, you need the 3D camera to act like the camera used to shoot this footage. The artist who made this shot had a camera in the software he used to produce this element. He exported that camera and has supplied you with a file to import. How convenient. You probably want to learn how to import it. Here goes.

First you need a 3D setup. Luckily you have a shortcut you can use to
produce one.

1. With nothing selected in the DAG, choose 3D/Setup from the ToolSets
toolbox (FIGURE 9.15).

FIGURE 9.15 You just saved yourself a whole lot of clicking


by using this ToolSet.

2. Double-click Camera1 to display its Properties panel at the top of the Properties Bin.

The top-most property is a little check box called Read From File. Selecting it enables the File tab, which is where you read the camera location and other properties from a file (FIGURE 9.16).

FIGURE 9.16 Clicking this box enables the File tab.

3. Select the Read From File check box.

4. Switch to the File tab in Camera1’s Properties panel (FIGURE 9.17).

FIGURE 9.17 Camera1’s File tab

5. To import the camera file, click the folder icon to the right of the file
property to load the File Browser.

6. In the File Browser, navigate to the chapter09 directory and bring in Camera.fbx.

7. A dialog box asks if it’s OK to destroy the current camera data; click
Yes.

An fbx file can hold the properties of many cameras in various takes. From the two drop-down menus, you choose a take name and a node name. These menus usually include a lot of default presets that you wouldn’t normally use, depending on which application exported the camera file. They should already be set up correctly, but just to be on the safe side, take these steps.

8. From the Take Name drop-down menu, choose Take 001.

9. From the Node name drop-down menu, choose S26A_Camera (FIGURE 9.18).

FIGURE 9.18 This is how your File tab should look at this point.
10. Switch to the Camera tab.

You can see that the Translate and Rotate Input fields are all filled with
values, and that they are all grayed out—unavailable for changing. This is
how the animation is carried across. If the file on disk changes, the values
here change.

The camera has disappeared from the Viewer (or at least it did on mine).
To see it again, tell the Viewer to frame the selected element.

11. Select Camera1 in the Node Graph again.

12. Hover your mouse pointer over the Viewer and press the F key to
frame the Viewer to the selected element (FIGURE 9.19).

FIGURE 9.19 The camera after the import

The great thing here is that you can already see the relationship between
the camera and the points created with the PositionToPoints node. Points
exist only in the area inside the camera’s field of view.

Now let’s view the scene from the imported camera.

13. From the Viewer Camera drop-down at the top right of the Viewer,
where it says Default, as you can see in FIGURE 9.20, choose Camera1.

FIGURE 9.20 Choosing a camera to view the 3D space from

What’s this? Why are you seeing the 2D image all of a sudden? Well,
you’re not. You’re seeing the points from the correct angle—the angle they
were originally shot from. So now the image makes sense.

14. Move around the viewer a little so you can get a sense of what you’re
looking at. When you’re done, choose Camera1 from the drop-down menu
again to reset the camera (FIGURE 9.21).

FIGURE 9.21 All these points together make up the image.

There are a lot of holes in this object because only areas that the camera sees can be re-created with this method. But this is OK. You are using this only to indicate location.

Next up—the butterflies.

MANIPULATING 3D NODE TREES IN 3D SPACE


As I mentioned earlier, the Butterflies element is a particle system element generated with NukeX’s particle nodes; it is quite advanced and not covered in this book. However, I still want you to have the freedom to use this as a 3D element, so you’ll use the element in its original form—a Nuke script.

Importing the particle system


The particle system is saved as another Nuke script, and you can merge it
with this script by using the Import Script command in the File menu.
1. From the File menu, choose Import Script.

2. Navigate to the chapter09 folder and double-click Particles.nk.

This book is written with a regular Nuke license in mind. The particle
system in Nuke is part of the NukeX series of nodes. The nice thing about
it is that even if you have only a Nuke license, you can still use NukeX
features; you just can’t change their properties.

3. If you are running Nuke, you get a message telling you that you are
using NukeX tools. Click OK to make it go away (FIGURE 9.22).

FIGURE 9.22 Running NukeX tools in Nuke generates this message.

If you are running NukeX, you will not get a message, and you can change
the settings of any of the particle nodes later.

The group of nodes you are importing is automatically selected when you
bring it in. This makes it easy for you to move it to a convenient location.

4. Move the freshly imported nodes somewhere that doesn’t coincide with
the location of other nodes (FIGURE 9.23).

FIGURE 9.23 The tree you just brought in should look like
this.

The tree you brought in is missing one key element, and that’s the particle
image itself. Bring it in now and connect it.

5. From the chapter09 folder, bring in butterfly.png with a Read node.

6. At the very top of the imported tree, above two Crop nodes, is a dot
node. Connect Read2’s output to the input of this dot (FIGURE 9.24).


FIGURE 9.24 This is how you should connect the butterfly
Read node.

7. Clear all Properties panels from the Properties Bin.

Let’s follow this tree down from Read2 so you understand the basics of it. As you read, follow the tree and view each node I mention in the Viewer.

• The two Crop nodes under Read2 split the butterfly into two—the left half and the right half.

• Card2 and Card3 are two flat 3D surfaces. They represent a paper-thin object in 3D space. The butterfly images (left and right) texture those surfaces. There’s an expression on one of the Card nodes that the other Card node refers to, making the two halves rotate so the butterfly appears to move.

• Scene2 connects the two parts of the butterfly to TransformGeo1, which is a 3D transformation node that rotates the combination of the two Card nodes to better face the camera.

8. Double-click TransformGeo1, then press 1 to view it in the Viewer. Hover over the Viewer and press F to frame the Viewer to your selection.

9. Click Play. When you’re done watching the butterfly flutter, press Stop.

TIP

Clearing the Properties Bin often when working with 3D nodes is a good idea because it also clears the Viewer of unneeded on-screen controls, which in this case is all your geometry and multiple cameras.

10. Clear the Properties Bin again.

• Card1 is the source of the particles being emitted. In particle speak it is the emitter or gun.

• The nodes below Card1, and all the rest of the nodes in this tree that I
haven’t discussed, are either something you have already learned about or
part of the actual particle system, which is too expansive to discuss here.

• ParticleSettings1 is the last node in this tree. It is, then, the output of this
tree.

11. Select ParticleSettings1 in the DAG and press 1 on the keyboard to view it.

12. Click Play in the Viewer (FIGURE 9.25).


FIGURE 9.25 The output of the imported particle system

What you’re seeing here is the output of the particle system. It’s important to note two things: The butterflies emit from the world center, and they emit on frame 1.

Let’s move the particle system into place, both in position and time.

Moving 3D objects in space


The TransformGeo node mentioned in the previous section is just like a
2D Transform node you used earlier in this book (in Chapter 2, for
example). The only difference is that it’s designed to move objects, or
geometry, rather than 2D images. This also means it moves not only along
two axes like Transform does, but along three.

1. Select ParticleSettings1 and insert a TransformGeo node after it from the 3D > Modify toolbox.

You now have an axis that moves the whole particle system in 3D space.
The controls are the same as those for the camera you practiced earlier.
Use the axes to move, and hold down Ctrl/Cmd to rotate.

But where should you move the butterflies to?

2. Double-click PositionToPoints1 and Camera1.

You double-click the first one so you can see it in the Viewer. You double-
click the second one so you can access the camera in the Viewer Camera
drop-down menu.

3. From the Viewer Camera drop-down menu at the top right of the
Viewer, choose Camera1.

4. Select TransformGeo2 in the DAG.

You can now see the axis controlling the particle system in the Viewer in
relation to the creature (FIGURE 9.26). I want you to place the particle
system exactly where the creature’s head is when it’s in between the bars
of the grill.

FIGURE 9.26 The red-green-blue axis is visible behind the creature.

5. Go to frame 40 in the Timebar.

At this frame, the creature’s head is inserted between the bars of the grill. You can position TransformGeo2 now.

6. Using the on-screen controls, move TransformGeo2 to where the creature’s head is.

I ended up with X = 2.2 and Z = 11.7. This brings TransformGeo2 and the particle system that’s attached to it to where the head is. However, the butterflies should be emitting from under the grill, so let’s move the particle system down too.

7. Using TransformGeo2’s Properties panel, change the Y Translate to –2.

8. Go to frame 90 in the Timebar.


The butterflies are tiny. Can you even see them? Let’s make them a lot
bigger.

9. Go back to frame 40 in the Timebar.

10. In TransformGeo2’s Properties panel, change Uniform Scale to 5.

Now you can see them (FIGURE 9.27).

FIGURE 9.27 The butterflies are at the correct location and are the right size now.

Even after all this, the timing is all wrong. The butterflies are being emitted on frame 1, but the creature pokes its head in only at frame 30 or so.

Moving 3D objects in time


All time-related nodes are located in the Time toolbox, surprisingly
enough. You used a few of them in Chapter 8. Note that some of these
nodes don’t just operate on clips of images, they also operate on geometry.

The nodes that operate on geometry are FrameRange, TimeOffset, and FrameHold. Of these three, TimeOffset is the one you’ll find helpful right about now.

1. Select TransformGeo2 in the DAG and insert a TimeOffset node after it from the Time toolbox (FIGURE 9.28).

FIGURE 9.28 TimeOffset is inserted after TransformGeo2.

Because you want something that currently starts at frame 1 to start at around frame 30, you offset the time by, say, 30. It’s as simple as that.
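The frame mapping TimeOffset performs is just a subtraction; a small sketch (the function name is mine, not Nuke’s):

```python
# TimeOffset shifts which input frame is shown at each output frame:
# with a time offset of 30, output frame 31 shows input frame 1.
def source_frame(output_frame, time_offset=30):
    return output_frame - time_offset
```

So frames 1 through 30 of the output now ask for input frames before the start of the emission, which is why nothing appears until around frame 30.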

2. In TimeOffset1’s Properties panel, find the Time Offset property (that
won’t be hard, there’s only one property there) and type 30 in the Input
field.

The butterflies now have the right timing.


Onward to rendering and compositing this!

TURNING 3D OBJECTS INTO 2D PIXELS


Your DAG should have three trees in it now. You can see my setup in FIGURE 9.29. I marked the three trees with Backdrop nodes, which you can find in the toolbox called Other. You can use these nodes to mark areas in the DAG. You don’t need them; in this case, they just help me explain the current state of the node graph.

FIGURE 9.29 Your tree doesn’t have the Backdrop nodes.

The area on the right is the particle system. The area on the left is the
reference object generated with the PositionToPoints node. The purple
area at the bottom is the 3D setup. Having separate trees like this gives
you a lot of freedom to try out different things, but for the purpose of this
project, you need to connect all of them. Start by connecting the particles.

1. Drag the output of TimeOffset1 to the input of Scene1.

2. Select ScanlineRender1 and press 1 to view it in the Viewer. If you’re still looking at the 3D view, press Tab while you hover the mouse pointer in the Viewer to switch to the 2D view.

After all this, you come to the point of the ScanlineRender node. It renders the 3D geometry through a camera, in the same way that external 3D software renders, into a 2D image that can be manipulated in Nuke with any 2D node, like Merge, Grade, and Blur.
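At the heart of that render is a perspective projection: every 3D point is projected through the camera onto the image plane. Here is a minimal Python sketch of the math; the focal length and aperture numbers are illustrative defaults, not values taken from this scene:

```python
# Perspective projection sketch: for a camera at the origin looking
# down -Z, a point lands on the image plane at a position scaled by
# focal / depth, which is why distant objects appear smaller.
def project(point, focal=50.0, haperture=24.576):
    x, y, z = point
    if z >= 0:
        return None  # behind (or at) the camera; nothing to draw
    scale = focal / haperture
    return (scale * x / -z, scale * y / -z)
```

A point on the camera axis projects to the image center, and points behind the camera never reach the 2D image at all.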

Let’s composite the butterflies on top of the background. Notice that if you hadn’t turned the geometry into a 2D image, you would not have been able to do this.

3. With ScanlineRender1 selected, press M to insert a Merge node.

4. Connect Merge1’s B input to Read1 and view Merge1 in the Viewer.

5. Go back to frame 1 and click Play to watch the particle system as a 2D image (FIGURE 9.30).

FIGURE 9.30 The butterflies as a 2D image, composited over the background

The butterflies do seem to be positioned in the correct place. We need to tweak several things, though. First, the butterflies appear on top of the grill where they shouldn’t be. Also, they are all in front of the creature, while some need to be behind it. Finally, their color is wrong and they don’t feel like part of the image. One by one, let’s fix these issues.

6. Click Stop and go to frame 50.

7. With nothing selected, press P to create a RotoPaint node.

8. Hover the mouse pointer in the Viewer and press V to start drawing a Bézier shape. Draw along the grill as shown in FIGURE 9.31.

FIGURE 9.31 Create a mask like this with the RotoPaint node.

Now use the roto to punch a hole in the butterflies.

9. With RotoPaint1 selected, press M to insert another Merge node.

10. Drag Merge2 onto the pipe between ScanlineRender1 and Merge1. When the pipe highlights, let go.

11. From Merge2’s Operation drop-down menu, choose Stencil.

12. Empty the Properties Bin.

FIGURE 9.32 shows the butterflies being masked out of the bottom grill,
and indeed, they already appear to be coming in from under the floor. The
mask’s edge is a little sharp.

FIGURE 9.32 The butterflies are masked out.

13. Select RotoPaint1 and press B to insert a Blur node.

14. Set Blur1’s value to 3.

That fixed the edge.
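The Stencil operation you chose in step 11 is simple per-pixel math: it multiplies the B input by the inverse of the A input’s alpha. A sketch:

```python
# Merge's Stencil operation: output = B * (1 - a), where a is the
# A input's alpha. Blurring the mask first, as you just did, softens
# the hole's edge instead of leaving a hard cut.
def stencil(b_pixel, a_alpha):
    return tuple(c * (1.0 - a_alpha) for c in b_pixel)
```

Where the roto alpha is 1 the butterflies are cut out completely, where it is 0 they are untouched, and the blurred in-between values produce the soft edge.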

Your tree should look something like FIGURE 9.33—if you’re as organized as I am, that is.

FIGURE 9.33 Make sure your tree looks like this—and keep
it organized!
APPLYING MATERIALS TO OBJECTS
Making some of the butterflies fly in front of the creature and some behind is not as easy as masking the grill. You would need to know which butterflies are in front and which are behind, and then mask only the ones behind. That’s not worth attempting because it can potentially be a lot of work, and you might not get it right. And why should you? There are easier ways.

There is already an object in the scene that sits where the creature is and
can be used as a mask—the creature made out of points. Let’s see it in the
render.

1. Drag an output from PositionToPoints1 and connect it to Scene1 (FIGURE 9.34).

FIGURE 9.34 Connecting another object to the scene

Now that PositionToPoints1 is connected to Scene1, both the point-based creature object and the particle system live in the same world and will be rendered together through the ScanlineRender node.

2. Select Read1 and press 2 to view it in the Viewer’s second input.

3. Select ScanlineRender1 and press 1 to view it in the Viewer’s first input.

4. Switch between the inputs 1 and 2 by hovering the mouse pointer in the
Viewer and pressing 1 and 2.

When you do this, the good thing is that some of the butterflies do disappear behind the creature’s head. The bad thing is that everything becomes fatter in input 1. This is because the PositionToPoints node doesn’t produce precise geometry. The points created aren’t the right size and never will be. You can get closer by using PositionToPoints’ Point Size property (FIGURE 9.35), but PositionToPoints simply isn’t designed or intended for this. You should still use the untreated Read1 as the background, but you can use PositionToPoints1 as a matte in 3D space.

FIGURE 9.35 PositionToPoints’ Point Size property

An object usually has a material assigned to it. The Card nodes making up
the butterflies have the butterfly images assigned to them as a material;
the butterfly image is connected to the Img input of the Card nodes
(FIGURE 9.36). If you did not assign a material, you would still have the
cards and be able to see them in the 3D viewer, but nothing would render
out through the ScanlineRender node.


FIGURE 9.36 Most Geometry has an Img input that
materials connect to.

Aside from simple materials such as images, some more complex materials have more control over how they react to light; they can be shinier or emit light, and they can control what color their shadows have, for example. You can find all of these materials in 3D > Shader (shader is another term for material).

One of these materials is called FillMat. The point of this material, or shader, is to turn the geometry it’s assigned to into a kind of mask for other geometry in the scene. Well, that’s exactly what we need.

Because PositionToPoints doesn’t have an Img input, you use another node from the 3D > Shader toolbox called ApplyMaterial.

5. Select PositionToPoints1 and, from the 3D > Shader toolbox, choose ApplyMaterial.

6. With nothing selected, create a FillMat from 3D > Shader.

7. Connect FillMat1’s output to ApplyMaterial1’s Mat input (FIGURE 9.37).

FIGURE 9.37 Connecting a material using the ApplyMaterial node

8. View ScanlineRender1 in the Viewer (FIGURE 9.38).


FIGURE 9.38 The creature now masks the butterflies that
are behind it.

You are no longer seeing the creature in the output of ScanlineRender1. Instead you now have the butterflies being masked by the creature where appropriate, which is what you were after.

9. View Merge1 in the Viewer (FIGURE 9.39).

FIGURE 9.39 See how the butterflies exist only where the
creature doesn’t?

Finishing touches
Almost everything is solved; we have just a couple of simple tweaks left to perform. The color of the butterflies is not right, and neither is how sharp they look.

1. Insert a Grade node between ScanlineRender1 and Merge2.

2. Because the output of ScanlineRender1 is premultiplied, choose your method of dealing with it. I choose to pick RGBA.alpha from the (Un)premult By drop-down menu.

3. Change the color of the butterfly so that it’s warmer and maybe a little
brighter. I ended up changing just Gain to these values: R = 1.425, G =
1.18, B = 0.85.
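Why (Un)premult By matters: in a premultiplied image the RGB values have already been multiplied by alpha, so the Grade node unpremultiplies, grades, and premultiplies again. For a pure gain the order happens to make no difference, but for an additive adjustment such as offset it does. A sketch of that round trip (the function name is mine, not Nuke’s):

```python
# Applying an additive offset the (Un)premult By way: divide by alpha,
# adjust, multiply back. The net effect is that the offset ends up
# scaled by alpha, so fully transparent pixels stay untouched instead
# of being lifted to a solid color.
def offset_premultiplied(pixel, offset):
    r, g, b, a = pixel
    if a == 0:
        return pixel  # nothing to grade in fully transparent pixels
    return tuple((c / a + offset) * a for c in (r, g, b)) + (a,)
```

Grading a premultiplied image without this round trip tends to pollute the transparent areas and break the composite’s edges.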

4. Insert a Blur node after Grade1.

5. In Blur2’s Properties panel, change the Size to 2.

6. Close all Properties panels, click Play in the Viewer, and enjoy your
work (FIGURE 9.40).

FIGURE 9.40 The finished tree

If the render is too slow for you, you might want to render the output to
hard drive instead of waiting by the Viewer. You know how to do that by
now, so you don’t need me to explain. If you’re not sure, refer back to
Chapter 2.

This is essentially it. You should be pretty pleased with yourself! You’ve done some pretty advanced stuff already, and you can see your tree is larger than anything you’ve built until now. You’ve also learned some important building blocks here, mainly how to use the Camera, Scene, and ScanlineRender nodes.
10. Camera Tracking


Nuke 6.0v1 introduced the much-anticipated Camera Tracker.
In Nuke 8.0v1 the Camera Tracker got a serious update. The
Camera Tracker enables Nuke to extract the camera location
and movement from a moving shot. It uses a lot of tracking
points, also called features, which are usually generated
automatically, though manual tracks are possible too. Then,
through calculations of the parallax movement of different
planes, it can extract the camera information, which includes
its location and rotation on every frame as well as its field of
view (parallax is explained in the following note). The tracking
points used to create the camera information form a point
cloud, a representation in 3D space of the location of each of
these points.

The CameraTracker node is available only in NukeX—a more expensive license for Nuke that enables some advanced features. The good thing is that only the processing part of the Camera Tracker is disabled in standard Nuke, so you can run your track on a machine with a NukeX license, but then use it in standard copies of Nuke (FIGURE 10.1).

FIGURE 10.1 The Camera Tracker is a NukeX-only feature.

If you don’t have a NukeX license, please read on anyway because only a
small part of this chapter covers steps you won’t be able to perform.

NOTE

Parallax is the apparent change in the position of objects at different distances that results from changing the camera’s position. If you move your head left and right, you will see that objects close to you are moving faster than things far away from you. This phenomenon is used to figure out distance in camera tracking technology.
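The note above can be sketched numerically. The following is a conceptual illustration of my own (not part of the book’s exercises) showing how apparent motion falls off with distance for a simple pinhole camera; all values are hypothetical:

```python
# For a pinhole camera translating sideways, the apparent image-plane
# shift of a point is proportional to baseline / depth.
def parallax_shift(focal, baseline, depth):
    """Apparent shift (in image units) of a point at `depth` when the
    camera moves sideways by `baseline`. `focal` scales the result."""
    return focal * baseline / depth

near = parallax_shift(35.0, 0.1, 1.0)   # object 1 m from the camera
far = parallax_shift(35.0, 0.1, 10.0)   # object 10 m from the camera
# The near object shifts ten times more than the far one -- this
# difference in apparent motion is what a camera solver exploits.
```

A camera rotating on a tripod has no baseline, which is why a tripod shot produces no parallax and calls for the Rotating Camera setting discussed later in this chapter.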
CALCULATING REFLECTION MOVEMENT USING
CAMERA TRACKING
In Chapter 5’s four-point tracking exercise, you composited a new picture
into a frame and left it with no reflection on the glass. As promised, you
will now add the reflection. To calculate the correct way the reflection will
move on the glass, you need to use a reflective surface at the location of
the frame and have it reflect something—preferably the other wall of that
room. You also need to shoot that surface in the same way the original
shot was taken. For that you need to extract the camera information for
the shot.

Since you’ll be tracking with the CameraTracker node, you need to launch
NukeX rather than standard Nuke. If you don’t have a NukeX license,
finish reading this section with your standard version of Nuke. Read the
next section, “3D Tracking in Nuke,” but do not perform the steps. Then,
continue reading and performing steps from the “Loading a Pre-generated
CameraTracker Node” section. If you do have a NukeX license, do all the
steps in “3D Tracking in Nuke” but do not perform the steps in “Loading a
Pre-generated CameraTracker Node.”

1. Launch NukeX. If you don’t own NukeX, launch standard Nuke.

NOTE

Many employers have NukeX—and they may expect you to have a full understanding of the CameraTracker node—so you will want to read and learn the information in the “3D Tracking in Nuke” section, even if you have only standard Nuke.

2. Open a Nuke script by pressing Ctrl/Cmd-O. Go to your student_files folder and double-click frame_v01.nk (FIGURE 10.2).

FIGURE 10.2 This is the script as you left it.

NOTE

If you don’t save your script as directed, some steps later in the chapter won’t work.

3. Save the script in your student_files directory as frame_v02.nk.

Next, you will use Nuke’s Camera Tracker to extract the camera
information from the scene. Another useful thing that will come out of
this process is the point cloud, which represents locations of several
elements in the shot.

3D TRACKING IN NUKE
Nuke’s Camera Tracker works in three stages:

Track is the actual tracking process. Nuke looks at the image and finds
tracking points according to high-contrast and precisely defined locations ⬆
(such as dots and crosses), much as you would choose tracking points
manually.

Solve solves the 3D motion. Nuke looks at the various tracked points it
has, throws away redundant points that didn’t track well, and, by looking
at the movement, figures out where the camera is and where each tracking
point is located in space.

Create transfers all the gathered information into a variety of different Nuke 3D setups including a Camera node, a Scene node, and a Point Cloud node.

Let’s track the shot.

1. Select Read1. This should be the clip containing the frame on the table.

2. Branch out a CameraTracker node from the 3D toolbox by holding Shift as you create it.

3. Make sure you are viewing the output of the newly created
CameraTracker1 (FIGURE 10.3).

FIGURE 10.3 A newly branched CameraTracker node

Tracking features
Your CameraTracker node is ready, but you can make several adjustments
to make for a better track.

1. Switch to the Settings tab in CameraTracker1’s Properties panel (FIGURE 10.4).

FIGURE 10.4 The Settings tab

Several properties in this tab can help you achieve a better track. Here are
some important ones:

• Number of Features: The number of automatic tracking points created by the tracker. If you increase this, reduce Feature Separation.

• Detection Threshold: The higher the number, the more precise the
tracker has to be in finding trackable points.

• Feature Separation: The higher the number, the farther apart the
tracking features have to be. If you increase Number of Features, reduce
this value.

• Preview Features: This check box displays the points that will be used for tracking in the Viewer. It’s handy for previewing the effect of the properties described above.

This shot can be tracked well without changing anything, but for added accuracy, increase the number of features for tracking. Make sure you preview this change first.

2. Click to turn on Preview Features.

3. Change the Number of Features property to 300 (FIGURE 10.5).


FIGURE 10.5 More features to track

See how more features were added? More features means better accuracy,
but a slower operation. You don’t mind waiting a few more seconds, do
you?

4. Turn off Preview Features.

5. Switch back to the CameraTracker tab (FIGURE 10.6).

FIGURE 10.6 The CameraTracker tab

This tab has several properties that improve the accuracy of the track by telling Nuke more about the camera and how it was used. The more Nuke knows, the better the track.

The first property of importance to you currently is called Mask. It controls CameraTracker’s second input, the Mask input.

3D tracking works only on stationary objects. Objects that are moving will
distract the solving process, as their movement is not generated from the
camera’s movement alone. If there’s a moving element in the scene, you
should create a roto for it, feed it into the CameraTracker’s Mask input,
and then use this property to tell the CameraTracker node which channel
to use as a mask. If you’re not sure where CameraTracker’s Mask input is,
drag the little arrow on the side of the node (FIGURE 10.7).

FIGURE 10.7 CameraTracker’s Mask input revealed

Here are four more properties worth mentioning:

• Camera Motion: This setting tells Nuke how the camera moved: Free Camera, meaning a free-moving (for example, handheld) camera; Rotating Camera, meaning it’s just rotating on a tripod; Linear Motion, for cameras moving along a straight, linear path; or Planar Motion, for cameras that have a flat path and move in a two-dimensional plane only.

• Lens Distortion: All lenses create distortion in image areas farther away from the center of the screen. The wider the lens, the more distortion appears. Distortion has a big effect on tracking, as tracking a distorted background doesn’t give a true representation of the world. There are ways to remove distortion. This property lets you tell Nuke whether the source is distorted or not.

• Focal Length: Here you can tell Nuke what lens was used on the shoot, if you know it. You can give values as precise or as vague as you know them. You’ll do this in a moment.

• Film Back Size: This is the size of the sensor used to capture the
image. The sensor can be the size of the camera’s back in a film camera or
of the chip in a digital camera. If exact scale doesn’t matter in the end
result of your track—that is, if you don’t need to match to a specific real-
world scale—then a ratio such as 4:3 or 2:1 will be enough, depending on
your format.

The camera in the shot is a free camera, so you’ll leave the setting as is.
It’s not always apparent whether the camera is free—in fact, the best way
to know is by being on set and writing down whether it is or not. However,
in this case it is apparent the camera is moving because there is parallax
in the shot.

Nothing was done to remove lens distortion from the image, and so the
default setting of No Lens Distortion needs to change.

6. Choose Unknown Lens from the Lens Distortion drop-down menu.

As for the focal length, just by looking at the clip you can tell one thing:
There is no zoom change, and so the default choice of Unknown Constant
is correct.

The film back is another unknown, so leave it as it is.

For now, you will simply run the tracking process and hope for the best.

7. Click the Track button at the bottom of the Properties panel (FIGURE
10.8).

FIGURE 10.8 Click the Track button to start the tracking process.

The CameraTracker node automatically tracks the length of the Input clip.
It’s doing it right now, as you can see from the progress bar that appears.
If you want to change the tracking length, change the property called
Range in the CameraTracker tab (FIGURE 10.9).

FIGURE 10.9 The progress bar showing the forward tracking stage

The Camera Tracker will track forward from the start; when it reaches the
end, it will track backward and then stop. You can see all the tracking
points, or features, the Camera Tracker tracked (FIGURE 10.10).

FIGURE 10.10 Tracked features appear as orange crosses.

Solving the camera

The next stage is to “solve” the camera—to turn all these tracked features
into camera information. A good way of approaching this stage is to first
solve, see the result, and refine. You can do this as many times as you like.

1. Click the Solve button under the Track button (FIGURE 10.11).
FIGURE 10.11 The Solve button

Nuke is now processing the information and trying to re-create the real-
world situation out of the features it tracked. The progress bar indicates
how long this is taking.

It’s done. Good for you (FIGURE 10.12).

FIGURE 10.12 Solved tracks are presented in green, amber, and red.

The tracks (features) changed colors to represent the tracks’ status. Amber, as before, represents unsolved tracks. You can see that the majority of the tracks are green, representing tracks that were used for the solve. The red tracks were rejected by the solve because they didn’t make sense with the majority of the other tracks.

The Error property field returns the overall level of error produced by the solve operation. A value of 1 or above is considered bad (FIGURE 10.13). I have an error of 0.76 (and so should you), which is not bad at all. But let’s see ways to reduce this further.

FIGURE 10.13 The Error field shows the overall level of error in the solve.
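This Error value is a root-mean-square (RMS) of per-feature reprojection errors: how far each solved 3D point lands from its 2D track when projected back through the solved camera. A minimal sketch of that arithmetic, using made-up residuals rather than this shot’s data:

```python
import math

def rms_error(residuals):
    """Root-mean-square of per-feature reprojection errors (in pixels)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical per-track residuals, in pixels; values around or below
# 1 pixel generally indicate a healthy solve.
overall = rms_error([0.4, 0.9, 0.6, 1.1, 0.7])
```

Because the residuals are squared before averaging, a handful of very bad tracks can drag the whole number up, which is why deleting them (as you do next) improves the solve so much.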

2. Click the AutoTracks tab in order to switch to it (FIGURE 10.14).

FIGURE 10.14 The AutoTracks tab


The AutoTracks tab gives information about the tracked features
produced. Several different curves represent different aspects of the
tracks. Some are very technical and some are easy to explain. I explain
what I can.

• Track Len – Min: This curve shows the minimum track length at each frame. In 3D tracking the length of a track doesn’t have to span the whole clip. It can be only a few frames, and so it will contribute only during that length. However, really short tracks don’t add much and can actually damage a good solve. This curve helps you spot them.

• Error – (Min; RMS; Track; Max): These four curves show different
levels of error. The difference between them is difficult to explain;
however, reducing them helps get a better solve.

• Min Length; Max Track Error; Max Error: These three curves
show the setting chosen in the three properties of the same name that can
be found just under the curves window (FIGURE 10.15).

FIGURE 10.15 These three properties are also presented as curves in the curves window above.

Let’s use the curves and properties to refine our solve and reduce the
overall level of error.

3. Click to select Track Len – Min in the curve list, then Ctrl/Cmd-click
the Min Length curve.

Now you should have both these tracks selected and shown in the curve
window.

4. Click the curve window, then press F to frame the two selected curves.

The squiggly curve you see in FIGURE 10.16 (presented in pink) shows
the minimum length of track in every frame. The straight curve shown in
green at the bottom is the property called Min Length.

FIGURE 10.16 The Track Len – Min curve in pink and Min
Length in green.

5. While watching the Viewer and the curve window, use the slider to change the Min Length property to 13.

What you are doing is chopping off tracks that are too short to be
considered. As you are climbing up the squiggly curve in the curve
window, more tracks are turning red in the Viewer, meaning they will no
longer be used for solving.

6. Click to select Error – RMS in the curve list, then Ctrl/Cmd-click the
Max Track Error curve.

7. Click the curve window, then press F to frame the two selected curves.

Here you see one level of error in the solve, and again you use the
corresponding slider to chop off any features that damage the solve.

8. Bring the Max Track Error slider down to 0.5 and look at the Viewer (FIGURE 10.17).
FIGURE 10.17 A lot of features have turned red and won’t
be considered in the solve.

A lot of points turned red. Too many, in fact. If you keep going like this,
you won’t have any features left and will get a very bad solve indeed.

9. Click the curve window, then press F again to reframe the window.

10. Change the Max Track Error field to 0.91. This marks only the peak at
the end of the clip as unwanted, leaving plenty of features to use for
solving (FIGURE 10.18).

FIGURE 10.18 Anything above the purple line will not be used.

Do this again for the Error – Max curve and the Max Error property.
Again, you want to chop off only the peaks.

11. Click to select Error – Max in the curve list, then Ctrl/Cmd-click the
Max Error curve (FIGURE 10.19).

FIGURE 10.19 Selecting curves in the curves list

12. Click the curve window, then press F to frame the two selected curves
(FIGURE 10.20).


FIGURE 10.20 Find the peaks in the Error – Max curve.

The curve peaks are apparent in the Error – Max curve. Changing the Max Error property to something around 4.7 separates the peaks from the rest of the curve.

13. In the Max Error input field, type 4.7.

So by using the three sliders and the curve window, you determined what
features to remove. You now need to remove these tracks so they won’t be
used at all.
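What the three sliders do amounts to a simple filter over the track list: anything shorter than Min Length or noisier than the error thresholds gets rejected. A sketch with invented track data (length in frames, peak error in pixels), not values from this shot:

```python
# Hypothetical tracks as (length_in_frames, max_error_in_pixels).
tracks = [(40, 0.3), (8, 0.2), (25, 1.2), (30, 0.5), (5, 4.9)]

MIN_LENGTH = 13         # like the Min Length slider
MAX_TRACK_ERROR = 0.91  # like the Max Track Error slider

# Tracks failing either test correspond to the red features in the Viewer.
kept = [t for t in tracks if t[0] >= MIN_LENGTH and t[1] <= MAX_TRACK_ERROR]
rejected = [t for t in tracks if t not in kept]
```

Deleting the rejected tracks and then solving again with only the kept ones is, conceptually, what the Delete Rejected button followed by Solve does.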

14. Click the Delete Rejected button (FIGURE 10.21). In the dialog that
opens, click Yes.

FIGURE 10.21 The three buttons at the bottom of the tab delete unneeded tracks.

All the tracks marked red have now been removed. However, there are
still amber tracks. You should delete those as well.

15. Click the Delete Unsolved button. In the dialog that opens, click Yes.

You should now be left with only green colored tracks (FIGURE 10.22).

FIGURE 10.22 Now all tracks left in the Camera Tracker are green.

All these changes have already done wonders for the Solve Error value. It has dropped to 0.55, and you can see it at the top of the AutoTracks tab. It’s the same property you saw in the CameraTracker tab; it’s just mirrored here for convenience (FIGURE 10.23).

FIGURE 10.23 Solve Error can be found in both the AutoTracks tab and the CameraTracker tab.

In order for all these changes to take effect in the output of the
CameraTracker node, you should solve again.

16. Switch back to the CameraTracker tab.

17. Click Solve, and in the dialog that opens, click Yes.

When you solve again, the progress bar re-emerges as the work is redone using only the tracked features you haven’t deleted. When my solve finished, I ended up with an Error value of 0.53. The work done here reduced the error from 0.76 to 0.53. That’s a big change, and it was well worth the effort.

Creating the scene


The last stage of using the CameraTracker node is creating the scene. This
converts the tracked information and the solved camera into useful Nuke
3D nodes, including potentially a Camera node, a Scene node, and a
CameraTrackerPointCloud node.

1. Make sure you are viewing the CameraTracker tab.

2. From the Export drop-down menu, choose Scene.

3. Click the Create button (FIGURE 10.24).


FIGURE 10.24 Choosing what to export using the Export
drop-down menu and the Create button

You now have three new nodes in the DAG (FIGURE 10.25). You have a
Camera node, which is the main thing you were looking for. You have a
Scene node, which simply connects everything, but is really redundant
and will not be used. And you have a node called
CameraTrackerPointCloud1. This last node serves as a placeholder for the
point cloud data. If you want to export the point cloud so it can be used in
another application as a 3D object, you can connect a WriteGeo node from
the 3D toolbox to it and render that out as an obj file.

FIGURE 10.25 Three new nodes are created.

LOADING A PRE-GENERATED CAMERATRACKER NODE
If you don’t have a NukeX license, try the following steps. In fact, even if
you do have a NukeX license, you should read this section because it
explains the difference between using the Camera Tracker with and
without a NukeX license. If you performed the steps in the previous
section, don’t do the following steps, just read through them.

Not having a NukeX license means you can’t click the three buttons in the CameraTracker tab: Track, Solve, and Create. But you can still use a pre-generated CameraTracker node, as you do in the remainder of this chapter. Here’s how to load the pre-generated CameraTracker node.

1. Delete CameraTracker1, which you created earlier.

2. Click File > Import Script and, from the chapter10 directory, import a
script called CameraTrackerScene.nk.

3. Connect Read1 to the input of the imported new CameraTracker1 node and arrange the imported nodes as in Figure 10.25 in the preceding section.

You are now ready to continue reading the rest of the chapter.

ALIGNING THE SCENE


When the Camera Tracker solves the camera and point cloud, it does it in
a void. It can figure out the relationship between the camera and the point
cloud, but not the relationship to the world. In other words, it doesn’t
know which way is up. Because of this, the next step is usually to define
the ground plane. You do this by finding several features you know are
parallel to the ground and then telling Nuke to align the scene to those.
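Under the hood, setting the ground plane boils down to measuring how far the plane through your selected points is tilted away from "up" and rotating the whole scene by that amount. Here is a conceptual sketch of my own (hypothetical point positions, not Nuke’s actual code) that computes that tilt from three selected features:

```python
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    """Scale a 3D vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Three features assumed to lie on the table (hypothetical values):
p0, p1, p2 = (0.0, 0.0, 0.0), (1.0, 0.2, 0.0), (0.0, 0.1, 1.0)

# Plane normal from two in-plane edge vectors.
edge1 = tuple(b - a for a, b in zip(p0, p1))
edge2 = tuple(b - a for a, b in zip(p0, p2))
n = normalize(cross(edge1, edge2))
if n[1] < 0:  # orient the normal upward
    n = tuple(-c for c in n)

# Angle between the plane normal and the world up axis (0, 1, 0);
# the dot product with (0, 1, 0) is simply n's y component.
cos_a = max(-1.0, min(1.0, n[1]))
tilt_degrees = math.degrees(math.acos(cos_a))
```

Rotating the point cloud and camera together by this tilt is the "jump" you will see in the 3D Viewer when you choose Set To Selected below.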

In this case, the table is parallel to the ground so you will use the table as
the ground plane. Let’s look at the point cloud in the 3D Viewer.

1. If you are not already viewing the 3D scene, hover your mouse pointer
over the Viewer and press the Tab key to switch to the 3D Viewer.

2. Make sure CameraTracker1 and Camera1 are loaded in the Properties Bin by double-clicking each of them.
You should now see points in 3D space. This is the point cloud. If you
don’t see this immediately, you might need to zoom in a little.

3. Zoom in a bit and rotate so you can see the scene from the right
(FIGURE 10.26).

FIGURE 10.26 The point cloud from the side in the 3D Viewer

You can see that the points are spread out in a way that shows something flat at the bottom and something upright on the right. The flat area at the bottom represents the table, while the upright points represent the picture frame. You want to make the table flat, not diagonal as it is at the moment.

You can pick points only in the 2D view, but it’s better to see your
selection in the 3D view—so you need two Viewers. Let’s create a second
Viewer.

4. Choose Split Horizontal from Viewer1’s Content menu (FIGURE 10.27).

FIGURE 10.27 Splitting a pane horizontally

5. In the new pane, choose New Viewer from the Content menu
(FIGURE 10.28).


FIGURE 10.28 Creating a new Viewer in a new pane

You now have a new Viewer and a new Viewer node in the node graph.
You need to connect what you want to see to that new Viewer.

6. Drag Viewer2’s input to CameraTracker1’s output (FIGURE 10.29).

FIGURE 10.29 Connecting a new Viewer

You now have two Viewers, one showing the scene in 2D and the other in
3D (FIGURE 10.30).

FIGURE 10.30 Two Viewers side by side

Let’s select features, or points, on the table. You define the ground plane
by selecting as many features on the table as you can; by double-checking
that they are OK in the 3D Viewer, you eliminate any remaining errors.

7. In Viewer2, which is showing the 2D view, select as many points on the table as possible. You can press Shift to add to or remove from your selection.

Notice that when you’re selecting points in the 2D view, they are
highlighted in the 3D view. Make sure the points you select are part of the
main point cloud area representing the table in the 3D view.

You can see the points I selected in FIGURE 10.31, both in 2D and 3D
view.
FIGURE 10.31 Selecting points from the point cloud

To define what the points represent, use the contextual menu.

8. Right-click/Ctrl-click anywhere in the 2D Viewer. From the contextual menu choose Ground Plane, then Set To Selected (FIGURE 10.32).

FIGURE 10.32 Setting the ground plane’s Z axis

You can see how the point cloud in the 3D Viewer jumped so it’s aligned
with the grid in the 3D Viewer. The camera is now pointing down, which
fits the real-world scenario better too (FIGURE 10.33).

FIGURE 10.33 The 3D scene’s new representation of the real scenario is closer to reality.

The realignment is complete. Let’s tidy up the screen.

9. Choose Restore Layout 1 from the Layout menu in the menu bar.

10. Select Viewer2 in the DAG and delete it.

11. Select Viewer1 in the DAG and press 1 to turn it back on.

12. Clear the Properties Bin.

This removes the extra Viewer and brings the interface back to its default
setting.

This ends the Camera Tracker part of the lesson, but what now? You need
something to use as a reflection.

CREATING THE REFLECTION


To create the reflection, you need something for the frame to reflect.
Unfortunately I don’t have a shot of the back of this room to give you—in
real-world terms, let’s say the production didn’t have time to shoot it for
you. What I do have, though, is a panorama of a swimming pool on a
beach. True, it’s of an outside scene with bright light and palm trees, not
exactly an interior, but it will have to do.

This is a complete Nuke script of its own. You will now load in this script
and use its output as a reflection map.

1. Save your script.



2. Make sure you navigate the DAG so there is a lot of empty space in the
middle of the DAG window.

This will be invaluable when you bring in the panorama script. If you
don’t make sure this space is free, the newly imported script will overlay
the current nodes in the DAG, and you’ll have one big mess.

3. Click File > Import Script.

4. Navigate to the chapter10 directory and bring in Pan_Tile_v01.nk. Don’t click anything else!

When you import another script, all the nodes that came with it are
selected automatically, which makes it easy to move them.

5. Drag the newly imported nodes to the right of the rest of the nodes
(FIGURE 10.34).

FIGURE 10.34 Place the imported nodes to the right of the existing nodes.

I would love to explain how to go about creating a panorama like this, but there’s no room in the book. One of the great things about node-based compositing applications, however, is that looking through a tree and understanding what happened in it is a breeze. I encourage you to have a look at the way this panorama is assembled.

ScanlineRender nodes
Reflection maps tend to be full panoramic images, usually 360 degrees.
That’s exactly what you have in this imported panorama. It consists of 15
still frames shot on location with a 24-degree rotation from frame to
frame. All these frames are sitting in 3D and are forming a rough circle,
producing a 360-degree panorama.

However, this is only a panorama in the 3D world, not the 2D one. You
need to convert this into a 2D panorama for a reason explained later.

Nuke’s ScanlineRender node usually outputs whatever the camera that’s connected to it sees, but its Projection Mode property’s drop-down menu (which sets the type of output mode) has other options. One of those is called Spherical, which is what you are after. Let’s connect a ScanlineRender node to the panorama. But first let’s look at the panorama in the 3D Viewer.

1. Double-click Scene2 (the Scene node at the end of the panorama script
you just brought in).

2. Hover your mouse pointer over the Viewer and press the Tab key to
switch to the 3D Viewer. If you are already in the 3D Viewer, there’s no
need to do this.

3. Navigate back and up in the 3D Viewer by using the + and – keys and by Alt/Opt-dragging (FIGURE 10.35).

FIGURE 10.35 The panorama is spread out in the 3D Viewer.

This panorama is made out of the 15 frames textured on 15 cards spread around in a circle.

4. Switch back to the 2D Viewer.


Now for the ScanlineRender node.

5. Select Scene2 (the Scene node connecting our 15 Card nodes) and
insert a new ScanlineRender node from the 3D toolbox.
6. In ScanlineRender1’s Properties panel, change the Projection Mode drop-down menu from Render Camera to Spherical (FIGURE 10.36).

FIGURE 10.36 The Projection Mode setting controls what kind of render the ScanlineRender node outputs.

7. View ScanlineRender1. Make sure you’re viewing in 2D (FIGURE 10.37).

FIGURE 10.37 The panorama in all its glory

Now you can see the whole panorama as a 2D image. There’s no need to
connect a camera to the ScanlineRender this time, as the camera has
absolutely no meaning. You don’t want to look at the scene from a specific
angle—you just want to look at it as a panorama. However, you do need to
input a background to the ScanlineRender, as spherical maps are a ratio
of 2:1.
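The 2:1 ratio comes from the geometry of a spherical (lat-long) map: longitude covers 360 degrees across the width while latitude covers only 180 degrees down the height. As an illustration of my own (not Nuke’s internals), here is how a 3D viewing direction maps onto such an image:

```python
import math

def dir_to_latlong(direction, width=2000, height=1000):
    """Map a unit 3D direction onto a 2:1 equirectangular image."""
    x, y, z = direction
    lon = math.atan2(x, -z)                  # -pi..pi around the up axis
    lat = math.asin(max(-1.0, min(1.0, y)))  # -pi/2..pi/2
    u = (lon / (2 * math.pi) + 0.5) * width  # 360 degrees across the width
    v = (0.5 - lat / math.pi) * height       # only 180 degrees of height
    return u, v

# Looking straight ahead (down -Z) lands in the center of the map:
u, v = dir_to_latlong((0.0, 0.0, -1.0))
```

Halving either dimension’s angular range relative to the other is why a non-2:1 resolution would squash or stretch the reflection later on.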

8. With no node selected, create a Reformat node from the Transform toolbox. You can also press the Tab key and start typing its name.

Usually you use the Reformat node to change the resolution of an image.
For example, if you have an image that’s full HD (1920×1080) and you
want it to be a 720p format (1280×720), this is the node to use. It can also
change the format of an image beyond just resolution, can change its pixel
aspect ratio, and, depending on the properties, can crop an image (though
that’s rarely used).

In this case, using a Reformat node is an easy way to specify resolution. You don’t really need a background image for the ScanlineRender node, you just need to enter a resolution, and so a Reformat node is perfect.

9. Choose To Box from the Type drop-down menu on Reformat1’s Properties panel. This option lets you enter width and height values without having to create a new format.

10. Select the Force This Shape check box to enable both the Width and
Height fields. Otherwise you will be able to specify only Width; Height
will be calculated with aspect ratio maintained.

11. Enter 2000 in the Width field and 1000 in the Height field.

12. Connect ScanlineRender1’s bg input to Reformat1 (FIGURE 10.38).


FIGURE 10.38 The Reformat node determines the
resolution of ScanlineRender1’s output.

The resolution of the reflection map is now correct. Using a different aspect ratio would have eventually resulted in a squashed- or stretched-looking reflection.

Creating the reflective surface


You need a flat surface to reflect on. A flat surface is sufficient to mimic
the front glass on the picture frame. You will use a Card node for that.
One way to create such a surface is to create a Card and then manually
position it where the frame is located according to the point cloud.
However, there is a simpler way:

1. Clear the Properties Bin.

2. Double-click CameraTracker1 to open its Properties panel in the Properties Bin.

3. Press the 1 key to view CameraTracker1 in the Viewer.

4. Select all the points inside the picture frame. You can do this by
marqueeing them or Shift-clicking them.

5. From the Viewer’s contextual menu, choose Create, and then Card
(FIGURE 10.39). (Right-click/Ctrl-click to display the contextual
menu.)

FIGURE 10.39 Creating a Card at a point cloud location.

The Viewer now switches to the 3D view (if it doesn’t, you can press Tab to do so) and you can see that a Card was created at the average location of the selected features in the point cloud. It is placed well and at the correct angle. This saves you a lot of work (FIGURE 10.40).

FIGURE 10.40 The newly created Card is exactly at the average of your selected points.
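The "average of your selected points" is simply their centroid. A tiny sketch with hypothetical point-cloud positions (not values from this scene):

```python
# Hypothetical picture-frame features from the point cloud, as (x, y, z):
points = [(0.9, 1.4, -2.1), (1.1, 1.6, -2.0), (1.0, 1.5, -1.9)]

# The new Card's translation is the per-axis mean of the selection.
centroid = tuple(sum(axis) / len(points) for axis in zip(*points))
```

Selecting more features simply averages out the noise in any individual point, which is why grabbing all the points inside the frame gives a better placement than picking one.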

In the DAG, you now have a new Card node that should be called Card16. This is the Card the CameraTracker node dutifully created for us (FIGURE 10.41).
FIGURE 10.41 The new Card node is created at the center
of the DAG.

You now have practically all the elements you need: a camera, a reflective
surface, and something for it to reflect. Time to connect it all up. Only two
pieces to the jigsaw puzzle are missing; they are discussed in the next
section.

Environment light and specular material


Nuke’s ScanlineRender node isn’t a raytracing renderer and so it can’t
generate reflections. Raytracing means that light can bounce not only
from its source onto a surface and to the camera, but also from its source
onto a surface, then onto another surface, and onto the camera. So even
though you have a surface and a reflection map, you can’t tell the surface
to mirror the reflection map.

What Nuke does have is a light source called Environment, which shines light, colored by an input image instead of a single color, onto surfaces. These surfaces need strong specular material characteristics. Specular describes surfaces (or parts of surfaces) that reflect a light source directly toward the camera. A specular material can also be generated inside Nuke.
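The specular bounce itself follows the standard mirror-reflection formula r = d − 2(d · n)n, where d is the incoming direction and n the unit surface normal. A small sketch of that math (an illustration of the principle, not Nuke’s shader code):

```python
def reflect(d, n):
    """Mirror direction `d` about unit surface normal `n`:
    r = d - 2 * (d . n) * n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

# A ray heading straight into a surface facing +Z bounces straight back:
straight = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))

# An angled ray keeps its sideways component but flips in Z:
angled = reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0))
```

The Environment light uses the reflected direction to look up a color in its input image, which is why it can stand in for a true raytraced reflection here.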

Let’s connect the whole thing up. First you’ll need to create the
Environment light and connect the panorama to it, then connect
everything up with a Scene node and finally another ScanlineRender
node.

1. Select ScanlineRender1 and, from the 3D/Lights toolbox, insert an Environment node.

When you do so, the Environment node projects the panorama on every
piece of geometry that has a strong specular material. You need to add a
specular shader to your Card16 node.

2. With nothing selected, create a Specular node from the 3D/Shader toolbox.

3. Connect Card16’s input to the output of Specular1.

4. Set the white property in Specular1’s Properties panel to 1.

By connecting Specular1’s output to Card16’s input, you tell Card16 to have that texture or shader. Changing the white value of our Specular1 shader to 1 ensures full specularity, hence full reflectivity. This lets you control the strength of the reflection later on.

5. Select Card16 and Environment1 by marqueeing or Shift-clicking them; then, create a Scene node from the 3D toolbox.

The newly created Scene3 node connects the card and the Environment
light so that they work together (FIGURE 10.42).


FIGURE 10.42 Another Scene node to connect the
Environment light and the shaded Card16

Now let’s add a ScanlineRender node to render this thing through a camera. Yes, this is when you connect the camera generated in the beginning of the chapter to the tree and use it.

6. Select Camera1 and Scene3, then press the Tab key and type Scan. Use the Down Arrow key until you reach ScanlineRender, then press Shift-Enter to branch another ScanlineRender node (FIGURE 10.43).

FIGURE 10.43 Connecting up all the elements and rendering the 3D elements

7. View the newly created ScanlineRender2 by selecting it and pressing the 1 key. Make sure you’re viewing in 2D mode.

8. Empty the Properties Bin, then click Play on the timeline.

What you see now is the reflected surface showing the part of the
panorama that is in front of it. You can choose another part of the
panorama if you want, but first make the render a little quicker.

9. Stop the playback and go back to frame 1.

10. View the output of ScanlineRender1. That’s the one showing the
panorama (FIGURE 10.44).

FIGURE 10.44 The panorama image

As you can see in front of you, the bounding box, representing the part of
the image that is usable, is all the way around the image—even though a
good 66% of the image is black and unusable. You will make the reflection
render faster by telling Nuke that the black areas are not needed.

11. With ScanlineRender1 selected, insert a Crop node from the Transform toolbox.

12. Move the top and bottom crop marks in the Viewer so they are sitting
on the top and bottom edges of the actual panorama (FIGURE 10.45).
This will make for a more optimized (faster) tree.

FIGURE 10.45 Changing the bounding box for a more


optimized tree

13. View ScanlineRender2 in the Viewer.

To choose a different part of the panorama, you need some kind of
Transform node. Because the panorama before the ScanlineRender node is
just geometry—a collection of Card nodes, not an image—you need the
TransformGeo node you used in Chapter 9. Because you want to rotate all
the Card nodes together, you can add the TransformGeo node after the
Scene node that connects them all—Scene2.

14. Select Scene2 and insert a TransformGeo node from the 3D/Modify
toolbox.
15. While still viewing ScanlineRender2, change TransformGeo1’s
Rotate.y property and see what happens (FIGURE 10.46).

FIGURE 10.46 The reflection image projected on the Card


node

As you bring the rotate value up, or clockwise, the image rotates
counterclockwise, and vice versa. This is because what you are watching is
a reflection, not the actual image. When you rotate the real image, the
reflection naturally rotates in the other direction.

Cutting the reflection to size


You need to add this reflection to the frame image. At the moment, the
reflection image is not the right size—it’s a lot bigger than the frame. You
need to create a matte that will cut the image to the right size. You already
have all the stuff you need in the 2D Tracking part of your tree. You just
need to manipulate it in a different way.

1. Navigate the DAG to the left, where you have the statue image
connected to the Rectangle node.

2. Click Blur1 to select it and then press the 1 key to view it.

3. View the alpha channel by hovering your mouse pointer in the Viewer
and pressing the A key.

That’s the matte you need. No doubt about it. You just need to get it to the
other side of the DAG.

I’m not sure where you placed all the bits and pieces of the somewhat
disjointed tree I asked you to build. They might be far away. You are now
going to drag a pipe from this part of the tree to that one, and I don’t want
you to lose it on the way. Let’s make this a little easier for you.

4. With nothing selected, press the period (.) key.

5. Connect this new Dot’s input to the output of Blur1.

6. Drag this new Dot to the right, or wherever you placed your
ScanlineRender2 node.

Now you need to hold the ScanlineRender2 node inside this matte, which
is connected to the Dot.

7. Select the Dot and press the M key to insert a Merge node.

8. Connect Merge2’s B input to ScanlineRender2.

9. Change Merge2’s Operation property from Over to Mask (FIGURE


10.47).

FIGURE 10.47 Using the frame’s matte to mask the


reflection
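What the Mask operation does is simple arithmetic: it multiplies the B input by the A input's alpha, so the reflection survives only inside the frame's matte. A minimal sketch (plain per-pixel floats, not Nuke's actual implementation):

```python
def merge_mask(b_pixel, a_alpha):
    # Nuke's Merge "mask" operation: B * A.alpha
    # B (the reflection) is kept only where A's matte is solid
    return b_pixel * a_alpha

print(merge_mask(0.8, 1.0))  # inside the matte: reflection kept
print(merge_mask(0.8, 0.0))  # outside the matte: reflection cut away
```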

Now place the reflection on top of the rest of the composite.


10. Select Merge2 and insert another Merge node after it.

11. Connect Merge3’s B input to Merge1.


12. View Merge3 in the Viewer and make sure to view the RGB channels
(FIGURE 10.48).

FIGURE 10.48 The reflection is placed perfectly inside the


frame.

It’s in there—now you just need to take it down a tad.

13. Reduce Merge3’s Mix property to 0.05.
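Under the hood, the Mix slider is a straight dissolve between the node's B input and the full merge result, so 0.05 keeps only 5% of the reflection. Roughly, in simplified premultiplied math (a sketch, not Nuke's exact code path):

```python
def over(a, a_alpha, b):
    # premultiplied "over": A + B * (1 - A.alpha)
    return a + b * (1.0 - a_alpha)

def merge_with_mix(a, a_alpha, b, mix):
    # mix = 0 gives plain B; mix = 1 gives the full merge result
    merged = over(a, a_alpha, b)
    return merged * mix + b * (1.0 - mix)

print(merge_with_mix(1.0, 1.0, 0.2, 0.05))  # a faint reflection over the frame
```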

The comp is done. You added a very accurate, very sophisticated


reflection to the composite.

To really understand how well this reflection works, render it and view it
in the Viewer. You should know how to do this by now, so I won’t bore you
with instructions. (If you need a refresher, see Chapter 1.)

Remember that, if you want, you can still pick a different part of the
panorama image to be reflected. You can do that with TransformGeo1.
Just change Rotate.y as before. I left mine at –80. I also tidied up my tree
a little so I can read it better. You can see it in FIGURE 10.49.

FIGURE 10.49 The final tree

There are many other uses for camera tracking. Once you have the camera
information for a shot, a lot of things that you would consider difficult
suddenly become very easy. For example, I toyed with placing the doll
from Chapter 2 on the table. Try it.

More advanced stuff coming up in Chapter 11.

11. Camera Projection


In this chapter you create a slightly bigger composite and
combine a lot of the different tools you learned about in other
chapters to create the final image. You learn about camera
projection, use 3D scenes, texture an imported model, and
import a camera as well as do some traditional compositing
using mattes, color correction, and paint.

Camera projection, a technique used primarily in 3D


applications, was brought into Nuke and is proving to be a
strong and useful tool. Camera projection is a 3D texturing tool
that uses a camera as if it were a projector to project an image
onto geometry. It becomes useful when an image is being
projected on geometry that was built to look exactly like it.
Imagine, for example, an image of a building tilted at an angle.
If you build the geometry of the building and place it in front of
a camera so the alignment of the camera to the geometry
produces the same perspective as your image, you can use that camera to
project the image onto the geometry. You can then make that image do
things only a 3D scene can do, like move around it, for example.
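At its core, camera projection is the pinhole-camera equations run in reverse: instead of asking which pixel a 3D point lands on, each point on the geometry is pushed through the projector camera to find which texture pixel to borrow. A minimal sketch, with the camera at the origin looking down –Z and made-up focal and aperture values (these are not Nuke's defaults):

```python
def project_uv(point, focal=50.0, haperture=40.0, aspect=16 / 9):
    # project a 3D point through a pinhole "projector" camera
    # to normalized texture coordinates (0..1 across the image)
    x, y, z = point
    if z >= 0:
        return None  # point is behind the projector
    px = focal * x / -z               # image-plane position, mm
    py = focal * y / -z
    vaperture = haperture / aspect    # vertical film back, mm
    u = px / haperture + 0.5
    v = py / vaperture + 0.5
    return u, v

print(project_uv((0.0, 0.0, -10.0)))  # dead center: (0.5, 0.5)
```

A point straight ahead of the projector samples the middle of the texture; points off to the side sample correspondingly off-center pixels, which is exactly how the projected image lines up with the geometry.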

Speaking of examples, let’s dive into one.

BUILDING A CAMERA PROJECTION SCENE


To have a point of reference, let’s start by importing an image.

1. From the chapter11 folder, import Ginza.png using a Read node and
view it in the Viewer (FIGURE 11.1).

FIGURE 11.1 The image you will use for the camera
projection

This is a photo I took of the Ginza Hermès building in Tokyo. You will
now make this still photograph move with true perspective and parallax

as well as add some more elements to it. Its resolution is 1840×1232. Not
exactly standard, but that doesn’t matter. You will make the final
composite a 720p composite. Let’s set up the project.
2. While hovering the mouse pointer over the DAG, press the S key to
display the Project Settings panel.

3. Set up the project as follows: Check Lock Range and choose New from
the Full Size Format drop-down menu. Then, enter the following:

• Name = 720p

• W = 1280

• H = 720

Click OK (FIGURE 11.2).

FIGURE 11.2 Creating a 720p format

For camera projection to work, you need a 3D scene that is set up in the
same way as the shot that was taken. This means you need geometry for
the buildings as well as a camera located where the camera that took the
photo was (I asked somebody who uses 3D software to produce this). You
just need to import the elements.

NOTE

Nuke’s Camera Tracker can track still frames, not just clips.
However, to do that you need several different stills with enough
change in perspective between them—something you don’t have
here, because there is only a single image.

You’ll start by reading in the geometry.

4. While hovering over the DAG, press R to create a Read node.

Yes, you can use a Read node to bring in geometry. When you select a
geometry file instead of an image file, the Read node turns into a ReadGeo
node.

5. Navigate to chapter11/geo and import Ginza_buildings.abc.

A dialog appears showing a hierarchical list (FIGURE 11.3). The .abc
extension is for the Alembic file type. Alembic, developed by Imageworks,
is an open source file format designed to hold 3D scenes. It can hold not
only a single object but whole scenes, with their geometry, cameras,
textures, shaders, and everything else you can fit inside.


FIGURE 11.3 The Alembic import dialog

This Alembic file holds three pieces of geometry called Bldg01, Bldg02,
and Bldg03. You need to bring these in separately. In order to do so, you
need to set each building item as a parent—meaning make it an item of its
own.

6. Click Bldg01, then Shift-click Bldg03 to select all three


items.

7. Right-click one of the items and choose Select As Parent in the


contextual menu (FIGURE 11.4).

FIGURE 11.4 Turning items in the Alembic import dialog


into parents

8. Click the Create Parents As Separate Nodes button at the bottom of the
dialog (FIGURE 11.5).

FIGURE 11.5 The Alembic Read node can split into several
ReadGeo nodes with the button.

You now have three ReadGeo nodes in the DAG, one for each parent
(FIGURE 11.6). Let’s view them in the Viewer.

FIGURE 11.6 The three ReadGeo nodes

9. Switch to the 3D view by pressing Tab while hovering the mouse


pointer over the Viewer, and zoom out by scrolling out or hitting the – key
repeatedly until you see three cubes. Alternatively you can select the three
ReadGeo nodes in the DAG and press the F key to frame the selected
object to the Viewer (FIGURE 11.7).

FIGURE 11.7 The three imported pieces of geometry


representing the three buildings

These cubes represent the three buildings. Notice that the geometry is a
very basic representation, not a perfect copy. This is adequate, because
the texture is detailed enough all on its own.

Now you need a camera.

10. Create a Camera node from the 3D toolbox.

11. Select the Read From File box in Camera1’s Property panel.

12. Switch to the File tab and click the folder icon at the end of the File
property.

13. Import chapter11/geo/Ginza_camera.fbx. Click Yes when asked if you
want to change all properties to the imported ones.

You need to choose which camera from the imported data to use.

14. Make sure the Node Name property is set to Camera01.


You now have a camera in your scene pointing up, which resembles the
angle of the camera in the photo. If you move around the scene by
Ctrl/Cmd-dragging, you’ll see that the camera is way out in another
location from the buildings and is rather large (FIGURE 11.8).

FIGURE 11.8 The camera has a good angle, but its location
and scale are off.

The geometry and camera were both generated in Autodesk’s 3ds Max.
This 3D application uses a different scale than Nuke. Because of that,
some elements get transferred with the wrong location or size. The ratio
in this case is 1:1000. Therefore, dividing each translation property by
1000 brings the camera into its correct location. Reducing the size of the
camera to a value of 1 will fix the scaling issue.
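The fix is a straight unit conversion. Nuke lets you type the arithmetic directly into each field (the /1000 in the next step), but the same calculation as a quick sketch (the ratio is the one given above; the helper function itself is hypothetical):

```python
def max_to_nuke_translate(translate, ratio=1000.0):
    # convert 3ds Max scene units to Nuke units at the 1:1000 ratio
    return tuple(t / ratio for t in translate)

# the camera's raw translate values as they come in from the file
print(max_to_nuke_translate((26764.0, 24250.0, 48711.0)))
```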

15. Switch to Camera1’s Camera tab.

Because Read From File is selected, the properties will keep updating
from the file. To change some of the properties, you need to deselect the
box now.

16. Deselect the Read From File box, click in the Translate.x field, and
move the cursor to the end.

17. Enter /1000 and press Enter/Return (FIGURE 11.9).

FIGURE 11.9 Manipulating the Translate.x field directly to


change scale

18. Do the same for Translate.y and Translate.z.

19. When you are finished, check that your values are approximately
these:

• Translate.x: 26.764

• Translate.y: 24.250

• Translate.z: 48.711

Now let’s take care of scale.

20. Change the Scale.x, Scale.y, and Scale.z from 100 to 1 by typing in
their respective Input fields.

The camera is now located in the correct place (FIGURE 11.10).


FIGURE 11.10 The correct location for the camera

Remember, camera projection means you are using a camera to project a


texture onto geometry. To use a 3D term, you will use a texture and a
camera to shade geometry. You will find the camera projection node in
the 3D/Shader toolbox. It’s called Project3D.
21. Select Camera1 and from 3D/Shader choose Project3D (FIGURE
11.11).

FIGURE 11.11 Inserting the Project3D node into the tree

Project3D1’s Cam input is already connected to Camera1. You need to


connect the second input to the texture, which is, in this case, Read1.

22. Connect Project3D’s second input to Read1.

You now need to use this texture setup on the geometry. One way to do
that is with the ApplyMaterial node you used in Chapter 10. The simpler
way is to connect ReadGeo1’s input directly to Project3D1’s output.

23. Connect ReadGeo1’s input to Project3D1’s output (FIGURE 11.12).

FIGURE 11.12 Connecting the camera projection setup to


the first piece of geometry

24. Make sure you’re viewing 3D in your Viewer (FIGURE 11.13).

FIGURE 11.13 The texture is starting to appear on the


building.

The first building is now textured in the areas where you have texture for
it. If you look at this textured geometry through the camera, it looks right.
This technique allows some movement around this object—but only
limited movement, as this texture is really designed to be viewed only
from the angle in which it was shot. If you look at it straight on, you will
see noticeable stretching (deforming and scaling of the texture unevenly
over the geometry).

Let’s repeat this for the other two buildings:

25. Select Project3D1, copy it, and, with nothing selected, paste.

26. Connect Camera1 to the Cam input of Project3D2, connect Read1 to
Project3D2’s other input, and finally connect Project3D2’s output to
ReadGeo2’s input.
27. Repeat again for the third building, or ReadGeo3.

28. For sanity’s sake, arrange the nodes in a way that makes sense to you. I
use a lot of Dots to help with that. Create a Dot by pressing the . (period)
key (FIGURE 11.14).
FIGURE 11.14 Using Dots to tidy up a script

Let’s build the rest of the scene to connect up this whole thing. You need a
Scene node, another Camera node, and a ScanlineRender node.

29. Create Scene and ScanlineRender nodes. Make sure that Scene1’s
output is connected to ScanlineRender1’s Obj input.

30. Connect the three ReadGeo nodes to Scene1.

Now, why do you need a camera? You already imported a camera into the
scene. Well, the camera you have, Camera1, is being used as a projector, if
you recall. So you need another camera to shoot through. You also want to
be able to move the camera to get the 3D movement you set out to
achieve. If you move the projection camera, the texture moves, so that’s
not allowed. It’s easier if the camera you shoot the scene with is based on
the projection camera as a starting point, so that’s what you will now do.

31. Copy and paste Camera1 to create another camera.

Now you have two cameras. This can be confusing, so rename them to
easily figure out which is which.

32. Double-click the original Camera1 and enter a new name,


ProjectionCamera, in the field at the top of the Properties panel.

33. Double-click Camera2 and change its name to ShotCamera.

34. Connect ShotCamera to ScanlineRender1’s Cam input.

You don’t need a BG input because you don’t need an actual background,
and the resolution is set by the Project Settings panel.

35. View the 2D scene through ScanlineRender1.

You can see the projection setup working; the three buildings are textured
and the sky is missing (FIGURE 11.15).

FIGURE 11.15 The basic projection setup is now working.

ANIMATING THE CAMERA


Now that the projection setup is good, you need to move a camera and get
the perspective movement for the shot.

Keep the default position for the ShotCamera so you always know where
you started. Use an Axis node to move the camera instead. An Axis node is
another type of translating node, much like TransformGeo.

1. Create an unattached Axis node from the 3D toolbox.

2. Connect ShotCamera’s input to Axis1’s output (FIGURE 11.16).


FIGURE 11.16 Connecting Axis1 to ShotCamera

Now you have an Axis with which you can move the camera.

3. While on frame 1, choose Set Key from the Animation drop-down menu
for Axis1’s Translate properties. Do the same for Axis1’s Rotate properties.

The kind of movement you will create for the shot starts at the current
location (which is why you added a keyframe on frame 1) and then moves
up along the building and forward toward the building while turning the
camera to the left.

4. Go to frame 100.

5. Play with Axis1’s Translate and Rotate properties until you reach
something you are happy with. The values I ended up with were
Translate: 100, 1450, –2200 and Rotate: –3, 7, 0.

Now watch the animation you did in the Viewer. To get a good rendering
speed, it’s smart to watch this animation in the 3D view.

Previewing 3D
The 3D view uses simple filtering and your GPU (that’s the power stored
in your graphics card) for fast playback. It makes it very convenient then
to use this as a preview for motion. The quality and colors are all wrong,
which makes the 3D view bad for previewing a look, but for animation it is
just fine.

1. Hover the mouse pointer over the Viewer and press Tab to switch to the
3D view.

2. As you want to see what ShotCamera is doing, from the Camera drop-
down menu choose ShotCamera.

3. You want to stay locked to ShotCamera, so turn on the Lock to Camera


button to the left of the Camera drop-down menu (FIGURE 11.17).

FIGURE 11.17 Locking the 3D viewer to a camera using


these interface elements

When you choose ShotCamera from the menu and lock the 3D viewer to
it, it means the 3D viewer will show you what the camera sees within the
square markings defining the window of view (FIGURE 11.18).


FIGURE 11.18 The rectangular box shows what will render
through the camera.

Playing the clip now previews the render for you.


4. Click Play.

This is great. It’s already useful. Let’s say the editor really needs to get this
shot’s motion into the cut as quickly as possible. Editorial doesn’t mind
that the look is not done yet, but the motion? That’s what they care about.

Luckily you can render out the 3D view to your hard drive with everything
you see in front of you. In some applications this is called a playblast; in
Nuke it is called Viewer Capture. The button for this is located at the
bottom right of the Viewer, as seen in FIGURE 11.19.

FIGURE 11.19 This button will save the Viewer’s output to


hard drive.

5. Click the Viewer Capture button.

You are now presented with the Capture dialog. The default function is to
save the files in a temporary directory and to load it into Framecycler.
Framecycler is a great application that comes bundled with Nuke and is
used for playback. Since Nuke 7.0, playback functionality has been built
into Nuke, so Framecycler isn’t used much anymore and is not covered here.
Furthermore, editorial is breathing down my neck to get this clip, so
saving to a temporary location just isn’t what I need to do. Let’s change
some properties.

6. Click the check boxes next to Customise Write Path and next to No
Flipbook.

7. Click the folder icon next to the Write Path at the bottom of the panel.
In the File Browser that opens, navigate to your student_renders folder
and call the file tmp.%04d.jpg. Click Open (FIGURE 11.20).

FIGURE 11.20 This is how you should set up the Capture


dialog before clicking OK.

8. Click OK.

A progress bar comes up showing the render as the Viewer moves one
frame forward at a time. This should be a rather short process.

When the render finishes, nothing happens. Let’s bring in the render and
view it.

9. While hovering the mouse pointer in the DAG, press R to read a clip.
Navigate to the student_renders folder and double-click tmp.%04d.jpg.

10. In the DAG, select Read2 and press 1 to view it (FIGURE 11.21).

FIGURE 11.21 Viewing the output of the Viewer Capture


But hold on, it seems like nothing changed. Are you looking at the 3D
view still? If you look at the top of the Viewer you can see you are now
viewing 2D (FIGURE 11.22).
FIGURE 11.22 In this case the 2D view and the 3D view are
identical.

That’s exactly how it should be. You captured the 3D view with everything
that’s in it. All on-screen controls and markings are part of the Viewer
Capture. Not only that, but the size of the capture equals the size of the
Viewer panel in the interface. Before sending this out to editorial you
probably want to crop it. But don’t worry about that now. The important
thing is you learned how to capture the Viewer.

11. Delete Read2.

As you were viewing the camera work in the 3D view, no doubt you
noticed that you had a wonderful tracking shot. Look again. Notice that on
the left of building02 (the back building) something odd is happening. As
more of building02 is exposed, you get more of the texture of building01.
This is because there is no more building02 texture—only what was shot.
You have to create more texture for that part of the building.

TWEAKING THE TEXTURE


Right now, there isn’t any “extra” back building hiding behind the front
building. What you have is what you have. You need to extend the texture
for the back building so there will be something else to expose as the
camera moves.

1. Create an unattached RotoPaint node from the Draw toolbox and insert
it between Read1 and Project3D2 (FIGURE 11.23).

FIGURE 11.23 Inserting a RotoPaint node between the


texture and the projection node

The good thing about your tree is that each geometry has its own pipe
coming from the texture. This gives you the option of tweaking the texture
of each geometry separately.

2. View RotoPaint1 in the 2D view (simply press Tab if you’re in the 3D


view).

Use the RotoPaint node to clone some texture where you’re missing it.

3. In RotoPaint1, select the Clone tool.

4. Using the Tools Settings bar, make sure you’re painting on all the
frames.

5. Align your brush using the lines on the building. Because the
perspective is changing throughout the height of the building, you have to
change the alignment as you go up and down. Start painting (FIGURE
11.24).

FIGURE 11.24 Use the window edges to align the clone


brush.
As you go, view the output of ScanlineRender1 once in a while to make
sure you have everything you need covered.

Painting should take some time because of the perspective change on the
buildings as you go down toward the ground level. Take your time and
you should be fine. You should have something like FIGURE 11.25 in the
end.

FIGURE 11.25 Using RotoPaint to add more texture

6. When you’re done, go back to view the output of ScanlineRender1 and


click Play in the Viewer.

The render should be slower than it was when you were viewing a preview
in the 3D view. The texture should be fixed now, and the shot should
come to life as “more” of building02 is being exposed. This kind of move
cannot be achieved using normal 2D transformations.

USING A SPHERICALTRANSFORM TO REPLACE SKY


The current composite has no sky. It was lost through the way this
composite was done. You can create another piece of geometry for the sky
and continue projecting there, too. However, the original sky in this image
is very flat and is not worth the work you’d need to do to get it to fit
throughout the shot. It’s better to replace it with a slightly more
interesting sky. You can do that by using another sky texture and a sphere
geometry.

1. Read in the SkyDome.png image from the chapter11 directory and view
it in the Viewer (FIGURE 11.26).

FIGURE 11.26 The sky texture you will use

Look at the bottom-right corner in the Viewer. It says square_2k. This is a


format defined in Nuke by default, which is why the name is displayed
here instead of as a set of two numbers. The format means this is a
2048×2048 image. It’s a mirror ball map, which is one way of
transforming a spherical image into a flat image—by taking a photo of the
reflection of half a chrome-painted ball. To place the image on a sphere
geometry, you need to turn it into a lat long map—a flat representation of
the sphere as it is being unraveled. The SphericalTransform node is used
to transform between several different ways of displaying spherical shapes


in a flat image.

2. Select Read2 and insert a SphericalTransform node after it from the


Transform toolbox.

SphericalTransform’s default is exactly this—changing from a Mirror Ball


input to a Lat Long Map output. You just need to change the resolution.
You can look at a sphere as an object that is twice as round as it is tall
(360 degrees around, 180 degrees in height), and so the resolution of the
texture should be twice as wide as it is tall. You’ll need to change the
Output Format property.
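The 2:1 ratio falls straight out of the sphere's angular extents: 360 degrees of longitude across the width, 180 degrees of latitude down the height, so each pixel covers the same angle in both directions only when the width is twice the height. As a sketch:

```python
def latlong_format(width):
    # a lat-long map spans 360 x 180 degrees, so height = width / 2
    # keeps degrees-per-pixel equal horizontally and vertically
    height = width // 2
    return width, height

w, h = latlong_format(2048)
print(w, h)                # 2048 1024
print(360 / w == 180 / h)  # same angular resolution on both axes
```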

3. Change the Output Format resolution by selecting New from the drop-
down menu and then creating a format called 2K 2:1 with a resolution of
2048×1024 (FIGURE 11.27).

FIGURE 11.27 A transformed sphere

4. Create a sphere geometry from the 3D/Geometry toolbox and attach its
input to SphericalTransform1’s output.

5. View Sphere1, making sure you’re viewing the 3D Viewer.

You’re probably not seeing much. This is because spheres are, by default,
created with a size of 1. The buildings, on the other hand, are very big.
Because your perspective camera (the one you’re viewing the 3D scene
through) is probably set to view the buildings, the sphere is too small to
be seen. You’ll have to scale the sphere up. It’s supposed to be as big as
the sky, after all.

6. Change the Uniform Scale property of Sphere1 to 500000. If you still


can’t see the sky, move back by using the – key or the scroll wheel until
you do.

Now you should see the sky dome (FIGURE 11.28).

FIGURE 11.28 The sky dome, half a textured sphere

7. Connect Sphere1’s output to another input of Scene1.

8. Select ScanlineRender1 and press the 1 key to view it. Make sure you
are viewing 2D.

You just did a sky replacement. A pretty sophisticated one as well. There’s
still work to do on it, though—as in, compositing.

COMPOSITING OUTSIDE THE SCANLINERENDER


NODE
At this point, you need better control over the kind of composite you are
producing between the buildings and the sky. At the moment, the
ScanlineRender and Scene nodes are doing the composite, but because
Nuke is a compositing package it would be better to do the composite
using normal compositing tools.

Cloning nodes
To have two layers available for compositing outside the 3D scene, you
need to split the geometry into two sets and render each separately. You
will need two ScanlineRender nodes. However, with two ScanlineRender
nodes, changing properties in one means you have to change properties in
the other. This is because you want both of them to render the same kind
of image.

Instead of having to keep changing two nodes constantly, you can create a
clone of the existing ScanlineRender node. A clone means that there are
two instances of the same node. One is an exact copy of the other. Both
have the same name, and if you change a property in one, it is reflected in
the other.
1. Select ScanlineRender1 and press Alt/Option-K to create a clone.

Notice that cloned nodes have an orange line connecting them to show the
relationship. They also have an orange C icon at the top left of the node to
indicate they are clones.

2. Connect ShotCamera to the Cam input of the newly cloned


ScanlineRender1.

3. Drag from the newly cloned ScanlineRender1 node’s Obj/scn input to


Sphere1’s output.

4. Drag and let go on the pipe leading from Sphere1 to Scene1 to


disconnect it.

5. Select the original ScanlineRender1 and press the M key to create a


Merge node.

6. Connect Merge1’s B input to the new cloned ScanlineRender1


(FIGURE 11.29).

FIGURE 11.29 The cloned ScanlineRender nodes

You have now made a composite between two separate 3D scenes. The
ScanlineRender nodes are cloned, so if you change something in one it
changes in the other.

If you look carefully, you’ll see that the edges of the buildings are very
jagged. This is because you are using just one sample to render (FIGURE
11.30).

FIGURE 11.30 The building has jagged edges because you


are using just one sample in the ScanlineRender node.

7. Double-click one of the ScanlineRender1 nodes (it doesn’t matter which


since they are clones).

8. In the MultiSample tab, change the Samples property to 6, which


should be enough for now.
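Multi-sampling smooths the jagged edges by shading several jittered positions inside each pixel and averaging them, so an edge pixel ends up partially covered instead of all-or-nothing. A toy version of the idea (not ScanlineRender's actual sampler):

```python
import random

def sample_pixel(coverage_test, px, py, samples=6):
    # average several jittered sub-pixel hits; an edge pixel
    # comes out gray instead of hard black or white
    hits = 0
    for _ in range(samples):
        if coverage_test(px + random.random(), py + random.random()):
            hits += 1
    return hits / samples

# an edge running through x = 10.5: pixel 10 straddles it
edge = lambda x, y: x < 10.5
print(sample_pixel(edge, 10.0, 0.0))  # roughly 0.5, varies per run
```

More samples means a smoother edge and a slower render, which is why 6 is a reasonable middle ground here.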

Final adjustments
The sky is a little blue for this image. You will color correct it to match the
foreground better.

1. Select SphericalTransform1 and insert a Grade node after it by pressing


the G key.

2. Tweak Grade1’s properties to create a better match of the sky to the


buildings. I ended up changing only the Gain property as follows: 1.3, 1.1,
1.0.

There must be another part of the sky dome that fits the buildings better—
something with a highlight that matches the highlight on the buildings.
You can use the Sphere geometry’s Rotate properties to pick a different
part.

3. Using Sphere1’s Properties panel, change the Rotate.y property until ⬆


you find something more fitting for the buildings. I stopped at 65
(FIGURE 11.31).
FIGURE 11.31 Rotating the sky to find a better spot

The composite is looking better, but the edges of the buildings still need a
little work. You want to erode the matte slightly. Because this changes the
relationship between the RGB and alpha channels, you need to break up
the premultiplied buildings image first.
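The unpremultiply/premultiply round trip is just a divide and a multiply by alpha; doing the matte edit in between keeps edge pixels from darkening. In outline (per-pixel floats, glossing over the zero-alpha handling Nuke does internally):

```python
def unpremult(rgb, alpha):
    # divide the color by alpha so edits don't bend premultiplied edges
    return rgb / alpha if alpha > 0 else rgb

def premult(rgb, alpha):
    # multiply back down by the (possibly eroded) alpha
    return rgb * alpha

color, alpha = 0.4, 0.5           # a premultiplied edge pixel
straight = unpremult(color, alpha)
eroded_alpha = alpha * 0.8        # stand-in for what FilterErode does to alpha
print(premult(straight, eroded_alpha))
```

Without the unpremultiply step, the erode would scale color and alpha out of step and leave a dark fringe on the building edges.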

4. Select ScanlineRender1 (the original one creating the buildings image)


and insert an Unpremult node from the Merge toolbox.

5. Insert an Erode (filter) after Unpremult1 and a Premult node after that
(FIGURE 11.32).

FIGURE 11.32 Inserting an Erode node in between an


Unpremult and a Premult node

The FilterErode1 node has a default value of 1, which is enough in this


case. However, you want to have sub-pixel eroding, and you need to turn
that on.

6. Choose Gaussian from FilterErode1’s Filter drop-down menu. Leaving


it on Box won’t allow for sub-pixel eroding.

This concludes this part of the exercise. However, let’s make this shot a
little more interesting by adding a billboard screen!

2D COMPOSITING INSIDE 3D SCENES


This is Tokyo after all, and as far as you can see at the moment, not a
single billboard screen is in sight. This needs to be rectified posthaste!

You need to create a billboard screen using a Photoshop file a designer


has prepared for an airline called Fly Ginza. You start by working at a
different area of your DAG and then you connect your new tree to the
main tree later.

Importing Photoshop layers


Nuke can bring in Photoshop files—that’s basic stuff. Nuke can also split
those files into the layers that make them up! Since Nuke 6.3v7 (yes, The
Foundry even releases these types of features on point releases), a click of
a button gives you access to the whole Photoshop file, with the layers
separated and a whole tree that mimics the Photoshop layer structure.

1. Navigate the DAG so the tree is on the far right.

2. Read in chapter11/FlyGinza.psd with a Read node and view it in the


Viewer (FIGURE 11.33).

FIGURE 11.33 The Fly Ginza poster


What you see now is the Fly Ginza logo on a background. You would like
to add some LCD screen effect to this image, but you don’t want it to
affect the type, just the background. For this reason, you’ll need to split up
this Photoshop file into layers.

3. Using the leftmost Viewer channel button, see the different channel
sets you have available (FIGURE 11.34).

FIGURE 11.34 Some new channel sets are available.

You can see four additional channel sets: BG1, BG2, BG3, and Fly_Ginza.
These are the separate layers making up the complete Photoshop file.
Let’s turn this file into a tree.

4. Click the Breakout Layers button at the bottom of Read3’s Properties


panel (FIGURE 11.35).

FIGURE 11.35 The Breakout Layers button is available only


for Photoshop files.

5. In the Viewer, view the PSDMerge3 at the end of the new tree.

An entire new tree appears in the DAG (FIGURE 11.36). You have easily
and successfully rebuilt the Photoshop setup and have controls over each
and every layer separately.

FIGURE 11.36 The tree mimicking the Photoshop setup

Now you want to create a “dot LED” look, so you need a grid-like element, which you can make inside Nuke. The main thing you need is a grid of evenly spaced dots. Use a Grid node from the Draw toolbox for that.

6. Click Read3 to select it, then Shift-click to branch a Grid node from the
Draw toolbox, and then view it in the Viewer.

You don't need the grid composited over this image; you branch it here just to get the right resolution. You can replace the whole image with the grid image.
7. Check the Replace box in Grid1’s Properties panel.

You need to create exactly even squares, as wide as they are tall. The number of lines this node draws is set through the Number property, separately in X and Y. Because the size of each cell depends on how many lines are spread across the image's resolution, you need to take the resolution into account and make a quick calculation. The nice thing is that Nuke can make this calculation for you as you type.

In input fields, use the words “width” and “height” to ask Nuke for the
current resolution as a number. This is very useful for a lot of things. If
you need to know the middle of something, for example, you can just
enter width/2 for the horizontal middle point.
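For instance, the substitution Nuke performs can be sketched in plain Python (the 1920×1080 HD resolution is an assumption here; Nuke fills in whatever the incoming format actually is):

```python
# Model of Nuke's width/height keyword substitution in input fields.
# The 1920x1080 values are an assumed HD format, not read from Nuke.
width, height = 1920, 1080

mid_x = width / 2      # the horizontal middle point mentioned above
lines_x = width / 16   # what you'll type for 16-pixel-wide cells
lines_y = height / 16  # and for 16-pixel-tall cells

print(mid_x, lines_x, lines_y)
```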

8. At the far right of Grid1's Properties panel, click the 2 next to the Number field to enable both fields (FIGURE 11.37).

FIGURE 11.37 Enabling the X and Y Number fields

9. In the Number.x field, enter width/16.

10. In the Number.y field, enter height/16.

You now have perfect squares that are 16 pixels wide and 16 pixels tall.

11. Change the Size property from 1 to 6 (FIGURE 11.38).

FIGURE 11.38 The grid is ready.

You now have an LED screen.

Now let’s have it affect the background layers of the sign.

12. Insert a Grade node after PSDMerge2.

13. Connect Grade2’s Mask input to the output of Grid1.

14. Decrease Grade2's Gain property to something like 0.4 (FIGURE 11.39).

FIGURE 11.39 The grid is added to the composite.

This is all I wanted you to do here. Of course, you can take this a lot
further if you’d like.
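If you're curious what the Mask input is doing mathematically, a rough per-pixel model looks like this (a simplification; the real Grade node also applies lift, gamma, offset, and more):

```python
def masked_gain(value, gain, mask):
    """Simplified model of Grade's Gain limited by a Mask input:
    the graded result is mixed back with the original by the mask."""
    return value * (1.0 - mask) + (value * gain) * mask

# Where the grid is white (mask 1) the background darkens fully;
# where it is black (mask 0) the pixel passes through untouched.
dot_pixel = masked_gain(0.8, 0.4, 1.0)
gap_pixel = masked_gain(0.8, 0.4, 0.0)
print(dot_pixel, gap_pixel)
```

This is why only the background layers pick up the LED pattern while unmasked pixels stay put.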

Now let’s create the frame for this lovely screen to sit in. The frame is
going to be a simple affair for now—just a color with an alpha. Later on,
you will add a little something to it to make it look more believable. You’ll
make it in context—while looking at the final composite—which will make
your work even better.

1. Create a Constant node from the Image toolbox.

2. Change the Channels drop-down menu to RGBA, and change the Format drop-down menu to HD.

You do not want the frame to be darker than the darkest colors in the original image. If it is, it will feel fake.

3. View Read1 (the buildings image).


4. Click the Sample color from the Viewer button to the right of
Constant1’s Color property and Ctrl/Cmd-click to choose a dark color
from the Ginza image. I selected 0.003, 0.003, and 0.001. Alpha should
have a value of 1 (FIGURE 11.40).

FIGURE 11.40 Setting a basic color for the frame

To turn this into a frame, take a copy of the constant, make it smaller, and use it to cut a hole in the original.

5. While you have Constant1 selected, insert a Transform node after it.

6. Change the Scale property to 0.95.

7. Insert another Merge node after Transform1.

8. Connect its B input to Constant1.

9. Change Merge2’s Operation to Stencil to cut a hole in the frame.

This is the frame. There’s nothing much to see before you place it over the
image itself.

10. Insert another Merge node after Merge2 and connect its B input to
PSDMerge3.

11. View Merge3 (FIGURE 11.41).

FIGURE 11.41 The image with its frame

You now have a screen with a frame. Hooray! But remember, you used a
separate tree for this. Now you need to connect it up (FIGURE 11.42).

FIGURE 11.42 The screen tree
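The Stencil operation from step 9 has simple math behind it; a per-pixel sketch (simplified, ignoring the premultiplied RGBA handling a real Merge performs):

```python
def stencil(b_value, a_alpha):
    """Stencil: keep B only where A's alpha is absent, i.e. B * (1 - a).

    Here A is the scaled-down Transform1 output and B is Constant1,
    so the inner area is cut away, leaving only the frame border.
    """
    return b_value * (1.0 - a_alpha)

# Inside the shrunk rectangle (alpha 1) the constant is removed...
assert stencil(1.0, 1.0) == 0.0
# ...and outside it (alpha 0) the constant survives as the frame.
assert stencil(1.0, 0.0) == 1.0
```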

Compositing the screen into the 3D scene


You can connect this screen to the 3D camera projection scene in many
ways, but in my opinion, for this purpose, compositing it on the original
image is probably going to be the best way. Use a CornerPin and a Merge
to do this.

1. Insert a CornerPin node after Merge3.

2. Select CornerPin1 and insert another Merge node after it.

3. Insert Merge4 in between Read1 and Project3D1.

4. View Merge4.

Use the CornerPin node to place the screen in perspective.


5. Move the four pins to place the screen on the front building in a
convincing way. Use the lines created by the windows to help you get the
correct perspective (FIGURE 11.43).
FIGURE 11.43 The screen in perspective
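To get an intuition for what moving the pins does, here is a bilinear sketch: it lands exactly on the four pins at the corners but is not the true projective warp a CornerPin performs, and the pin coordinates below are made up for illustration:

```python
def bilinear_pin(u, v, c00, c10, c11, c01):
    """Map a unit-square point (u, v) into the quad given by four pins.

    c00..c01 run counterclockwise from the bottom-left corner; at the
    corners this matches a CornerPin, in between it only approximates.
    """
    x = (1-u)*(1-v)*c00[0] + u*(1-v)*c10[0] + u*v*c11[0] + (1-u)*v*c01[0]
    y = (1-u)*(1-v)*c00[1] + u*(1-v)*c10[1] + u*v*c11[1] + (1-u)*v*c01[1]
    return (x, y)

# Hypothetical pin positions on the building face:
pins = [(700, 300), (1100, 350), (1080, 700), (720, 680)]
print(bilinear_pin(0, 0, *pins))  # the bottom-left pin
print(bilinear_pin(1, 1, *pins))  # the top-right pin
```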

Notice that the building has a very strong highlight at the top right. The
frame doesn’t.

6. Select Constant1 and insert a Ramp node after it from the Draw
toolbox.

The Ramp node is a simple two-point gradient that produces one color
that fades along the ramp. Use this to create a highlight on the frame.

7. Move the Ramp1’s p1 on-screen control to the top right of the frame
and move the p0 to roughly the center of the frame (FIGURE 11.44).

FIGURE 11.44 Adding a highlight to the frame
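The two-point gradient can be modeled as a projection onto the p0→p1 line (a sketch assuming the Ramp's default linear falloff; the p0 and p1 coordinates here are invented):

```python
def ramp_value(p, p0, p1):
    """Linear ramp: 0 at p0, 1 at p1, clamped outside that span.

    Projects point p onto the p0->p1 direction and normalizes.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length_sq = dx * dx + dy * dy
    t = ((p[0] - p0[0]) * dx + (p[1] - p0[1]) * dy) / length_sq
    return min(1.0, max(0.0, t))

# Hypothetical positions: p0 at frame center, p1 at the top right.
p0, p1 = (960, 540), (1700, 980)
print(ramp_value(p0, p0, p1))          # no highlight at the center
print(ramp_value(p1, p0, p1))          # full highlight at p1
print(ramp_value((1330, 760), p0, p1)) # halfway along the ramp
```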

RENDERING THE SCENE


The projection shot is now ready. It has everything you set out to put in it.
You should now render the tree; to do so, you need a Write node.

1. Insert a Write node (remember, just press the W key) at the end of this
comp, right after Merge1.

2. Click the little folder icon to tell Write1 where to render to and what to
call your render. Navigate to your student_files directory and name your
render ginza_moving.####.png.

3. Make sure you are not in proxy mode and click the Render button.

4. When the render is finished—and it can take a while—look at your handiwork in the Viewer and be proud (FIGURE 11.45)!

FIGURE 11.45 The final tree



12. Customizing Nuke with Gizmos


One of the wonderful things about Nuke is that it’s completely
and utterly open. By open I mean that it’s customizable in a way
that I haven’t encountered before in a compositing application.
You can change the interface, add functionality, and change the
behavior of so many parts of the application that you can really
make it do anything! If your coffee machine were connected to
your network, it would make coffee for you, too.

Nuke offers several levels of customization. I showed you very basic customization in Chapter 1—customizing the interface by
changing the size of the panes and what they contain. I also
showed you expressions and TCL scripting in Chapter 3. Other
levels of customization include Python scripting and Gizmos.

Python scripting, which I touch on in the Appendix, lets you do intensive customization by writing code that can control Nuke, your operating
system, and any other application or device that allows for such
manipulation. In fact, big parts of Nuke are written in Python.

Gizmos are what this chapter is about. They are nodes that you create by
adding together other nodes and wrapping them into a custom tool with a
user interface, knobs, sliders, and even on-screen controls. Creating these
lets you automate mundane tasks—which you would otherwise perform
repeatedly—or create completely new functionality.

This might seem a little advanced for some readers, but worry not. If you follow the instructions slowly, you will come away with one of the most powerful tools Nuke has to offer!

ABOUT SHAPE CREATION TOOLS


Nuke isn’t really a graphics package. You won’t necessarily start your
design process with Nuke. This is why its design tools are sparse. It does
have RotoPaint (discussed in Chapter 6), which can let you draw
anything, but it’s not necessarily convenient from a design standpoint.

Nuke also has the Rectangle node and the Radial node, which can at least
let you create these two basic shapes. The way they are built, though,
makes it very difficult to create perfect squares and circles and position
them. Instead, both nodes are controlled by a box that defines the four
edges containing the shape. This is great for creating mattes, but if you
need a 100-pixel-diameter circle, you need to perform calculations and
take the time to create it (FIGURE 12.1).


FIGURE 12.1 The box controls both the location and size of
the shape generated by the Radial node.

Though Nuke doesn’t have better design tools by default, that doesn’t
mean you can’t create better design tools with some creative thinking and
some technical know-how. So, in this chapter, you are going to turn the
Rectangle and Radial nodes into something that’s easier to use. And
instead of having to repeat this process every time, I’ll use this chapter to
show you how to build a tool that generates this functionality for you
automatically. Neat, huh?

BUILDING THE GIZMO’S TREE


The first stage of making a Gizmo is to create a tree in the DAG with the
functionality you want. Because you want the tool to be easily
controllable, you use a few expressions along the way to link properties
together and change them according to the situation. It is important that the tree works in a variety of situations—the whole point of this tool is to be able to add rectangular and radial shapes to any project you are working on.

Let’s begin:

1. In a new, empty script, create a Rectangle node from the Draw toolbox
and view it in the Viewer (FIGURE 12.2).

FIGURE 12.2 The Rectangle node

This is a Rectangle node. You used it in Chapter 5. By moving the edges of the box in the Viewer, you can change the size and location of the
rectangle. The box you see is called the area and you can find this property
in Rectangle1’s Properties panel (FIGURE 12.3).

FIGURE 12.3 The Area property

Let’s start by making a 200×200-pixel square in the middle of the screen.


Doing this by eye is not an option; however, it is easy to do numerically
and with expressions.

The left side (called X) of the rectangle needs to be at 100 pixels left of half
the width of the image. The bottom needs to be at 100 pixels under half
the height of the image. Width and height are properties that you can use
in expressions. Isn’t that lucky?

2. From Area's Animation menu, choose Edit Expressions (FIGURE 12.4).

FIGURE 12.4 You can edit the expression of this group of
properties by choosing this option.

Now you need to type what I just explained as an expression.

3. In the X Input field, type width/2-100.

As Nuke’s 0,0 point is at the bottom left of any image, going left means
subtracting 100 pixels. Moving down also means subtracting. Up and to
the right is adding.

The figure of 100 stands for half the size of the eventual square. The
expression width/2 signifies the position for the center of the shape.

4. In the three other fields type the following:

Y: height/2-100

R: width/2+100

T: height/2+100

5. Make sure what you typed is the same as what appears in FIGURE
12.5, then click OK.

FIGURE 12.5 Four expressions typed by hand

In the middle of the Viewer there’s a 200×200 square (FIGURE 12.6).


You now have a convenient way to create a shape around a center point.
Expressions are great, aren’t they?
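Evaluated against an assumed HD format, those four expressions boil down to simple arithmetic:

```python
# Model of the four Area expressions; 1920x1080 is an assumed
# HD format, and 100 is half the size of the eventual square.
width, height = 1920, 1080
half = 100

x = width / 2 - half   # left edge
y = height / 2 - half  # bottom edge
r = width / 2 + half   # right edge
t = height / 2 + half  # top edge

print(x, y, r, t)
```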

FIGURE 12.6 A square in the middle

But hold on. What if this now needs to be a 400×400 square? You need to
open Edit Expressions again and change every one of the four expressions
you just typed. Not very user friendly, is it? There is another, better way: create a User Knob.

CREATING USER KNOBS


You need to create a property for the size value so that you and any other
user of your Gizmo have something easy to manipulate. Nuke lets you
create user properties called User Knobs. Now you’ll make another node
to hold your User Knobs.

NOTE

The NoOp node is exactly what it says it is: It has no initial functionality. You can use it for various things, but mainly you use it for the kind of task shown here.


1. Create a NoOp node from the Other toolbox.

2. Right-click (or Ctrl/Cmd-click) somewhere in NoOp1's Properties panel. From the contextual menu, click Manage User Knobs (FIGURE 12.7).
FIGURE 12.7 Accessing the User Knobs panel

As mentioned, a knob is just another name for a property. Managing User Knobs means managing the user-generated knobs. This loads the User Knobs panel for this node (FIGURE 12.8).

FIGURE 12.8 The User Knobs panel

On the right, the two top buttons are Add and Pick. Both are ways to
create new knobs. Add lets you choose from a list of available knob types,
such as a slider, a Size Knob, a Text Input knob, and so forth. Pick lets you
choose from existing properties (discussed later in the chapter).

You want to be able to change the size in both X and Y. You can use a
Width/Height Knob that, by default, appears as one slider but can be split
into two separate Input fields.

3. Choose Width/Height Knob from the Add button/menu (FIGURE 12.9).

FIGURE 12.9 The Width/Height Knob panel

The Width/Height Knob panel opens so you can define the knob’s
properties. There are three properties:

• Name is the true name of the knob, which is used for calling the knob
through expressions.

• Label is what appears in the white text to the left of the slider. The Name
and Label don’t need to be the same—although they usually are.
Sometimes, however, when one knob name is already taken, you want the
Name to be unique so you use something longer. But you still need the
Label to be short and easily readable.

• Tooltip appears when you hover your mouse pointer over a property for a few seconds. It's a little bit of helper text that can remind you (and other users) of the functionality of the knob.

4. In the Name Input field, enter shapeSize. Spaces are not allowed in
the Name property, but you can use an underscore in it if you like.
5. In the Label input field, enter shapeSize. Spaces are allowed in the
Label property, but I like to keep things consistent. Suit yourself.

6. In the Tooltip Input field, enter Controls shape size (FIGURE 12.10).

FIGURE 12.10 The filled Width/Height Knob panel

7. Click OK (FIGURE 12.11).

FIGURE 12.11 The User Knobs panel now has two knobs in
it.

You now have two lines in the User Knobs panel. The first one reads User
and the second one reads shapeSize. You made only the one called
shapeSize. User was created automatically and is the name of the tab
where the User Knob appears. If you don’t want this tab to be called User,
click to select this line, and then click the Edit button and rename it in the
panel.

8. Click the Done button in the User Knobs panel (FIGURE 12.12).

FIGURE 12.12 A new knob appears in NoOp1's Properties panel.

You have just created a tab called User. Inside it you have a knob called
shapeSize. (I don’t call it a property because it doesn’t do anything yet. If
you play with the slider, nothing happens, because this is just a knob.)

9. Experiment with the shapeSize property. When you're finished, leave it on 200.

Remember, once you tie it to something, it acquires functionality. I already explained what this knob is to be used for: as the size property for Rectangle1. Let's connect it up.

First expose the two separate parts of the shapeSize property.

10. Click the 2 icon—the individual channels button—to the right of the slider you just created.

You now have two Input fields instead of the slider (FIGURE 12.13). Note that the first Input field is called W and the second H.
FIGURE 12.13 Your Width/Height Knob’s two Input fields
are exposed.

You will replace the size part in Rectangle1’s Area property’s expression
with a link to this knob. To call up this property from another node, first
call the name of this node, then the name of your property, and then the
Input field you need.

11. In Rectangle1's Properties panel, choose Edit Expression from the Area property's Animation menu.

12. In the first line, for the X property, replace the number 100 (in the
line width/2-100) with NoOp1.shapeSize.w/2. The whole expression
should now read: width/2-NoOp1.shapeSize.w/2.

13. For the Y property, replace the number 100 with NoOp1.shapeSize.h/2.

14. For the R property, replace the number 100 with NoOp1.shapeSize.w/2.

15. Finally, for the T property, replace the number 100 with
NoOp1.shapeSize.h/2 (FIGURE 12.14).

FIGURE 12.14 Changing the expressions to make use of the new User Knob

16. Click OK.

Your new User Knob is now part of an expression and it acquires functionality.

17. Go ahead and play with your shapeSize Input fields. You can bring the
shapeSize value back to being a slider or leave it as two Input fields. The
expression will not break. You can see the box on screen change size
horizontally and vertically. When you’re finished, go back to a value of
200 and a single slider.

Right. Now that you have this, you should make a knob that controls the
position of the shape. The 2D Position knob is perfect for that. Not only
does it have values in X and Y, it also produces an on-screen control for
you.

18. In NoOp1's Properties panel, right-click (or Ctrl-click) and choose Manage User Knobs from the contextual menu.

19. Click User to select it and then click Add > 2d Position Knob.

20. In both the name and the label Input fields, enter shapeCenter.

21. In the Tooltip Input field, enter Controls center location of shape
(FIGURE 12.15).

FIGURE 12.15 These are the settings for the 2d Position User Knob.

22. Click OK. In the User Knobs panel, click Done (FIGURE 12.16).
FIGURE 12.16 The new knob appears under the previous
knob.

You now have another User Knob called shapeCenter at the top of the
Properties panel. It’s at the top because you selected User before you
created the knob. This placed the newly created knob under the User
Knob, which is the tab itself.

There is a new on-screen control in the Viewer called shapeCenter (FIGURE 12.17). As I mentioned before, the 2d Position knob creates an on-screen control. You can use the on-screen control to change the properties in the Properties panel.

FIGURE 12.17 A new on-screen control appears in the Viewer.

23. Move the shapeCenter on-screen control around. Leave it somewhere close to the center of the image.

The on-screen control doesn't do anything yet. You need to link a property in Rectangle1 to this new knob. That's how you get functionality from User Knobs.

Change the expression in Rectangle1 to link the center part of it to this new knob—shapeCenter—instead of the old width- and height-based values.

24. Choose Edit Expression from Rectangle1's Area property's Animation button/menu.

25. Change every occurrence of width/2 to NoOp1.shapeCenter.x.

26. Change every occurrence of height/2 to NoOp1.shapeCenter.y (FIGURE 12.18).

FIGURE 12.18 The expressions after replacing all width and height occurrences.

27. Click OK.

28. Play around with the shapeCenter on-screen control (FIGURE 12.19).

FIGURE 12.19 The square's center follows the shapeCenter on-screen control.

29. Change the shapeSize property.

The size changes around the newly defined center. Your hard work is paying off. You have two User Knobs—one for the size and one for the position of the rectangle.
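Putting both knobs together, the final Area expressions can be modeled as a small function (the parameter names mirror the User Knobs built above; the sample values are arbitrary):

```python
def shape_area(center_x, center_y, shape_w, shape_h):
    """Model of Rectangle1's Area after steps 24-27, e.g.
    x = NoOp1.shapeCenter.x - NoOp1.shapeSize.w/2, and so on."""
    return (center_x - shape_w / 2,   # x: left edge
            center_y - shape_h / 2,   # y: bottom edge
            center_x + shape_w / 2,   # r: right edge
            center_y + shape_h / 2)   # t: top edge

# Dragging the on-screen control only changes the center inputs;
# resizing only changes the width/height inputs.
print(shape_area(960, 540, 200, 200))
print(shape_area(500, 300, 400, 100))
```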

USING TEXT TO CREATE A RADIAL


You now have a better way to control Rectangle nodes. The Radial node
has the same user interface issue that you just changed for the Rectangle
node. It too isn’t very convenient to use. The Radial node also has a
property called Area that controls the area in which the radial will be
created. As a result, the Radial node can benefit from the same treatment.

NOTE

Use only text editors (rather than word processors) for working with Nuke scripts. Word processors such as Microsoft Word and OpenOffice Write tend to add all sorts of extra formatting characters (many of them invisible) that can break the Nuke script. You should always work with text editors—for example, Notepad in Windows and TextEdit on the Mac—that show you everything they put in the file and don't hide anything.

1. With nothing selected in the DAG, create a Radial node.

I’ve already explained and shown that a node is just some text and that
you can manipulate it as such. Let’s use this functionality to copy the
expressions from Rectangle1 to Radial1.

2. Select Radial1 and cut it (Ctrl/Cmd-X).

3. Open a text editing application such as Notepad on Windows or TextEdit on Mac (not WordPad or Word).

4. In a new document, paste your node as text by pressing Ctrl/Cmd-V (FIGURE 12.20).


FIGURE 12.20 This is your node. Just a little bit of text.

Here you go. This little bit of text makes up this Radial node. For example,
the xpos and ypos lines say where the node was located in the DAG when
copied. The line that starts with area is the line we care about. Notice
there’s a space before the word area.
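The copy-and-paste surgery in the steps that follow is plain string work, so it can be sketched in Python too. The serialized snippets below are simplified stand-ins, not a full Nuke node dump (a real one has more knobs, xpos/ypos, and may wrap the area values across several lines):

```python
import re

# Simplified stand-ins for the two pasted nodes.
rectangle_text = (
    "Rectangle {\n"
    " area {{width/2-NoOp1.shapeSize.w/2} {height/2-NoOp1.shapeSize.h/2}"
    " {width/2+NoOp1.shapeSize.w/2} {height/2+NoOp1.shapeSize.h/2}}\n"
    " name Rectangle1\n"
    "}"
)
radial_text = (
    "Radial {\n"
    " area {0 0 100 100}\n"
    " name Radial1\n"
    "}"
)

# Grab Rectangle1's whole " area" line (note the leading space)...
area_line = re.search(r"^ area .*$", rectangle_text, re.M).group(0)
# ...and paste it over Radial1's own " area" line.
radial_text = re.sub(r"^ area .*$", area_line, radial_text, flags=re.M)

print(radial_text)
```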
5. Back in Nuke, copy (Ctrl/Cmd-C) the Rectangle node.

6. In your text editor, create a couple of empty lines (by pressing Enter/Return at the end of the text).

7. Using Ctrl/Cmd-V, paste Rectangle1 at the end of the document (FIGURE 12.21).

FIGURE 12.21 Rectangle1's Area property has a lot more text in it.

The Area properties should be the same. Using a text editor makes
changing the value to an expression a very simple process.

8. In the Rectangle part, select the whole area line starting with the space before the word area and going all the way (which might span several lines) to the two curly brackets (}}) (FIGURE 12.22).

FIGURE 12.22 Select the whole line.

9. Copy the text with Ctrl/Cmd-C.

TIP

On a Mac, triple-clicking selects an entire line, so you can triple-click in either of these instances.

10. In the Radial part, select the whole area line, starting with the space before area and going all the way to the end. Press Ctrl/Cmd-V to paste the copied text over it.

Radial should look like FIGURE 12.23. The value for the Area property
has been copied from Rectangle1 to Radial1. It’s time to bring this node
back into Nuke.

FIGURE 12.23 The text after pasting

11. Select the entire text that defines Radial1, and press Ctrl/Cmd-C to
copy.

12. Back in Nuke, while hovering in the DAG, press Ctrl/Cmd-V (FIGURE 12.24).

FIGURE 12.24 Text to node, just by copying and pasting


The Radial node is back. The green line stretching between Radial1 and
NoOp1 tells you there’s an expression link between the two, which is what
you were after.

13. View Radial1 in the Viewer and double-click Radial1 to load its
properties in the Properties Bin.

The circle fills exactly the same area the square filled, but the circle is soft.

14. In Radial1 change the Softness property to 0.

That solved the softness issue. Now we just need to connect it all together.

USING A SWITCH NODE TO CHOOSE BRANCHES


The two nodes—Rectangle and Radial—are going to be just a single Gizmo
in the end, so what we need now is a single input and a single output. At
the moment, we have two of each. Let’s fix this.

1. With nothing selected in the DAG, create a Dot node by pressing the .
(period) key.

2. Connect both inputs (for Rectangle1 and Radial1) to the output of the
Dot (FIGURE 12.25).

FIGURE 12.25 Using a Dot node to consolidate inputs

Note that the input of the Dot is the input to the shape-generating tree
you are making. The tree has two branches: the Rectangle1 branch and
the Radial1 branch. You need a way to have the outputs of the two
branches connect to a switchable output. Enter the Switch node. It’s
designed exactly for this.

3. Select Rectangle1, then Shift-click Radial1 to add it to the selection.

4. From the Merge tab, insert a Switch node (FIGURE 12.26).

FIGURE 12.26 The little arrow on the left-hand side continues to provide an infinite number of inputs.

The Switch node has an infinite number of inputs. Every input is assigned a number, starting at zero for the first input. The Switch node has a single property called Which (FIGURE 12.27). The number given in this property corresponds to an input number, and the selected input is passed on to the output. The image itself is not altered, which makes the node simple, straightforward, and very useful for Gizmos.
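The Switch node's behavior is easy to model (a sketch; the real node also handles things like timing and metadata):

```python
def switch(which, inputs):
    """Pass through the input whose number matches 'which'.

    Inputs are numbered from zero, exactly like Nuke's Switch node;
    truncating non-integer slider values is an assumption here.
    """
    return inputs[int(which)]

# The two branches of the tree, stand-ins for the real node outputs:
branches = ["Rectangle1 output", "Radial1 output"]
print(switch(0, branches))  # the rectangle branch
print(switch(1, branches))  # the radial branch
```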

FIGURE 12.27 The Which property


5. Make sure you are viewing the output of Switch1 in the Viewer.

6. In Switch1's Properties panel, using the slider, change the Which property from 0 to 1 and back.

When Which is at 0 you should see a square, and when it’s at 1 you should
see a circle. If it’s the other way around, you have your inputs crossed.

Let’s make a User Knob in NoOp1 for this. This time a pull-down menu is
a good idea, and you can choose from Rectangle and Radial in the menu.

7. Clear the Properties Bin, then double-click Switch1 and NoOp1.

8. Right-click (or Ctrl/Cmd-click) somewhere in NoOp1's Properties panel. From the contextual menu, click Manage User Knobs.

9. Click User to select it and then click Add > Pulldown Choice (FIGURE
12.28).

FIGURE 12.28 There’s a long list of knobs to choose from.

You are presented with the Pulldown Choice panel. The first two Input
fields are the same as all the other knob creation panels you have already
used, but the input field labeled Menu Items is new.

Here you have a multiline Input field. Every line has a numerical value
starting at 0 and going up. So when you choose the first line, it produces a
value of 0, and when you choose the second line, it produces a value of 1.
This is great because the Which property in Switch1 needs these exact
values. A value of 0 shows the rectangle branch and a value of 1 shows the
radial branch.
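That index-based behavior can be modeled in a couple of lines (the menu items match the ones you are about to type; the index lookup is the assumption):

```python
# Model of the Pulldown Choice knob: each menu line reports its
# zero-based index, which the link expression feeds into Which.
menu_items = ["Rectangle", "Radial"]

def pulldown_value(choice):
    """Index the knob would report for the selected menu item."""
    return menu_items.index(choice)

print(pulldown_value("Rectangle"))  # 0, shows the rectangle branch
print(pulldown_value("Radial"))     # 1, shows the radial branch
```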

10. In both the Name and Label fields, type shapeType.

11. In the Menu Items field, type Rectangle, press Enter, then type
Radial.

12. In the Tooltip field, type Choose type of shape.

13. Click OK, then click Done (FIGURE 12.29).


FIGURE 12.29 This is how you set up the Pulldown Choice
panel.

You now have a new drop-down menu called shapeType in NoOp1's Properties panel. You can play with it, but it doesn't have functionality yet. To connect it, use an expression. There is no way to drag a link from a drop-down menu because a drop-down menu has no Animation menu. You can still type a link by hand, though.

14. Choose Add Expression from Switch1’s Which Animation menu.

15. In the panel that opens, type NoOp1.shapeType (FIGURE 12.30).

FIGURE 12.30 This should already be easy for you—just a straightforward link expression.

16. Choose Radial from NoOp1's shapeType drop-down menu, then choose Rectangle again. Watch the Viewer as you do so (FIGURE 12.31).

FIGURE 12.31 Changing the pull-down menu changes the value of the Which property.

The pull-down menu controls the Switch node. Perfect.

You now have a single input, a single output, and a way to choose the
output. You’re almost ready. The next stage is to wrap up this tree in
what’s called a group.

WRAPPING IN GROUPS
In Nuke, a group is a node that holds another tree inside of it. You can still access what's inside it and manipulate it, but you can also treat it as one entity. It's useful to wrap little pieces of functionality, or trees that you know are taking care of a certain element of your comp, in a group. Some compositors use them more and some less. To create a Gizmo, first you have to create a group.

Like every other node, a Group node can hold User Knobs.

Everything in the DAG is in charge of making the two shapes. So that's what you are going to place in the group.

1. Select all nodes in the DAG except for Viewer1 and press Ctrl/Cmd-G.

The Group Output panel that pops up wants to know which of the nodes you selected will be the Group node's output. Just as with any other node in Nuke, a Group node can have many inputs, but only one output. In this case, there are only two nodes whose output is not in use: Switch1 and NoOp1. The output of the tree is the output of Switch1—not NoOp1, which is unconnected to anything and is used only as a holder of knobs (FIGURE 12.32).

FIGURE 12.32 The Group Output panel

2. In the Group Output panel that pops up, make sure that Switch1 is
selected and click OK (FIGURE 12.33).

FIGURE 12.33 A Group node is created instead of the tree.

Your tree was replaced with a Group node called Group1. Notice it has two
inputs: One is the Dot’s input, and the other is NoOp1’s input. Let’s look
inside.

You should now also have the Group1 Properties panel loaded. Notice it
has a new button that other nodes lack—the S, or Show button (FIGURE
12.34). Clicking this button opens another Node Graph panel and shows
the tree that is held inside the Group node.

FIGURE 12.34 The Show button is unique to Group nodes.

3. In Group1’s Properties panel, click the S button.



In the DAG you can see that your shapes tree appears again. If you look
closely at the tabs of the current pane, where the Node Graph is, you’ll see
that you are no longer in the Node Graph, but rather in the Group1 Node
Graph. You can move back and forth between the Node Graph and the
Group1 Node Graph (FIGURE 12.35).
FIGURE 12.35 The Group1 Node Graph

In this DAG, you can also see three new nodes you didn’t place here:
Input1, Input2, and Output1. These nodes were created when you created
the group. The Input nodes appear as inputs in the main DAG; these are
inputs 1 and 2 that you saw before. And the Output node indicates where
to output the tree, normally at the bottom-most node. You need only one
input: the one connected to the Dot. The Input node connected to NoOp1
was created because NoOp1 had a free input, and that’s how the group
creation works.

Since Nuke creates these nodes automatically, I don't know which node is Input1 and which is Input2 in your tree. Delete the one called Input2: deleting Input1 triggers a bug in which the deleted input still appears in the main Node Graph even though the node no longer exists. If Input2 is the node connected to the Dot, you still have to delete it and then move the output of Input1 so that it is connected to the input of the Dot.

4. Click Input2 and click Delete to delete it. If you need to, connect the
Dot’s input to the output of Input1.

5. Switch to viewing the main Node Graph by clicking its tab.

You can see that the second input is now gone.

Now you would like to change the shape size or position, but your Knobs
are not available—they moved to within the group with NoOp1. This is not
convenient at all. Furthermore, this Group node can hold Knobs without
needing a NoOp that does nothing. The Group node can both group trees
together and hold User Knobs. It’s time to put that NoOp1 node to rest
and move all its functionality into Group1. First let’s start by generating
some knobs, including a few new ones.

6. In Group1's Properties panel, right-click (Ctrl-click) an empty spot and choose Manage User Knobs from the contextual menu.

Now you’re going to add another knob you didn’t have before. There’s
quite a lot of functionality at the top of both Rectangle1 and Radial1 that
you lost by just creating the knobs you did. This functionality deals with
the resolution and channels the shape is drawn in. It would be good to
carry them across as well. Instead of adding these knobs from the Add
menu, you can pick them from the existing knobs in the tree held within
the group using the Pick button.

7. Click the Pick button, which brings up the Pick Knobs To Add panel
(FIGURE 12.36).


FIGURE 12.36 The Pick Knobs To Add panel

From the Pick Knobs To Add panel, you can expose any knob from the
tree held in the group. You just need to find the right one. You’re looking
for a node called Rectangle1, a tab called Rectangle, and a property called
Output.

8. Navigate and open submenus until you find the Output property, as
shown in FIGURE 12.37. Then select it and click OK.

FIGURE 12.37 Finding the right property requires a little digging.

In the User Knob panel you now have two entries: User and Output with
some curly brackets containing some code. You can click this Output entry
and then click Edit to change parts of it, but there’s no need. I will have
you change the name of the tab, though (FIGURE 12.38).

FIGURE 12.38 Two entries in the User Knob panel

9. Click the User entry and then click Edit.

10. Change the Name and Label to Shape. Click OK.

11. Click Done.

You now have a new type of knob that you created (well, sort of: you
actually copied it). This one contains a pull-down menu called
output. Changing the output knob changes which channel set the shape
is created in (FIGURE 12.39).
FIGURE 12.39 The pull-down menu you just picked

You need to pick a few more properties: Replace, Invert, and Opacity.

12. In Group1’s Properties panel, right-click (Ctrl-click) an empty spot and choose Manage User Knobs from the contextual menu.

13. Click Output to choose it, then click Pick.

14. Find the property Replace in Rectangle1, select it, and click OK.

15. Repeat this for Invert and Opacity. Make sure you add the properties
after the last added property by selecting the last knob in the knob list
before you click Pick. Also, make sure you select from the Rectangle node
and not the Radial node.

When you’re done you should see the list in FIGURE 12.40 in the User
Knobs panel.

FIGURE 12.40 The list of picked knobs

16. Click Done.

You now have three more properties in the Properties panel. As shown in FIGURE 12.41, the two check boxes, Replace and Invert, sit one below the other. It would be better to have them next to each other instead.

FIGURE 12.41 The Properties panel can be laid out better.

17. Right-click to load the User Knobs panel, select the Invert line, and
then click Edit.

18. Deselect Start New Line to tell this knob to stay on the previous line
created by the previous knob (FIGURE 12.42).

FIGURE 12.42 Deselecting this check box forces the knob to remain on the same line as the previous knob.

19. Click OK, then click Done.

You should now have the Replace and Invert check boxes next to each
other (FIGURE 12.43). Well done.
FIGURE 12.43 Two knobs on the same line

Great! These knobs now change properties in Rectangle1. But what about
Radial1? Because the properties you picked to mirror in the group’s
Properties panel all came from Rectangle1, Radial1 was left unchanged.
No sweat: a little clicking and dragging solves this.

20. Clear the Properties Bin, then double-click Radial1 (from the Group1
Node Graph), and then double-click Group1 (from the regular Node
Graph).

Pull-down menus such as the Output property and check boxes such as
the Replace property don’t have Animation menus. So what do you click
to drag? It’s different in each case, so pay attention.

21. In Group1’s Properties panel, Ctrl/Cmd-click the Link menu to the right of the last pull-down menu (FIGURE 12.44), drag, and release on the equivalent Link menu in Radial1’s Properties panel.

FIGURE 12.44 The little button with the = sign is the Link
menu.

The colors of Radial1’s Output property are now muted (FIGURE 12.45).
This signifies a link. Clicking Radial1’s Output property’s Link menu
allows you to edit the link or remove it altogether.

FIGURE 12.45 The colors of the Output property are muted to signify an expression link.

22. In Group1’s Properties panel, Ctrl/Cmd-click the actual check box for
the Replace property, then drag and let go on the equivalent check box in
Radial1’s Properties panel (FIGURE 12.46).

FIGURE 12.46 After you create the link, the check box
turns a light blue color.

The Replace check box for Radial1 turns light blue to signify it now has a
link. You can right-click this check box if you want to change the
expression or remove it.

23. Now continue linking between Group1 and Radial1 for Invert and
Opacity.

When you’re done, Radial1’s Properties panel should look like FIGURE
12.47.

FIGURE 12.47 Radial1’s Properties panel after linking the four properties

This is it: Both Rectangle1 and Radial1 now follow these four knobs.
However, Group1 still has no knobs that can replace the three knobs in
NoOp1. Let’s run through creating them. First, add a divider line graphic
element to the panel to separate the top half of the properties from the
bottom, just like the one in the original Radial and Rectangle nodes.

24. Right-click (Ctrl-click) an empty spot in Group1’s Properties panel and choose Manage User Knobs from the contextual menu.

25. With Opacity selected in the knobs list, click the Add button, and from
the pull-down menu choose Divider Line.

26. With Unnamed (that’s the Divider Line you just added) selected,
choose Pulldown Choice from the Add button/menu.

27. For both the Name and Label fields, enter shapeType. In the Menu
Items field, enter Rectangle, then press Enter/Return, then type
Radial. In the Tooltip field, enter Choose type of shape. Click OK
(FIGURE 12.48).

FIGURE 12.48 Re-creating the shapeType knob

28. From the Add button/menu, choose 2d Position Knob.

29. Name this shapeCenter and label it shapeCenter. In the Tooltip field, enter Controls center location of shape. Click OK.

30. To re-create the shapeSize knob, choose a Width/Height Knob from the Add menu.

31. For both Name and Label, use shapeSize. The Tooltip should be
Controls shape size. Click OK.

You should now have nine User knobs in the User Knob panel in the order
you can see in FIGURE 12.49. If the order is different, you can rearrange
it by selecting a knob in the panel and using the Up and Down buttons.

FIGURE 12.49 Use the Up and Down buttons to get this order of knobs.

32. Click Done.

Group1’s Properties panel should look like FIGURE 12.50. Pretty
impressive user interface, wouldn’t you say? And you made it all yourself!
Let’s just set the values again for shapeCenter and shapeSize.

FIGURE 12.50 That’s a pretty impressive Properties panel you made there!

33. Set shapeCenter back to 1024, 778 and shapeSize back to 200.

You now want to change all the expressions that point to NoOp1 so they
point to the Group’s User Knobs. Let’s do one.

34. Make sure the Properties panel for Rectangle1 is loaded in the
Properties Bin.

35. Click in the area.x Input field and press = to load its Edit expression
panel.

Now you need to replace the NoOp1 part of the expression with
something—but what? Because Rectangle1 is sitting inside Group1,
Group1’s properties are considered part of Rectangle1’s properties, and so
no Node name needs to be called, just the Property name. Instead of
calling up Group1 by name, simply ask Rectangle1 to look for the
shapeCenter and shapeSize Knobs without mentioning a node.

36. Delete the text NoOp1. from the expression; don’t forget to delete the separating period too (FIGURE 12.51).

FIGURE 12.51 Calling for the group knobs is like calling local knobs.

You need to change this for every other instance of an expression
referencing the NoOp1 node. In the next section, instead of doing it again
manually, you use a different method.

MANIPULATING THE NUKE SCRIPT IN A TEXT EDITOR
Here’s a surprise:

1. Save your script in your student_files directory, name it Chapter12_v01, and quit Nuke!

That’s right. As I have mentioned before, Nuke scripts are actual scripts:
not just the copied nodes you worked with earlier in this chapter, but the
entire saved script. They are made of plain, human-readable text, which
means you can change them with a text editor. The search and replace
functions in text editors are uniquely suited to this type of work.
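To see what such a replacement will do, here is a minimal Python sketch of the editor’s Replace All operation; the .nk fragment below is made up for illustration and is far shorter than a real script:

```python
# A made-up fragment of a saved .nk script, for illustration only.
script_text = """NoOp1 {
 name NoOp1
}
Rectangle1 {
 area {{NoOp1.shapeCenter.x-NoOp1.shapeSize.w/2}}
}"""

# Replace every occurrence of "NoOp1." (including the period) with
# nothing, so the expressions resolve against the Group's own knobs.
# The node definition line "NoOp1 {" has no period, so it survives.
cleaned = script_text.replace("NoOp1.", "")
print(cleaned)
```

Running this leaves the expression reading {{shapeCenter.x-shapeSize.w/2}}, the same kind of node-free reference you typed by hand in step 36.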

2. Navigate to your student_files directory where you will find your last
saved Nuke script, which should be called Chapter12_v01.nk.

There may be another file next to it called Chapter12_v01.nk~. Disregard it; Nuke creates that file automatically as a backup.

3. Open Chapter12_v01.nk in your favorite text editor. By default, it’s Notepad on Windows and TextEdit on Mac.

You need to find every occurrence of the text “NoOp1.” (the dot is
important) and replace it with nothing (meaning, don’t type anything in
the Replace box).

4. Find the Replace function in your text editor. Normally it’s in the Edit
menu. It might be called Find instead, or the Replace function may be
included within Find. In Windows Notepad, press Ctrl-H; in Mac
TextEdit, press Cmd-Option-F.

5. Use the search function to find the words “NoOp1.” and replace each

instance with nothing (FIGURE 12.52).
FIGURE 12.52 Setting up the Find dialog box in Mac OS
10.8’s TextEdit.

6. You can use the Replace All function (you should have one).

TextEdit, which I’m using, told me it replaced the text 15 times, which is
good: that’s what I was expecting. This single click just saved a lot of time
and manual labor.

7. Save the changes and then double-click the file again to open it in Nuke
once more.

8. Double-click Group1 to load its Properties panel.

9. Click the S button to open the Group1 Node Graph (FIGURE 12.53).

FIGURE 12.53 The green expression line connecting NoOp1 is gone.

You can now see that NoOp1 is no longer being used. There are no green
expression lines pointing to it. All the expressions pointing to it have been
changed to point to the new knob in the group.

10. Change properties in Group1 and see the shape change in the Viewer.

11. Reset the properties to what they were originally.

The knobs you created in the group now control everything you need. You
no longer need the NoOp1 node.

12. Delete NoOp1.

TURNING A GROUP INTO A GIZMO


You have finished building the Shape group. For now, this is still just a
group. You can save it and bring it in later as a script. But the really smart
way to use this is to turn it into a Gizmo that you can always access
through a button in the interface.

It’s a good idea to name your group something that makes sense for the
Gizmo.

1. Change the name of Group1 to Shape.

Now for the last part.

2. In the Shape Properties panel, switch to the Node tab.

3. At the bottom of this window you can find the Export As Gizmo button
(FIGURE 12.54). Click it. (Note that only Group nodes have this
button.)

FIGURE 12.54 The Export As Gizmo button in the Node tab


4. Name the Gizmo Shape and save it on your Desktop.

5. Save your script and quit Nuke.

Installing the Gizmo


To install the Gizmo, first you need to know where Nuke saves its
customization files. According to the manual, these are the default
locations:

For Linux: /users/login name/.nuke

For Mac: /Users/login name/.nuke

For Windows XP: drive letter:\Documents and Settings\login name\.nuke

For Windows Vista and beyond: drive letter:\Users\login name\.nuke

On Linux and Mac, folders that start with a dot are hidden by default, so
you need a way to reach the folder. On Mac, press Cmd-Shift-G in the
Finder to open the Go To Folder dialog box, enter ~/.nuke/ (“~” means
your home folder), and press Enter.
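If you’d rather not hunt for hidden folders by hand, this little Python sketch resolves the same per-user path on Linux, Mac, and Windows; it uses only the standard library, so Nuke itself isn’t needed to run it:

```python
import os

# "~" expands to the current user's home folder on every platform,
# which is where the default .nuke directory lives.
nuke_dir = os.path.expanduser(os.path.join("~", ".nuke"))
print(nuke_dir)

# Check whether the folder already exists before copying Gizmos into it.
print(os.path.isdir(nuke_dir))
```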

In this folder you can drop all kinds of files defining your user preferences
and added functionality. These can include favorite directories, new
formats, Python scripts, ToolSets, and Gizmos among other things.

1. Locate your .nuke directory.

You can see the ToolSets directory in your .nuke directory. That’s where
you find the 3D Setup ToolSet you made in Chapter 9.

2. Drag the Shape.gizmo from the Desktop into your .nuke directory. This
is all you need in order to install the Gizmo.

3. Launch Nuke.

4. Click File > Recent Files and choose Chapter12_v01.

Where did the Gizmo go? You haven’t told Nuke in which toolbox or
which menu you want it to appear.

Testing the Gizmo


Gizmos that are just dropped into the .nuke folder appear in the All
Plugins submenu under the Other toolbox.

1. Click the Other toolbox, click All Plugins at the bottom, then click
Update (FIGURE 12.55).

FIGURE 12.55 Updating the All Plugins submenu

2. Access the All Plugins submenu again.

This time you will see that where you previously had only the Update
option, you now have a submenu for each letter of the alphabet. Under
each letter are all the tools whose names start with it. This includes every
tool: the ones that ship with Nuke, and the Gizmos and scripts you
create or bring in from elsewhere.

3. Go to the S submenu to find Shape. Click it to create it (FIGURE 12.56).

FIGURE 12.56 This is how you create a copy of the Gizmo you made.

Shape1 is created. In its Properties panel, you can see all the knobs you
made, play with them, and make shapes that fit your needs.

4. Bring in an image from chapter04/IcyRoad.png.

5. Clear the Properties Bin.

6. Connect Shape1 to Read1.

7. View Shape1 (FIGURE 12.57).

FIGURE 12.57 Your very own Gizmo in the tree

The shape is right at the top right of the image. You can use the
shapeCenter on-screen control to move it. You can also add as many
Shape nodes after this one as you want in order to add more and more
shapes. If you’re missing more functionality, go back to the group, add the
functionality, re-create the Gizmo, and replace it in the .nuke folder.

Life is just that easy when you can make your own tools.

MORE ABOUT GIZMOS


There’s one final thing to know about Gizmos—and it can help you a lot
later on. Follow these few steps first:

1. Double-click Shape1 to load its Properties panel.

2. Switch to the Node tab.

3. At the bottom of this tab, find the Copy To Group button. Click it
(FIGURE 12.58).

FIGURE 12.58 Click the Copy To Group button to make the Gizmo back into a group.

This reverses the export operation, turning the Gizmo back into a group.

4. Double-click the newly created group, and click the S button at the top
right of its Properties panel.

You can now see the original tree that made up this Gizmo (FIGURE
12.59).

FIGURE 12.59 The original tree has been embedded in the Gizmo the whole time.

Gizmos can be found all around the Web. There’s a big repository at
nukepedia.com (http://nukepedia.com). Downloading Gizmos, installing them,
and then looking at how they’re built is a great way to learn advanced
features in Nuke.

You can also create real buttons inside real toolboxes for your Gizmos
instead of using the Update button. This requires a little Python scripting.
If you are interested in that, take a look at the Appendix at the end of the
book.
Nuke 101: Professional Compositing and Visual Effects, Second Edition


Appendix. Customizing Nuke with Python


A lot of features might seem to be missing from Nuke—things you may be
used to using in other applications, or tools that seem like they would be
nice to have. Maybe you keep performing a series of procedures for which
you wish you could just click a single button. Well, you can solve the
absence of some of these missing features with Gizmos, as discussed in
Chapter 12, but other features, such as automation and user interface
customization (among other things), need a more open and robust
framework. This is where Python scripting comes in.

Python is a scripting language that was introduced into Nuke to replace
the aging TCL scripting language, and it is now the main scripting
language in Nuke. It is widely used, and you may find that learning to use
it in Nuke proves useful when you are working with your operating system
and other software applications. In fact, you can control your operating
system and other applications that support Python from within Nuke,
making Python a powerful tool in your arsenal.

NOTE

This appendix assumes you have completed Chapter 12. If you haven’t,
you won’t understand some of the terms I use here and you won’t have
the correct files in place to complete the steps as I’ve written them.

In this appendix, I cover only how to use Python to do very basic things
that I believe are absolutely necessary if you use Nuke anywhere—except
for in a big organization that has people who take care of all that
customization for you. But even after you master these simple procedures,
I encourage you to learn Python, because it can really become a very
powerful tool for you to have at your disposal.

PYTHON SCRIPTING BASICS


In Chapter 12 you learned how to build a Gizmo. To access it, you used the
toolbox called Other and the Update button to load all plugins from all
paths into an alphabetical list. Sure, you can keep doing that—but it’s
nicer to be able to simply get to the Gizmos you use with an easy-to-access
button in an easy-to-access toolbox.

This seems to be an easy request, but for you to make this happen, first
you need to learn a little about how Nuke’s Python customization setup
works. Remember, Nuke is a production tool that relies on hardcore
knowledge of advanced systems. You need to know a little Python
scripting to be able to create a button in the interface. Let’s start.

1. Open Nuke.

2. From the Content menu on the right pane, choose Script Editor
(FIGURE A.1).

FIGURE A.1 Loading the Script Editor

The Script Editor is split into two panes: The bottom half is where you
write your commands (the Input pane), and the top half provides
feedback about what’s happening when you execute your commands (the
Output pane).

There is a row of buttons at the top. TABLE A.1 explains what each
button is called and what it does, from left to right.

TABLE A.1 The Script Editor’s Buttons

In order to create a button in the Nodes Toolbar, you need to produce a
Python script that starts by accessing the Nodes Toolbar, then populates
it with a new toolbox, and finally populates that toolbox with a new
button, which then calls up a node.

Python uses different terminology for certain user interface elements and
functions. I point out these distinctions as we encounter them.

Under Python, Nuke calls the various panels, panes, and toolboxes
“menus,” and the Nodes Toolbar you use all the time to grab nodes from is
a menu simply called Nodes. Here, you add a new toolbox menu to the
Nodes menu and call it User. You then tell Nuke to place a link to your
Gizmo in the new User menu.

CREATING A BUTTON WITH PYTHON
Your first line of code calls up the Nodes menu and gives it a name
(toolbar) so that you can easily refer to it in the next lines of code.
Giving an object a name like this is referred to as assigning a variable in
scripting. In this case, the Nodes menu has the variable toolbar assigned
to it.
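If variables are new to you, here is the idea in a few lines of plain Python (nothing Nuke-specific):

```python
# Assigning a variable: give a value a name you can reuse later.
greeting = "hello"
# The name now stands in for the value wherever you use it.
shout = greeting.upper()
print(shout)  # HELLO
```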

1. As the first line in your Script Editor’s input pane, enter the following:

toolbar = nuke.menu('Nodes')

2. Click the Run button at the top of the Script Editor to execute this
command.

The command you wrote disappears and the following appears in your
Output pane:

toolbar = nuke.menu('Nodes')

# Result

This means the command executed with no error. Let’s make sure the new
variable toolbar is now defined.

3. In the Input pane, enter toolbar and click the Run button.

The result should look something like this:

# Result: <Menu object at 0x15cc80f0>

Keep adding to the script, line after line, so you end up with one long
script that contains all the commands you wrote, one after another. Use
the Previous Script button to bring the previous command back to the
Input pane.

4. Click the Previous Script button twice to bring back the full command,

toolbar = nuke.menu('Nodes')

Now let’s create the new menu inside the newly defined variable,
toolbar.

NOTE

As mentioned at the beginning of this appendix, Nuke uses different
names for things than the interface itself does. In this case, the Python
name menu can refer to a lot of things, including a toolbox. I switch
between the names depending on whether I want to describe what you’re
doing when writing the code or what you’re doing in the interface.

5. Press Enter/Return to start a new line and type the next command:

userMenu = toolbar.addMenu('User')

You just created a new menu called “User” with the addMenu command.
(In interface terms, you made an empty toolbox called User in the Nodes
Toolbar.) All this is performed inside the toolbar variable. This new
menu is also assigned a variable called userMenu.

You can now run these two commands to create an unpopulated menu.

6. Click the Run Current Script button again.

If you look carefully at your Nodes Toolbar, you will notice you now have
a new toolbox at the bottom with a default icon (FIGURE A.2).

FIGURE A.2 A new toolbox is born.

7. Hover your mouse over this new menu and you will see from the pop-
up tooltip that it is called User.

If you click the menu, nothing happens because it’s still empty.

Now call up the Gizmo:


8. Click the Previous Script button to bring back the two previous
commands.

9. Press Enter/Return to start a new line and then enter the following:

userMenu.addCommand('Shape', "nuke.createNode('Shape')")

10. Click the Run button at the top of the Script Editor to execute this list
of commands.

This new command adds a menu item labeled Shape (the first part of the command) that, when clicked, tells Nuke to create a node from the Gizmo called Shape (the second part of the command).

11. Click to display the User menu you just created.

The menu now contains a Shape option (FIGURE A.3).

FIGURE A.3 The User menu is now populated with a Shape node.

12. Now clicking Shape creates a Shape Gizmo (provided you have it in
your .nuke directory from Chapter 12).

Essentially, this is it. You have created a menu called User under the
Nodes Toolbar and placed a link to your Shape Gizmo in it. You can,
however, make it more interesting (keep reading).

ADDING A HOT KEY


You will use this Gizmo to add shapes all the time, so wouldn’t it be
convenient to have a hot key that calls this Gizmo? This is easy to do in
Python.

1. Click the Previous Script button to bring the three lines of code back to
the Input pane.

2. Change the third line so it looks like this (with the little bit at the end
added):

userMenu.addCommand('Shape', "nuke.createNode('Shape')", '^+z')

3. Click the Run button at the top of the Script Editor to execute this
command.

NOTE

The last bit of code tells Nuke to use Ctrl-Shift-Z as the hot
key for the Shape Gizmo (FIGURE A.4). (You can also
type that out as Ctrl+Shift+Z or Cmd+Shift+Z, but you have
to type Alt, not Option.) The symbols for the hot keys work as
follows:

^ Ctrl/Cmd

+ Shift

# Alt/Option


FIGURE A.4 The new menu populated by a link
to the Gizmo and a hot key to boot
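The three modifier symbols are easy to forget. As a memory aid, here is a tiny self-contained Python lookup; the describe helper is just an illustration, not part of Nuke’s API:

```python
# Nuke shortcut strings use one character per modifier key.
MODIFIERS = {'^': 'Ctrl/Cmd', '+': 'Shift', '#': 'Alt/Option'}

def describe(shortcut):
    """Spell out a Nuke-style shortcut string such as '^+z'."""
    mods = [MODIFIERS[c] for c in shortcut[:-1]]
    return '-'.join(mods + [shortcut[-1].upper()])

print(describe('^+z'))  # Ctrl/Cmd-Shift-Z
```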
You should be very proud of yourself here. Pat yourself on your back. But
there’s one problem: If you quit Nuke and restart it, all this disappears,
which is a shame. But there is a remedy...

MAKING CUSTOMIZATION STICK WITH THE MENU.PY FILE
When Nuke loads, it looks for a file called menu.py in the .nuke directory.
This is a text file with a .py (for Python) extension instead of the default
.txt extension. This file holds all the customization you want to add to
Nuke.

NOTE

If you’re not sure where the .nuke directory is on your system, see Chapter 12 to refresh your memory.

You can use Nuke’s Script Editor to create this file—but it’s best for testing
things quickly as you write them, not for long coding sessions. You can
use any text editor you prefer instead. I recommend using very simple text
editors (like Notepad on Windows and TextEdit on the Mac) or code-
oriented text editors (such as ConText or Notepad++ on Windows and
TextMate or TextWrangler on the Mac). Whatever you do, don’t use big
word processing software because it adds all sorts of styling and
formatting code to your text file, which Nuke can’t read. What appears in
your file on screen should be exactly what is in the file when Nuke reads
it; this is not the case with Microsoft Word’s .doc files, for example.

In this case, you already created all the code inside Nuke’s Script Editor.
You just need to save it in your .nuke directory and call it menu.py.

1. Click the Previous Script button to bring back the three previous lines
of commands (including the hot key part of the code as well).

2. Click the Save Script button.

3. In the browser window, navigate to your .nuke directory. (If you can’t
see the .nuke directory in your home directory, start typing .n and it will
come up.)

4. Name the file menu.py and click Save.

Your menu.py file is now saved with the commands for creating the User
menu in the Nodes Toolbar and adding a link to the Shape Gizmo in it.

5. Quit Nuke and then open it again.

The User menu is there now.

Creating and adding an icon


“Daddy, why do all the other toolboxes and commands have nice icons
and ours doesn’t?”

“Why, we can make icons as well, son.”

Really? Can we, please? After all, nothing says “we care” like an icon.
Icons in Nuke are 24×24 pixels in size and are saved as either .png or
.xpm files in the .nuke directory. Let’s make a couple.
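Nuke itself makes a fine icon generator, as the steps below show. If you ever need a quick placeholder icon without opening Nuke, though, the standard library is enough to write a solid-color 24×24 PNG; this sketch and its output filename are just an illustration:

```python
import struct
import zlib

def chunk(tag, data):
    # A PNG chunk: big-endian length, tag, data, then a CRC over tag + data.
    return (struct.pack(">I", len(data)) + tag + data +
            struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def solid_png(path, size=24, rgb=(0, 0, 0)):
    # Build a minimal solid-color truecolor PNG, size x size pixels.
    row = b"\x00" + bytes(rgb) * size          # filter byte + pixels
    ihdr = struct.pack(">IIBBBBB", size, size, 8, 2, 0, 0, 0)
    png = (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr) +
           chunk(b"IDAT", zlib.compress(row * size)) +
           chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)

solid_png("placeholder_icon.png")  # a blank 24x24 black square
```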

The first one is an icon for the User toolbox itself: simple white text on a
black background. The second is an icon for the Shape Gizmo: a white
square and circle on a black background.

If you want to design your own icons instead, that’s up to you. Just
remember to make them with the requirements I’ve mentioned in mind.
Otherwise, follow these steps:

1. Create a Constant node from the Image toolbox.

2. Choose New from the Format drop-down menu.

3. Name the new Format icon and set the File Size to 24 and 24. Click
OK.

4. With Constant1 selected, insert a Text node after it from the Draw
toolbox.

5. View Text1 in the Viewer.


NOTE

When you select the text in the Message field later, property
adjustments change the selection. Not having anything
selected means that other properties have no effect. By the
way, you can also select the text in the Viewer itself.

6. In Text1’s Message property field, type User. Select the text you just
typed by click-dragging over it.

7. Change the Box property to 0, 0, 24, 24.

8. In both of Justify’s drop-down menus, choose Center.

9. Change the Size Property to 10 (FIGURE A.5).

FIGURE A.5 Text1’s properties

The icon for the User menu is ready! Now you will render it (FIGURE
A.6).

FIGURE A.6 The final icon in all its glory

10. Select Text1 and insert a Write node after it.

11. Click the File Browser icon and navigate to your .nuke directory.

12. Name the file user_icon.png and click Save.

13. Click the Render button and, in the Render panel that opens, click OK
(FIGURE A.7).


FIGURE A.7 The first icon’s tree

Now let’s make the second icon, this time for the Gizmo itself.

14. Copy and paste the three existing nodes to get another, unattached
tree.
15. View Write2 in the Viewer.

16. Double-click Text2 and change the Message property to Shape. Make
sure the word Shape is selected.

17. Change the Size property to 8. Due to a small bug, you might have to
do this twice for the change to take effect.

18. With Text2 selected, press Shift-Ctrl/Cmd-Z to insert a Shape node.

19. Change Shape1’s shapeCenter property to 8, 8; the shapeSize property to 12; and Opacity to 0.25.

20. Copy Shape1 and paste it to insert another Shape node after Shape1.

21. Change Shape2’s shapeType property to Radial and shapeCenter to 16, 16 (see the results in FIGURE A.8).

FIGURE A.8 The second icon should look like this.

22. Double-click Write2 and change the name of the file to Shape_icon.png.

23. Click the Render button and, in the Render panel that opens, click
OK.

24. You can save your icon project in your student_files directory if you’d
like.

Both icons are ready and saved in the .nuke directory, making them
available to be called via Python.

Now you need to tell Nuke (in the menu.py file) to attach these icons to
the menu and Gizmo commands.

25. Open a Script Editor in the right-hand pane the same way you did at the beginning of this appendix.

NOTE

To locate your .nuke directory, refer to Chapter 12. Note that on a Mac, file and folder names that start with a period will be hidden.

26. Click the Load Script button, navigate to the .nuke directory, and
double-click menu.py.

You should have the following in your Input pane now:

toolbar = nuke.menu('Nodes')

userMenu = toolbar.addMenu('User')

userMenu.addCommand('Shape', "nuke.createNode('Shape')", '^+z')

NOTE

The last line should be typed as one line. And remember, code is case sensitive.

27. Change the second and third lines so they look like this:

toolbar = nuke.menu('Nodes')

userMenu = toolbar.addMenu('User', icon='user_icon.png')

userMenu.addCommand('Shape', "nuke.createNode('Shape')", '^+z', icon='Shape_icon.png')

The added icon arguments tell Nuke to use the icon file whose name
appears between the apostrophes. You don’t need to include a full
directory path to the icon file name, since Nuke simply looks for the
icon in the .nuke directory.

28. Click the Save Script button.

29. In the File Browser, navigate to your .nuke directory and click the
menu.py file.

30. Click Save. When asked, approve overwriting the existing file by
clicking OK.

31. Close Nuke and open it again to load the menu.py.

You should now have icons associated with your toolbox and Gizmo
(FIGURE A.9). Hurrah!

FIGURE A.9 Successfully adding icons to the user interface

Other uses for menu.py


You can do many other things with menu.py. Items placed in menu.py are
loaded when Nuke starts up, so this is a convenient place to add Python
commands that create features you need all the time. There’s a lot more
about this in the Nuke documentation, and I encourage you to have a
look.

First, you should know that you can add many more Gizmos to your
already created User menu. Do that by simply duplicating the third line
and changing the name of the Gizmo, the hot key (if you want any), and
the name of the icon file.
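Once you have more than a couple of Gizmos, a small loop keeps menu.py tidy. The following is a sketch of such a menu.py fragment, runnable only inside Nuke; MyGlow and its icon file are hypothetical names standing in for your own Gizmos:

```python
# menu.py fragment (runs only inside Nuke, where the nuke module exists).
toolbar = nuke.menu('Nodes')
userMenu = toolbar.addMenu('User', icon='user_icon.png')

# One entry per Gizmo: (name, hot key or None, icon file).
# 'MyGlow' and 'MyGlow_icon.png' are hypothetical examples.
gizmos = [
    ('Shape', '^+z', 'Shape_icon.png'),
    ('MyGlow', None, 'MyGlow_icon.png'),
]
for name, hotkey, icon in gizmos:
    command = "nuke.createNode('%s')" % name
    if hotkey:
        userMenu.addCommand(name, command, hotkey, icon=icon)
    else:
        userMenu.addCommand(name, command, icon=icon)
```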

But that’s just skimming the surface. You can add many more things to
the menu.py file. One example is adding a format to the default format
list. By adding a line like the following, you add a format called 720p to
the format list with all the right sizes. Just add this line at the bottom of
your menu.py (you can add a couple of spaces to separate the Gizmo code
and the format code) and then choose File > Save:

nuke.knobDefault("Root.format", "1280 720 0 0 1280 720 1 720p")


Index

NUMBERS
2D images, viewing as 3D objects, 266–271

2D tracking, 139–144. See also Tracker node

3D engine, 264–266

3D node trees, 271–276

3D objects. See also objects

converting to 2D pixels, 277–280

creating, 266–268

moving in space, 274–276

moving in time, 276

3D render, bringing in, 63. See also rendering

3D scenes

2D compositing in, 334–341

camera element, 260, 268–271

geometry element, 260

importing Photoshop layers, 335–339

ScanlineRender node, 260

scene element, 260

setting up, 261–262

3D toolbox, described, 7

3D tracking, explained, 138. See also Camera Tracker

3D view, accessing, 262

3D world, navigating, 263–264

8-bit color, explained, 100

32-bit color, explained, 101

720p format, creating, 317

A
Add math function, explained, 102
alpha channel, viewing, 25

animating shapes, 169–170

animation. See also keyframes; rendering


combining RotoPaint node with, 184–190

comparing images, 58–60

Animation menu, using, 41–42

autosaves, timing, 28

auxiliary passes, motion vector pass, 94–95. See also passes

B
background image example, 72. See also foreground image

beauty pass

building, 73

changing look of, 62

connecting Merge node, 73–77

described, 65

spitting tree, 77–83

transparent A input, 76

working down pipe, 73–77

Bézier hot keys, 167

black and white points, color matching, 119–125

bounding box, using, 67–68

buttons, creating with Python, 377–378

C

Cache menu, described, 4

camera, animating, 324–327

camera element

described, 260

using, 268–271

camera projection

3D view, accessing, 324–327

Alembic import dialog, 317–318

aligning clone brush, 328

building scenes, 316–323

camera location, 320

cloning nodes, 331–333

connecting setup to geometry, 321

creating 720p format, 317

output of Viewer Capture, 326

Project3D node in tree, 321

ReadGeo nodes, 318

rendering scene, 342

SphericalTransform for sky, 329–331

texture for building, 322

transformed sphere, 330

tweaking texture, 327–329



working setup, 323

Camera Tracker. See also 3D tracking; reflection

aligning scenes, 299–302


connecting Viewer, 300–301

Create stage, 287, 297–298

creating scenes, 297–298

selecting points, 301

Solve stage, 287, 292–297

Track stage, 287

using, 287–291

Camera Tracking, explained, 138

CameraTracker node

availability, 286

features, 288–291

loading pre-generated, 298

Mask input, 290

properties, 290

Settings tab, 288

using, 286–287

Card node, creating, 307–308

CGI (Computer Generated Imagery)

explained, 61

over live background, 85

Channel buttons, identifying, 64

channel sets

Col, 65, 81

depth, 65

explained, 62

GI, 65

ID1-ID5, 65

Lgt, 65, 81

Mv, 65

Nrm, 65

Ref, 65

RGBA, 62, 65

Spc, 65

SSS, 65

viewing channels for, 64

viewing with Viewer, 63–65

Channel toolbox, described, 7

channels, defined, 62

clips, playing, 15–16

Clone brush

aligning, 328

using with RotoPaint, 180–182



cloning nodes, 331–333

Cmd versus Ctrl key, 6

Col channel set, 65


Col pass, adding, 81

color

8-bit, 100

32-bit, 100

dynamic range, 102–106

float, 100

normalized values, 101

color correcting

images, 36

portions of images, 134–136

color correction

explained, 99

functions, 102

setting interface for, 131

color matching

black and white points, 119–125

practicing, 115

color operations, studying I/O graphs, 106–112

Color Picker, 39–41

color render pass, 65

Color toolbox, described, 7

color wheels, opening, 127

ColorCorrect node, using, 128–134

ColorLookup, creating curves with, 112–114

colors, sampling, 39, 87

composites. See also process trees; rendering

starting, 18

viewing without rendering, 52

compositing, defined, 23

compression, changing for rendering, 45

ContactSheet node, 65–66

Content menu, 3–6

contextual menus, using, 6

Contrast math function, explained, 102

cross-platform, explained, 6

Ctrl versus Cmd key, 6

Curve Editor

accessing, 171

loading curves onto, 41

locating, 2–3

navigating, 173

panel, 3

using, 171–174

using with animations, 54–55

curve interpolation, 172

Curve tool, using, 119–122

curves

creating points on, 173

creating with ColorLookup, 112–114

D
DAG (Directed Acyclic Graph). See Node Graph

depth/z render pass, 65

depth channel set, 65

depth of field, adding, 95–98

depth pass, using, 95–98

display resolution, scaling down, 239

Dope Sheet

locating, 2–3

timing of elements, 251

Dope Sheet panel

described, 3

using with RotoPaint node, 190–195

Dots

using in splitting trees, 77–83

using to organize nodes, 79

using to organize trees, 96

downloading Nuke, x

Draw toolbox, described, 7

dust removal, 182–184

dynamic range, 102–106

E
Edit menu, described, 4

Environment light source, using, 308–312

Eraser tool, using with RotoPaint node, 182

Error Console panel, described, 4

expressions, linking properties with, 69–71

EXR file format, 62

F
File Browser

selecting files in, 20–21

using, 9–11


File menu, described, 4

file sequences, naming, 43–46


Filter toolbox, described, 7

Flare node, using, 142–144

float color, explained, 101

floating Properties panel, closing, 134

flows. See process trees

foreground image, repositioning, 29

foreground over background, 71–73

fps field, displaying in Viewer, 16

frames, advancing, 54

Furnace Core toolbox, described, 7

G
gamma, explained, 101

Gamma math function, explained, 102

Gamma property’s knob, example, 39

garbage matte, creating, 216

GI channel set, 65

Gizmos

All Plugins submenu, 371

building trees, 345–347

converting to groups, 373

copying, 372

creating Radial nodes, 353–355

defined, 344

NoOp node, 347

shape creation tools, 344–345

Switch node for branches, 355–359

testing, 371–373

turning groups into, 370–373

User Knobs, 347–353

wrapping in groups, 359–368

global illumination render pass, 65

Grade node

using, 115–119

using for color correction, 36

Group nodes, 360–367

Group Output panel, 359–368

groups, using with Gizmos, 370–373

GUI (graphic user interface), 2–6

H
Help menu, described, 4

hi-res stereo script. See also scripts


nodes as text, 243–244

proxies, 238–242

setting formats, 232–233


setting up, 230–232

stereo views, 236–238

using LUTs, 234–236

hot keys

adding with Python, 378–379

Bézier points, 167

color sampling, 87

keyframe selection, 171

Merge node, 23

for playing in Viewer, 16

saving scripts, 63

using, 6

Viewer navigation, 13

HueCorrect, spill suppressing with, 219–224

HueKeyer, 199–204

I
IBK (Image Based Keyer), 199–200, 204–210

icons, creating and adding, 380–382

ID passes render pass, 65

ID1-ID5 channel set, 65

Image toolbox, described, 7

images

color correcting, 36

comparing, 58–60

compositing, 23

controlling premultiplication, 26

displaying resolution for, 22

finding darkest pixels in, 120

finding lightest pixels in, 120

merging, 23–29

merging premultiplied, 25–26

previewing before importing, 10

X and Y resolution, 62

installing Nuke, ix–x

I/O graphs, studying, 106–112

K
keyboard shortcuts. See hot keys

Keyer nodes

creating garbage matte, 216

dilates, 217–219

Erode tools, 217–219



explained, 200

spill suppressing, 219–224

Keyer toolbox, described, 7


keyframe track, starting, 151

keyframes. See also animation

creating, 41

identifying, 54

markings in Timebar, 54

selecting, 171

Set Key option, 53

user-generated for tracking, 148–152

using in animation, 52–60

keying nodes, 199–200

keying terminology

chroma, 198

color difference, 198

difference, 198

luma, 198

Keylight node

described, 200

explained, 200

using, 210–215

Keymix node, compositing with, 186–187

knobs, using, 38–39

Kronos node, availability of, 248

L
LayerContactSheet, using, 66

layers. See channel sets

Layout menu, described, 4

layouts

restoring, 6

saving, 5

Lgt channel set, 65

Lgt pass, adding, 81

Lift math function, explained, 102

light render pass, 65

light source, Environment, 308–312

linear color, explained, 101

Lookup math function, explained, 102

LUTs (lookup tables), using, 234–236

LZW compression, using, 45

M
Mask input

external image, 89–91



using, 89

using for color correction, 134–136

masking down pipe, 91–94


matte, reading as image, 46–50

MaxLumaData tab, sections of, 120

menu bar, options, 4

menu.py file, using, 379–382

Merge node

alpha channel, 25–26

connecting for beauty pass, 73–77

creating, 23, 48

do’s and don’ts, 26

foreground image, 25

inserting node into pipe, 49

matte image, 47

Operation drop-down menu, 49

RGB channel, 25–26

using with RotoPaint node, 178

viewing red channel, 47

Merge toolbox, described, 7

merging images, 23–29

Metadata toolbox, described, 7

midtones, matching by eye, 125–128

modifier keys. See hot keys

motion blur

adding, 94–95

enabling for RotoPaint node, 179

motion vector render pass, 65

Multiply math function, explained, 102

Mv channel set, 65

N
New Viewer panel, described, 4

Node Graph

locating, 2 – 3

panel, 3

snapping elements in, 79

nodes

arranging, 33

branching, 30–31

cloning, 331–333

connecting, 23, 32

connecting to Viewer inputs, 14–15

ContactSheet, 65

creating, 7–8, 30–31

deleting, 33–34

deselecting, 32–33

disabling, 28, 33–34

downstream, 18

File Browser, 9–11

Flare, 142–144

Grade, 115–119

indicators on, 55–56

input, 6

inserting, 29–31

inserting into pipes, 49

Keymix, 186–187

masking input, 6

naming, 6, 8

organizing with Dots, 79

output, 6

pasting from text, 244

Read, 8–9

replacing, 30–31

representing, 6

resetting properties for, 35

selecting, 32–33

Shuffle, 50–51

ShuffleCopy, 84–85

as text trick, 243–244

Tracker, 139–142

upstream, 18

Write, 42–43

Nodes Toolbar panel

described, 3

identifying, 2

toolboxes in, 7

normals render pass, 65

Nrm channel set, 65

Nuke

downloading, x

getting trial license, x


installing, ix–x

media files, xi

technical requirements, x–xi


.nuke directory, locating, 371

NukeX, features of, 286

O
objects. See also 3D objects

applying materials to, 281–284

connecting to scenes, 281

OFlow node, using, 248–251

OpenEXR file format, 62

Other toolbox, described, 7

P
panel name, locating, 3

panels, identifying, 3–4

panes, splitting interface into, 3

panes and panels, identifying, 2

panorama image, 305, 310–311

particle system, importing, 271–274

Particles toolbox, described, 7

passes, manipulating, 86–88. See also auxiliary passes

perspective movement, tracking, 145

Photoshop layers, importing, 335–339

pipes

defined, 11

inserting nodes into, 49

Pixel Analyzer panel

described, 3

using, 122–125

playing clips, 15–16

PositionToPoints node, using, 281–282

premultiplication

controlling, 26–27

restriction for color correction, 26, 36

Primatte node, explained, 200

process trees. See also composites; stereo trees

controlling timing in, 190–195

creating, 19–22

Dots, 77

end of beauty pass build, 83

example, 18–19

explained, 18

keeping organized, 33

node anatomy, 20

organizing, 53, 91

picking passes, 80

RGBA channel, 78

splitting, 77–83

unpicking through, 224

unpremultiplying passes, 78

processing, speeding up, 67–68

Progress Bars panel, 4–6

Project Settings panel

Frame Range, 250

LUTs (lookup tables), 227–229

nonlinear images, 227–229

Root tab, 226

Views tab, 230

properties

adjusting, 38–39

linking with expressions, 69–71

resetting for nodes, 35–36

Properties Bin

clearing, 38

displaying floating, 38

identifying, 2

locking, 38

Node Help button, 38

panel, 3

panels and buttons, 37

undo/redo functionality, 38

Properties panel

loading, 34

loading into Properties Bin, 38

removing from Properties Bin, 38

proxies

creating, 245–248

using, 238–242

Python

adding hot keys, 378–379

creating buttons with, 377–378

creating toolboxes, 377–378

customization setup, 375–376

menu.py file, 379–382

retaining customizations, 379–382

R

Radial nodes

creating, 135, 353–355

for masks and color correction, 136


raytracing, defined, 308

Read nodes

changing Write node to, 46

creating, 8–9

using, 20–22

Ref channel set, 65

reflection. See also Camera Tracker

cutting to size, 312–314

environment light, 308–312

loading script for, 303

ScanlineRender nodes, 304–306

reflection movement, calculating, 286–287

reflection render pass, 65

reflective surface, creating, 307–308

Render menu, described, 4

render passes, 65

rendering. See also 3D render; animation; composites

changing compression, 45

and comparing versions, 57

moving images, 43–46

naming file sequences, 43–46

process trees, 42–43

stereo trees, 255–258

using Write node, 42–43

resolution

defining, 232–233

displaying for images, 22

scaling down, 239

Retime node, using, 249

RGB sliders, using, 40

RGBA channel set, 62, 65

RolloffContrast, using, 105

RotoPaint node

animating shapes, 169–170

Bézier hot keys, 167

Clone brush, 180–182

combining with animation, 184–190

creating moving matte, 177–179

deleting strokes, 166

Dope Sheet panel, 190–195

drawing shapes, 166–169

dust removal, 182–184


editing shapes, 166–169

editing strokes, 164

enabling motion blur, 179


Eraser tool, 182

erasing strokes, 166

hiding shapes, 178

inserting, 327

Keymix node, 186–187

Merge node, 178

painting in vectors, 164–165

painting strokes, 162–163

Reveal brush, 183

sample script, 174–177

Selection tool, 161

Source Time Offset field, 183

stroke-drawing tools, 162

Stroke/Shape List window, 187–190

Tool Settings, 160–161

Toolbar, 160–161

S
Saturation math function, explained, 102

ScanlineRender nodes

cloning, 332–333

Reformat node, 306

using for reflection, 304–306, 309–310

Scope panel, described, 3

Script Editor panel

buttons, 376

described, 4

scripts. See also hi-res stereo script

learning from, 224

manipulating in text editors, 368–370

saving, 28–29, 63

settings. See Project Settings panel

shadows, setting Gamma properties, 132

shape creation tools, 344–345

shapes

animating, 169–170

drawing, 166–169

editing, 166–169

hiding in RotoPaint node, 178

Shuffle node, using, 50–51, 84–85

SideBySide node, using with stereo trees, 258

sky texture

adding, 329–331

color correcting, 333–334

slap comp

example, 73
explained, 72

sliders, adjusting, 38–39

source pins, adjusting, 154–158

Spc channel set, 65

specular render pass, 65

sphere, transforming, 330

SplitAndJoin node, using, 254

SSS channel set, 65

stereo project, 248–255

stereo trees, 255–258. See also process trees

stereo-view proxies, creating, 245–248

strokes

deleting, 166

editing, 164

erasing, 166

painting, 162–163

Switch node, using to choose branches, 355–359

T
TCL scripting language, using, 78, 80

text editors

manipulating scripts in, 368–370

using with Radial nodes, 353–355

text fields, use of TCL, 78

texture. See camera projection

Time toolbox

described, 7

using, 248–251

timing, controlling in trees, 190–195

TMI sliders

locating, 40

using, 133

toolboxes

creating with Python, 377–378

in Nodes Toolbar, 7

tools, saving groups of, 264–266

ToolSets toolbox, using, 7, 264–266

Tracker node, 138–141. See also 2D tracking

trackers, selecting, 150

tracking

adding keyframe boxes, 150

adjusting source pins, 154–158



clear buttons, 148

explained, 137

four points, 145, 147


improving, 148–152

perspective movement, 145

pictures in frames, 146–148

replacing picture, 152–154

rotational value, 145

scaling, 145

typing expressions, 155

using Traffic Lights button, 149

tracking points, listing in Properties panel, 147

tracks

improving, 152

stopping, 148

Transform node, 29–30

Transform toolbox, described, 7

transformation controls, explained, 35

trees. See process trees

U
UI (user interface), 2–6

Ultimatte node, explained, 200

undo/redo functionality, availability, 38

(Un)premult By property, using, 118

user interface, 2–6

User Knobs, creating for Gizmos, 347–353

V
vectors, painting in, 164–165

Viewer

components, 12

connecting nodes to, 11–12

fps field, 16

identifying, 2

inputs, 14–15

menu, 4

navigating, 13

playing clips, 15–16

process tree and pipes, 11

RAM cached frames, 16

using, 13–14

Views toolbox, described, 7

W
world position pass, using with 3D objects, 266–268

Write node, 42–43, 46
