Basics of Animation Notes

2) Visualizing the Story

BASIC RULES OF COMPOSITION


Composition is the art of arranging objects in a frame. There are actually shapes and alignments that people
find pleasing, but movie composition also needs to tell a story. The arrangement of your objects and actors
in a frame can add to your storytelling by emphasizing some and de-emphasizing others.
1) Video vs Still Photos
The rules of composition are the same for video and photos, but like everything else, it's complicated. Put
simply, the aspect ratio — the ratio of horizontal to vertical dimensions — of still camera images is different from
the aspect ratio of motion pictures.

2) Moving Frames…move

One other obvious thing that affects the composition of a movie frame is the fact that things are not only
moving in it but sometimes through it, and often the camera frame itself is moving.

3) The Rule of Thirds

Divide the screen into thirds with four lines — like a tic-tac-toe grid — and place your objects of interest
at the intersections of these lines. Some cinematographers are more rigid about it than others,
but it is quite possible to find a movie in which no shot deviates from this.
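To make the intersections concrete, here is a minimal sketch (assuming a hypothetical 1920x1080 frame; the numbers are illustrative only) that computes the four rule-of-thirds intersection points where objects of interest are typically placed.

```python
# Sketch: rule-of-thirds grid for a hypothetical 1920x1080 frame.
WIDTH, HEIGHT = 1920, 1080

# The two vertical and two horizontal dividing lines sit at 1/3 and 2/3.
xs = [WIDTH / 3, 2 * WIDTH / 3]
ys = [HEIGHT / 3, 2 * HEIGHT / 3]

# Objects of interest are usually placed near one of these four intersections.
intersections = [(x, y) for x in xs for y in ys]
print(intersections)  # [(640.0, 360.0), (640.0, 720.0), (1280.0, 360.0), (1280.0, 720.0)]
```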

4) The Eyes are the Important Thing

We look at people's eyes. We instinctively look at what other people are looking at. We have a deep down
belief that the person inside a body is accessible through the eyes — and this is universal.

5) Balance and Symmetry

Imagine your frame as a shadow box that you're putting items into and that sits on a fulcrum in the center.
Balancing the left and right sides normally gives off a feeling of harmony, and an unbalanced frame one
of tension.

6) Leading Lines

We follow lines like roads. Leading lines are usually imaginary lines that go from one object to another to
draw our attention from a main object of focus to a secondary one.

7) Depth of Field

The camera can emphasize what's important and de-emphasize other things by using depth of field (DOF)
— the area of a shot that's in focus.

CAMERA ANGLES
• THE AERIAL SHOT
It’s all in the name – this shot is filmed from the air and is often used to establish a location (usually
exotic and/or picturesque).

• THE ESTABLISHING SHOT


Again, it’s in the name – this shot is at the head of the scene and establishes the location the action is set
in, whilst also setting the tone of the scene(s) to come. It usually follows directly after an aerial shot in
the opening of films and is beloved by TV directors.

• THE CLOSE-UP (CU)


This is perhaps the most crucial component in cinematic storytelling and is arguably an actor’s most
important moment on camera. This shot is usually framed from above the shoulders and keeps only the
actor’s face in full frame, capturing even the smallest facial variations.

• THE EXTREME CLOSE-UP (XCU)


This shot is traditionally used in films and focuses on a small part of the actor’s face or body, like a
twitching eye or the licking of lips in order to convey intense and intimate emotions.

• THE MEDIUM SHOT (MS)


Also referred to as a ‘semi-close shot’ or ‘mid-shot’, this generally shoots the actor(s) from the waist up
and is typically used in dialogue scenes.

• THE OVER-THE-SHOULDER SHOT


This is where the camera is positioned behind a subject’s shoulder and is usually used for filming
conversations between two actors. This popular method helps the audience to really be drawn into the
conversation and helps to focus in on one speaker at a time. Seeing as the non-speaking actor is seen only
from behind, it’s common for major production sets to substitute actors with stand-ins or doubles for
these shots.

• THE LOW ANGLE SHOT


This shot films from a lower point and shoots up at a character or subject, making them appear larger so
as to convey them as heroic, dominant or intimidating. It’s also another way of making cities look empty.

• THE HIGH ANGLE SHOT


In contrast with the low angle shot, this one films from a higher point and looks down on the character or
subject, often isolating them in the frame. Basically the direct opposite of the low angle, it aims to portray
the subject as submissive, inferior or weak in some way.

• THE WIDE (OR LONG) SHOT


This shot normally frames the subject from the top of their head to their feet whilst capturing their
environment. It’s typically used to establish the setting of the particular scene – so similar to the
establishing shot, but focused more on characters and actors and the contextual relationship with their
surroundings.

• THE MASTER SHOT


Often confused with the establishing shot, this too, identifies key signifiers like who is in the shot and
where it’s taking place. However, unlike the establishing shot that has a tendency to focus more on
location, the master captures all actors in the scene and runs the entire length of the action taking place.
3) Character Design

Who is a Character Designer?


A Character Designer is an artist that creates new, original characters (sometimes shortened to “OCs”) for
a purpose. It can be a character based on a definition from a story or script as would be the process in
Feature Films, TV Series, Video Games, Children’s books, Web animation, Comic books, Comic strips, or
even Licensing or Toy Design.

Rules of Character Design:


1) Archetypes
2) The character is always in service to the story
3) Ethnicity
4) Originality
5) Shapes
6) Reference
7) Aesthetic

A) Archetypes –
i) The Hero – When dealing with Character Design, always remember that the character
exists as a result of the story. The story will dictate that you need a hero. The hero is defined
as someone who is brave, selfless, and willing to help others no matter what the cost.

ii) The Shadow – The Shadow character is the one who is connected the most with our
instinctual animal past. He or she is perceived as ruthless, mysterious, disagreeable and
evil.

iii) The Fool – The fool character is the one who goes through the story in a confused state and
inevitably gets everyone into undesirable situations. The fool is in the story to test the main
character. How that character deals with the actions of the fool tells us a lot about that
person.

iv) The Anima/Animus – The anima is the female counterpart to the male, and the animus is
the male counterpart to the female. This character embodies the male and female urges.
The anima/animus characters exist to draw you into the story.

v) The Mentor – the mentor plays a key role in making the protagonist realize his or her full
potential and is often portrayed as an old man or woman, this is because most cultures
associate age with having wisdom. The mentor takes on many of the characteristics of a
parent.

vi) The Trickster – The trickster character is the one that is constantly pushing for change. The
trickster can either be on the side of good or the side of evil.

B) Story – A character is always in service to the story


i) Who? - Who is the character in question? Who are we talking about in this character
summary?

ii) What? - What does this character do in the story?

iii) When? - When does this story take place?

iv) Where? - Where does the story take place?

v) Why? - Why is the character motivated to do what he does in the story?

vi) How? - How does your character do what he does? Sometimes the answer to this question
overlaps with the answer to the why question.

C) Ethnicity

If you want to create a character of color, should you start with their ethnicity first or should you start
with the character? Consider two people approaching character design in these two separate
ways. The first person comes up with the personality first, then gives the character an appearance and an
ethnicity, then sets the world around them. This is a valid way of creating the world, but it’s not the only
way.
Sometimes you decide the character’s ethnicity first. Consider a second person who has a story about a
Middle Eastern character. He doesn’t know much about her (or Middle Eastern cultures, for that
matter), so he wants to learn more about the culture, and researching the ethnicity lets him
know more about her character. If he had written the story before doing his research, he might have found
that his character behaved in a way that was counter to her culture. If you are going to use a
culture that already exists, it may make more sense to research the culture first, then create the characters
within the confines of their culture.

D) Originality

To be original, you have to come up with something no one has ever done before. The reason it is so hard
to be original is because humanity has been on this planet for quite some time, and during that time a huge
number of ideas have been thought of and brought to fruition. Those creations are what inspired generations
of people such as artists, architects, actors, writers, musicians, and many others to create the great things
they did. So if you look throughout history, you will see that most, if not all, ideas have already seen the
light of day. Now I am not saying that there will never be an original idea ever again (I just hope I am
around to witness it), but I’m just saying it is going to be extremely difficult. So you shouldn’t be obsessed
with creating anything, let alone a character, with the assumption that it absolutely has to be an original
concept.

E) Shapes and Silhouettes

i) Square – Stability, Trust, Honesty, Order, Conformity, Security, Equality, Masculinity

ii) Circles – Completeness, Gracefulness, Playfulness, Comforting, Unity, Protection, Childlike


iii) Triangles – Action, Aggression, Energy, Sneakiness, Conflict, Tension

F) Reference

Reference is defined as “an act or instance of referring.” I don’t know about you, but when the definition
of a word uses the word to explain its meaning, I don’t think that’s much help. Reference for character
designers can be defined as “the ability to observe from life or from a photograph to ensure that what is
being portrayed is visually correct.”

G) Aesthetics / Style

What can make or break a character is the style or general aesthetic used in its creation. Adding to what
we’ve established before about simple shapes as a starting point for character design, the style of a character
comes from the way in which the shapes that compose it blend together in a visually stimulating
manner. Contrast of shape, form or proportion is a great way to balance shapes and make your character
interesting.

Ages 0–4 Characters have really big heads and eyes, short bodies, bright colors, and simple shapes.
Ages 5–8 Characters still have big heads but less so than characters for the 0–4 age group. Their eyes are
smaller, the colors are a bit more muted, and the shapes are more intricate.
Ages 9–13 Characters are pulling away from the simplistic. They resemble more believable proportions.
The colors are more realistic and have a lot more details.
Ages 14–18 Characters resemble the real world. They are properly proportioned. The colors are more
complicated, and they have the most amount of detail.

The color wheel shows the primary, secondary, and complementary colors. Complementary colors are
directly across from each other, so red is the complementary color of green, and blue is the complementary
color of orange, and so on.
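As a rough illustration of "directly across the wheel", here is a sketch using Python's standard colorsys module: rotating a colour's hue by 180 degrees (0.5 in normalized hue) returns its complement. Note that this models the digital RGB/HSV wheel, which pairs red with cyan, whereas the traditional painter's wheel described above pairs red with green.

```python
import colorsys

def complement(r, g, b):
    """Return the complementary colour by rotating the hue 180 degrees."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    r2, g2, b2 = colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)
    return round(r2 * 255), round(g2 * 255), round(b2 * 255)

print(complement(255, 0, 0))  # pure red -> (0, 255, 255), i.e. cyan on the RGB wheel
```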
The color red generally evokes feelings of action, confidence, courage, vitality, energy, war, danger,
strength, power, determination, passion, desire, anger, and love.
The color yellow generally evokes feelings of wisdom, joy, happiness, intellect, caution, decay, sickness,
jealousy, cowardliness, comfort, liveliness, optimism, and feeling overwhelmed.
The color blue generally evokes feelings of trust, loyalty, wisdom, confidence, intelligence, faith, truth,
health, healing, tranquility, understanding, softness, knowledge, power, integrity, seriousness, honor,
coldness, and sadness.
The color purple generally evokes feelings of power, nobility, elegance, sophistication, artificial luxury,
mystery, royalty, magic, ambition, wealth, extravagance, wisdom, dignity, independence, and creativity.
The color green generally evokes feelings of nature, growth, harmony, freshness, fertility, safety, money,
durability, luxury, optimism, well-being, relaxation, honesty, envy, youth, and sickness.
The color orange generally evokes feelings of cheerfulness, enthusiasm, creativity, fascination, happiness,
determination, attraction, success, encouragement, prestige, illumination, and wisdom.
The color black generally evokes the feeling of power, elegance, formality, death, evil, mystery, fear, grief,
sophistication, strength, depression, and mourning.
The color white generally evokes the feeling of cleanliness, purity, newness, virginity, peace, innocence,
simplicity, sterility, light, goodness, and perfection.
4) World Building
World-building is so much more than just a framing device. It’s the very essence of any good fantasy or
science fiction story, and the basis of a sense of place in other genres. Good world-building lends an
immersive richness to your writing, while also giving readers the information they need to understand
characters and plot lines.
So, how exactly should writers go about building worlds in their fiction? To find out, we’ll break down
the concept of world-building into three main categories:
▪ Imaginary worlds – the construction of entirely fictional universes, found primarily in fantasy genres.
▪ Alternate reality – re-imaginings of the details of our existing world; popular with writers of science
fiction.
▪ Actual locations – the invocation of a real place in the world, utilized in novels with no elements of the
fantastic.
Let’s begin by entering the wondrous realm of fantasy fiction.

IMAGINARY WORLDS
Creating an imaginary world is one of the most complex types of world-building. It’s most often utilized
in fantasy and science fiction, where a writer conjures up from scratch every detail of a world:
geography, history, language, lore, characters, social customs, politics, religion…

Deciding on a starting point


J. R. R. Tolkien, author of Lord of the Rings, The Hobbit and countless other classic works, began the
development of Middle-earth in an unusual way: by first creating an entire fictional language to be
spoken by his characters. A professional philologist and talented linguist, Tolkien developed the elvish
language of Quenya, using it as a base for expanding his imaginary world into the vast, detailed, lore-rich
Middle-earth we know today.

Asking questions about your world


After you’ve got the ball rolling by establishing a starting point, you’ll need to begin working out the
details that make up a convincing, consistent imaginary world. A great way to start doing this is to ask
(and answer) a set of questions pertaining to the different aspects of your world.

Approach this exercise as if you were describing your home country to someone who knows nothing
about it – or, on a larger scale, as if you were introducing Earth to someone from an alien race. How
would you explain:

▪ What it looks and feels like – its landscapes, its climate?


▪ Its people – their appearance, customs, ethics and values?
▪ The dominant forces that shape change and development?

Drawing inspiration from real life


Even though imagining an entirely new world is one of the most creative processes a writer can
undertake, it’s almost impossible to create something entirely from nothing. Naturally – even if only
subconsciously – you will adapt and incorporate some real-world elements into your imaginary setting
and story, using them as a base of inspiration.
Where reality and fantasy collide
An interesting way to provide contrast or conflict within your story is by developing your fictional world
alongside, or within, an established location – for example, right here on Earth. Perhaps the most
famous example of this reality/fantasy cross-over is J. K. Rowling’s Harry Potter series, which involves an
entirely made-up world of magic that’s hidden away on modern-day Earth. Harry himself, as the main
character from whose point of view the story is presented, is as much a stranger to this world as readers
are in the beginning. We’re introduced firsthand to every person, place, detail and experience right
alongside Harry; as he journeys into this magical new world and has his questions answered, so, too, do
we as readers.

ALTERNATE REALITY
Similar to the creation of an imaginary world, but slightly less demanding due to the existing base you
have to work with, the construction of an alternate reality is a type of world-building often found in
dystopian, speculative and science fiction. By creating an alternate reality, you are developing an
alternative version of our own Earth, imagining how things could be different and posing questions
about what these differences would mean for humanity. Authors often use this style of writing to
express their thoughts about the flaws of humanity and today’s world, exploring the consequences
these flaws may have the potential to produce.

What if?
As a writer imagining an alternate reality, the most important question you can ask is ‘What if?’ This is
the base-level query on which your story’s entire premise should be founded, and upon which you will
build the individual elements of your world. For example:

▪ What if a particular, important historical event had never happened?


▪ What if our planet and its inhabitants had evolved differently?
▪ What if a fundamental aspect of life as we know it was to change suddenly?
▪ What if we invented new technology that could accomplish wonderful/terrible things?
▪ What if we could visit or communicate with other life forms (or vice versa)?
Belief and disbelief
A key difference between creating an alternate reality and creating an imaginary world is the suspension
of disbelief you can expect from your readers. The imaginary worlds of fantasy and science fiction we
examined above – Westeros and Essos, the kingdoms of Middle-earth – imagine an entirely new world,
quite unrelated to our own. Due to this complete removal from reality, readers automatically enter with
a higher level of tolerance for things that may otherwise have jarred the story’s logic or lifted them out
of the moment. Readers would never think to question, for example, the fact that a single magic ring has
the power to rule the world; they’re also less likely to query the fact that a teenage girl is slowly
conquering a kingdom when there are dragons involved in the equation!

Past, present or future?


One of the best ways to begin establishing your alternate reality is by clarifying the time period in which
you want it to be set. When will your story take place in relation to the real world: the past, the present, or the future?
Each of these approaches has its own advantages and benefits, and each lends itself well to different
purposes. You’ll need to decide which of these settings best serves the particular story you want to tell.
Know your building blocks
In order to portray a compelling alternate version of the world, you must first be well-versed in the facts
of the real version. Whether you’re setting your alternate reality in a real-world location or reimagining
a historical event, learn all you can about its real counterpart and incorporate your knowledge into your
new interpretation.

The end of the world as we know it


Dystopian and post-apocalyptic fiction has seen a meteoric rise in recent years, especially in the young
adult sub-genre. The Hunger Games trilogy, perhaps the most popular of the bunch, has spawned a
plethora of novels set in reimagined versions of Earth, and it’s easy to see why. Imagining a global-scale
apocalyptic event gives you the freedom to predict how humanity might rebuild itself after such a
disaster – a meaty subject to tackle.

Cultures
Pick three traits
So you want to create a culture? Start by identifying the three traits they value in a person. A culture based
on trade might value shrewdness, frugality, and trustworthiness. Or they could place higher value on
gregariousness, hard work, and reliability. A people whose history has been defined by frequent wars
might value bravery, strength, and conformity. Ancient Sparta placed immense value on stoicism, discipline, and
duty, whereas the Viking culture prized ferocity, physical prowess, and daring. But both would be
considered “warrior cultures.” That bland label doesn’t begin to account for the stark differences in the
two cultures, but cultural values make the distinction much easier to understand.

Prioritize values
A priority of values underlies all cultural judgments. “Culture X are a bunch of barbarians.” “Culture Y?
Those effete money-grubbers?” “I wouldn’t trust Culture Z with my horse’s reins, let alone my bank
ledgers.”

A hero in one culture (A) may be a hero to the people in the next kingdom over (B). But on the far side,
his other neighbors (C) consider him a despicable villain. The hero in question? He killed his king to spare
the prince an unjust execution. The people of A value honor above loyalty, and thus the knight was in the
right to do what he had to in order to save the prince from his unjust fate. The people of B, looking on
from the next kingdom over, value boldness and honor most highly, and nod their heads in approval that
the knight chose to slay a dishonorable ruler rather than carry out such an order. The people who live in
C are aghast, as loyalty and humility are paramount to them. Even though they place value on honor, they
don’t see it as the knight’s place to value his own judgment over his king’s, even if he personally disagrees.

Outliers, personal values, and conflict


Of course, fiction would be boring if cultures were monolithic. While these broad-brush approaches may
work over the span of a county, or even a neighborhood, individuals will always carry their own personal
values around. Most of the time, they will fall in line with the local culture, but fiction thrives on the
outcast, the non-conformist, and the rebel. Consider a culture that values honesty above loyalty, and the
boy who decides to help his friend steal to feed his family. While the starving thief can be understood, if
not condoned, the helper (who was in no danger himself) is vilified for his crime.
Government
Monarchy
The old fantasy standby. It has a well-documented history in real life to draw on and a simple, figurehead-focused
structure that can be delved into in great detail should the need arise. Most readers will have a feel for
how lords, princes, and princesses fit in, and where the peasants rank in the grand scheme (which is to
say: not at all, right up to the point they gather in vast numbers and revolt). Monarchies can be portrayed
as either benevolent or oppressive, usually but not exclusively in line with the disposition of the monarch.

Democracy
Flipping to the other end of the spectrum, a perfectly conceived democracy gives all the power to the
people to determine their own rulers. Most systems will make tradeoffs to allow for a more efficient
selection system (democratic republics), but generally some form of democracy is viewed as necessary to
maintaining a free, socially just country (as determined mainly by people who already live in a democracy,
of course).

Theocracy
Theocracy really just means rule by religion. In practice it can take on aspects of many other forms of
government.

Anarchy
While it provides little to nothing in the way of structure, anarchy offers plenty of opportunity for chaos
to drive a story. Various warlords, politicians, and/or deposed nobles can be vying for power, or the region
might just descend into general chaos. Usually this form of governmentlessness is temporary. People tend
to want order and direction, for safety if nothing else.

Bureaucracy
“The bureaucracy is expanding to meet the needs of the expanding bureaucracy.”
This is one of my favorite quotes included in Civilization IV, and they didn’t know its origin (probably lost
in a manila file folder somewhere). At some point, a government can become so bloated with the
instruments of rulership that the bureaucracy actually becomes the de facto ruler.

Council
This is a hybrid form of monarchy and democracy where a small group, chosen for their wisdom (or the
perception of their wisdom), appoints a ruler. This ruler is often beholden to the council for approval of
controversial decisions. It can also be a form of republican government where a small group of appointees
each represent the interests of a much larger constituency.

Communism
This is another hybrid form of government, split between democracy and bureaucracy. In an attempt to
do what’s best for the people, the people (and their uninformed decisions) are removed from the
equation.
6) Static to Moving Images - Application: Animation is Everywhere

Initially, when animation started on paper (2D animation), people perceived it as restricted to
drawing and cartoons. As the industry matured and computers were introduced, the
perception changed to flashy, photo-realistic or cartoonish productions for web games, movies, video
games, etc.
Today, animation has convinced professionals from various fields that it should not be restricted to a
skillset but should be used as a medium of expression or communication. For instance, when you use
animation in education, it can be used to explain theory and concepts to students in a more convincing
manner.
In truth, animation is used in a variety of industries away from the big screen or consoles. Computer
animation is a very practical tool with useful applications in a variety of fields. Below are some
industries which use animation, and most of them are not just related to the media & entertainment
sectors.
• Films, Media and Entertainment
• Games and Visual effects
• Medical visualization
• Architectural visualization
• Mechanical Animation
• Forensic Animation
• Education and training
• Journalism

Games and Visual effects


In the gaming industry most things depend on animation. Without models a programmer cannot
program a game, and everything in the scene (modeling, texturing, rigging, lighting and so on)
is done by animation students or professionals. Similarly, a lot of content in films is also
made using animation processes. For example, the fantasy worlds in films like Avatar, Alice in
Wonderland and The Avengers, as well as creatures and characters like the dragons from Game of Thrones,
the Hulk and Spider-Man, are made using animation processes.
Medical Animation
A medical animation is a short educational film, usually based around a physiological or surgical topic,
rendered using 3D computer graphics. While it may be intended for a variety of audiences, medical
animation is most commonly utilized as an instructional tool for medical professionals or their patients.

Architecture Visualization
Architectural Animation is a short architectural movie or images created on a computer. A computer-
generated building is created along with landscaping and sometimes moving people & vehicles.
Mechanical Animation
Using computer modeling and animation to create virtual models of products and mechanical designs
can save companies thousands to millions of dollars, by cutting down on development costs. Working
in a virtual world can let developers eliminate a lot of problems that would normally require extensive
physical test models & experimentation.

Forensic Animation
Forensic animation is a branch of forensics in which animated recreations of incidents are created to
aid investigators & help solve cases. Examples include the use of computer animation, stills, and other
audio-visual aids.

Animation in Education and training


Animation has recently become a popular tool in classroom teaching and learning. The book, Learning
with Animation (2007), notes that the use of animation can actually increase interest & motivation in
learning.
Many companies & production houses have started producing training content in the form of animation.
As the training & education industry is massive and the content delivered is huge, there is a great
demand for content taught with the help of animation.
Animation in journalism
Journalism is a sector which uses animation and motion graphics extensively as a medium of storytelling
in a comical way. It not only tells a story convincingly but can also communicate very complex political
issues because of the flexibility of the medium.

Similarly, animation is also used in various other industries. The scope of a career after doing an
animation course is therefore unlimited. With the right skill & training, you can hold a creative job in
any industry today.
07) Creating Stop Motion Animation

Intro to Animation:
STOP-MOTION ANIMATION
Stop motion is an animation technique that physically manipulates an object so that it appears
to move on its own. The object is moved in small increments between individually
photographed frames, creating the illusion of movement when the series of frames is played as
a fast sequence. Dolls with movable joints or clay figures are often used in stop motion for
their ease of repositioning. Stop motion animation using plasticine is called clay animation or
"clay-mation". Not all stop motion requires figures or models; many stop motion films can
involve using humans, household appliances and other things for comedic effect. Stop motion
can also use sequential drawing in a similar manner to traditional animation, such as a flip
book.
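As a sketch of how "individually photographed frames" become a moving sequence, the snippet below (assuming the OpenCV library and a hypothetical frames/ folder of numbered JPEGs) assembles captured photos into a clip at 12 frames per second, a common stop-motion rate.

```python
import glob
import cv2  # OpenCV, assumed to be installed

frame_files = sorted(glob.glob("frames/*.jpg"))   # hypothetical captured photos
first = cv2.imread(frame_files[0])
height, width = first.shape[:2]

# 12 fps: each pose is effectively held for two frames at the 24 fps film rate.
writer = cv2.VideoWriter("stopmotion.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 12, (width, height))
for path in frame_files:
    writer.write(cv2.imread(path))   # every photo becomes one frame of the clip
writer.release()
```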
Types of Stopmotion Animation:
Clay Animation: Clay animation or claymation is a form of stop motion animation where each
animated piece is deformable, allowing the objects to be shaped and moved as the animator
desires. The animator takes pictures frame by frame while simultaneously moving the objects
bit by bit.
Technique: Each object or character is sculpted from clay or other such similarly pliable
material as plasticine, usually around a wire skeleton called an armature, and then arranged on
the set, where it is photographed once before being slightly moved by hand to prepare it for
the next shot, and so on until the animator has achieved the desired amount of film. Upon
playback, the mind of the viewer perceives the series of slightly changing, rapidly succeeding
images as motion.
Sand Animation: Sand animation is the manipulation of sand to create animation. In
performance art an artist creates a series of images using sand, a process which is achieved by
applying sand to a surface and then rendering images by drawing lines and figures in the sand
with one's hands. A sand animation performer will often use the aid of an overhead projector
or lightbox (similar to one used by photographers to view translucent films). To make an
animated film, sand is moved on a backlit or front-lit piece of glass to create each frame.
Puppet Animation: Puppet animation is a development of object animation in which puppets are
used instead of ordinary objects. As a development of stop-motion animation, puppets were
introduced for their human-like qualities, which let directors pose them freely and easily to show
different movements. It works by taking pictures and moving the puppet slightly between frames.
Pinscreen Animation: Pinscreen animation makes use of a screen filled with movable pins,
which can be moved in or out by pressing an object onto the screen. The screen is lit from the
side so that the pins cast shadows. The technique has been used to create animated films with
a range of textural effects difficult to achieve with any other animation technique, including
traditional cel animation.

Object Animation: This is a form of stop motion animation that involves the animated
movements of any non-drawn objects such as toys, blocks, dolls, etc. that are not fully
malleable (unlike clay or wax) and are not designed to look like a recognizable human or animal
character.
Object animation is considered a form of animation distinct from model
animation and puppet animation, as those two forms of stop-motion animation usually use
recognizable characters as their subjects, rather than just objects like static toy soldiers, or
construction toys such as Tinker Toys, Lego, etc.
Cut-out Animation: Cut-out animation is a technique of producing animations using flat
characters, props and backgrounds cut out from various materials such as cardboard or stiff
photographs.
Pixilation Animation: Pixilation is one of the most difficult forms of stop-motion animation. In
this form live actors change their movement in each frame of the animation. They pose
while multiple frames are taken and the position changes slightly in each frame.
Paint-on-glass-animation: Paint-on-glass animation is a technique for making animated
films by manipulating slow-drying oil paints on sheets of glass. Gouache mixed
with glycerine is sometimes used instead. The best-known practitioner of the technique
is the Russian animator Aleksandr Petrov.
8) 2D Animation – 12 Principles of Disney Animation

12 Principles of Animation:
The Twelve Basic Principles of Animation is a set of principles of animation introduced by the Disney
animators Ollie Johnston and Frank Thomas in their 1981 book “The Illusion of Life”.

1.SQUASH AND STRETCH

Squash and stretch is what gives flexibility to objects. There’s a lot of squash and stretch happening
in real life that you may not notice; in animation this can be exaggerated. For instance, there’s a lot
of squash and stretch in the face when someone speaks, because the face is a very flexible area.

Squash and stretch can be applied in many different areas of animation, such as the eyes during
a blink, or the way a face squashes down and then stretches when someone is surprised or scared.
It is a great principle for exaggerating animation and adding more appeal to a movement. It is used
in all forms of character animation, from a bouncing ball to the body weight of a person walking.
This is the most important element you will be required to master, and it will be used often.
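A common rule of thumb when applying squash and stretch is to preserve the object's apparent volume: as a ball stretches vertically it must narrow horizontally. A minimal sketch of that bookkeeping (the scale values are hypothetical and not tied to any particular software):

```python
def squash_stretch(stretch_y, volume=1.0):
    """Given a vertical stretch factor, return (scale_x, scale_y) so that
    scale_x * scale_y stays constant, keeping the apparent volume the same."""
    scale_x = volume / stretch_y
    return scale_x, stretch_y

print(squash_stretch(1.5))   # stretched ball: (0.666..., 1.5)
print(squash_stretch(0.7))   # squashed ball:  (1.428..., 0.7)
```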

2.ANTICIPATION

Anticipation is used in animation to set the audience up for an action that is about to happen.
An easy way to think about this is that if a person needs to move forward, they first must move
back. For example, if a character is about to walk forward, they might move back slightly; this
not only gets their momentum up, but it also lets the audience know this person is about to
move. Or if a character is reaching for a glass on a table, they might move their hand back
before moving it forward.

3.STAGING

A pose or action should clearly communicate to the audience the attitude, mood, reaction or idea
of the character as it relates to the story and continuity of the story line. The effective use of
long, medium, or close up shots, as well as camera angles also helps in telling the story. There is
a limited amount of time in a film, so each sequence, scene and frame of film must relate to the
overall story. Do not confuse the audience with too many actions at once.

Use one action clearly stated to get the idea across, unless you are animating a scene that is to
depict clutter and confusion. Staging directs the audience's attention to the story or idea being
told. Care must be taken in background design so it isn't obscuring the animation or competing
with it due to excess detail behind the animation. Background and animation should work
together as a pictorial unit in a scene.
4.STRAIGHT AHEAD AND POSE TO POSE ANIMATION

Pose to pose works from the first and last key drawings and the ones in between. What is important
is that one doesn't lose the original proportions, volume and size, which is easier with pose to pose
as it is a more structured way of setting poses: the key drawings are noted down at intervals throughout
the action or scene. This allows better control over the technical side of the drawing and its action.
Straight ahead, drawn frame by frame from the start of the action, gives the scene a fresh, spontaneous
and more expressive feeling, as well as a more dynamic, fluid and realistic action sequence.

Pose to pose, on the other hand, can achieve quite dramatic and emotional results and a stronger
relation to the surroundings. Both are used in the animation process, though more so in traditional
animation, as computer animation has largely resolved straight ahead action by calculating the expected
path of the action. By knowing both of these principles one can have complete control over the
animation: pose to pose lets you work much more simply and ensures the posing and timing are correct
before adding more detail.

5.FOLLOW THROUGH AND OVERLAPPING ACTION

Follow through action is where nothing stops all at the same moment.

Overlapping action is when a part of an object or body is catching up (is one step behind the
primary action). An overlapping action moves on a different timing, while follow through
happens while the body has already stopped. Timing is essential: one can use timing to produce
a very realistic animation, or exaggerate it for a comical, cartoon-like effect.

6.EASE IN EASE OUT / Slow in slow out

As an action starts, because of inertia, the movement is slower, which means more drawings at the
beginning; the count of drawings then decreases through the middle of the action, since fewer drawings
make the action faster while more drawings make it slower. This principle helps to soften the action and
make it more realistic. It can also be used for surprise, exaggeration or shock effect, as it helps to catch
the right timing of a moment of an action.
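In computer animation the same idea is usually expressed as an easing curve that remaps linear time. Below is a minimal sketch of a cubic "smoothstep" ease in/ease out applied to a value animated over 24 frames; the function name and values are illustrative only.

```python
def ease_in_out(t):
    """Smoothstep: slow at the start and end, fastest in the middle (0 <= t <= 1)."""
    return t * t * (3 - 2 * t)

start, end, frames = 0.0, 100.0, 24
positions = [start + (end - start) * ease_in_out(f / frames) for f in range(frames + 1)]

# Consecutive positions are close together near the ends (many "drawings" where the
# action is slow) and far apart in the middle (few where it is fast).
print([round(p, 1) for p in positions[:5]])    # [0.0, 0.5, 2.0, 4.3, 7.4]
print([round(p, 1) for p in positions[-5:]])   # [92.6, 95.7, 98.0, 99.5, 100.0]
```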

7.ARCS

Arcs are a very natural occurrence of physics and motion. Arcs can be applied to almost anything as it
gives a better feeling of a realistic action.

The only exception might be something mechanical, which moves in straight lines. As speed increases, arcs
flatten out in moving ahead and broaden in turns, which means there needs to be enough action/time for a quick
movement to take place. Without arcs, animation looks stiff and robotic. Arcs also give the animation a
better sense of implied lines and the direction of a movement.
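One simple way to see the difference in code is to interpolate a position along a bowed path rather than a straight line. The sketch below (purely illustrative values) adds a sine-shaped lift that peaks mid-motion, turning a straight move into an arc:

```python
import math

def lerp(a, b, t):
    return a + (b - a) * t

def arc_path(start, end, lift, t):
    """Interpolate between two 2D points, adding a vertical 'lift' that peaks
    mid-motion so the path bows into an arc instead of a straight line."""
    x = lerp(start[0], end[0], t)
    y = lerp(start[1], end[1], t) + lift * math.sin(math.pi * t)
    return x, y

# Sample a hand moving from (0, 0) to (10, 0) over 24 frames.
for f in range(0, 25, 6):
    print(arc_path((0, 0), (10, 0), lift=3, t=f / 24))
```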

8.SECONDARY ACTION

This principle completes, supplements and enriches the main action. It adds more characterization to the
character and its movements, as well as reinforcing them. These additional movements should be small and
not distract from the main action. Secondary action should support the main action, not overpower it. It
makes the character more interesting and appealing. It is suggested to use secondary action much like
anticipation and overlapping action, before and after the main action. Even something as small as blinking
is considered secondary action, and it enriches the emotional state and mood of a character.

9.TIMING and SPACING

Timing and spacing in animation is what gives objects and characters the illusion of moving within the
laws of physics. Timing refers to the number of frames between two poses. For example, if a ball travels
from screen left to screen right in 24 frames that would be the timing. It takes 24 frames or one second
(if you’re working within the film rate of 24 frames per second) for the ball to reach the other side of the
screen.

• The spacing refers to how those individual frames are placed. For instance, in the same example
the spacing would be how the ball is positioned in the other 23 frames. If the spacing is close
together, the object moves slower; if the spacing is further apart, the object moves faster.
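The ball example can be written out directly: the timing is the 24 frames, and the spacing is where the ball sits on each of them. A minimal sketch using the same 24-frame, one-second timing with a hypothetical 100-unit travel distance:

```python
frames, distance = 24, 100.0        # timing: 24 frames = 1 second at film rate

# Even spacing: every step is the same size, so the ball moves at constant speed.
even = [distance * f / frames for f in range(frames + 1)]

# Uneven spacing with identical timing: steps start small and grow,
# so the ball eases out of rest and finishes fast.
slow_then_fast = [distance * (f / frames) ** 2 for f in range(frames + 1)]

print([round(p, 1) for p in even[:4]])            # [0.0, 4.2, 8.3, 12.5]
print([round(p, 1) for p in slow_then_fast[:4]])  # [0.0, 0.2, 0.7, 1.6]
```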

10.EXAGGERATION

Exaggeration is used to push movements further to add more appeal to an action. Exaggeration can be
used to create extremely cartoony movements, or incorporated with a little more restraint into more
realistic actions. Whether it’s for a stylized animation or realistic, exaggeration should be implemented
to some degree.

• If you have a realistic animation you can use exaggeration to make a more readable or fun
movement while staying true to reality. For example, if a character is preparing to jump off a
diving board you can push them down just a little bit further before they leap off. You can also
use exaggeration in the timing as well to enhance different movements or help to sell the weight
of a character.

11.SOLID DRAWING

This principle requires taking into account three-dimensional space, perspective, anatomy, volume,
shadow and light, balance and weight. The third dimension is movement in space, while the fourth is
movement in time. Some might confuse solid drawing with detailed and realistic drawing. If the main form
is correctly drawn, the details can be added without any problems by following the same rules.

12.APPEAL

Appeal is like charisma in an actor. A character who is appealing is not necessarily sympathetic; villains or monsters
can also be appealing. The important thing is that the viewer feels the character is real and interesting.

“Appeal is the pleasing and fascinating quality that makes a person enjoy what they are watching”.
10) 3D Animation – Techniques of Modeling and Texturing

Modeling
3D modeling (or three-dimensional modeling) is the process of developing a mathematical
representation of any surface of an object (either organic or inorganic) in three dimensions via
specialized software. The product is called a 3D model/3D asset.

Three-dimensional (3D) models represent a physical body using a collection of points in 3D space,
connected by various geometric entities such as triangles, lines, curved surfaces, etc.

Primarily there are three types of modeling -

• Polygonal modeling – Points in 3D space, called vertices, are connected by line segments, called
edges, to form polygon patches, called faces. The vast majority of 3D models today are built as
textured polygonal models, because they are flexible and because computers can render them so
quickly. However, polygons are planar and can only approximate curved surfaces by using many
polygons. Their surfaces may be further defined with texture mapping through the use of another 2D
component called UVs. (A minimal data-layout sketch follows after this list.)

• Curve modeling – Surfaces are defined by curves, which are influenced by weighted control
points. The curve follows (but does not necessarily interpolate) the points. Increasing the weight for
a point will pull the curve closer to that point. Curve types include nonuniform rational B-spline
(NURBS), splines, patches, and geometric primitives

• Digital sculpting – Still a fairly new method of modeling, 3D sculpting has become very popular
in the few years it has been around. There are currently three types of digital
sculpting: Displacement, which is the most widely used among applications at this moment, uses a
dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new
locations for the vertex positions through use of an image map that stores the adjusted locations.
Volumetric, loosely based on voxels, has similar capabilities as displacement but does not suffer
from polygon stretching when there are not enough polygons in a region to achieve a deformation.
Dynamic tessellation is similar to voxel but divides the surface using triangulation to maintain a
smooth surface and allow finer details. These methods allow for very artistic exploration as the
model will have a new topology created over it once the models form and possibly details have been
sculpted. The new mesh will usually have the original high resolution mesh information transferred
into displacement data or normal map data if it is intended for a game engine.
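As referenced in the polygonal modeling entry above, here is a minimal sketch of the usual data layout: a list of vertex positions and a list of faces that index into it (a single unit cube, not tied to any particular package). The edges fall out of the face list.

```python
# A unit cube as a polygonal model: 8 vertices and 6 quad faces.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top four corners
]
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),   # bottom, top
    (0, 1, 5, 4), (2, 3, 7, 6),   # front, back
    (1, 2, 6, 5), (3, 0, 4, 7),   # right, left
]

# Edges can be derived from the faces: each pair of consecutive face indices is an edge.
edges = {tuple(sorted((face[i], face[(i + 1) % len(face)])))
         for face in faces for i in range(len(face))}
print(len(vertices), len(edges), len(faces))   # 8 12 6
```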
Texturing
Texture mapping or texturing is a method for defining high frequency detail, surface texture, or
color information on a computer-generated graphic or 3D model.

Texture mapping originally referred to a method (now more accurately called diffuse mapping) that
simply wrapped and mapped pixels from a texture to a 3D surface. In recent decades the advent of
multi-pass rendering and complex mapping such as height mapping, bump mapping, normal
mapping, displacement mapping, reflection mapping, specular mapping, mipmaps, occlusion
mapping, and many other variations on the technique (controlled by a materials system) have made
it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and
lighting calculations needed to construct a realistic and functional 3D scene.
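A stripped-down illustration of the diffuse-mapping idea above: each point on a surface carries a UV coordinate into the texture, and shading looks up the texel at that coordinate. The texture and coordinates here are hypothetical.

```python
def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a texel for UV coordinates in [0, 1]."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A tiny 2x2 checkerboard "texture" (0 = black, 255 = white).
texture = [[0, 255],
           [255, 0]]

print(sample_texture(texture, 0.25, 0.25))  # 0   (top-left texel)
print(sample_texture(texture, 0.75, 0.25))  # 255 (top-right texel)
```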
11) 3D Animation – Techniques of Character Animation

3D Skeletal Animation:

3D skeletal animation is a process wherein an object is animated by combining two parts: a surface
representation of the object (the mesh that the animator models), and a set of interconnected bones
in a skeleton that is used to animate that mesh. Skeletal animation can be used to animate characters
and jointed objects.
Kinematics studies the motion of bodies without consideration of the forces or moments that
cause the motion. Robot kinematics refers to the analytical study of the motion of a robot
manipulator. Formulating suitable kinematics models for a robot mechanism is crucial
for analyzing the behaviour of industrial manipulators. There are mainly two different spaces
used in kinematics (joint space and Cartesian space), and robot kinematics can be divided into
forward kinematics and inverse kinematics.
Inverse Kinematics: Inverse Kinematics (IK) is one of the techniques for manipulating the
joints in a skeleton to give the appearance of movement for a character. IK is often used as an
efficient solution for rigging a character's arms and legs.
The inverse kinematics problem of serial manipulators has been studied for many decades.
It is needed in the control of manipulators. Solving the inverse kinematics is computationally
expensive and generally takes a very long time in the real-time control of manipulators. Tasks to
be performed by a manipulator are in Cartesian space, whereas actuators work in joint
space. Cartesian space includes an orientation matrix and a position vector, while joint space is
represented by joint angles. The conversion of the position and orientation of a manipulator
end-effector from Cartesian space to joint space is called the inverse kinematics problem. There
are two solution approaches, geometric and algebraic, used for deriving the inverse
kinematics solution analytically. Let’s start with the geometric approach.
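Since the notes stop before the geometric approach itself, here is a minimal sketch of it for the standard planar two-link arm (a textbook case used for illustration, not taken from these notes): given a target end-effector position, the law of cosines recovers the elbow angle and a little trigonometry recovers the shoulder angle.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Geometric IK for a planar 2-link arm: return (shoulder, elbow) angles
    in radians that place the end effector at (x, y), if reachable."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)                 # one of the two mirror solutions
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(1.0, 1.0, l1=1.0, l2=1.0))     # (0.0, pi/2) for this target
```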
Forward Kinematics = computing the coordinates of the end effector from the given angles of all joints.
A manipulator is composed of serial links which are affixed to each other by
revolute or prismatic joints, from the base frame through to the end-effector. Calculating the
position and orientation of the end-effector in terms of the joint variables is called forward
kinematics. In order to have forward kinematics for a robot mechanism in a systematic manner,
one should use a suitable kinematics model.
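For comparison, forward kinematics for the same hypothetical planar two-link arm is a direct calculation: accumulate the joint angles down the chain and sum each link's contribution to reach the end-effector position.

```python
import math

def two_link_fk(shoulder, elbow, l1, l2):
    """Forward kinematics for a planar 2-link arm:
    joint angles (radians) -> end-effector position (x, y)."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

# Feeding the IK result from the previous sketch back in recovers the target (1.0, 1.0).
print(two_link_fk(0.0, math.pi / 2, l1=1.0, l2=1.0))   # (1.0, 1.0) up to rounding
```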

Motion capture
Motion capture (also Mocap) is a way to digitally record human movements. The recorded
motion capture data is mapped on a digital model in 3D software (e.g. Unreal Engine, Unity 3D,
Maya or 3D Studio Max) so the digital character moves like the actor you recorded. The MoCap
technology is used in the entertainment industry for films and games to get more realistic
human movements. A famous example of a movie with lots of motion capture technology is
Avatar.
In motion capture sessions, movements of one or more actors are sampled many times per
second. Whereas early techniques used images from multiple cameras to calculate 3D positions,
often the purpose of motion capture is to record only the movements of the actor, not his or her
visual appearance. This animation data is mapped to a 3D model so that the model performs the
same actions as the actor.
REF –
https://www.youtube.com/watch?time_continue=26&v=Qtjqti3AQ6s

Performance capture
Performance capture generally refers to the practice of capturing very subtle movements
from real actors and using those in animation. Movies like Ice Age are fine without performance
capture because the characters aren't realistic.
The problem you run into when trying to make realistic human animations is that the human
brain is wired to detect very subtle cues in facial patterns and if an animator makes even minor
errors, we tend to be uncomfortable with the animation, associating the problem with a physical
defect, illness or death. That phenomenon is usually called the uncanny valley.
One thing you can theoretically do with this performance capture technology is record a broad
array of facial expressions with a famous actor, and continue featuring that performance in new
movies long after their ultimate demise.
The movies The Adventures of Tintin: The Secret of the Unicorn and Avatar are
performance-capture animated films.
REF –
https://www.youtube.com/watch?v=1wK1Ixr-UmM
14) The Uncanny Valley & Motion Capture

uncanny valley

The uncanny valley is a common unsettling feeling people experience when androids (humanoid
robots) and audio/visual simulations closely resemble humans in many respects but are not
quite convincingly realistic.

The phenomenon is a consideration in a number of areas of design including robotics, video game art,
training simulators and 3-D animation. Depending on the intent, a designer may want to avoid the
uncanny valley or exploit it to elicit a particular response.

The uncanny valley is named for the way the viewer's level of comfort drops as a simulation approaches,
but does not reach, verisimilitude. Simulations lacking aspects that significantly approach reality don't
tend to elicit the response. On the other hand, simulations that simulate reality to a degree that satisfies
the viewer don't bring about the effect either. Near-realism and mixes of realism and surrealism most
often cause the eerie sensation. The effect is intensified if the simulation is moving.
A plotted graph of viewer response to increased realism illustrates the uncanny valley.

The feelings diagrammed in the uncanny valley can reach extreme levels like revulsion, exceeding those
experienced when viewing a corpse. The uncanny valley is experienced at different levels by different
individuals, mostly depending on the familiarity of the subject materials. Designers can bridge the valley
with changes like the addition of cartoon-like or "cuter" features.

The uncanny valley phenomenon is usually spoken of in reference to human simulations but also can be
brought about by Photoshopped images, dolls, teddy bears and subjects of plastic surgery. The uncanny
valley can also be created by discrepancies in voice, movement or appearance.

Masahiro Mori, at that time a robotics professor, wrote about the effect in a 1970 essay, Bukimi no
Tani, which translates roughly as valley of eeriness. At that time, humanoid robots had yet to be
developed. Mori was intrigued by the uneasy feeling that wax figures had always evoked in him. The
English term uncanny valley was first mentioned in a 1978 book by Jasia Reichardt called "Robots: Fact,
Fiction, and Prediction."

Some other examples

https://www.youtube.com/watch?v=aYuBDkto2Vk
Motion Capture
The process or technique of recording patterns of movement digitally, especially the recording of an
actor's movements for the purpose of animating a digital character in a film or video game.

Motion capture, or mo-cap, is a process of digital recording of people’s or objects’ movements. Motion
capture technology is widely used in entertainment, notably in the gaming industry and filmmaking. It is also
called performance capture if it refers to capturing more complex patterns, such as facial expressions or
finger movements. The actions of people are recorded and this information is used to create a 3D digital
model. The movements are scanned many times per second and are broadcast to the digital
environment. As a result, the character reproduces the human’s movements in real time.

Motion capture technology: features


There are two types of motion capture systems, optical and non-optical.

Optical motion capture systems


Optical motion capture uses several special cameras that see an object or person from different angles.
This type of technology works like the binocular vision through which we see the surrounding world as
three-dimensional. In the same way, using two or more cameras that monitor one or several objects allows
the object(s) to be transformed into a three-dimensional model (or models).
Optical motion capture systems are divided into two types.

1. Marker motion capture system

• Active markers
• Time modulated active markers
• Passive markers
• Semi-passive imperceptible markers

2. Markerless motion capture systems

1. Marker motion capture systems

Marker motion capture is a motion-tracking technology in which special equipment is used. A
person wears a suit with built-in reflective markers. While the actor moves or takes postures, the
markers’ positions are captured by cameras and sent to the computer, where they are combined into a single
three-dimensional model that accurately reproduces the movements of the actor in real time. A marker
motion capture system also allows the actor’s facial expressions to be reproduced; in this case markers on the
face are needed to recognize the main mimic activity.
• Active markers

Active markers are sources of light themselves. As a rule, these are LEDs that emit an electromagnetic
field that is detected in real time. Each LED is assigned to an identifier, which allows motion capture
software not to confuse markers with each other, and also to recognize them after they have been
blocked and reappeared in the field of cameras’ view.
One of the active system’s primary problems is a large number of wires that may prevent the actor from
performing complex movements.
If used with high-resolution cameras, LED markers allow tracking of more complex and smaller body
movements such as facial expressions. In addition, LED markers are fragile and relatively expensive.

• Time modulated active markers

This motion capture technology is a more accomplished variant of the previous one. Markers are
tracked per time unit by modulating the amplitude or pulse width to ensure marker identification.
This methodology allows the marker to be identified in time more accurately, with higher spatial and
temporal resolution.
Greater accuracy of motion transferring requires more careful data processing, and therefore, more
powerful equipment and motion capture software. Typically, such motion capture technology costs
from $20,000 to $100,000.
• Passive markers

Passive markers are covered with a retroreflective material that reflects light from the light source back
toward the cameras. The motion capture software builds a 3D model after obtaining data about the movement of the
object. The disadvantage of passive markers is that with rapid movement, or when markers come close
to each other, the motion capture software can confuse them: unlike active markers, this technology does
not provide identification of each individual marker.
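The following small Python sketch, with invented coordinates, shows why this happens: a tracker without per-marker identity typically assigns each detected blob to the nearest marker position from the previous frame, so two markers that pass close to each other can have their labels swapped.

import numpy as np

def assign_by_nearest(prev_positions, detections):
    # Greedy nearest-neighbour assignment of unlabelled detections to the
    # labels carried over from the previous frame.
    labels, remaining = {}, list(detections)
    for name, prev in prev_positions.items():
        dists = [np.linalg.norm(np.subtract(d, prev)) for d in remaining]
        labels[name] = remaining.pop(int(np.argmin(dists)))
    return labels

prev = {"left_hand": (0.00, 1.00), "right_hand": (0.10, 1.00)}
# The hands cross between frames: detection 0 is really the left hand,
# detection 1 is really the right hand.
detections = [(0.12, 1.02), (-0.02, 0.98)]
print(assign_by_nearest(prev, detections))
# -> left_hand is matched to (-0.02, 0.98) and right_hand to (0.12, 1.02):
#    the labels have swapped, exactly the confusion described above.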

• Semi-passive imperceptible markers

In this technology, inexpensive multi-LED high-speed projectors are used instead of high-speed cameras.
Specially constructed multi-LED infrared projectors optically encode the space. In place of active and passive
markers, this motion capture technology applies photosensitive marker tags that decode the optical
signals. Marker tags can be placed invisibly not only in clothes but also at any point of the scene.
Marker tags can determine not only their location but also the lighting, their orientation, and
reflectivity. It is possible to install an unlimited number of uniquely identifiable tags, which prevents
markers from being confused with one another. Because a high-speed camera and a corresponding high-speed image
stream are not used, less bandwidth is necessary. This technology is better suited to fixed motion
capture and to broadcasting virtual sets in real time.

2. Markerless motion capture systems

Markerless motion capture technology does not require special sensors or a special suit. It is based on
computer vision and pattern-recognition techniques. The silhouette of the actor is examined by
several cameras from different angles. Tracking can be carried out with an ordinary camera or webcam and a
personal computer. The actor can wear ordinary clothes, which allows complex movements,
such as falling or jumping, to be performed without risk of damaging sensors. In some setups no special
equipment, lighting, or space is needed at all.
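A minimal markerless sketch in Python is shown below, assuming OpenCV for webcam capture and Google's MediaPipe Pose model for body-landmark estimation (both are assumptions about available libraries; any pose-estimation model could be substituted). The detected landmarks are the data that would drive a 3D character.

import cv2
import mediapipe as mp      # assumed available; any pose estimator would do

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)   # an ordinary webcam, no suit and no markers

with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # The model expects RGB input; OpenCV delivers BGR frames.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Normalised (x, y, z) landmarks that could drive a rig,
            # e.g. the left wrist:
            w = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_WRIST]
            print(f"left wrist: ({w.x:.2f}, {w.y:.2f}, {w.z:.2f})")
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to stop
            break

cap.release()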

REF - https://www.youtube.com/watch?v=fm-A1lknrxE
15) Virtual Filmmaking: Previz

Previsualization or Previz is a service typically done to conceptualize a shot that allows a production
team (including the producer and director of a film, commercial, or TV show) to realistically lay out or
visualize the scenes in a way that technically makes sense. For example, a production team can use
a 3D previz to map out a complex camera move so they can see how they can make the shot
happen before they have to do it on camera.

The advantage of previsualization is that it allows a director, cinematographer or VFX Supervisor to
experiment with different staging and art direction options—such as lighting, camera placement and
movement, stage direction and editing—without having to incur the costs of actual production. On
larger-budget projects, directors work with actors in the visual effects department or in dedicated previz rooms.
Previsualizations can add music, sound effects and dialogue to closely emulate the look of fully
produced and edited sequences, and are most often encountered in scenes that involve stunts and
special effects (such as chroma key work). Digital video, photography, hand-drawn art, clip art and 3D
animation are combined in their creation.
In the realm of pre-production, previsualization is used pre-pre-production—that is, before production
even begins. During this stage, previz is used to figure out and test a project's idea, including a
concept, an unfinished script, information about where and how a camera moves in space, how a
shot is best laid out, and how it can be executed in real life. Here are some key features of a previz:

• Previz is more focused on the technical aspects of a shot and less focused on aesthetic
details and narrative flow.
• Previz can be more simplified and stripped down to focus mainly on how things are moving
and the composition of shots.
• Previz is something a director of a live-action film, an animation, or a commercial would ask
for.
• Previz is a more detailed plan (think blueprint) for a director and producer to take and use on
set. Previsualizations are incredibly valuable because they help make sure the producer and
director are working as efficiently as possible on the shoot. They can check the full-up shot-
by-shot against previsualization to make sure every frame looks like it should.

Types of Previz –

1) Concept panel – A concept panel is a single-frame composition of a shot that explains the position
and placement of the shot's elements.

2) Story boarding - A storyboard is a graphic or hand-made representation of how your
video will unfold, shot by shot. It includes composition and camera movement in diagram
form, and even a rough estimate of timing and frame count.

3) Animatics - An animatic is a way to layout the narrative or creative concept that will be used
to measure the effectiveness of the story itself. An animatic is traditionally used to test
commercial ideas in focus groups before the idea is fully produced, but can also be used for
film and television.
16) Planning and Pipeline

Planning and Implementation

The best way to convey the story and director's vision through visual effects begins with good
pre-shoot planning. Very often we can find substantial cost savings and/or improved VFX
quality with minor adjustments to the planned image acquisition. The right combination of
practical and digital effects work can save money and lead to better final results - and the time
to determine this is well before shooting.
Hierarchy of VFX production –
Pre-process:
1) Previsualization –
Storyboarding and animatics help to previsualize the director's intent and the required VFX
elements: their style, color and mood.
2) Concept art –
Concept artists develop the needed designs for characters, props and environments in digital form.

Pipeline
The production pipeline of a typical animated short or a movie can be divided into three stages : pre-
production, production and post-production. In this article we will be discussing these three key stages
in detail.

Pre-Production
The first process in the animation pipeline, and also one of the most important, is pre-production. It
begins with the main concepts, which are initially turned into a full story; then, once the story has
been finalized, other things such as the script, shot sequence and camera angles are worked on. Some
major components of pre-production are storyboarding, layouts, model sheets and animatics. They
also provide a visual reminder of the original plan; something that can be referred back to
throughout the production.

Story Boarding
The Storyboard helps to finalize the development of the storyline, and is an essential stage of the
animation process. It is made up of drawings in the form of a comic strip, and is used to both help
visualise the animation and to communicate ideas clearly. It details the scene and changes in the
animation, often accompanied by text notes describing things occurring within the scene itself, such as
camera movements.
Layouts
Once the storyboards have been approved, they are sent to the layout department which then works
closely with the director to design the locations and costumes. With this done they begin to stage the
scenes, showing the various characters' positions throughout the course of each shot.

Model Sheets
Model sheets are precisely drawn groups of pictures that show all of the possible expressions that a
character can make, and all of the many different poses that they could adopt. These sheets are created
in order to both accurately maintain character detail and to keep the designs of the characters uniform
whilst different animators are working on them across several shots.
During this stage the character designs are finalized so that when production starts their blueprints can
be sent to the modeling department who are responsible for creating the final character models.
Animatics
In order to give a better idea of the motion and timing of complex animation sequences and VFX-heavy
scenes, the pre-visualization department within the VFX studio creates simplified mock-ups called
“Animatics” shortly after the storyboarding process.
These help the Director plan how they will go about staging the above sequences, as well as how visual
effects will be integrated into the final shot.

Production

Now that the storyboard has been approved the project enters the production phase. It's here that the
actual work can start, based on the guidelines established during preproduction. Some major parts are
layout, modeling, texturing, lighting, rigging and animation.

Layout
Using lo-res models or blocks of geometry in the place of the final set and characters, the Layout Artist is
responsible for composing the shot and delivering rough animation to the animators as a guide. What
they produce is the 3D version of what the storyboard artists had previously drawn on paper.
During this stage the Director approves camera moves, depth of field and the composition of the models
making up the set and set dressing. It is then the responsibility of the Modeling department to deliver
these approved set, prop and character models in the final layout stages.

Modelling

Modelers are usually split into two or more departments. Whilst organic modelers tend to have a
sculpture background and specialise in building the characters and other freeform surfaces, hard-surface
modelers often have a more industrial design or architectural background, and as such they model the
vehicles, weapons, props and buildings.
Working closely with the Art Directors, Visual Effects Supervisors and Animation Supervisors, modelers
turn the 2D concept art and traditionally sculpted maquettes into high detail, topologically sound 3D
models. They then assist the Technical Animator and Enveloper as the model has a skeleton put in place
and the skin is developed. Following this, the model may be handed back to the Modeler, who will
proceed to sculpt facial expressions and any specific muscle tension/jiggle shapes that may be required.
Once the model is approved, it will be made available to the rigging and texture paint departments, who
complete the final stages in preparing the model for animation and rendering. With luck, the model will
move through the production pipeline without coming back for modeling fixes, although some amount
of fixes are inevitable - problems with models sometimes don't appear until the rendering stage, in
which case the lighter will send the model back to be fixed.

Texturing
Whether creating a texture from scratch or through editing an existing image, Texturing Artists are
responsible for writing shaders and painting textures as per the scene requirements.
Working hand-in-hand with the surfacing and shading departments, textures are painted to match the
approved concept art and designs which were delivered by the art department. These textures are
created in the form of maps which are then assigned to the model.

Lighting
Not only does a Lighting Artist have to think about lighting the individual scenes, they also have to consider
how to bring together all of the elements that have been created by the other departments. In most
companies, lighting TDs combine the latest version of the animation, the effects, the camera moves, the
shaders and textures into the final scenes, and render out an updated version every day.
Lighters have a broad range of responsibilities, including placing lights, defining light properties, defining
how light interacts with different types of materials, the qualities and complexities of the realistic
textures involved, how the position and intensity of lights affect mood and believability, as well as color
theory and harmony. They are required to establish direct and reflected lighting and shadows for each
assigned shot, ensuring that each shot fits within the continuity of a sequence, all the while aiming to
fulfill the vision of the Directors, Production Designers, Art Directors and VFX Supervisors.
Rigging
Rigging is the process of adding bones to a character or defining the movement of a mechanical object,
and it's central to the animation process. A character TD will make test animations showing how a
creature or character appears when deformed into different poses, and based on the results corrective
adjustments are often made.
The rigging department is also involved in developing cloth simulation – so as well as making a character
able to clench their fist or rotate their arm, the rigging and cloth department is responsible for making
their costume move in a believable manner.

Animation
In modern production companies, the practice of meticulously planning a character's performance
frame by frame is applied in 3D graphics using the same basic principles and aesthetic judgments that
were first developed for 2D and stop-motion animation. If motion capture is used at the studio to
digitize the motion of real actors, then a great deal of an animator's time will also be spent cleaning up
the motion captured performance and completing the portions of the motion (such as the eyes and
hands) that may not have been digitized during the process.
The effects team also produce elements such as smoke, dust, water and explosions, although
development on these aspects does not start until the final animation/lighting has been approved as
they are integral to the final shot and often computationally heavy.

Post-Production
Post-production is the third and final step in film creation, and it refers to the tasks that must be
completed or executed after the filming or shooting ends. These include the editing of raw footage to
cut scenes together, inserting transitional effects, working with voice and sound actors and dubbing to
name just a few of the many post-production tasks.
Overall, however, the three main phases of post-production are compositing, sound editing and video
editing.
Compositing
The compositing department brings together all of the 3D elements produced by the previous
departments in the pipeline, to create the final rendered image ready for film! Compositors take
rendered images from lighters and sometimes also start with compositing scripts that TDs develop in
order to initially comp together their dailies (working versions of the shot).

General compositing tasks include rendering the different passes delivered by a lighting department to
form the final shot, paint fixes and rotoscoping (although compositors sometimes rely on mattes
created by a dedicated rotoscoping department), as well as the compositing of fx elements and general
color grading.
Sound Editing
This department is responsible for selecting and assembling the sound recordings in preparation for the
final sound mix, ensuring lip sync and adding all of the sound effects required for the final film.
Video Editing
Video editing is the process of manipulating and rearranging shots to create a seamless final product,
and it is at this stage that any unwanted footage and scenes are removed. Editing is a crucial step in
making sure the video flows in a way which achieves the initial goal. Other tasks include titling and
adding any effects to the final video and text.

Conclusion
The production pipeline detailed above is broadly common in most studios, however each studio is likely
to have a custom pipeline determined by the type of project they are currently undertaking. A 2D
production pipeline starts with the workbook and goes all the way through final checking, compositing and
film output, whilst the 3D CGI production process emphasizes the design, modeling and rigging and
animation stages. Moreover, animation production is a very coordinated process where different teams
of artists work together while utilizing optimum resources and achieving the initial goal in the time
available.
17) Crowd Simulation

Definition:
Process of simulating the movement or dynamics of a large number of entities or characters.

Application:
Crowd simulations have been used widely in real world situations, real time applications and non-real-
time applications.
• Real world situations
i. Urban Planning
The development of crowd simulation software has become a modern and useful tool in
designing urban environments. Whereas the traditional method of urban planning relies
on maps and abstract sketches, a digital simulation is more capable of conveying both
form and intent of design from architect to pedestrian. For example, street signs and
traffic lights are localized visual cues that influence pedestrians to move and behave
accordingly. Following this logic, a person is able to move from point A to point B efficiently,
and a collective group of people can operate more effectively as a result.
In a broader sense, bus systems and roadside restaurants serve a spatial
purpose in their locations through an understanding of human movement patterns.

ii. Evacuation and riot handling


Realistic simulated crowds can be used in training for riot handling, and in architecture and safety
science.

iii. Military
Being that crowd simulations are so prevalent in use for public planning and general
order with regards to chaotic situations, many applications can be drawn for
governmental and military simulations. Crowd modeling is essential in police and
military simulation in order to train officers and soldiers to deal with mass gatherings of
people. Not only do offensive combatants prove to be difficult for these individuals to
handle, but noncombatant crowds play significant roles in making these aggressive
situations more out of control. Game technology is used in order to simulate these
situations for soldiers and technicians to practice their skills.

iv. Sociology
The behavior of a modeled crowd plays a prominent role in analytical matters. These
dynamics rely on the physical behaviors of individual agents within a crowd rather than
the visual reality of the model itself. The social behaviors of people within these
constructs have been of interest for many years, and the sociological concepts which
underpin these interactions are constantly studied. The simulation of crowds in different
situations allows for sociological study of real life gatherings in a variety of
arrangements and locations. The variations in human behavior in situations varying in
stress-levels allows for the further development and creation of crowd control
strategies which can be more specifically applied to situations rather than generalized.

• Real time Applications


i. Games

• Non-real-time Applications
i. Virtual Cinematography
Crowd simulations have been used widely across films as a cost-effective and realistic
alternative to hiring actors and capturing shots that would otherwise be impossible. A
significant example of their use lies in The Lord of the Rings film series. One of the most
glaring problems for the production team in the initial stages was the large-scale battles, as
the author of the novels, J. R. R. Tolkien, envisioned them to have at least 50,000
participants. Such a number would have been unrealistic had they attempted to hire only
real actors and actresses. Instead they decided to use CG to simulate these scenes
through the Multiple Agent Simulation System in a Virtual Environment,
otherwise known as MASSIVE. (Miarmy, a Human-Logic-Engine-based Maya plugin for crowd
simulation, is a later tool that offers similar agent-based functionality.) The software
allowed the filmmakers to give each character model an agent-based AI that could
draw on a library of 350 animations. Based on sight, hearing, and touch parameters
generated from the simulation, agents would react uniquely to each situation; thus,
each simulation of the scene was unpredictable. The final product clearly displayed the
advantages of using crowd simulation software.

Crowd dynamics:

There exist several approaches to crowd simulation and AI, each providing advantages and
disadvantages based on crowd size and time scale. Time scale refers to how the objective of the
simulation affects its length: for example, researching social questions such as
how ideologies spread through a population results in a much longer-running simulation, since
such an event can span months or years. Using these two characteristics, researchers have
attempted to apply classifications to better evaluate and organize existing crowd simulators.
• Flow-based Approach

Flow-based crowd simulations focus on the crowd as a whole rather than its components. As
such, individuals do not have any distinctive behaviors driven by input from their
surroundings, and behavioral factors are largely reduced. This model is mainly used to estimate
the flow of movement of a large and dense crowd in a given environment. It is best used for studying
large crowds with short-term objectives.
• Entity-based Approach

Models that implement a set of physical, predefined, and global laws meant to simulate
social/psychological factors that occur in individuals that are a part of a crowd fall under this
category. Entities in this case do not have the capacity to, in a sense, think for themselves. All
movements are determined by the global laws being enforced on them. Simulations that use
this model often do so to research crowd dynamics such as jamming and flocking. Small to
medium-sized crowds with short term objectives fit this approach best.
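As a sketch of how such global laws might look in code, the Python fragment below applies the same two rules (move toward the exit, push away from neighbours) to every entity; no entity makes any decision of its own. The force constants and room layout are invented for illustration and are not taken from any particular simulator.

import numpy as np

def step(positions, exit_point, dt=0.1, repulsion=0.05):
    # Global law 1: every entity is pulled toward the exit.
    to_exit = exit_point - positions
    to_exit /= np.linalg.norm(to_exit, axis=1, keepdims=True) + 1e-9

    # Global law 2: every entity is pushed away from nearby neighbours.
    push = np.zeros_like(positions)
    for i in range(len(positions)):
        diff = positions[i] - positions
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-9
        push[i] = (diff / dist**2).sum(axis=0) * repulsion

    return positions + dt * (to_exit + push)

rng = np.random.default_rng(0)
crowd = rng.uniform(0, 10, size=(50, 2))      # 50 entities in a 10 x 10 room
exit_point = np.array([10.0, 5.0])
for _ in range(200):
    crowd = step(crowd, exit_point)
print("mean distance to exit:", np.linalg.norm(crowd - exit_point, axis=1).mean())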

• Agent-based Approach

Characterized by autonomous, interacting individuals. Each agent of a crowd in this approach is
given a degree of intelligence; agents can react to each situation on their own based on a set of
decision rules. Information used to decide on an action is obtained locally from the agent's
surroundings. This approach is most often used to simulate realistic crowd behavior, as the
researcher has complete freedom to implement any behaviors.
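A toy Python sketch of per-agent decision rules follows; the rules, thresholds and sensing inputs are invented for illustration, but they show the key difference from the entity-based model: each agent reasons only about its own local surroundings.

import random

class Agent:
    def __init__(self, position, goal):
        self.position = position
        self.goal = goal

    def decide(self, neighbours_within_2m, sees_hazard):
        # Local decision rules, evaluated independently by every agent.
        if sees_hazard:
            return "flee"
        if neighbours_within_2m > 6:
            return "wait"                     # too congested, hold position
        if random.random() < 0.1:
            return "wander"                   # occasional idle behaviour
        return "move_toward_goal"

agents = [Agent(position=(0.0, float(i)), goal=(50.0, 25.0)) for i in range(5)]
for a in agents:
    print(a.decide(neighbours_within_2m=random.randint(0, 10), sees_hazard=False))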

Typical Components of a crowd simulation scene

i. Crowd obstacles: the objects and areas that the crowd can walk on, and the areas where
the crowd cannot.

ii. Crowd motion and action: entities perform certain actions or simply pass through, described using
goals. Different interest points (IP) and action points (AP) are created by the user.

iii. Group knowledge: knowledge such as the location of other groups and usable resources, which can be
stored in the leader of each group within the crowd.
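One way these components could be represented as data is sketched below in Python; all field names and example values are illustrative rather than taken from any specific crowd tool.

from dataclasses import dataclass, field

@dataclass
class InterestPoint:                 # IP: a place entities are drawn to
    name: str
    position: tuple

@dataclass
class ActionPoint:                   # AP: a place where an action is performed
    name: str
    position: tuple
    action: str                      # e.g. "queue", "cheer", "sit"

@dataclass
class CrowdScene:
    walkable_areas: list             # polygons the crowd may walk on
    obstacles: list                  # polygons the crowd must avoid
    interest_points: list = field(default_factory=list)
    action_points: list = field(default_factory=list)
    group_knowledge: dict = field(default_factory=dict)   # held by group leaders

scene = CrowdScene(
    walkable_areas=[[(0, 0), (100, 0), (100, 50), (0, 50)]],
    obstacles=[[(40, 20), (60, 20), (60, 30), (40, 30)]],
    interest_points=[InterestPoint("stage", (50, 45))],
    action_points=[ActionPoint("ticket_booth", (10, 5), "queue")],
    group_knowledge={"nearest_exit": (100, 25), "usable_resources": ["water_stand"]},
)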

Crowd Simulation Tools


The crowd simulation tools that are widely used are:

• SideFX Houdini
• Miarmy
• Massive
• Golaem
• PTV VISSIM
• Quadstone Paramics
18) VR & THE INVISIBLE OBSERVER

Virtual Reality
Virtual Reality refers to a high-end user interface that involves real-time simulation and interactions
through multiple sensorial channels.

VR is able to immerse you in a computer-generated world of your own making: a room, a city, the
interior of the human body. With VR, you can explore any uncharted territory of the human imagination.

If we focus more strictly on the scope of virtual reality as a means of creating the illusion that we are
present somewhere we are not, then the earliest attempt at virtual reality is surely the 360-degree
murals (or panoramic paintings) from the nineteenth century. These paintings were intended to fill the
viewer’s entire field of vision, making them feel present at some historical event or scene.
History of Virtual Reality
Today’s virtual reality technologies build upon ideas that date back to the 1800s, almost to the very
beginning of practical photography. In 1838, the first stereoscope was invented, using twin mirrors to
project a single image. That eventually developed into the View-Master, patented in 1939 and still
produced today. The term “virtual reality,” however, was first used in the mid-1980s when
Jaron Lanier, founder of VPL Research, began to develop the gear, including goggles and gloves, needed
to experience what he called “virtual reality.”

In the mid-1950s, cinematographer Morton Heilig developed the Sensorama (patented 1962),
an arcade-style theatre cabinet that would stimulate all the senses, not just sight and sound. It
featured stereo speakers, a stereoscopic 3D display, fans, smell generators and a vibrating chair.
The Sensorama was intended to fully immerse the individual in the film. He also created six short
films for his invention, all of which he shot, produced and edited himself. The Sensorama films
were titled Motorcycle, Belly Dancer, Dune Buggy, Helicopter, A Date with Sabina and I'm a
Coca-Cola Bottle!

Even before that, however, technologists were developing simulated environments. One milestone
was the Sensorama in 1956. Morton Heilig's background was in the Hollywood motion picture
industry. He wanted to see how people could feel like they were “in” the movie. The Sensorama
experience simulated a real city environment, which you “rode” through on a motorcycle.
Multisensory stimulation let you see the road, hear the engine, feel the vibration, and smell the
motor's exhaust in the designed “world.” Heilig also patented a head-mounted display device,
called the Telesphere Mask, in 1960. Many inventors would build upon his foundational work.

By 1965, another inventor, Ivan Sutherland, offered “the Ultimate Display,” a head-mounted
device that he suggested would serve as a “window into a virtual world.”
Types of VR System
• Windows on World(WoW)

Also called Desktop VR.

Using a conventional computer monitor to display the 3D virtual world.

• Immersive VR

Completely immerse the user's personal viewpoint inside the virtual 3D world.

The user has no visual contact with the physical world.

Often equipped with a Head Mounted Display (HMD).


• Telepresence

A variation of visualizing complete computer-generated worlds.

Links remote sensors in the real world with the senses of a human operator. The remote sensors might
be located on a robot. Useful for performing operations in dangerous environments.

• Mixed Reality (Augmented Reality)

The seamless merging of real space and virtual space.

Integrates computer-generated virtual objects into the physical world, where they become, in a sense, an
equal part of our natural environment.

• Distributed VR

A simulated world runs on several computers which are connected over network and the people are
able to interact in real time, sharing the same virtual world.
Applications
• Entertainment

3D cinema has been used for sporting events, pornography, fine art, music videos and short
films. Since 2015, virtual reality has been installed onto a number of roller coasters and theme
parks.

• Medicine

Surgery training can be done through virtual reality. Other medical uses include virtual reality
exposure therapy (VRET), a form of exposure therapy for treating anxiety disorders such as post-
traumatic stress disorder (PTSD) and phobias. In some cases, patients no longer meet the DSM-V
criteria for PTSD after a series of treatments with VRET.

• Manufacturing

VR is being used extensively in the manufacturing sector, especially at hazardous locations.

• Education & Training

VR can simulate real spaces for workplace occupational safety and health purposes, educational
purposes, and training purposes. It can be used to provide learners with a virtual environment
where they can develop their skills without the real-world consequences of failing. It has been
used and studied in primary education, military, astronaut training, flight simulators and driver
training.
Commercial VR Products

• Oculus Rift

• HTC Vive

• Samsung Gear

• Google Cardboard

• Zeiss VR One and Zeiss VR One GX


• Sony PS4

• Google Glass

• Microsoft HoloLens
19) INTERACTIVE NARRATIVES – GAMES AS STORY

What is Interactive Narrative?


Interactive storytelling (also known as interactive drama) is a form of digital entertainment in which the
storyline is not predetermined. The author creates the setting, characters, and situation which the
narrative must address, but the user (also reader or player) experiences a unique story based on their
interactions with the story world. All interactive storytelling systems must make use of artificial
intelligence (AI) to some degree. The architecture of an interactive storytelling program includes a drama
manager, user model, and agent model to control, respectively, aspects of narrative production, player
uniqueness, and character knowledge and behavior. Together, these systems generate characters that act
"human," alter the world in real-time reactions to the player and ensure that new narrative events unfold
comprehensibly.
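A bare-bones Python skeleton of that three-part architecture is sketched below; the class and method names are invented for illustration and do not correspond to any specific interactive-storytelling system.

class DramaManager:
    """Steers narrative production: decides which story event should fire next."""
    def next_event(self, world_state, user_model):
        ...

class UserModel:
    """Tracks what this particular player does, prefers and already knows."""
    def update(self, player_action):
        ...

class AgentModel:
    """Holds one character's knowledge and chooses believable behaviour."""
    def act(self, world_state):
        ...

class StorySystem:
    def __init__(self):
        self.drama = DramaManager()
        self.user = UserModel()
        self.agents = [AgentModel()]

    def tick(self, player_action, world_state):
        self.user.update(player_action)           # player uniqueness
        for agent in self.agents:                 # character knowledge and behaviour
            agent.act(world_state)
        return self.drama.next_event(world_state, self.user)   # narrative production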

The field of study surrounding interactive storytelling encompasses many disparate fields, including
psychology, sociology, cognitive science, linguistics, natural language processing, user interface design,
computer science, and emergent intelligence. They fall under the umbrella term of Human-Computer
Interaction (HCI), at the intersection of hard science and the humanities. The difficulty of producing an
effective interactive storytelling system is attributed to the ideological division between professionals in
each field: artists have trouble constraining themselves to logical and linear systems and programmers
are disinclined to appreciate or incorporate the abstract and unproven concepts of the humanities.

As defined by Stephen Dinehart, interactive narrative design combines ludology, narratology and game
design to form interactive entertainment development methodologies. Interactive entertainment
experiences allow the player to witness data as navigable, participatory, and dramatic in real time:
“a narratological craft which focuses on the structuralist, or literary semiotic creation of stories.” Interactive
narrative design seeks to accomplish this via viewer/user/player (VUP) navigated dataspaces.

Interactive narrative design focuses on creating meaningful participatory story experiences with
interactive systems. The aim is to transport the player through play into the videogame (dataspace) using
their visual and auditory senses. When interactive narrative design is successful, the VUP
(viewer/user/player) believes that they are experiencing a story.
Ludology
Game studies, or ludology, is the study of games, the act of playing them, and the players and cultures
surrounding them. It is a discipline of cultural studies that deals with all types of games throughout history.
This field of research utilizes the tactics of, at least, anthropology, sociology and psychology, while
examining aspects of the design of the game, the players in the game, and finally, the role the game plays
in its society or culture. Game studies is oftentimes confused with the study of video games, but this is
only one area of focus; in reality game studies encompasses all types of gaming, including sports, board
games, etc.

Before video games, game studies often only included anthropological work, studying the games of past
societies. However, once video games were introduced and became mainstream, game studies were
updated to perform sociological and psychological observations; to observe the effects of gaming on an
individual, his or her interactions with society, and the way it could impact the world around us.

There are three main approaches to game studies: the social science approach asks itself how games affect
people and uses tools such as surveys and controlled lab experiments. The humanities approach asks itself
what meanings are expressed through games and uses tools such as ethnography and patient observation.
The industrial and engineering approach applies mostly to video games and less to games in general, and
examines things such as computer graphics, artificial intelligence, and networking. Like other media
disciplines, such as television and film studies, game studies often involves textual analysis and audience
theory.

Narratology
Narratology is the study of narrative and narrative structure and the ways that these affect our perception.
While in principle the word may refer to any systematic study of narrative, in practice its usage is rather
more restricted. It is an anglicisation of French “narratologie”, coined by Tzvetan Todorov. Narratology is
applied retrospectively as well to work predating its coinage. Its theoretical lineage is traceable to Aristotle
(Poetics) but modern narratology is agreed to have begun with the Russian Formalists, particularly
Vladimir Propp (Morphology of the Folktale, 1928), and Mikhail Bakhtin's theories of heteroglossia,
dialogism, and the chronotope first presented in The Dialogic Imagination (1975).

Narratology examines the ways that narrative structures our perception of both cultural artefacts and the
world around us. The study of narrative is particularly important since our ordering of time and space in
narrative forms constitutes one of the primary ways we construct meaning in general. As Hayden White
puts it, "far from being one code among many that a culture may utilize for endowing experience with
meaning, narrative is a meta-code, a human universal on the basis of which transcultural messages about
the nature of a shared reality can be transmitted". Given the prevalence and importance of narrative
media in our lives (television, film, fiction), narratology is also a useful foundation to have before one
begins analysing popular culture.

Narratology is complicated by the fact that different theorists have different terms for explaining the same
phenomenon, a fact that is fuelled by narratology's structuralist background: narratologists love to
categorize and to taxonomize, which has led to a plethora of terms to explain the complicated nature of
narrative form.

Ludology Vs. Narratology


A major focus in game studies is the debate surrounding narratology and ludology. Many ludologists
believe that the two are unable to exist together, while others believe that the two fields are similar but
should be studied separately. Many narratologists believe that games should be looked at for their stories,
like movies or novels. The ludological perspective says that games are not like these other mediums due
to the fact that a player is actively taking part in the experience and should therefore be understood on
their own terms. The idea that a videogame is "radically different to narratives as a cognitive and
communicative structure” has led to the development of new approaches to criticism that are focused on
videogames as well as adapting, repurposing and proposing new ways of studying and theorizing about
videogames. A recent approach towards game studies starts with an analysis of interface structures and
challenges the keyboard-mouse paradigm with what is called a "ludic interface".

Academics across both fields provide scholarly insight into the different sides of this debate. Gonzalo
Frasca, a notable ludologist due to his many publications regarding game studies, argues that while games
share many similar elements with narrative stories, that should not prevent games from being studied as games.
He seeks not "to replace the narratologic approach, but to complement it."

Jesper Juul, another notable ludologist, argues for a stricter separation of ludology and narratology. Juul
argues that games "for all practicality cannot tell stories." This argument holds that narratology and
ludology cannot exist together because they are inherently different. Juul claims that the most significant
difference between the two is that in a narrative, events "have to" follow each other, whereas in a game
the player has control over what happens.

Garry Crawford and Victoria K. Gosling argue in favour of narratives being an essential part of games as
"it is impossible to isolate play from the social influences of everyday life, and in turn, play will have both
intended and unintended consequences for the individual and society." The Last of Us is a video game
released in 2013 that has been referred to as a narrative "masterpiece." Proponents of the narratology
side of game studies argue that The Last of Us and similar games that have followed it and preceded it,
serve as examples that games can in fact tell stories.
Non-Linear Story Telling
A video game with nonlinear gameplay presents players with challenges that can be completed in a
number of different sequences. Each player may take on (or even encounter) only some of the challenges
possible, and the same challenges may be played in a different order. Conversely, a video game with linear
gameplay will confront a player with a fixed sequence of challenges: every player faces every challenge
and has to overcome them in the same order.

A nonlinear game will allow greater player freedom than a linear game. For example, a nonlinear game
may permit multiple sequences to finish the game, a choice between paths to victory, different types of
victory, or optional side-quests and subplots. Some games feature both linear and nonlinear elements,
and some games offer a sandbox mode that allows players to explore an open-world game environment
independently from the game's main objectives, if any objectives are provided at all.

A game that is significantly nonlinear is sometimes described as being open-ended or a sandbox, though
that term is used incorrectly in those cases, and is characterized by there being no "right way" of playing
the game. Whether intentional or not, a common consequence of open-ended gameplay is emergent
gameplay.

Branching Storylines
Games that employ linear stories are those where the player cannot change the story line or ending of
the story. Many video games use a linear structure, thus making them more similar to other fiction.
However, it is common for such games to use interactive narration in which a player needs to interact
with something before the plot will advance, or nonlinear narratives in which events are portrayed in a
non-chronological order. Many games have offered premature endings should the player fail to meet an
objective, but these are usually just interruptions in a player's progress rather than actual endings. Even
in games with a linear story, players interact with the game world by performing a variety of actions along
the way.

More recently, some games have begun offering multiple endings to increase the dramatic effect of moral
choices within the game, although early examples also exist. Still, some games have gone beyond small
choices or special endings, offering a branching storyline, known as an interactive narrative, that players
may control at critical points in the game. Sometimes the player is given a choice of which branch of the
plot to follow, while sometimes the path will be based on the player's success or failure at a specific
challenge. For example, Black Isle Studios' Fallout series of role-playing video games features numerous
quests where player actions dictate the outcome of the story behind the objectives. Players can eliminate
in-game characters permanently from the virtual world should they choose to do so, and by doing so may
actually alter the number and type of quests that become available to them as the game progresses. The
effects of such decisions may not be immediate. Branches of the story may merge or split at different
points in the game, but seldom allow backtracking. Some games even allow for different starting points,
and one way this is done is through a character selection screen.
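The sketch below, in Python, shows one common way a branching storyline is represented: a graph of story nodes whose choices point at other nodes, with separate branches folding back into a shared node. The nodes, text and choices are invented for illustration.

STORY = {
    "start":         {"text": "A stranger asks for your help.",
                      "choices": {"help": "ally_route", "refuse": "lone_route"}},
    "ally_route":    {"text": "The stranger joins your party.",
                      "choices": {"continue": "shared_ending"}},
    "lone_route":    {"text": "You travel on alone.",
                      "choices": {"continue": "shared_ending"}},
    "shared_ending": {"text": "You reach the city gates.", "choices": {}},
}

def play(story, node="start"):
    while True:
        passage = story[node]
        print(passage["text"])
        if not passage["choices"]:          # no outgoing branches: an ending
            return
        choice = input(f"Choose one of {list(passage['choices'])}: ").strip()
        node = passage["choices"].get(choice, node)   # invalid input asks again

if __name__ == "__main__":
    play(STORY)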

Despite experimenting with several nonlinear storytelling mechanisms in the 1990s, the game industry
has largely returned to the practice of linear storytelling. Linear stories cost less time and money to
develop, since there is only one fixed sequence of events and no major decisions to keep track of. For
example, several games from the Wing Commander series offered a branching storyline, but eventually
they were abandoned as too expensive. Nonlinear stories increase the chances for bugs or absurdities if
they are not tested properly, although they do provide greater player freedom. Some players have also
responded negatively to branching stories because it is hard and tedious for them to experience the "full
value" of all the game's content. As a compromise between linear and branching stories, there are also
games where stories split into branches and then fold back into a single storyline. In these stories, the plot
will branch, but then converge upon some inevitable event, giving the impression of nonlinear gameplay
through the use of nonlinear narrative, without the use of interactive narratives. This is typically used in
many graphic adventure games.

A truly nonlinear story would be written entirely by the actions of the player, and thus remains a difficult
design challenge. As such, there is often little or no story in video games with a truly nonlinear gameplay.
Facade, a video game often categorized as an interactive drama, features many branching paths that are
dictated by the user's text input based on the current situation, but there is still a set number of outcomes
as a result of the inherent limitations of programming, and as such, is non-linear, but not entirely so.
