AI in 3D Animation
RABINA MAHARJAN
Researchers from the University of Southern California, Pinscreen, and Microsoft have
developed a deep learning-based method to generate full 3D hair geometry from
single-view images in real time.
The researchers used a generative adversarial network (GAN) to create realistic hair
models from input images. A GAN consists of two neural networks: a generator that
produces realistic outputs and a discriminator that distinguishes real outputs from
fake ones. The two networks compete, improving each other's performance over time.
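The adversarial setup is easy to see in code. Below is a minimal sketch of a GAN training loop in PyTorch; the toy vector data and layer sizes are placeholders, not the researchers' actual hair-geometry networks.

```python
# Minimal GAN sketch (PyTorch). Toy vector data stands in for the paper's
# hair-geometry representation; the architecture here is illustrative only.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # assumed toy sizes, not from the paper

generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, DATA),
)
discriminator = nn.Sequential(
    nn.Linear(DATA, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # single real/fake logit
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA)   # placeholder for real training samples
    fake = generator(torch.randn(32, LATENT))

    # Discriminator step: label real samples 1, generated samples 0.
    loss_d = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator predict 1 on fakes.
    loss_g = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```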
The researchers trained their GAN on a large dataset of 3D hair models and then used it to generate hair geometry from 2D images.
They also used a neural rendering technique to render the hair with realistic lighting and shading effects.
Their system takes smartphone photos as input and produces 3D hair models as output. The process is divided into two stages:
first, the system estimates the 2D orientation of each hair strand in the image; second, it reconstructs the 3D shape of each strand
using a geometric model.
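As a rough illustration of such a two-stage pipeline, the sketch below estimates per-pixel 2D orientation from image gradients (a classical stand-in for the paper's learned estimator) and traces strands through that orientation field; the depth assignment is a pure placeholder, not the paper's geometric model.

```python
# Two-stage sketch: (1) estimate per-pixel 2D hair orientation, (2) lift
# traced strands to 3D. Gradient-based orientation is a classical stand-in
# for a learned estimator; the depth heuristic is a pure placeholder.
import numpy as np

def estimate_orientation(gray):
    """Stage 1: per-pixel 2D orientation (radians) from image gradients."""
    gy, gx = np.gradient(gray.astype(float))
    # Hair strands run roughly perpendicular to the intensity gradient.
    return np.arctan2(gy, gx) + np.pi / 2

def reconstruct_strand(seed, theta, steps=50, step_len=1.0):
    """Stage 2: trace one strand along the orientation field, then assign a
    placeholder depth so each 2D point becomes a 3D point (x, y, z)."""
    h, w = theta.shape
    pts, (x, y) = [], seed
    for i in range(steps):
        xi, yi = int(np.clip(x, 0, w - 1)), int(np.clip(y, 0, h - 1))
        a = theta[yi, xi]
        x, y = x + step_len * np.cos(a), y + step_len * np.sin(a)
        z = 0.1 * i  # placeholder depth; the paper uses a geometric model here
        pts.append((x, y, z))
    return np.array(pts)

gray = np.random.rand(128, 128)      # stand-in for a smartphone photo
theta = estimate_orientation(gray)
strand = reconstruct_strand((64.0, 64.0), theta)
print(strand.shape)                  # (50, 3)
```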
The system can handle various hairstyles, colors, lengths, and densities. It can also deal with occlusions, such as when the face or
clothing partially hides hair. The system can generate 3D hair models on a standard GPU in less than a second.
The researchers claim their method is the first to produce realistic 3D hair geometry from single-view images in real time. They also
say it outperforms previous methods in accuracy, speed, and visual quality.
The researchers hope their method can be used for various applications, such as virtual try-on, face swapping, avatar creation, and
animation. They also plan to improve their method by incorporating more data sources, such as videos and depth maps.
The researchers presented their work at the ACM SIGGRAPH conference in August 2023.
AI IN 3D ANIMATION
AI helps generate 3D models, animate 3D objects, and render 3D scenes for
animation. It can also automate tedious tasks such as rigging and texturing.
Ultimately, by using AI, 3D animators can create more complex animations faster and with
less manual intervention.
1 MOTION CAPTURE
Motion capture, also called “mocap,” is a technology-driven
method of capturing an actor’s motion and physical
performance so it may be translated to a CGI character. Mocap
can track various types of motion such as facial expressions
and body movements. Mocap can be used in animated films,
but is most popular for creating CGI characters in
live-action movies.
The hand-to-ground contact and motion smoothing features are only available to Professional and Studio plan users.
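Motion smoothing reduces frame-to-frame jitter in tracked joints. Below is a minimal sketch of one common approach, an exponential moving average over joint positions; it illustrates the concept only and is not DeepMotion's actual filter.

```python
# Minimal motion-smoothing sketch: exponential moving average over tracked
# joint positions. Illustrates the concept only, not DeepMotion's filter.
import numpy as np

def smooth(frames, alpha=0.3):
    """frames: (num_frames, num_joints, 3) raw joint positions.
    Lower alpha = heavier smoothing but more latency."""
    out = np.empty_like(frames)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        out[t] = alpha * frames[t] + (1 - alpha) * out[t - 1]
    return out

raw = np.cumsum(np.random.randn(120, 24, 3) * 0.01, axis=0)  # jittery fake clip
print(smooth(raw).shape)  # (120, 24, 3)
```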
ANIMATE 3D AND SAYMOTION BY DEEPMOTION
Bringing digital humans to life with AI
Features
• Users can export files to .FBX or .BVH (a minimal .BVH reader follows this list).
• Integration with popular animation software such as Unity, Unreal Engine,
Autodesk Maya, and Blender.
• Full-body motion capture.
• You can import and retarget custom characters.
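.BVH is a plain-text format, so its joint hierarchy is easy to inspect. Below is a minimal standard-library sketch that reads joint names and parents from a BVH file; the file name is hypothetical and the MOTION data is ignored.

```python
# Minimal .BVH sketch: read joint names and parent/child structure from the
# HIERARCHY section. Stdlib only; ignores CHANNELS details and MOTION data.
def read_bvh_joints(path):
    joints, stack = [], []  # joints: list of (name, parent_name)
    with open(path) as f:
        tokens = f.read().split()
    i = 0
    while i < len(tokens) and tokens[i] != "MOTION":
        t = tokens[i]
        if t in ("ROOT", "JOINT"):
            name = tokens[i + 1]
            joints.append((name, stack[-1] if stack else None))
            stack.append(name)
            i += 2
        elif t == "End":          # "End Site" block: push a dummy to match "}"
            stack.append(stack[-1] + "_end")
            i += 2
        elif t == "}":
            stack.pop()
            i += 1
        else:
            i += 1
    return joints

# Usage (hypothetical file):
# for name, parent in read_bvh_joints("walk.bvh"):
#     print(name, "<-", parent)
```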
Pros:
• Live stream to Unreal, Blender, and Maya
• Real-time retargeting
Cons:
• Free plan has limited support
• Rokoko's starter plan lacks live stream capability
CASCADEUR - AI-ASSISTED KEYFRAME
ANIMATION SOFTWARE
• Cascadeur is software for creating
character animation without motion
capture. Using a physics-based approach, it
allows for creating expressive and realistic
animations for movies and video games.
• It is a standalone 3D application that lets
you create keyframe animation, as well as
clean up and edit any imported animations.
Thanks to its AI-assisted and physics tools,
you can dramatically speed up the
animation process and get high-quality
results.
PHYSICS-BASED SIMULATIONS
When it comes to 3D animation, incorporating physics-based simulations can significantly enhance realism and dynamics.
Here are some ways physics simulations are applied in 3D animation (a minimal sketch of items 1 and 2 follows the list):
1. Gravity: Simulating gravitational forces allows objects to fall naturally, creating lifelike motion. Whether it’s a character
jumping or an object dropping, accurate gravity modeling adds authenticity.
2. Collision Detection: Physics engines detect collisions between objects. When two objects collide, their behavior
(bounce, deformation, or fragmentation) can be realistically simulated.
3. Particle Effects: Particles like smoke, fire, water droplets, or sparks can be dynamically generated using physics-based
simulations. These effects add visual richness to animations.
4. Soft Bodies: Simulating soft materials (like cloth, rubber, or flesh) involves modeling their elasticity, deformation, and
response to external forces. Soft body simulations create realistic movement.
5. Fluid Dynamics: For scenes involving liquids (water, lava, or even splashes), fluid simulations handle fluid behavior,
surface tension, and interactions with other objects.
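Below is a minimal sketch of items 1 and 2, gravity plus ground-collision response for a single bouncing particle, using semi-implicit Euler integration and made-up constants:

```python
# Minimal physics sketch: gravity (item 1) + ground-collision response
# (item 2) for one particle, via semi-implicit Euler integration.
G = -9.81          # gravity, m/s^2
DT = 1.0 / 60.0    # 60 fps animation timestep
RESTITUTION = 0.6  # fraction of speed kept on bounce (made-up constant)

y, vy = 2.0, 0.0   # start 2 m above the ground, at rest
for frame in range(240):
    vy += G * DT               # gravity accelerates the particle downward
    y += vy * DT
    if y < 0.0:                # collision with the ground plane y = 0
        y = 0.0
        vy = -vy * RESTITUTION # bounce: reflect and damp the velocity
    print(f"frame {frame:3d}: y = {y:.3f}")
```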
EXAMPLE OF AI-DRIVEN, PHYSICS-BASED CHARACTER ANIMATION
CASCADEUR
Features
• Cascadeur allows you to animate humans, animals and other subjects without using motion-
capture tools. With this program, you can make action-packed scenes more realistic.
• Powerful Rigging Tools
• Cascadeur allows you to customize a character's trajectory, adjust the angular momentum, and
change fulcrum points, such as ground contacts (see the ballistic sketch after this list).
• Cascadeur works with .FBX, .DAE and .USD files, making it simple to integrate into almost any
animation workflow.
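The physics behind such trajectory editing: while a character is airborne, its center of mass follows a ballistic parabola and its angular momentum is conserved. The sketch below illustrates that principle only; it is not Cascadeur's solver, and all constants are made up.

```python
# Ballistic sketch: while airborne, the center of mass follows a parabola and
# angular momentum is conserved, so the body's rotation rate is fixed at
# takeoff. Illustrative only, not Cascadeur's solver.
import numpy as np

g = np.array([0.0, -9.81])   # gravity (m/s^2)
p0 = np.array([0.0, 1.0])    # center-of-mass position at takeoff (m)
v0 = np.array([2.5, 4.0])    # takeoff velocity (m/s)
omega = 6.28                 # rotation rate set at takeoff (rad/s), constant in flight

for t in np.linspace(0.0, 0.8, 9):
    com = p0 + v0 * t + 0.5 * g * t * t  # parabolic COM path
    angle = omega * t                    # body orientation under conserved momentum
    print(f"t={t:.2f}s  com=({com[0]:.2f}, {com[1]:.2f})  angle={angle:.2f} rad")
```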
Pros:
• Simple rigging
• Physics-based tools
• Options that replace MoCap features
• Compatible with other software
Cons:
• No Mac support
• No full version yet
Minimum Hardware Requirements
Processor: 64-bit Intel or AMD multi-core processor (Windows/Linux) or M1/M2/M3 ARM-based processor (macOS); SSE4.1 instruction set support; 2.4 GHz or higher
Memory: 4 GB
Video card: NVIDIA GTX 550 Ti or better; OpenGL 3.3 support
Traditional Approach:
• In traditional rigging, animators manually set up control points (joints) for facial
expressions.
• Each joint corresponds to a specific facial muscle or feature (e.g., eyebrows, lips,
cheeks).
• Animators painstakingly adjust these controls frame by frame to achieve desired
expressions.
2.1 AI-POWERED FACIAL RIGGING
POLYWINK: AI-DRIVEN FACIAL RIGGING
Automatic Face Rigging
• The Advanced Rig on Demand service automatically generates a FACS-based facial rig with 236
blendshapes, adapted to the specific topology and morphology of any 3D character, whether
scanned heads, photorealistic models, or cartoonish characters (a sketch of the blendshape arithmetic follows).
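A blendshape rig poses the face as a weighted sum of sculpted target offsets added to a neutral mesh. Below is a minimal sketch of that arithmetic with toy data; the vertex count and weight names are made up, and only the 236-target count comes from Polywink's description.

```python
# Minimal blendshape sketch: pose = neutral + sum_i w_i * delta_i, where
# delta_i = target_i - neutral. Toy data; a rig like Polywink's ships 236
# targets over a real character mesh.
import numpy as np

num_verts, num_targets = 1000, 236
neutral = np.random.rand(num_verts, 3)                      # toy neutral mesh
deltas = np.random.randn(num_targets, num_verts, 3) * 0.01  # toy target offsets

weights = np.zeros(num_targets)
weights[0], weights[5] = 0.8, 0.3  # e.g. hypothetical "browRaise" 80%, "smile" 30%

pose = neutral + np.tensordot(weights, deltas, axes=1)      # weighted sum of offsets
print(pose.shape)  # (1000, 3)
```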
Deliverable 3D Formats
Polywink offers three deliverable formats:
• FBX file: compatible with most software, such as Unity, Unreal Engine, and Blender
• Maya scene: up to Maya 2022, with a faceboard for keyframe animation, plus an FBX file
• Unreal Engine 5 project: a LiveLink setup working with the 236 blendshapes, plus an animation
faceboard in Unreal Engine 5
THE MEDUSA FACIAL CAPTURE SYSTEM, WHICH HAS
BEEN USED IN 30+ MOVIES, INCLUDING AVENGERS:
ENDGAME
In Avengers: Infinity War, Marvel features one of the mightiest villains of
its universe very prominently. Thanos, the alien supervillain, was
completely computer generated by Digital Domain and brought to life by
the performance of Josh Brolin.
In order to transfer the performance of Brolin onto Thanos, the visual effects
studio relied heavily on technology from Disney Research. Brolin was
captured using the Medusa Performance Capture System, a Disney
proprietary capture technology that can reconstruct the three-dimensional
shape of the human face over time at very high resolution. Based on this
data, Digital Domain built a digital double of Brolin, which they control
using their in-house tool Masquerade. The technology underneath
Masquerade is based on a research project by Disney Research
Facial Performance Enhancement Using Dynamic Shape Space Analysis,
which makes it possible to control a high-quality face model from sparse input data.
This allowed the team to employ traditional marker-based motion capture from
helmet cameras while achieving much higher capture fidelity.
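The underlying idea, controlling a dense model from sparse data, can be sketched as a least-squares fit in a shape subspace: solve for the basis weights that best reproduce the observed markers, then apply those weights to the full-resolution face. The toy sketch below is an assumption-laden illustration, not Disney's Masquerade solver.

```python
# Minimal sketch of the underlying idea: fit shape-basis weights to sparse
# marker observations by least squares, then apply those weights to the
# full-resolution mesh. Toy data; not Disney's actual Masquerade solver.
import numpy as np

num_markers, num_basis, num_verts = 90, 40, 20000
B_markers = np.random.randn(num_markers * 3, num_basis)  # basis sampled at markers
B_full = np.random.randn(num_verts * 3, num_basis)       # same basis, full mesh
observed = np.random.randn(num_markers * 3)              # tracked marker offsets

# Solve min_w || B_markers @ w - observed ||^2 for the shape weights.
w, *_ = np.linalg.lstsq(B_markers, observed, rcond=None)

# The sparse fit drives the dense face: apply the weights to every vertex.
full_offsets = B_full @ w
print(full_offsets.shape)  # (60000,)
```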
CONCLUSION
• One year ago, the Colorado State Fair made headlines for
unknowingly awarding first place to an artwork created with help
from artificial intelligence. Now, officials with the 151-year-old fair
have amended the contest’s rules: Artists must disclose whether
they used A.I. to make their submissions, reports the Denver Post's John Wenzel.
• The saga began in August 2022, when game designer Jason Allen
won the top spot in the fair’s digital arts competition. When he
shared his victory online, he mentioned that he’d used
Midjourney—an A.I. program that turns text into images—to help
create his piece, titled Théâtre D’opéra Spatial.
Jason Allen's A.I.-generated work, "Théâtre D'opéra Spatial," took first place in the digital category at the Colorado State Fair.
https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html