AI in 3D Animation
Artificial intelligence in the animation industry.
Uploaded by toonsbooms

AI IN 3D ANIMATION
RABINA MAHARJAN
Researchers from the University of Southern California, Pinscreen, and Microsoft have
developed a deep-learning-based method to generate full 3D hair geometry from
single-view images in real time.

The researchers used a generative adversarial network (GAN) to create realistic hair
models from input images. A GAN consists of two neural networks: a generator
producing realistic outputs and a discriminator distinguishing between real and fake
outputs. The generator and the discriminator compete, improving their performance
over time.
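The adversarial setup described above can be sketched with a toy example. This is a minimal, illustrative Python sketch, not the researchers' actual model: a one-parameter "generator", a logistic "discriminator", and the two standard GAN losses computed on 1D toy data.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "discriminator": a logistic classifier scoring how real a sample looks (1 = real).
def discriminator(x, w=1.0, b=-1.0):
    return sigmoid(w * x + b)

# Toy "generator": maps random noise to samples.
def generator(z, theta=0.5):
    return theta * z

real = [random.gauss(2.0, 0.1) for _ in range(100)]              # "real" data near 2.0
fake = [generator(random.gauss(0.0, 1.0)) for _ in range(100)]   # generated samples

# Discriminator loss: reward D(real) -> 1 and D(fake) -> 0.
d_loss = -sum(math.log(discriminator(x)) for x in real) / len(real) \
         - sum(math.log(1.0 - discriminator(x)) for x in fake) / len(fake)

# Generator loss: reward D(fake) -> 1, i.e. fooling the discriminator.
g_loss = -sum(math.log(discriminator(x)) for x in fake) / len(fake)
```

In training, gradient steps on `d_loss` and `g_loss` would alternate, which is the competition the text describes.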

The researchers trained their GAN on a large dataset of 3D hair models and then used it to generate hair geometry from 2D images.
They also used a neural rendering technique to render the hair with realistic lighting and shading effects.

Their system takes smartphone photos as input and produces 3D hair models as output. The process is divided into two stages:
first, the system estimates the 2D orientation of each hair strand in the image; second, it reconstructs the 3D shape of each strand
using a geometric model.
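The first stage, estimating a 2D orientation at each strand pixel, is often done from image gradients: the intensity gradient points across a strand, so the strand direction is perpendicular to it. A minimal sketch on a synthetic striped image (illustrative only, not the paper's learned estimator):

```python
import math

# Tiny synthetic grayscale "hair" image: diagonal stripes, i.e. strands at 45 degrees.
H, W = 16, 16
img = [[math.sin((x - y) * 0.8) for x in range(W)] for y in range(H)]

def strand_orientation(img, x, y):
    """Estimate local strand orientation (radians, in [0, pi)) at pixel (x, y).

    The intensity gradient points across a strand, so the strand direction
    is the gradient direction rotated by 90 degrees."""
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0   # central differences
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return (math.atan2(gy, gx) + math.pi / 2.0) % math.pi

theta = strand_orientation(img, 8, 8)   # expect ~45 degrees for these stripes
```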

The system can handle various hairstyles, colors, lengths, and densities. It can also deal with occlusions, such as when the face or
clothing partially hides hair. The system can generate 3D hair models on a standard GPU in less than a second.
The researchers claim their method is the first to produce realistic 3D hair geometry from single-view images in real time. They also
say it outperforms previous methods in accuracy, speed, and visual quality.
The researchers hope their method can be used for various applications, such as virtual try-on, face swapping, avatar creation, and
animation. They also plan to improve their method by incorporating more data sources, such as videos and depth maps.

THE RESEARCHERS PRESENTED THEIR WORK AT THE ACM SIGGRAPH CONFERENCE IN AUGUST 2023.
AI IN 3D ANIMATION

Generative AI in animation involves using algorithms and machine learning
techniques to generate content autonomously. These algorithms are trained on vast
amounts of data, allowing them to learn animation patterns, styles, and
characteristics.

How can AI be used in 3D animation?

AI helps to generate 3D models, animate 3D objects, and render the 3D scenes for the
animation. It can also be used to automate tedious tasks such as rigging and texturing.
Ultimately, by using AI, 3D animators can create more complex animations faster and with
less manual intervention.
1 MOTION CAPTURE
Motion capture, also called “mocap,” is a technology-driven
method of capturing an actor’s motion and physical
performance so it can be translated to a CGI character. Mocap
can track various types of motion, such as facial expressions
and body movements. Mocap can be used in animated films,
but is most popular for creating CGI characters in live-action
movies.

AI algorithms analyze real-world motion data (captured from
actors or objects) to replicate natural movements in animated
characters.
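As a toy illustration of transferring captured motion onto a character (the joint names and measurements here are hypothetical): rotations usually transfer directly, while root translation is scaled by the ratio of character to actor proportions.

```python
# Hypothetical sketch of mocap retargeting: joint rotations transfer
# unchanged, while root translation is scaled by a body-size ratio.
actor_leg_len = 0.90   # metres (assumed actor measurement)
char_leg_len = 0.45    # metres (assumed character measurement)
scale = char_leg_len / actor_leg_len

captured_frame = {
    "hips_pos": (0.0, 0.95, 0.20),   # actor root position (x, y, z)
    "knee_rot_deg": 35.0,            # a captured joint rotation
}

retargeted_frame = {
    "hips_pos": tuple(c * scale for c in captured_frame["hips_pos"]),
    "knee_rot_deg": captured_frame["knee_rot_deg"],   # rotations copy over
}
```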
DEEPMOTION
DeepMotion is software that uses AI-powered markerless motion capture and real-time 3D
body tracking. You can use this tool to transform your videos into 3D animations with full-body,
three-dimensional motion, including face and hand movements, in just a few clicks.
Features
• Supports multiple output formats, including FBX, BVH, GLB, DMPE, MP4, JPG, PNG, and GIF.
• Tracks basic facial expressions and hand gestures.
• Full-body marker tracking, including face tracking, hand tracking, and multi-person tracking.
• Foot-locking capability.
DEEPMOTION
Pros:
• Physics simulation
• Hand-to-ground contact
• Motion smoothing

Cons:
• Freemium plans are only available for personal and non-commercial use.
• Motion smoothing is only available to Professional and Studio plan users.
ANIMATE 3D AND SAYMOTION BY DEEPMOTION
Bringing digital humans to life with AI

ANIMATE 3D: TURN VIDEO INTO 3D ANIMATION
• Full-Body Markerless Tracking
• Face Tracking
• Hand Tracking
• Multi-person Tracking
SAYMOTION: TURN TEXT INTO 3D ANIMATION
ROKOKO VISION: BEST FOR REAL-TIME
FULL-BODY MOTION CAPTURE
Rokoko Vision is a video-to-3D AI generator. Rokoko is a technology company that
specializes in motion capture. It has developed a range of products and software
solutions for motion capture, including the SmartSuit Pro II, a full-body motion
capture suit that captures your body’s motion and streams the data over WiFi in
real time to your digital characters.
ROKOKO VISION

Features
• Users can export files to .FBX or .BVH.
• Integration with popular animation software such as Unity, Unreal Engine,
Autodesk Maya, and Blender.
• Full-body motion capture.
• You can import and retarget custom characters.
Pros:
• Live stream to Unreal, Blender, and Maya
• Real-time retargeting

Cons:
• The free plan has limited support.
• Rokoko’s Starter plan lacks live-stream capability.
CASCADEUR - AI-ASSISTED KEYFRAME
ANIMATION SOFTWARE
• Cascadeur is software for creating
character animation without motion
capture. Using a physics-based approach,
it allows for creating expressive and
realistic animations for movies and video
games.
• It is a standalone 3D application that lets
you create keyframe animation, as well as
clean up and edit imported animations.
Thanks to its AI-assisted and physics tools,
you can dramatically speed up the
animation process and get high-quality
results.
PHYSICS-BASED SIMULATIONS
When it comes to 3D animation, incorporating physics-based simulations can significantly enhance realism and dynamics.
Here are some ways physics simulations are applied in 3D animation:
1. Gravity: Simulating gravitational forces allows objects to fall naturally, creating lifelike motion. Whether it’s a character
jumping or an object dropping, accurate gravity modeling adds authenticity.
2. Collision Detection: Physics engines detect collisions between objects. When two objects collide, their behavior
(bounce, deformation, or fragmentation) can be realistically simulated.
3. Particle Effects: Particles like smoke, fire, water droplets, or sparks can be dynamically generated using physics-based
simulations. These effects add visual richness to animations.
4. Soft Bodies: Simulating soft materials (like cloth, rubber, or flesh) involves modeling their elasticity, deformation, and
response to external forces. Soft body simulations create realistic movement.
5. Fluid Dynamics: For scenes involving liquids (water, lava, or even splashes), fluid simulations handle fluid behavior,
surface tension, and interactions with other objects.
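Points 1 and 2 above can be illustrated in a few lines: a ball integrated under gravity, with a ground-plane collision resolved by reflecting and damping its velocity. A minimal sketch using semi-implicit Euler integration, with constants chosen purely for illustration:

```python
# Minimal physics sketch: a ball falls under gravity and bounces off
# the ground with a restitution coefficient (semi-implicit Euler).
G = -9.81           # gravity (m/s^2)
DT = 1.0 / 60.0     # 60 fps time step
RESTITUTION = 0.6   # fraction of speed kept after each bounce

y, vy = 5.0, 0.0    # start 5 m above the ground, at rest
bounces = 0
for _ in range(600):                # simulate 10 seconds
    vy += G * DT                    # integrate velocity
    y += vy * DT                    # integrate position
    if y < 0.0:                     # collision with the ground plane
        y = 0.0                     # resolve penetration
        vy = -vy * RESTITUTION      # reflect and damp velocity
        bounces += 1
```

Each bounce is lower than the last because the restitution coefficient removes energy, which is the "realistic behavior on collision" the list describes.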
EXAMPLE OF AI-DRIVEN, PHYSICS-BASED CHARACTER ANIMATION
CASCADEUR
Features
• Cascadeur allows you to animate humans, animals, and other subjects without using motion-capture
tools. With this program, you can make action-packed scenes more realistic.
• Powerful rigging tools.
• Cascadeur allows you to customize a character’s trajectory, adjust the angular momentum, and
change fulcrum points, such as ground contacts.
• Cascadeur works with .FBX, .DAE, and .USD files, making it simple to integrate into almost any
animation workflow.
Pros:
• Simple rigging
• Physics-based tools
• Options that replace mocap features
• Compatible with other software

Cons:
• No Mac support
• No full version yet
Minimum Hardware Requirements
Processor: 64-bit Intel® or AMD® multi-core processor for Windows/Linux, or M1/M2/M3 ARM-based processor for macOS; SSE4.1 instruction set support; 2.4 GHz or higher
Memory: 4 GB
Video card: NVIDIA GTX 550 Ti or better; OpenGL 3.3 support

Recommended Hardware Requirements
Processor: 64-bit Intel® or AMD® multi-core processor for Windows/Linux, or M1/M2/M3 ARM-based processor for macOS; AVX instruction set support; 3.5 GHz or higher
Memory: 8 GB
Video card: AMD HD 7000+ / NVIDIA GTX 650 or better

Supported Operating Systems


•Windows 10
•Windows 11
•Ubuntu 20.04 or later
•macOS 13.3 or later
OVERVIEW OF THESE AI SOFTWARE

Tool           Best for                                  Text-to-3D   Image-to-3D   Video-to-3D   Free plan   Starting price
DeepMotion     Full-body marker tracking                 No           No            Yes           Yes         $15 per month
Rokoko Vision  Real-time full-body motion capture        No           No            Yes           Yes         $25 per month
Cascadeur      AI-assisted keyframe animation software   No           No            Yes           Yes         $99 per year
2 FACIAL EXPRESSIONS
• Facial Rigging: AI-driven rigging systems create flexible facial structures that respond to
muscle movements.
• Emotion Transfer: AI models transfer emotions from reference videos to animated
characters, capturing subtle nuances.
• Speech-Driven Animation: AI synchronizes lip movements with speech, enhancing realism.

Traditional Approach:
• In traditional rigging, animators manually set up control points (joints) for facial
expressions.
• Each joint corresponds to a specific facial muscle or feature (e.g., eyebrows, lips,
cheeks).
• Animators painstakingly adjust these controls frame by frame to achieve desired
expressions.
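The frame-by-frame workflow above boils down to sampling animation curves between keyframes. A minimal sketch of linear interpolation on a single, hypothetical eyebrow control channel; production packages use spline tangents, but the idea is the same:

```python
def sample_channel(keys, t):
    """Linearly interpolate an animation channel at time t.

    keys: list of (time, value) keyframes, sorted by time."""
    if t <= keys[0][0]:
        return keys[0][1]          # clamp before the first key
    if t >= keys[-1][0]:
        return keys[-1][1]         # clamp after the last key
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)   # normalized position in segment
            return v0 + u * (v1 - v0)

# Hypothetical eyebrow-raise control keyframed from 0 (rest) to 1 (raised) and back.
brow = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
```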
2.1 AI-POWERED FACIAL RIGGING
POLYWINK: AI-DRIVEN FACIAL RIGGING
Automatic Face Rigging
• The Advanced Rig on Demand service automatically generates a FACS-based facial rig with 236
blendshapes. It adapts to the specific topology and morphology of any 3D character, whether a
scanned head, a photorealistic model, or a cartoonish character.
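A blendshape rig like this combines target shapes linearly: the final mesh is the neutral mesh plus a weighted sum of per-target vertex offsets. A minimal sketch with two hypothetical shapes on a two-vertex "mesh":

```python
# Minimal blendshape sketch: a "mesh" is a flat list of vertex coordinates
# (x, y, z per vertex). Shapes and weights here are hypothetical.
neutral  = [0.0, 0.0, 0.0,   1.0, 0.0, 0.0]    # two vertices at rest
smile    = [0.0, 0.2, 0.0,   1.0, 0.2, 0.0]    # target: mouth corners up
jaw_open = [0.0, 0.0, 0.0,   1.0, -0.5, 0.0]   # target: jaw moved down

def apply_blendshapes(neutral, targets, weights):
    """mesh = neutral + sum_i w_i * (target_i - neutral)"""
    out = list(neutral)
    for target, w in zip(targets, weights):
        for i in range(len(out)):
            out[i] += w * (target[i] - neutral[i])
    return out

# Half smile with a slightly open jaw.
mesh = apply_blendshapes(neutral, [smile, jaw_open], [0.5, 0.2])
```

A full FACS rig does exactly this with 236 targets instead of two, with the weights driven by animation controls or capture data.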

Deliverable 3D Formats
Polywink offers three deliverable formats:
• FBX file: compatible with most software, such as Unity, Unreal Engine, and Blender
• Maya scene: up to Maya 2022, with a faceboard for keyframe animation and an FBX file
• Unreal Engine 5 project: a LiveLink setup working with the 236 blendshapes, plus an animation
faceboard in Unreal Engine 5
THE MEDUSA FACIAL CAPTURE SYSTEM, WHICH HAS
BEEN USED IN 30+ MOVIES, INCLUDING AVENGERS:
ENDGAME
In Avengers: Infinity War, Marvel features one of the mightiest
villains of its universe very prominently. Thanos, the alien supervillain,
was completely computer generated by Digital Domain and brought to
life by the performance of Josh Brolin.

In order to transfer the performance of Brolin onto Thanos, the visual effects
studio relied heavily on technology from Disney Research. Brolin was
captured using the Medusa Performance Capture System, a Disney
proprietary capture technology that can reconstruct the three-dimensional
shape of the human face over time at very high resolution. Based on this
data, Digital Domain built a digital double of Brolin, which they control
using their in-house tool Masquerade. The technology underneath
Masquerade is based on a Disney Research project,
Facial Performance Enhancement Using Dynamic Shape Space Analysis,
which makes it possible to control a high-quality face model from sparse
input data. This allowed the team to employ traditional marker-based
motion capture from helmet cameras while achieving much higher
capture fidelity.
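Driving a high-quality face model from sparse markers can be framed as a least-squares fit: find blendshape weights whose combined marker displacements best match the observed ones. A minimal sketch with two hypothetical basis shapes, solving the 2x2 normal equations by hand (the published system is far more sophisticated than this):

```python
# Hypothetical sketch: recover two blendshape weights from sparse marker
# displacements by least squares (2x2 normal equations, solved by hand).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fit_two_weights(b1, b2, d):
    """Solve min_w ||w1*b1 + w2*b2 - d||^2 for (w1, w2)."""
    a11, a12, a22 = dot(b1, b1), dot(b1, b2), dot(b2, b2)
    r1, r2 = dot(b1, d), dot(b2, d)
    det = a11 * a22 - a12 * a12
    w1 = (r1 * a22 - r2 * a12) / det
    w2 = (r2 * a11 - r1 * a12) / det
    return w1, w2

# Per-marker displacement deltas for two basis shapes, and an observation
# that is exactly 0.3*b1 + 0.5*b2 (four markers, one coordinate each):
b1 = [1.0, 0.0, 2.0, 0.0]   # e.g. "smile" deltas at the markers
b2 = [0.0, 1.0, 0.0, 3.0]   # e.g. "jaw open" deltas
d = [0.3, 0.5, 0.6, 1.5]    # observed marker displacements

w1, w2 = fit_two_weights(b1, b2, d)
```

Once the weights are recovered, they drive the full high-resolution model, which is how sparse helmet-camera markers can control dense facial geometry.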
CONCLUSION

AI tools for 3D animation are making the animation process more
efficient and accessible, while also enabling the creation of complex
and imaginative scenes that were previously difficult or
time-consuming to produce. As these tools continue to evolve, we can
expect them to further transform the landscape of 3D animation and
visual storytelling.
Jason Allen’s A.I.-generated work, “Théâtre D’opéra Spatial,” took first place in the digital category at
the Colorado State Fair.
ART MADE WITH A.I. WON A STATE FAIR LAST YEAR.
NOW, THE RULES ARE CHANGING
Artists who submit to the competition will need to disclose whether they used A.I. tools like Midjourney.

• One year ago, the Colorado State Fair made headlines for
unknowingly awarding first place to an artwork created with help
from artificial intelligence. Now, officials with the 151-year-old fair
have amended the contest’s rules: Artists must disclose whether
they used A.I. to make their submissions, reports the Denver Post’s John Wenzel.
• The saga began in August 2022, when game designer Jason Allen
won the top spot in the fair’s digital arts competition. When he
shared his victory online, he mentioned that he’d used
Midjourney—an A.I. program that turns text into images—to help
create his piece, titled Théâtre D’opéra Spatial.
https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
