Optimize Your Game Performance for Mobile, XR, and the Web in Unity: Unity 6 Edition
Introduction
Rendering optimization
    Profiling tips
    Adaptive Performance
Assets
    Compress textures
    Unity DataTools
    Use ScriptableObjects
Project configuration
    Vsync Count
GPU optimization
    Benchmark the GPU
    Disable shadows
    Spatial-Temporal Post-Processing
Shaders
    Optimize SkinnedMeshRenderers
Audio
    Optimize workflow
Physics
    Simplify colliders
    Use Physics.BakeMesh
    Framerate
    Compression
    Chrome DevTools
XR optimization tips
    Render Mode
    Foveated rendering
This guide brings together all the best and latest mobile, XR, and Unity Web performance
optimization tips for Unity 6. It is one of two optimization guides available, the other being
Optimize your game performance for consoles and PC in Unity.
Optimizing your mobile, XR, or Unity Web application is an essential process that underpins the
entire game development cycle. Hardware continues to evolve, and your game’s optimization –
along with its art, game design, audio, and monetization strategy – plays a key role in shaping
the player experience.
Mobile, XR, and web games have active user bases reaching the billions. In the case of mobile,
if your game is highly optimized, it has a better chance at passing certification from platform-
specific stores. Aim for a performant application on the widest range of devices to maximize
your opportunity for success at launch and beyond.
This e-book assembles knowledge and advice from Unity engineers who have partnered with
developers across the industry to help them launch the best games possible.
Note that many of the optimizations discussed here may introduce additional complexity, which can mean extra maintenance and potential
bugs. Balance performance gains against the time and labor cost when implementing these best practices.
Unity recommends the Universal Render Pipeline (URP) for developing XR (extended
reality), web, and mobile games and applications. URP is designed for high performance
and scalability, offering efficient rendering that can adapt to a wide range of hardware. It
enables you to achieve better visual quality while maintaining smooth performance, making it
ideal for platforms where resource efficiency is crucial, such as WebGL and mobile devices.
Additionally, URP allows for easier customization, ensuring your applications run optimally
across diverse environments.
Choose URP as your render pipeline if you are developing a Unity mobile, XR, or web game.
In addition to selecting URP, you can adjust the render pipeline asset to further customize
your settings.
Rendering optimization
URP offers presets tailored for quality and performance, such as for tetherless VR experiences
or AR apps on mobile devices. Selecting the appropriate render settings optimizes your application
for mobile hardware, ensuring efficient rendering and smooth performance. The optimized
settings of URP manage texture quality, shadow resolution, and lighting efficiently, providing
a balance between visual fidelity and performance suitable for the constraints of mobile and
tetherless XR devices.
For mobile, XR, and web projects, it’s crucial to profile your application early and throughout the
development cycle, not just when you’re nearing launch. Address performance issues such as
glitches or spikes as soon as they appear, and benchmark performance before and after major
changes. By developing a clear “performance profile” for your project, you can more easily
identify and resolve new issues, ensuring optimal performance across all target platforms.
While profiling in the Editor can give you an idea of the relative performance of different
systems in your game, profiling on each device gives you the opportunity to gain more accurate
insights. Profile a development build on target devices whenever possible. Remember to profile
and optimize for both the highest- and lowest-spec devices that you plan to support.
Unity provides several built-in profiling tools: the Unity Profiler, the Memory Profiler, and the
Profile Analyzer. There are also platform-native tools from Apple and Google for further
performance testing on iOS and Android hardware:
— Android Studio: The latest Android Studio includes a new Android Profiler that
replaces the previous Android Monitor tools. Use it to gather real-time data about
hardware resources on Android devices.
— Arm Mobile Studio: This suite of tools can help you profile and debug your games
in great detail, catering toward devices running Arm hardware.
— Developer tools for Meta Quest: See Meta’s developer tools website for
information about developing apps for Meta Quest headsets.
Certain hardware can also take advantage of Intel VTune, which helps you to find and fix
performance bottlenecks on Intel platforms (with Intel processors only).
Of course, not every optimization described here will apply to your application. Something
that works well in one project may not translate to yours. Identify genuine bottlenecks and
concentrate your efforts on what benefits your work. To learn more about how to plan your
profiling workflows see the Ultimate guide to profiling Unity games.
A chart from the profiling e-book featuring a workflow you can follow to profile your Unity projects efficiently
The Profiler is instrumentation-based; it profiles timings of game and engine code that are
automatically marked up (such as MonoBehaviour’s Start or Update methods, or specific API
calls), or explicitly wrapped using the ProfilerMarker API.
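As a minimal sketch of the explicit approach, you can wrap your own code in a named ProfilerMarker so it appears as a labeled sample in the CPU timeline (the class and method names here are illustrative):

```csharp
using Unity.Profiling;
using UnityEngine;

public class PathfindingSystem : MonoBehaviour
{
    // Shows up as a named sample in the Profiler's CPU timeline.
    static readonly ProfilerMarker s_UpdatePathsMarker =
        new ProfilerMarker("PathfindingSystem.UpdatePaths");

    void Update()
    {
        // Auto() begins the sample and ends it when the using block exits.
        using (s_UpdatePathsMarker.Auto())
        {
            UpdatePaths(); // the work you want to measure
        }
    }

    void UpdatePaths() { /* heavy work here */ }
}
```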
Begin by enabling the CPU and Memory tracks as your default. You can monitor
supplementary Profiler Modules like Renderer, Audio, and Physics, as needed for your game
(e.g., physics-heavy or music-based gameplay).
Use the Unity Profiler to test performance and resource allocation for your application.
To capture profiling data from an actual mobile device within your chosen platform, check
the Development Build and Autoconnect Profiler boxes before you click Build and Run.
Alternatively, if you want the app to start separately from your profiling, you can uncheck the
Autoconnect Profiler box, and then connect manually once the app is running.
Choose the platform target to profile. The Record button tracks several seconds of your
application’s playback (300 frames by default). Go to Unity > Preferences > Analysis >
Profiler > Frame Count to increase this as far as 2000 if you need longer captures. While this
means that the Unity Editor has to do more CPU work and take up more memory, it can be
useful depending on your specific scenario.
If you need in-depth analysis capturing detailed information about your application you can
also use the Deep Profiling setting. This enables Unity to profile the beginning and end of
every function call in your script code, telling you exactly which part of your application is
being executed and potentially causing a delay. However, deep profiling adds overhead to
every method call and may skew the performance analysis as it slows down the execution of
your game during the profiling session.
Click in the window to analyze a specific frame. Next, use either the Timeline or Hierarchy
view for the following:
— Timeline shows the visual breakdown of timing for a specific frame. This allows you to
visualize how the activities relate to one another and across different threads. Use this
option to determine if you are CPU- or GPU-bound:
— If the CPU frame time is significantly higher than the GPU frame time, your game
is CPU-bound. This means the CPU is taking longer to process the game logic,
physics, or other calculations, and the GPU is waiting for the CPU to finish its tasks.
— Similarly, if the GPU frame time is higher than the CPU frame time, your game is
GPU-bound. This indicates that the GPU is taking longer to render graphics, and
the CPU is waiting for the GPU to finish rendering.
— The Hierarchy view shows the hierarchy of ProfilerMarkers, grouped together. This
allows you to sort the samples based on time cost in milliseconds (Time ms and Self
ms). You can also count the number of Calls to a function and the managed heap
memory (GC Alloc) on the frame. By sorting by Time ms or Self ms, you can then identify
the functions that are taking the most time, either on their own or due to the functions
they call. This helps you focus your optimization efforts on the areas that will give the
biggest performance gains.
You can find a complete overview of the Unity Profiler here. If you’re new to profiling, you can
also watch this Introduction to Unity Profiling.
Before optimizing anything in your project, save the Profiler .data file. Implement your changes
and compare the saved .data before and after the modification. Rely on this cycle to improve
performance: profile, optimize, and compare. Then, rinse and repeat.
Take an even deeper dive into frames and marker data with the Profile Analyzer, which complements the existing Profiler.
When developing virtual reality (VR) apps, maintaining a high and stable frame rate is even
more critical to ensure a smooth and immersive experience, and to prevent motion sickness.
The most common target for VR applications is 90 fps, which gives you a strict frame budget
of just 11.11 ms per frame (1000 ms / 90 fps). This higher requirement is necessary because VR
needs to render each frame twice—once for each eye—and small imperfections in timing can
be far more noticeable to the user.
A consistent and high frame rate is also essential for Unity Web Builds, the performance of
which is highly dependent on the browser’s efficiency and the hardware capabilities. A tight
time budget per frame remains a critical factor. For example, if you’re targeting 60 fps in a
Unity WebGL build, you still have only 16.66 ms per frame to work with. This budget includes
all aspects of rendering, physics calculations, and game logic, which means that optimizing
every part of your application is crucial. Efficient management of assets, reducing the
complexity of scenes, and optimizing shaders and scripts are all necessary steps to ensure
that your application can meet its performance targets.
It’s also important to consider the impact of WebAssembly (Wasm) performance, which Unity
uses to compile and run your code in the browser. While Wasm offers significant performance
improvements over traditional JavaScript, it’s still important to profile and optimize your code
to ensure that you’re making the most of the available frame time.
Devices can exceed this budget for short periods of time (e.g., for cutscenes or loading
sequences) but not for a prolonged duration.
Most mobile devices do not have active cooling like their desktop counterparts. Physical heat
levels can directly impact performance.
If the device is running hot, the Profiler might report poor performance that would not occur
under normal operating conditions. To keep thermal throttling from skewing your results, profile
in short bursts. This cools the device and better simulates real-world conditions. Our general
recommendation is to let the device cool for 10–15 minutes before profiling again.
The Profiler can tell you if your CPU is taking longer than your allotted frame budget, or if the
culprit is your GPU. It does this by emitting markers prefixed with Gfx as follows:
— If you see the Gfx.WaitForCommands marker, it means that the render thread is ready,
but you might be waiting for a bottleneck on the main thread.
— If you frequently encounter Gfx.WaitForPresent, it means that the main thread is ready
but might be waiting for the GPU to present the frame.
Effective memory management is crucial for ensuring smooth performance. Unity handles
automatic memory management for your scripts and user-generated code, allocating small,
transient data on the stack and larger, long-term data on managed or native heaps. However,
the demands of XR, web, and mobile applications require a more careful approach to memory
usage, as inefficient memory management can lead to performance issues such as slow
frame rates, increased load times, and even application crashes. In this section, we’ll explore
strategies to optimize memory usage across these platforms, helping you deliver responsive
and stable applications.
Manage object lifecycles: Properly manage the creation and destruction of objects to prevent
memory leaks and unnecessary resource usage. Use Destroy() to remove unused objects and
set references to null when they are no longer needed, which can free up memory.
Object pooling: Reuse frequently used objects, such as bullets, enemies, and UI elements,
rather than creating and destroying them repeatedly. Implementing object pools can
significantly reduce the overhead associated with object instantiation and destruction and
save on memory resources. Read more about object pooling in a Unity project.
Reduce garbage collection impact: Minimize allocations to reduce the frequency and impact
of garbage collection, which can cause performance hitches. Avoid frequent allocations
in update loops by preallocating arrays and lists where possible. Use value types (structs)
instead of reference types (classes) when appropriate, as structs are allocated on the stack
and do not contribute to garbage collection overhead.
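A minimal sketch of these ideas, preallocating buffers once and reusing them in the update loop (the class name and query parameters are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ProximityTracker : MonoBehaviour
{
    // Preallocated once; reused every frame instead of allocating in Update.
    readonly List<Collider> _results = new List<Collider>(64);
    readonly Collider[] _overlapBuffer = new Collider[64];

    void Update()
    {
        _results.Clear(); // reuse the list rather than creating a new one

        // Non-allocating physics query writes into the preallocated buffer,
        // so no garbage is generated per frame.
        int count = Physics.OverlapSphereNonAlloc(transform.position, 5f, _overlapBuffer);
        for (int i = 0; i < count; i++)
            _results.Add(_overlapBuffer[i]);
    }
}
```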
— Lazy loading: Defer the loading of resources until they are actually needed. This can
facilitate faster initial load times and more efficient resource management.
— Use the Addressable Asset System: Utilize the Addressable Asset System to manage
assets asynchronously at runtime. This system is particularly beneficial for web and
mobile platforms, supporting remote asset hosting, dynamic content updates, and lazy
loading.
The garbage collector periodically identifies and deallocates unused managed heap memory.
The asset garbage collection runs on demand or when you load a new scene and deallocates
native objects and resources. While this runs automatically, the process of examining all the
objects in the heap can cause the game to stutter or run slowly.
Optimizing your memory usage means being conscious of when you allocate and deallocate
heap memory, and how you minimize the effect of garbage collection.
Use the Unity Objects tab to identify areas where you can eliminate duplicate memory entries
or find which objects use the most memory. The All of Memory tab displays a breakdown of
all the memory in the snapshot that Unity tracks.
Learn how to leverage the Memory Profiler in Unity for improved memory usage.
— Strings: In C#, strings are reference types, not value types. Reduce unnecessary string
creation or manipulation if you are using them at large scale. Avoid parsing string-
based data files such as JSON and XML; store data in ScriptableObjects or formats
like MessagePack or Protobuf instead. Use the StringBuilder class if you need to build
strings at runtime.
— Unity function calls: Some functions create heap allocations. Cache references to arrays
rather than allocating them in the middle of a loop. Also, take advantage of certain
functions that avoid generating garbage. For example, use GameObject.CompareTag
instead of manually comparing a string with GameObject.tag (as returning a new string
creates garbage).
— Coroutines: Though yield does not produce garbage, creating a new WaitForSeconds
object does. Cache and reuse the WaitForSeconds object rather than creating it in the
yield line.
— LINQ and Regular Expressions: Both of these generate garbage from behind-the-
scenes boxing. Avoid LINQ and Regular Expressions if performance is an issue. Write for
loops and use lists as an alternative to creating new arrays.
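The coroutine tip above can be sketched as follows, caching a single WaitForSeconds instead of constructing one per iteration (the Spawner class and interval are illustrative):

```csharp
using System.Collections;
using UnityEngine;

public class Spawner : MonoBehaviour
{
    // Allocated once; reusing it avoids one heap allocation per loop iteration.
    static readonly WaitForSeconds s_Wait = new WaitForSeconds(0.5f);

    void Start() => StartCoroutine(SpawnLoop());

    IEnumerator SpawnLoop()
    {
        while (enabled)
        {
            Spawn();
            yield return s_Wait; // no new WaitForSeconds each time
        }
    }

    void Spawn() { /* instantiate or enable a pooled object */ }
}
```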
For more information, see the manual page on Garbage collection best practices.
See Understanding automatic memory management for examples of how to use this to your
advantage.
With Unity and Samsung’s Adaptive Performance, you can monitor the device’s thermal and
power state to ensure that you are ready to react appropriately. When users play for an
extended period of time, you can reduce your level of detail (LOD) bias dynamically so that
your game continues to run smoothly. Adaptive Performance allows developers to increase
performance in a controlled way while maintaining graphics fidelity.
While you can use Adaptive Performance APIs to fine-tune your application, Adaptive
Performance also offers several automatic modes. In these modes, Adaptive Performance
determines the game settings based on key device metrics, such as the device's thermal state
and performance bottleneck. These metrics dictate the state of the device, and Adaptive
Performance tweaks the adjusted settings to reduce the bottleneck. This is done by providing
an integer value, known as an Indexer, to describe the state of the device.
To learn more about Adaptive Performance, you can view the samples we’ve provided in
the Package Manager by selecting Package Manager > Adaptive Performance > Samples.
Each sample interacts with a specific scaler, so you can see how the different scalers impact
your game. We also recommend reviewing the end-user documentation to learn more about
Adaptive Performance configurations and how you can interact directly with the API.
A well-optimized asset pipeline can speed up load times, reduce memory usage, and improve
runtime performance. By working with an experienced technical artist, your team can define
and enforce asset formats, specifications, and import settings to ensure an efficient and
streamlined workflow.
Don’t rely solely on default settings. Take advantage of the platform-specific override tab to
optimize assets like textures, mesh geometry, and audio files. Incorrect settings can result in
larger build sizes, longer build times, and poor memory usage.
Consider using Presets to establish baseline settings tailored to your specific project needs.
This proactive approach helps ensure that assets are optimized from the start, leading to
better performance and a more consistent experience across all platforms.
For more guidance, refer to the best practices for art assets or explore the 3D Art Optimization
for Mobile Applications course on Unity Learn. These resources provide valuable insights that
can help you make informed decisions about asset optimization for Unity web builds, mobile,
and XR applications.
— Lower the Max Size: Use the minimum settings that produce visually acceptable results.
This is non-destructive and can quickly reduce your texture memory.
— Use powers of two (POT): Unity requires POT texture dimensions for mobile texture
compression formats (PVRTC or ETC).
Proper texture import settings will help optimize your build size.
Compress textures
Consider these two examples using the same model and texture. The settings on the left
consume almost 26 times the memory as those on the right, without much improvement in
visual quality.
Use Adaptive Scalable Texture Compression (ASTC) for mobile, XR, and web. The vast majority
of games in development tend to target min-spec devices that support ASTC compression.
— iOS games targeting A7 devices or lower (e.g., iPhone 5, 5S, etc.) – use PVRTC
— Android games targeting devices prior to 2016 – use ETC2 (Ericsson Texture
Compression)
If compressed formats such as PVRTC and ETC aren’t sufficiently high-quality, and if ASTC is
not fully supported on your target platform, try using 16-bit textures instead of 32-bit textures.
See the manual for more information on recommended texture compression format by
platform.
— Compress the mesh: Aggressive compression can reduce disk space (memory at
runtime, however, is unaffected). Note that mesh quantization can result in inaccuracies,
so experiment with compression levels to see what works for your models.
— Disable Read/Write: Enabling this option duplicates the mesh in memory, which keeps
one copy of the mesh in system memory and another in GPU memory. In most cases,
you should disable it (in Unity 2019.2 and earlier, this option is checked by default).
— Disable rigs and blend shapes: If your mesh does not need skeletal or blendshape
animation, disable these options wherever possible.
— Disable normals and tangents: If you are absolutely certain that the mesh material will
not need normals or tangents, uncheck these options for extra savings.
Unity DataTools
Unity DataTools is a collection of open source tools provided by Unity that aims to enhance
data management and serialization capabilities in Unity projects. It includes features for
analyzing and optimizing project data, such as identifying unused assets, detecting asset
dependencies, and reducing build size.
If you split your non-code assets (models, textures, Prefabs, audio, and even entire scenes)
into an AssetBundle, you can separate them as downloadable content (DLC).
Then, use Addressables to create a smaller initial build for your mobile application. Cloud
Content Delivery lets you host and deliver your game content to players as they progress
through the game.
Click here to see how the Addressable Asset System can take the hassle out of asset
management.
The Unity PlayerLoop contains functions for interacting with the core of the game engine. This
structure includes a number of systems that handle initialization and per-frame updates. All of
your scripts will rely on this PlayerLoop to create gameplay.
When profiling, you’ll see your project’s user code under the PlayerLoop (with Editor
components under the EditorLoop).
The Profiler will show your custom scripts, settings, and graphics in the context of the entire engine’s execution.
If you do need to use Update, consider running the code every n frames. This is one way
to apply time slicing, a common technique of distributing a heavy workload across multiple
frames. In this example, we run the ExampleExpensiveFunction once every three frames:
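A minimal sketch of this pattern, gating the call on Time.frameCount (ExampleExpensiveFunction stands in for any heavy workload):

```csharp
using UnityEngine;

public class TimeSlicedUpdater : MonoBehaviour
{
    const int Interval = 3;

    void Update()
    {
        // Run the heavy work only once every three frames.
        if (Time.frameCount % Interval == 0)
        {
            ExampleExpensiveFunction();
        }
    }

    void ExampleExpensiveFunction() { /* heavy work */ }
}
```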
Unity calls a number of event functions when a script instance loads and initializes:
— Awake
— OnEnable/OnDisable
— Start
Avoid expensive logic in these functions until your application renders its first frame.
Otherwise, you might encounter longer loading times than necessary.
Refer to the order of execution for event functions for more details.
Use preprocessor directives if you are employing these methods for testing:
#if UNITY_EDITOR
void Update()
{
}
#endif
Here, you can freely use the Update in-Editor for testing without unnecessary overhead
slipping into your build.
To do this more easily, consider making a Conditional attribute along with a preprocessing
directive. For example, create a custom class like this:
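A minimal sketch of such a class (the ENABLE_LOG symbol corresponds to the scripting define mentioned in the Player Settings; the class and method names are illustrative):

```csharp
using System.Diagnostics;

public static class Logging
{
    // Calls to this method are compiled out entirely unless ENABLE_LOG
    // is defined in Player Settings > Scripting Define Symbols, so release
    // builds pay no cost for logging.
    [Conditional("ENABLE_LOG")]
    public static void Log(object message)
    {
        UnityEngine.Debug.Log(message);
    }
}
```

Call `Logging.Log("...")` wherever you would normally call `Debug.Log`; removing the define strips every call site from the build.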
Generate your log message with your custom class. If you disable the ENABLE_LOG
preprocessor in the Player Settings, all of your Log statements disappear in one fell swoop.
The same thing applies for other use cases of the Debug Class, such as Debug.DrawLine
and Debug.DrawRay. These are also only intended for use during development and can
significantly impact performance.
When using a Set or Get method on an animator, material, or shader, use the integer-valued
methods instead of the string-valued ones. The string methods simply perform string hashing
and then forward the hashed ID to the integer-valued methods.
Use Animator.StringToHash for Animator property names and Shader.PropertyToID for material
and shader property names. Get these hashes during initialization and cache them in variables
for when they’re needed to pass to a Get or Set method.
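As a sketch, hash once and reuse the cached IDs in the per-frame calls (the parameter name "Speed" and the URP base color property "_BaseColor" are example values for your own shader and animator setup):

```csharp
using UnityEngine;

public class CharacterEffects : MonoBehaviour
{
    // Hash the names once; pass the cached IDs to Set/Get calls afterwards.
    static readonly int SpeedParam = Animator.StringToHash("Speed");
    static readonly int TintProperty = Shader.PropertyToID("_BaseColor");

    Animator _animator;
    Material _material;

    void Awake()
    {
        _animator = GetComponent<Animator>();
        _material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Integer overloads: no string hashing per call.
        _animator.SetFloat(SpeedParam, 1.5f);
        _material.SetColor(TintProperty, Color.red);
    }
}
```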
Instantiating a prefab with the desired components already set up is generally more performant.
Here’s an example that demonstrates the inefficient use of a repeated GetComponent call:
void Update()
{
    Renderer myRenderer = GetComponent<Renderer>();
    ExampleFunction(myRenderer);
}
It’s more efficient to call GetComponent only once, during initialization, and cache the result.
The cached reference can then be reused in Update without any further GetComponent calls.
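A minimal sketch of the cached version (the class name and ExampleFunction are illustrative):

```csharp
using UnityEngine;

public class CachedRendererExample : MonoBehaviour
{
    Renderer _myRenderer;

    void Start()
    {
        // One lookup at startup instead of one per frame.
        _myRenderer = GetComponent<Renderer>();
    }

    void Update()
    {
        ExampleFunction(_myRenderer);
    }

    void ExampleFunction(Renderer r) { /* use the cached reference */ }
}
```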
A best practice for object pooling is to create the reusable instances when a CPU spike is less
noticeable (e.g., during a menu screen). Then track this “pool” of objects with a collection.
During gameplay, enable the next available instance when needed, disable objects instead of
destroying them, and return them to the pool.
This reduces the number of managed allocations in your project and can prevent garbage
collection problems. Unity includes a built-in object pooling feature via the UnityEngine.Pool
namespace. Available in Unity 2021 LTS and later, this namespace facilitates the management
of object pools, automating aspects like object lifecycle and pool size control.
Learn how to create a simple object pooling system in Unity here. You can also see the
object pooling pattern, and many others, implemented in a Unity scene in this sample project
available on the Unity Asset Store.
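The steps above can be sketched with the built-in UnityEngine.Pool.ObjectPool, which handles lifecycle and size control for you (the bullet prefab and capacity are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Pool;

public class BulletPool : MonoBehaviour
{
    [SerializeField] GameObject bulletPrefab;
    ObjectPool<GameObject> _pool;

    void Awake()
    {
        _pool = new ObjectPool<GameObject>(
            createFunc: () => Instantiate(bulletPrefab),
            actionOnGet: b => b.SetActive(true),      // reuse: enable instead of Instantiate
            actionOnRelease: b => b.SetActive(false), // return: disable instead of Destroy
            actionOnDestroy: Destroy,                 // only when the pool trims excess objects
            defaultCapacity: 32);
    }

    public GameObject Fire() => _pool.Get();
    public void Despawn(GameObject bullet) => _pool.Release(bullet);
}
```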
Use ScriptableObjects
Store static values or settings in a ScriptableObject instead of a MonoBehaviour. The
ScriptableObject is an asset that lives inside of the project that you only need to set up once.
MonoBehaviours carry extra overhead since they require a GameObject – and by default
a Transform – to act as a host. That means you need to create a lot of unused data before
storing a single value. The ScriptableObject slims down this memory footprint by dropping the
GameObject and Transform. It also stores the data at the project level, which is helpful if you
need to access the same data from multiple scenes.
A common use case is having many GameObjects that rely on the same duplicate data that
does not need to change at runtime. Rather than having this duplicate local data on each
GameObject, you can funnel it into a ScriptableObject. Then, each of the objects stores a
reference to the shared data asset, rather than copying the data itself. This can provide
significant performance improvements in projects with thousands of objects.
Create fields in the ScriptableObject to store your values or settings, then reference the
ScriptableObject in your MonoBehaviours.
In this example, a ScriptableObject called Inventory holds settings for various GameObjects.
Using those fields from the ScriptableObject can prevent unnecessary duplication of data
every time you instantiate an object with that MonoBehaviour.
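A minimal sketch of this setup (the Inventory fields shown here are assumptions; the original e-book's asset may hold different settings):

```csharp
using UnityEngine;

// One shared asset in the project; every character references it
// instead of duplicating the values per GameObject.
[CreateAssetMenu(fileName = "Inventory", menuName = "Gameplay/Inventory")]
public class Inventory : ScriptableObject
{
    public int maxItems = 20;
    public float carryWeightLimit = 50f;
}

public class Character : MonoBehaviour
{
    // A reference to the shared data asset, not a copy of the data.
    [SerializeField] Inventory inventory;
}
```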
In software design, this is an optimization known as the flyweight pattern. Restructuring your
code in this way using ScriptableObjects avoids copying a lot of values and reduces your
memory footprint. Learn more about the flyweight pattern and many others, as well as design
principles, in the e-book Level up your code with design patterns and SOLID.
Watch this Introduction to ScriptableObjects devlog to see how ScriptableObjects can benefit
your project. Reference Unity documentation here as well as the technical guide Create
modular game architecture in Unity with ScriptableObjects.
There are a few Project Settings that can affect your mobile performance.
Ensure your Accelerometer Frequency is disabled if you are not making use of it in your mobile game.
When targeting XR platforms, the frame rate considerations are even more critical. A
frame rate of 72 fps, 90 fps, or even 120 fps, is often necessary to maintain immersion and
prevent motion sickness. These higher frame rates help ensure a smooth and responsive
experience, which is crucial for comfort in VR environments. However, these come with their
own challenges in terms of power consumption and thermal management, particularly in
standalone VR headsets.
Choosing the right frame rate is about understanding the specific demands and constraints of
your target platform, whether it’s a mobile device, a standalone VR headset, or an AR device.
By carefully selecting an appropriate frame rate, you can optimize both performance and user
experience across different platforms.
You can also adjust the frame rate dynamically during runtime with
Application.targetFrameRate. For example, you could drop below 30 fps for slow or relatively
static scenes and reserve higher fps settings for gameplay.
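As a sketch, such switching might be wired to your own scene events (the class and method names are hypothetical hooks, not a Unity API):

```csharp
using UnityEngine;

public class FrameRateManager : MonoBehaviour
{
    // Lower the cap for static or cinematic scenes to save battery and heat;
    // restore the higher cap when responsive gameplay resumes.
    public void EnterCutscene() => Application.targetFrameRate = 30;
    public void EnterGameplay() => Application.targetFrameRate = 60;
}
```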
When you instantiate objects at runtime, prefer the Instantiate overloads that set the parent,
position, and rotation in the same call rather than adjusting the Transform afterwards:
GameObject.Instantiate(prefab, parent);
GameObject.Instantiate(prefab, position, rotation, parent);
— Mobile Platforms: Mobile devices typically enforce Vsync to match the display’s refresh
rate, often at 60Hz or higher on newer devices. If your application’s frame rate drops
below this target, the device will hold the previous frame, causing noticeable stuttering
or input lag. It’s crucial to optimize rendering performance to maintain a steady frame
rate, ensuring smooth operation across a variety of mobile devices with different
performance capabilities.
— Web Platforms: Web browsers also tend to enforce Vsync, particularly in Unity Web, to
ensure synchronization with the display’s refresh rate. Given the additional overhead
of running within a browser, optimizing your application to maintain a consistent frame
rate is essential to avoid visible performance drops. Test across different browsers and
devices as web platforms can vary in their capabilities.
— XR Platforms: In XR environments, maintaining a high and stable frame rate is even more
critical due to the immersive nature of these experiences. Most XR devices enforce
Vsync at 90Hz or higher, and any drop in frame rate can lead to discomfort or motion
sickness for users. Optimizing every aspect of your application, from rendering to
physics calculations, is essential to ensure the GPU can consistently meet these high
demands.
By understanding how Vsync is managed across XR, web, and mobile platforms, and by
optimizing your application to maintain a consistent frame rate, you can deliver smoother,
more responsive experiences that meet the expectations of users on these diverse platforms.
Vsync Count
The Vsync Count setting in Unity’s Quality settings determines how the rendering of frames is
synchronized with the display’s refresh rate. When set to Every V Blank (equivalent to a Vsync
Count of 1), Unity synchronizes the rendering of frames with each vertical blank, effectively
capping the frame rate to match the display’s refresh rate (e.g., 60Hz = 60 FPS). This helps
prevent screen tearing and ensures smooth visual output.
Alternatively, setting it to Every Second V Blank (Vsync Count of 2) halves the frame rate,
which might be useful in situations where your application struggles to maintain full refresh
rate performance. Disabling Vsync (Don’t Sync) allows for maximum FPS but can result
in screen tearing. On some platforms, Vsync may still be enforced at the hardware level
regardless of this setting.
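These settings can also be controlled from script. The sketch below assumes a simple bootstrap MonoBehaviour (the class name is illustrative) and uses Unity's QualitySettings and Application APIs:

```csharp
using UnityEngine;

public class VsyncSetup : MonoBehaviour
{
    void Awake()
    {
        // 1 = Every V Blank, 2 = Every Second V Blank, 0 = Don't Sync.
        QualitySettings.vSyncCount = 1;

        // Note: Application.targetFrameRate is ignored while vSyncCount > 0
        // on most platforms; it only takes effect when Vsync is disabled.
        // Application.targetFrameRate = 60;
    }
}
```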
With each frame, Unity determines the objects that must be rendered and then creates draw
calls. A draw call is a call to the graphics API to draw objects (e.g., a triangle), whereas a batch
is a group of draw calls to be executed together.
As your projects become more complex, you’ll need a pipeline that optimizes the workload
on your GPU. The Universal Render Pipeline (URP) supports three options for rendering:
Forward, Forward+, and Deferred.
Forward rendering evaluates all lighting in a single pass and is generally recommended as default
for mobile games. Forward+, introduced with Unity 2022 LTS, improves upon standard Forward
rendering by culling lights spatially rather than per object. This significantly increases the overall
number of lights that can be utilized in rendering a frame. Deferred mode is a good choice for
specific cases, such as for games with lots of dynamic light sources. The same physically based
lighting and materials from consoles and PCs can also scale to your phone or tablet.
The three rendering paths differ, for example, in the maximum number of real-time lights per object:
— Forward: 9
— Forward+: Unlimited; the per-Camera limit applies
— Deferred: Unlimited
Learn more about using URP in Unity projects in the e-book Introduction to the Universal
Render Pipeline for advanced Unity creators.
To optimize your graphics rendering, it’s essential to understand the limitations of your target
hardware – whether it’s VR, mobile, or web – and how to effectively profile the GPU. Profiling
allows you to check and verify that your optimizations are having the desired impact.
— VR: VR hardware demands high frame rates (typically 90 FPS or higher) and low
latency to maintain a smooth and immersive experience. The GPU needs to render
complex scenes twice (once for each eye), which requires careful optimization of both
performance and visual fidelity.
— Mobile: Mobile devices have limited processing power and memory compared to
desktops and consoles. Optimizations should focus on minimizing draw calls, reducing
texture sizes, and using simplified shaders to ensure smooth performance without
draining the battery or overheating the device.
— Web: Web platforms, particularly when using Unity Web, must balance performance
with the constraints of running in a browser environment. Optimization should prioritize
reducing build size, minimizing load times, and ensuring compatibility across different
browsers and hardware configurations.
Use these best practices for reducing the rendering workload on the GPU.
See GFXBench for a great list of different industry-standard benchmarks for GPUs and
graphics cards. The website provides a good overview of the current GPUs available and how
they stack up against each other.
— CPU Main: Total time to process one frame (and update the Editor for all windows)
— CPU Render thread: Total time to render one frame of the Game view
— SetPass calls: The number of times Unity must switch shader passes to render the
GameObjects onscreen; each pass can introduce extra CPU overhead.
Note: In-Editor fps does not necessarily translate to build performance. We recommend
that you profile your build for the most accurate results. Frame time in milliseconds is a
more accurate metric than frames per second when benchmarking.
While PC and console hardware can handle a large number of draw calls, the overhead
remains significant enough to justify optimization. On mobile devices, VR headsets, and web
browsers, draw call optimization is crucial for maintaining performance. By reducing the
number of draw calls, you can ensure smoother and more efficient rendering, especially on
resource-constrained platforms.
To optimize performance, especially on web, VR, and mobile platforms, reducing draw calls is
essential. Here are key strategies to achieve this:
1. Use a texture atlas: Combine multiple textures into a single texture atlas to minimize
the number of texture bindings and draw calls. This is particularly important in web and
mobile environments where reducing state changes can improve rendering efficiency.
2. Optimize materials: Limit the number of materials and shaders used in your project.
Shared materials are easier to batch together and reduce the draw call overhead.
3. Implement LOD (Level of Detail): Use LOD techniques to decrease the complexity of
distant objects, reducing the number of draw calls for objects that are far from the
camera. This approach is vital for VR, where maintaining high frame rates is critical to
prevent motion sickness, and for mobile platforms, where processing power is limited.
4. Apply culling techniques: Use frustum culling and occlusion culling to ensure that only
visible objects are rendered. By not drawing objects that are outside the camera’s view
or obscured by other geometry, you can reduce the number of draw calls, improving
performance across all platforms, especially in resource-constrained web and mobile
environments.
Draw call batching minimizes these state changes and reduces the CPU cost of rendering
objects. Unity can combine multiple objects into fewer batches using several techniques:
— SRP Batching: If you are using HDRP or URP, enable the SRP Batcher in your Pipeline
Asset settings under Advanced. When using compatible shaders, the SRP Batcher
reduces the GPU setup between draw calls and makes material data persistent in GPU
Memory. This can speed up your CPU rendering times significantly. Use fewer Shader
Variants with a minimal amount of Keywords to improve SRP batching. Consult this SRP
documentation to see how your project can take advantage of this rendering workflow.
— GPU instancing: If you have a large number of identical objects (e.g., buildings, trees,
grass, and so on with the same mesh and material), use GPU instancing. This technique
batches them using graphics hardware. To enable GPU Instancing, select your material
in the Project window, and in the Inspector, check Enable Instancing.
— Static batching: For non-moving geometry, Unity can reduce draw calls for any meshes
sharing the same material. It is more efficient than dynamic batching, but it uses more
memory.
Mark all meshes that never move as Batching Static in the Inspector. Unity combines
all static meshes into one large mesh at build time. The StaticBatchingUtility also allows
you to create these static batches yourself at runtime (for example, after generating a
procedural level of non-moving parts).
— Dynamic Batching: For small meshes, Unity can group and transform vertices on the
CPU, then draw them all in one go. Note: Do not use this unless you have enough
low-poly meshes (no more than 300 vertices each and 900 total vertex attributes).
Otherwise, enabling it will waste CPU time looking for small meshes to batch.
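As mentioned above, the StaticBatchingUtility lets you build static batches yourself at runtime, for example after generating a procedural level. A minimal sketch (the class and method names are illustrative):

```csharp
using UnityEngine;

public class ProceduralLevelBatcher : MonoBehaviour
{
    // Call after generating the non-moving parts of a procedural level.
    // 'levelRoot' is assumed to be the parent of all the static meshes.
    public void BatchLevel(GameObject levelRoot)
    {
        // Combines all eligible child meshes into static batches at runtime,
        // as if they had been marked Batching Static at build time.
        // The children must not move afterwards.
        StaticBatchingUtility.Combine(levelRoot);
    }
}
```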
You can maximize the effects of batching with a few simple rules:
— Use as few textures in a scene as possible. Fewer textures require fewer unique
materials, making them easier to batch. Additionally, use texture atlases wherever
possible.
— Always bake lightmaps at the largest atlas size possible. Fewer lightmaps require fewer
material state changes, but keep an eye on the memory footprint.
— Keep an eye on the number of static and dynamic batch counts versus the total draw call
count by using the Profiler or the rendering stats during optimizations.
Please refer to the Draw Call Batching documentation for more information.
The GPU Resident Drawer uses the BatchRendererGroup API to draw GameObjects with GPU
instancing, which reduces the number of draw calls and frees CPU processing time. The GPU
Resident Drawer works only with the following:
— Graphics APIs and platforms that support compute shaders, except OpenGL ES
GPU Resident Drawer: Selecting Instanced Drawing in the Render Pipeline Asset
Upon selecting the Instanced Drawing option, you may get a message in the UI warning you
that the “BatchRendererGroup Variants” setting must be set to “Keep All”. Adjusting this option in the
Graphics settings completes the setup for the GPU Resident Drawer.
Set the BatchRendererGroup Variants setting to Keep All within the Graphics settings.
The Frame Debugger breaks each frame into its separate steps.
New to the Frame Debugger? Check out this introductory tutorial.
The performance improvements from this new threading mode scale with the number of draw
calls submitted in each frame. Scenes with more draw calls, e.g., complex scenes with many
objects and textures, can see significant performance enhancements.
Instead, consider using alternatives such as custom shader effects and light probes for
dynamic objects, which can simulate lighting without the heavy performance cost. For static
objects, baked lighting is a more efficient option, as it provides high-quality lighting without
the runtime overhead. By carefully managing lighting, you can maintain visual quality while
optimizing performance across XR, mobile, and web applications.
See this feature comparison table for the specific limits of URP and Built-in Render Pipeline
real-time lights.
Disable shadows
Shadow casting can be disabled per MeshRenderer and light. Disable shadows whenever
possible to reduce draw calls.
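Shadow casting can be toggled from script as well. A sketch of disabling shadows on a renderer (the class name is illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ShadowDisabler : MonoBehaviour
{
    void Start()
    {
        // Stop this renderer from casting and receiving shadows.
        var meshRenderer = GetComponent<MeshRenderer>();
        meshRenderer.shadowCastingMode = ShadowCastingMode.Off;
        meshRenderer.receiveShadows = false;

        // A light can also stop casting shadows entirely:
        // GetComponent<Light>().shadows = LightShadows.None;
    }
}
```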
You can also create fake shadows using a blurred texture applied to a simple mesh or quad
underneath your characters. Otherwise, you can create blob shadows with custom shaders.
Baked shadows and lighting can then render without a performance hit at runtime. The
Progressive CPU and GPU Lightmappers can accelerate the baking of Global Illumination.
To limit memory usage, adjust the Lightmapping Settings (Window > Rendering > Lighting) and Lightmap size.
Follow the manual guide and this article on light optimization to get started with Lightmapping
in Unity.
Adaptive Probe Volumes (APV) in Unity offer a range of features to enhance global
illumination, particularly in dynamic and large scenes. URP now supports per-vertex sampling
for improved performance on lower-end devices, while VFX particles benefit from indirect
lighting baked into probe volumes.
APV data can be streamed from disk to CPU and GPU, optimizing lighting information for large
environments. Developers can bake and blend multiple lighting scenarios, allowing real-time
transitions like day/night cycles. The system also supports sky occlusion, integrates with the
Ray Intersector API for more efficient probe calculations, and offers control over light probe
sample density to reduce light leaking and speed up iterations. The new C# baking API further
refines the workflow, enabling independent baking of APV from lightmaps or reflection probes.
To get started, check out the talk Efficient and impactful lighting with Adaptive Probe Volumes
from GDC 2023.
See the Working with LODs course on Unity Learn for more detail.
While frustum culling outside the camera view is automatic, occlusion culling is a baked
process. Simply mark your objects as Static Occluders or Occludees, then bake via Window
> Rendering > Occlusion Culling. Though not necessary for every scene, culling can improve
performance in specific cases, so be sure to profile before and after enabling occlusion culling
to check if it has improved performance.
Check out the Working with Occlusion Culling tutorial for more information.
5. GPU acceleration: Unlike previous versions that relied heavily on CPU for occlusion
culling, Unity 6 leverages GPU acceleration. This shift allows for more efficient real-time
calculations, reducing the overhead on the CPU and enabling more complex scenes
without sacrificing performance.
6. Integration with GPU Resident Drawer: The GPU occlusion culling works in tandem with
the GPU Resident Drawer, which handles large sets of objects and their visibility, further
optimizing rendering pipelines for both static and dynamic objects.
7. Dynamic and static object culling: Unity 6’s occlusion culling system can manage both
static and dynamic objects more effectively. Dynamic objects are now culled using a
portal-based system, which ensures that only the visible objects are processed, even
when they move within the scene.
8. Baking and real-time adjustments: Developers can bake occlusion data in the Editor,
which is then used at runtime. This process divides the scene into cells and computes
visibility between them, allowing for real-time adjustments as the camera moves. The
system also supports visualizing occlusion culling in the Editor, helping developers
optimize their scenes better.
To activate GPU occlusion culling locate the Render Pipeline Asset and toggle the GPU
Occlusion check box.
You can use Screen.SetResolution(width, height, false) to lower the output resolution and
regain some performance. Profile multiple resolutions to find the best balance between quality
and speed.
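For example, a sketch that renders at a fraction of the device's native resolution (the 0.75 scale factor is an arbitrary starting point to profile against):

```csharp
using UnityEngine;

public class ResolutionScaler : MonoBehaviour
{
    void Start()
    {
        // Render at 75% of the native resolution; profile to find a good
        // trade-off between quality and speed for your content.
        int width = Mathf.RoundToInt(Display.main.systemWidth * 0.75f);
        int height = Mathf.RoundToInt(Display.main.systemHeight * 0.75f);
        Screen.SetResolution(width, height, false);
    }
}
```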
Spatial-Temporal Post-Processing
Spatial-Temporal Post-Processing (STP) is designed to enhance visual quality across a wide
range of platforms, from mobile devices to consoles and PCs. STP is a spatio-temporal anti-
aliasing upscaler that works with both HDRP and URP render pipelines, offering high-quality
content scaling without the need for changes to existing content. This solution is optimized
for GPU performance, ensuring faster rendering times and making it easier to achieve high
performance while maintaining visual quality.
— In the URP Asset’s Inspector, navigate to Quality > Upscaling Filter and select Spatial-Temporal
Post-Processing.
If the default URP shaders don’t meet your specific needs, you can customize them using
Shader Graph, which allows you to visually design and optimize shaders for your project. Here
are a few shader optimization tips:
— Use combined textures: Utilize combined textures like occlusion, roughness, and
metallic (ORM) maps to reduce the number of texture lookups. This approach
consolidates multiple maps into a single texture, lowering the workload on the GPU,
which is crucial for maintaining performance on mobile, web, and XR platforms.
— Optimize Shader Graph: When using Shader Graph, focus on streamlining shader logic
to enhance performance. This is particularly important for mobile and XR applications,
where the efficiency of each shader directly impacts overall performance.
— Profile regularly: Continuously test and profile your shaders on the target devices,
whether web, mobile, or XR, to ensure they meet performance requirements. Regular
profiling helps you catch potential issues early and optimize accordingly for each
platform’s specific needs.
Optimize SkinnedMeshRenderers
Rendering skinned meshes is expensive. Make sure that every object using a
SkinnedMeshRenderer requires it. If a GameObject only needs animation some of the time,
use the BakeMesh function to freeze the skinned mesh in a static pose, then swap to a
simpler MeshRenderer at runtime.
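One way to do this swap, sketched below with an illustrative helper class, is to bake the current pose into a new Mesh, assign it to a MeshRenderer, and disable the SkinnedMeshRenderer:

```csharp
using UnityEngine;

public class SkinnedMeshFreezer : MonoBehaviour
{
    // Hypothetical helper: freeze the skinned mesh into a static snapshot
    // when the character stops animating.
    public void FreezeToStaticPose(SkinnedMeshRenderer skinned)
    {
        var snapshot = new Mesh();
        skinned.BakeMesh(snapshot);            // capture the current pose

        var filter = gameObject.AddComponent<MeshFilter>();
        filter.sharedMesh = snapshot;
        var staticRenderer = gameObject.AddComponent<MeshRenderer>();
        staticRenderer.sharedMaterials = skinned.sharedMaterials;

        skinned.enabled = false;               // stop paying the skinning cost
    }
}
```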
Learn much more about lighting workflows in Unity from these guides:
Unity offers two UI systems, the older Unity UI and the new UI Toolkit. UI Toolkit is intended to
become the recommended UI system. It’s tailored for maximum performance and reusability,
with workflows and authoring tools inspired by standard web technologies, meaning UI
designers and artists will find it familiar if they already have experience designing web pages.
However, as of Unity 6, UI Toolkit does not have some features that Unity UI and Immediate
Mode GUI (IMGUI) support. Unity UI and IMGUI are more appropriate for certain use cases and
are required to support legacy projects. See the Comparison of UI systems in Unity for more
information.
Take advantage of UGUI’s ability to support multiple Canvases. Divide UI elements based on
how frequently they need to be refreshed. Keep static UI elements on a separate Canvas, and
dynamic elements that update at the same time on smaller sub-canvases.
Ensure that all UI elements within each Canvas have the same Z value, materials, and textures.
If you only need to turn off the Canvas’s visibility, disable the Canvas component rather
than the whole GameObject. This can prevent your game from having to rebuild meshes and
vertices when you re-enable it.
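A minimal sketch of this pattern (the class name is illustrative):

```csharp
using UnityEngine;

public class CanvasToggler : MonoBehaviour
{
    [SerializeField] Canvas canvas;

    public void SetVisible(bool visible)
    {
        // Disabling the Canvas component hides the UI without discarding the
        // generated meshes, so re-enabling avoids a costly rebuild, unlike
        // calling SetActive(false) on the whole GameObject.
        canvas.enabled = visible;
    }
}
```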
Remove the default GraphicRaycaster from the top Canvas in the hierarchy. Instead, add
the GraphicRaycaster exclusively to the individual elements that need to interact (buttons,
scrollrects, and so on).
In addition, disable Raycast Target on all UI text and images that don’t need it. If the
UI is complex with many elements, all of these small changes can reduce unnecessary
computation.
If you do need to use Layout Groups (Horizontal, Vertical, Grid) for your dynamic elements,
avoid nesting them to improve performance.
Use the Device Simulator to preview the UI across a wide range of supported devices. You can
also create virtual devices in XCode and Android Studio.
Consider lowering the Application.targetFrameRate during a fullscreen UI, since you should
not need to update at 60 fps.
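For instance, a sketch that drops the frame rate while a fullscreen menu is open (the class name, values, and callback hooks are illustrative):

```csharp
using UnityEngine;

public class MenuFrameRate : MonoBehaviour
{
    // Assumed values; tune for your project.
    const int MenuFps = 30;
    const int GameplayFps = 60;

    // Hook these up to your menu open/close events.
    public void OnMenuOpened() => Application.targetFrameRate = MenuFps;
    public void OnMenuClosed() => Application.targetFrameRate = GameplayFps;
}
```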
Consider using Screen Space – Overlay for your Canvas RenderMode if possible, as that does
not require a camera.
When using World Space Render Mode, make sure to fill in the Event Camera.
UI Toolkit offers improved performance over Unity UI, is tailored for maximum performance
and reusability, and provides workflows and authoring tools inspired by standard web
technologies. One of its key benefits is that it uses a highly optimized rendering pipeline that is
specifically designed for UI elements.
Here are some general recommendations for optimizing performance of your UI with UI Toolkit:
— Use Vorbis for most sounds (or MP3 for sounds not intended to loop).
— Use ADPCM for short, frequently used sounds (e.g., footsteps, gunshots). This shrinks
the files compared to uncompressed PCM, but is quick to decode during playback.
Sound effects on mobile devices should be 22,050 Hz at most. Using lower settings usually
has minimal impact on the final quality; your own ears can help you judge for yourself.
— Medium clips (>= 200 KB), such as dialog, short music, and medium or non-noisy sound
effects: If reducing memory usage is the priority, select Compressed In Memory. If CPU
usage is a concern, set clips to Decompress On Load.
— Large clips (> 350–400 KB), such as background music, ambient background noise, and
long dialog: Set to Streaming. Streaming has a 200 KB overhead, so it is only suitable for
sufficiently large AudioClips.
For mobile platforms, 22,050 Hz should be sufficient. Use 44,100 Hz (i.e., CD quality) sparingly;
48,000 Hz is excessive.
The following tips will help you when working with animation in Unity. For a comprehensive
guide through the animation system, download the free e-book The definitive guide to
animation in Unity.
The current animation system is optimized for animation blending and more complex setups. It
has temporary buffers used for blending, and there is additional copying of the sampled curve
and other data.
Also, if possible, consider not using the animation system at all. Create easing functions or
use a third-party tweening library where possible (e.g., DOTween). These can achieve very
natural-looking interpolation with mathematical expressions.
Note: This does not apply to constant curves (curves that have the same value for the length
of the animation clip). Constant curves are optimized, and these are less expensive than
normal curves.
Optimize workflow
Other optimizations are possible at the scene level:
— Implement a small AI Layer to control the Animator. You can make it provide simple
callbacks for OnStateChange, OnTransitionBegin, and other events.
— Use State Tags to easily match your AI state machine to the Unity state machine.
— Use additional curves to mark up your animations, for example in conjunction with target
matching.
Physics can create intricate gameplay, but this comes with a performance cost. When you
know these costs, you can tweak the simulation to manage them appropriately. These tips can
help you stay within your target frame rate and create smooth playback with Unity’s built-in
physics (NVIDIA PhysX).
Simplify colliders
Mesh colliders can be expensive. Replace complex mesh colliders with primitive colliders or
simplified mesh colliders that approximate the original shape.
Make sure that you edit your Physics settings (Project Settings > Physics) as well. Simplify
your Layer Collision Matrix wherever possible.
The Fixed Timestep field defines the time delta used by each physics step. For example, the
default value of 0.02 seconds (20 ms) is equivalent to 50 fps, or 50 Hz.
The default Fixed Timestep in the Project Settings is 0.02 seconds (50 frames per second).
Because each frame in Unity takes a variable amount of time, it is not perfectly synced
with the physics simulation. The engine counts up to the next physics time step. If a frame
runs slightly slower or faster, Unity uses the elapsed time to know when to run the physics
simulation at the proper time step.
In the event that a frame takes a long time to prepare, this can lead to performance issues. For
example, if your game experiences a spike (e.g., instantiating many GameObjects or loading
a file from disk), the frame could take 40 ms or more to run. With the default 20 ms Fixed
Timestep, this would cause two physics simulations to run on the following frame in order to
“catch up” with the variable time step.
Extra physics simulations, in turn, add more time to process the frame. On lower-end
platforms, this potentially leads to a downward spiral of performance.
A subsequent frame taking longer to prepare makes the backlog of physics simulations longer
as well. This leads to even slower frames and even more simulations to run per frame. The
result is worse and worse performance.
Eventually the time between physics updates could exceed the Maximum Allowed Timestep.
After this cutoff, Unity starts dropping physics updates, and the game stutters.
— Reduce the simulation frequency. For lower-end platforms, increase the Fixed Timestep
to slightly more than your target frame time. For example, use 0.035 seconds for 30 fps on
mobile. This could help prevent that downward performance spiral.
— Decrease the Maximum Allowed Timestep. Using a smaller value (like 0.1 s) sacrifices
some physics simulation accuracy, but also limits how many physics updates can happen
in one frame. Experiment with values to find something that works for your project’s
requirements.
— Simulate the physics step manually if necessary by setting the SimulationMode to Script
and stepping the simulation during the Update phase of the frame. This allows you to
control when the physics step runs. Pass Time.deltaTime to Physics.Simulate in order to
keep the physics in sync with the simulation time.
This approach can cause instabilities in the physics simulation in scenes with complex physics
or highly variable frame times, so use it with caution.
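A minimal sketch of manual stepping, assuming Unity's SimulationMode API (the class name is illustrative):

```csharp
using UnityEngine;

public class ManualPhysicsStep : MonoBehaviour
{
    // Take over physics stepping while this component is active.
    void OnEnable()  => Physics.simulationMode = SimulationMode.Script;
    void OnDisable() => Physics.simulationMode = SimulationMode.FixedUpdate;

    void Update()
    {
        // Step the simulation once per rendered frame, keeping physics in
        // sync with frame time instead of the fixed timestep. Use with
        // caution: large or highly variable deltas can destabilize physics.
        Physics.Simulate(Time.deltaTime);
    }
}
```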
A MeshCollider has several CookingOptions to help you validate the mesh for physics. If you
are certain that your mesh does not need these checks, you can disable them to speed up
your cook time.
Also, if you are targeting PC, make sure you keep Use Fast Midphase enabled. This switches
to a faster algorithm from PhysX 4.1 during the mid-phase of the simulation (which helps
narrow down a small set of potentially intersecting triangles for physics queries).
Use Physics.BakeMesh
If you are generating meshes procedurally during gameplay, you can create a Mesh Collider
at runtime. Adding a MeshCollider component directly to the mesh, however, cooks/bakes the
physics on the main thread. This can consume significant CPU time.
Use Physics.BakeMesh to prepare a mesh for use with a MeshCollider and save the
baked data with the mesh itself. A new MeshCollider referencing this mesh will reuse this
prebaked data (rather than baking the mesh again). This can help reduce Scene load time or
instantiation time later.
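A sketch of baking before assigning the mesh to a MeshCollider (the helper class and method names are illustrative):

```csharp
using UnityEngine;

public class ProceduralColliderBuilder : MonoBehaviour
{
    public void AddBakedCollider(GameObject target, Mesh generatedMesh)
    {
        // Bake the collision data up front (second argument: convex).
        Physics.BakeMesh(generatedMesh.GetInstanceID(), false);

        // The MeshCollider reuses the prebaked data instead of cooking the
        // mesh again on the main thread.
        var meshCollider = target.AddComponent<MeshCollider>();
        meshCollider.sharedMesh = generatedMesh;
    }
}
```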
To optimize performance, you can offload mesh cooking to another thread with the C# job
system. Refer to this example for details on how to bake meshes across multiple threads.
— The broad phase, which collects potential collisions using a sweep and prune algorithm
— The narrow phase, where the engine actually computes the collisions
The broad phase default setting of Sweep and Prune BroadPhase (Edit > Project Settings >
Physics > BroadPhase Type) can generate false positives for worlds that are generally flat
and have many colliders.
If your scene is large and mostly flat, avoid this issue and switch to Automatic Box Pruning or
Multibox Pruning Broadphase. These options divide the world into a grid, where each grid cell
performs sweep-and-prune.
Multibox Pruning Broadphase allows you to specify the world boundaries and the number of
grid cells manually, while Automatic Box Pruning calculates that for you.
This overrides the Physics.defaultSolverIterations, which can also be found in Edit > Project
Settings > Physics > Default Solver Iterations.
To optimize your physics simulations, set a relatively low value in the project’s
defaultSolveIterations. Then apply higher custom Rigidbody.solverIterations values to the
individual instances that need more detail.
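For example, a sketch combining a low project-wide default with a higher per-body override (the specific values are illustrative):

```csharp
using UnityEngine;

public class SolverIterationSetup : MonoBehaviour
{
    void Start()
    {
        // Keep the project-wide default low...
        Physics.defaultSolverIterations = 4;

        // ...and raise it only on bodies that need extra stability, such as
        // a ragdoll or a stacked-crate puzzle (hypothetical examples).
        GetComponent<Rigidbody>().solverIterations = 12;
    }
}
```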
— Use manual syncing: For better performance, manually synchronize Transforms with
Physics.SyncTransforms() before calls that require the latest Transform data. This
approach is more efficient than enabling autoSyncTransforms globally.
The general recommendation is to always enable Reuse Collision Callbacks for performance
benefits. You should only disable this feature for legacy projects where the code relies on
individual Collision class instances, making it impractical to store individual fields.
With Reuse Collision Callbacks enabled, the Unity Console shows a single Collision instance for the Collision Entered and Collision Stay events.
Note that you can move a static collider, contrary to what the term “static” suggests. To do
so, simply modify its Transform position. Accumulate the positional changes and sync them
before the physics update. You don’t need to add a Rigidbody component to the static collider
just to move it.
However, if you want the static collider to interact with other physics bodies in a more complex
way, give it a kinematic Rigidbody. Use Rigidbody.position and Rigidbody.rotation to move it
instead of accessing the Transform component. This guarantees more predictable behavior
from the physics engine.
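A sketch of moving a kinematic body through the physics engine (the class name and velocity value are illustrative):

```csharp
using UnityEngine;

public class KinematicMover : MonoBehaviour
{
    [SerializeField] Vector3 velocity = new Vector3(1f, 0f, 0f);
    Rigidbody body;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
        body.isKinematic = true;
    }

    void FixedUpdate()
    {
        // MovePosition goes through the physics engine, so interactions with
        // other bodies stay predictable, unlike writing transform.position.
        body.MovePosition(body.position + velocity * Time.fixedDeltaTime);
    }
}
```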
Physics queries that return multiple colliders as an array, like OverlapSphere or OverlapBox,
need to allocate those objects on the managed heap. This means that the garbage collector
eventually needs to collect the allocated objects, which can decrease performance if it
happens at the wrong time.
To reduce this overhead, use the NonAlloc versions of those queries. For example,
if you are using OverlapSphere to collect all potential colliders around a point, use
OverlapSphereNonAlloc instead.
This allows you to pass in an array of colliders (the results parameter) to act as a buffer.
The NonAlloc method works without generating garbage. Otherwise, it functions like the
corresponding allocating method.
Note that you need to define a results buffer of sufficient size when using a NonAlloc method.
The buffer does not grow if it runs out of space.
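A sketch of the buffered pattern, with an illustrative buffer size of 64:

```csharp
using UnityEngine;

public class ExplosionQuery : MonoBehaviour
{
    // Preallocated once; must be large enough for the worst case, because
    // the buffer does not grow.
    readonly Collider[] results = new Collider[64];

    public void ApplyBlast(Vector3 center, float radius)
    {
        // Returns the number of colliders written into the buffer; no
        // garbage is allocated per call.
        int count = Physics.OverlapSphereNonAlloc(center, radius, results);
        for (int i = 0; i < count; i++)
        {
            // Only the first 'count' entries are valid for this query.
            Debug.Log(results[i].name);
        }
    }
}
```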
2D Physics
Note that the above advice does not apply to 2D physics queries, because in Unity’s 2D
physics system, methods do not have a “NonAlloc” suffix. Instead, all 2D physics methods,
including those that return multiple results, provide overloads that accept arrays or lists. For
instance, while the 3D physics system has methods like RaycastNonAlloc, the 2D equivalent
simply uses an overloaded version of Raycast that can take an array or List<T> as a
parameter, such as:
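A sketch of one such overload, using the ContactFilter2D variant of Physics2D.Raycast (the class name and buffer size are illustrative):

```csharp
using UnityEngine;

public class Raycaster2D : MonoBehaviour
{
    // Preallocated buffer reused across queries.
    readonly RaycastHit2D[] hits = new RaycastHit2D[16];

    public int CastForward(Vector2 origin)
    {
        // This overload writes into the supplied array and returns the hit
        // count, so no garbage is generated per query.
        var filter = new ContactFilter2D();
        filter.NoFilter();
        return Physics2D.Raycast(origin, Vector2.right, filter, hits, 100f);
    }
}
```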
By using overloads, you can perform non-allocating queries in the 2D physics system without
needing specialized NonAlloc methods.
Use RaycastCommand to batch the query using the C# Job System. This offloads the work
from the main thread so that the raycasts can happen asynchronously and in parallel.
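A minimal sketch of a batched raycast, assuming the QueryParameters-based RaycastCommand API available in recent Unity versions (the ray origins are arbitrary):

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class BatchedRaycasts : MonoBehaviour
{
    void Update()
    {
        var commands = new NativeArray<RaycastCommand>(2, Allocator.TempJob);
        var results = new NativeArray<RaycastHit>(2, Allocator.TempJob);

        commands[0] = new RaycastCommand(Vector3.zero, Vector3.down, QueryParameters.Default);
        commands[1] = new RaycastCommand(Vector3.one, Vector3.down, QueryParameters.Default);

        // Schedule the batch on worker threads, then wait for completion
        // (or defer Complete() until later in the frame).
        JobHandle handle = RaycastCommand.ScheduleBatch(commands, results, 1);
        handle.Complete();

        // A null collider means that ray hit nothing.
        bool firstRayHit = results[0].collider != null;

        commands.Dispose();
        results.Dispose();
    }
}
```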
The Physics Debugger helps you visualize how your physics objects can interact with one another.
Building an application in Unity is a demanding endeavor that often involves many developers.
Make sure that your project is set up optimally for your team to collaborate.
A version control system (VCS) allows you to keep a historical record of your entire project. It
brings organization to your work and enables teams to iterate efficiently.
Project files are stored in a shared database called a repository, or “repo.” You back up your
project to the repo at regular intervals, and if something goes wrong, you can revert to an
earlier version of the project.
With a VCS, you can make multiple individual changes and commit them as a single group for
versioning. This commit sits as a point on the timeline of your project, so if you need to
revert to a previous version, everything from that commit is undone and restored to the
state it was in at the time. You can review and modify each change grouped within a commit or
undo the commit entirely.
With access to the project’s entire history, it’s easier to identify which changes introduced
bugs, restore previously removed features, and easily document changes between your game
or product releases.
What’s more, because version control is typically stored in the cloud or on a distributed
server, it supports your development team’s collaboration from wherever they’re working – an
increasingly important benefit as remote work becomes commonplace.
There are three ways to access UVCS: through the UVCS desktop client, which supports multiple
applications and repositories; by adding it to your projects through the Unity Hub; or by
accessing the repository on Unity Cloud via your web browser.
— Work knowing that your art assets are securely backed up.
Additionally, UVCS helps you centralize your development with excellent visualization tools.
Artists especially will appreciate the user-friendly workflows that encourage tighter integration
between development and art teams with the Gluon application, which makes it easier to
see and manage just the files they need without dealing with the entire project repository
complexity. Besides a simplified workflow, Gluon also provides tooling that makes it easy to
see visual differences between asset versions and to contribute to a unified version control
environment.
To help with version control merges, make sure your Editor Settings have Asset Serialization
Mode set to Force Text.
If you’re using an external version control system (such as Git) in the Version Control settings,
verify that the Mode is set to Visible Meta Files.
Unity also has a built-in YAML (a human-readable, data-serialization language) tool specifically
used for merging scenes and prefabs. For more information, see Smart Merge in the Unity
documentation.
Learn more about Unity VCS, and general version control and project organization best
practices, in the e-book Version control and project organization best practices for game
developers.
Note that, at runtime, your project can load scenes additively using
SceneManager.LoadSceneAsync, passing the LoadSceneMode.Additive parameter.
All of the optimization tips demonstrated in this e-book will benefit your game regardless of
your platform. This next section focuses on specific optimization tips for XR and web.
Framerate
— If you have changed default settings in your project, you can also create a new C# script
or open an existing script where you want to set the target frame rate. Typically, this
would be in a script that initializes your game or in a central game manager script.
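A sketch of such an initialization script (the class name is illustrative):

```csharp
using UnityEngine;

public class GameBootstrap : MonoBehaviour
{
    void Awake()
    {
        // On the web, -1 lets the browser drive the frame rate via
        // requestAnimationFrame, which is usually the smoothest option;
        // set an explicit value (e.g., 30) to save battery instead.
        Application.targetFrameRate = -1;
    }
}
```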
Brotli: Provides a higher compression ratio than gzip, resulting in smaller file sizes and
faster load times; supported by modern browsers, and requires the website to be served from a
secure https:// URL or from an http://localhost/ testing URL
Gzip: Widely supported and still effective. Use this option if content is served over an
insecure http:// connection, hosted on a web server that has not yet been configured to serve
Brotli-compressed content, or if you're using more complex CDN load-balancing or caching
infrastructure that is not yet compatible with Brotli.
Uncompressed: Generally not recommended for production due to the significantly larger
file sizes and slower load times. Deploy with this setting only if the web server has been
configured to use an on-the-fly compression cache and does not support pre-compressed
Brotli or gzip content.
Brotli is generally the best compression method for publishing a Unity Web build due to its
high compression ratio, broad browser support, and performance.
It's recommended to set Decompression Fallback to Disabled to ensure fast site startup
times; additionally, the web server hosting the page should be configured to serve the
pre-compressed Unity content. Leaving Decompression Fallback enabled slows down game
startup and wastes battery on mobile browsers, because content is then decompressed in
script rather than natively by the browser. Follow the web server configuration guidelines
in the Unity documentation.
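As an illustration, an nginx configuration for serving Unity's pre-compressed Brotli output might look like the following sketch (the file patterns follow Unity's default build output naming; adapt them to your deployment):

```nginx
# Serve Unity's pre-compressed .br files with the correct
# Content-Encoding header so the browser decompresses them natively.
location ~ .+\.(data|symbols\.json)\.br$ {
    gzip off;                          # never re-compress
    add_header Content-Encoding br;
    default_type application/octet-stream;
}
location ~ .+\.js\.br$ {
    gzip off;
    add_header Content-Encoding br;
    default_type application/javascript;
}
location ~ .+\.wasm\.br$ {
    gzip off;
    add_header Content-Encoding br;
    default_type application/wasm;
}
```

The key point is that the server declares the Brotli encoding and correct MIME type for each file, rather than compressing anything on the fly.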
Enabling Strip Engine Code is a recommended practice to ensure an efficient build, particularly
for Unity Web projects. This feature removes unused engine code, which can significantly
reduce the size of the final build, leading to faster load times and better performance.
In the Player Settings window under the Unity Web Build tab, expand Publishing Settings
and set Enable Exceptions to None if you don't need exceptions in your build. If your
application does require exception handling support, consider enabling WebAssembly 2023
instead, as it optimizes the generated code size for exception handling in general.
Chrome DevTools
Chrome DevTools is a comprehensive set of web development tools built into the Google
Chrome browser. It offers features for profiling performance, debugging JavaScript, analyzing
network activity, and inspecting rendering. Here are the basic steps for profiling a Unity
Web build with Chrome DevTools:
1. Open DevTools by pressing F12 (or Cmd+Option+I on macOS), or via the Chrome menu
under More Tools > Developer Tools.
2. Go to the Performance tab. Click the Record button and interact with your Unity Web
game to capture performance data. Click Stop to end the recording and analyze the
captured data, focusing on frame rate, CPU usage, and rendering performance.
3. Navigate to the Network tab and reload your Unity Web game to capture all network
requests. Examine the timeline, request details, and loading times to identify any
network-related performance bottlenecks.
4. Use the Sources tab to set breakpoints in your JavaScript code to pause execution and
inspect variables. Use the call stack and scope information to debug and optimize your
code.
5. Use the Console tab to log Unity Web state and debug rendering issues. For deeper
analysis, utilize Unity Web-specific tools and extensions, such as Unity Web Insight or
Unity Web Debugging.
This section covers optimization tips for VR, AR, and MR applications built with Unity
(collectively known as XR). Many of these techniques are mentioned in other parts of this
guide because they apply to mobile devices in general, but they're collected here as well
for readers focusing exclusively on XR apps.
Try these techniques to help your XR applications run efficiently, particularly for VR, as these
experiences demand high performance and low latency to maintain immersion and prevent
motion sickness. High-resolution, 3D-rendered environments and responsive interactions
require optimization to ensure smooth experiences that are also physically comfortable.
For a comprehensive guide on developing XR apps in Unity, download the e-book Create
virtual and mixed reality experiences in Unity.
Render Mode
The correct Render Mode setting will make a big difference to the performance of a VR
game. If you're using Unity's OpenXR plug-in, go to Project Settings > XR Plug-in
Management, select your plug-in provider, and locate the Render Mode option. From the
dropdown, select Single Pass Instanced. This mode renders both eyes in a single pass using
instancing: the scene is submitted once, and the shaders run for both eyes simultaneously.
Select Single Pass Instanced as the render mode when developing XR applications.
Foveated rendering
Unity 6 integrates foveated rendering with support for Oculus XR and OpenXR, including
support for PlayStation VR2. Foveated rendering is an optimization technique for VR that
leverages the human eye’s tendency to focus on a small area at a time. By rendering the
focal area in high resolution and the peripheral areas in lower resolution, it reduces the GPU
workload significantly. Implementing foveated rendering can enhance performance, allowing
for higher frame rates and improved visual quality where it matters most.
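As a sketch, the foveation strength can be driven at runtime through the XRDisplaySubsystem API (assuming the active XR provider supports foveated rendering; this API is available from Unity 2023.2/Unity 6):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class FoveationController : MonoBehaviour
{
    void Start()
    {
        // Find the running display subsystem(s) for the active XR provider.
        var displays = new List<XRDisplaySubsystem>();
        SubsystemManager.GetSubsystems(displays);

        foreach (var display in displays)
        {
            // 0 = foveation off, 1 = maximum foveation strength.
            display.foveatedRenderingLevel = 1.0f;

            // Allow gaze-based foveation where eye tracking is available.
            display.foveatedRenderingFlags =
                XRDisplaySubsystem.FoveatedRenderingFlags.GazeAllowed;
        }
    }
}
```

Lowering the level at runtime (for example, during cutscenes with fine detail in the periphery) lets you trade GPU savings against visual quality dynamically.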
Standardize interactions: Use built-in interaction patterns to reduce custom code and ensure
consistency.
Use event-driven architecture: Utilize event-driven input handling to minimize polling and
improve performance.
Increase ease of use: Simplify the development process with ready-to-use components,
leading to faster iteration and optimization.
Overall, the XR Interaction Toolkit helps streamline and optimize input handling, ensuring
smooth, responsive interactions and a better user experience in your XR applications.
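For example, event-driven grab handling with the toolkit might look like this sketch (it assumes the XR Interaction Toolkit package is installed; in XRI 3.x the interactable types live under the UnityEngine.XR.Interaction.Toolkit.Interactables namespace):

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// React to grab events instead of polling controller state every frame.
[RequireComponent(typeof(XRGrabInteractable))]
public class GrabLogger : MonoBehaviour
{
    XRGrabInteractable interactable;

    void OnEnable()
    {
        interactable = GetComponent<XRGrabInteractable>();
        interactable.selectEntered.AddListener(OnGrabbed);
        interactable.selectExited.AddListener(OnReleased);
    }

    void OnDisable()
    {
        // Always unsubscribe to avoid dangling listeners.
        interactable.selectEntered.RemoveListener(OnGrabbed);
        interactable.selectExited.RemoveListener(OnReleased);
    }

    void OnGrabbed(SelectEnterEventArgs args) => Debug.Log($"{name} grabbed");
    void OnReleased(SelectExitEventArgs args) => Debug.Log($"{name} released");
}
```

Because the logic runs only when an interaction actually occurs, no per-frame polling cost is paid while the object sits idle.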
XR-specific profilers, such as the OVR Metrics Tool for Meta Quest devices, can also
provide headset-level performance data such as frame rate and thermal levels.
You can download many more e-books for advanced Unity developers and creators from the
Unity best practices hub. Choose from over 30 guides created by industry experts, Unity
engineers, and technical artists that provide best practices for game development and help
you develop efficiently with Unity's toolsets and systems.
You’ll also find tips, best practices, and news on the Unity Blog and Unity community forums,
as well as through Unity Learn and the #unitytips hashtag.