|Published by Hammed Arowosegbe|
AR VR Software Engineer PwC Nigeria | Chapter President – Nigeria, The VR AR Association
Optimizing virtual reality (VR) applications can be daunting at first, and there is really no fixed path you must follow. Once your application constantly hits 72 frames per second (the Oculus recommendation for mobile VR) with no drops in frame rate, you are good to go. Over-optimizing can sometimes break your application and hide where the problem actually is. The performance targets are shown below:
I am sharing some of my best practices; if you follow them, I am optimistic you will hit your target frame rate. They are structured from a project point of view (from preparing your 3D model to shipping your project). Now that we are ready to go, grab a cup of coffee, and if you are in a hurry, scroll to the end for a quick TL;DR (too long; didn't read) list.
NB: Due to the rapid change in the VR development space, some of these techniques may be obsolete by the time you read this.
Preparing your 3D Models
Not every 3D asset that you download from the internet or receive from someone or a client is VR ready; you need to check the tris count (how many triangles make up the 3D model), the number of parts, the number of materials, and the UV mapping. Remember that the maximum number of triangles (across all 3D models) your VR application should render is 500,000 per frame.
Reduce your Tris Count
A typical workflow in VR application development is downloading ready-made 3D models and reducing the tris count in Maya or Blender. See below for some tris-reduction procedures.
1- Use cubes rather than cylinders:
Making your models simple is a great way to start 3D modeling optimization. As shown above, model A still communicates the same concept as the detailed model B but with a lower triangle count. Imagine the effect across a wide range of models; I have cut more than 100,000 tris using this technique.
2- Delete unused faces:
Some faces of a model will never be visible to the user, so remove them; they serve no purpose.
3- Use the right amount of sub-divisions:
Knowing the right number of subdivisions has always been a challenge for 3D artists, but a great way to start is with a single subdivision, then build up as required.
Combine Static Parts
A static part is a part of an object that does not move independently of the parent object.
Combining static parts helps in lowering draw calls. But what is a draw call? A draw call is a command the graphics API issues to draw an object to the screen. The more separate parts an object has, the more draw calls it generates, and vice versa. As shown below, only the four drone rotors are dynamic; the other parts are static and should be combined.
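If you prefer to combine the static parts at runtime rather than in Maya/Blender, Unity's Mesh.CombineMeshes can do the welding for you. The sketch below is a minimal example, assuming all static child parts share one material and the parent object has its own MeshFilter/MeshRenderer to receive the merged result (the class name is hypothetical):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: merges the static child meshes of this object into one mesh,
// so the whole assembly costs a single draw call.
[RequireComponent(typeof(MeshFilter))]
public class StaticPartCombiner : MonoBehaviour
{
    void Start()
    {
        MeshFilter parentFilter = GetComponent<MeshFilter>();
        MeshFilter[] parts = GetComponentsInChildren<MeshFilter>();

        var combine = new List<CombineInstance>();
        foreach (MeshFilter part in parts)
        {
            if (part == parentFilter) continue;   // skip the target itself
            combine.Add(new CombineInstance
            {
                mesh = part.sharedMesh,
                transform = part.transform.localToWorldMatrix
            });
            part.gameObject.SetActive(false);     // the part is now redundant
        }

        Mesh merged = new Mesh();
        merged.CombineMeshes(combine.ToArray());  // one mesh, fewer draw calls
        parentFilter.sharedMesh = merged;
    }
}
```

Leave dynamic parts (like the drone rotors) outside this object's hierarchy so they keep moving independently.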
Getting Ready for Mobile VR Aliasing Issues
Anti-aliasing: simple smoothing of edges and colors.
Anti-aliasing is a big issue in VR, even in top triple-A VR games. Before setting out, it is important to understand that you cannot completely eliminate aliasing in your application. I will share some techniques, but they are not foolproof; hopefully, mobile VR hardware gets better in the future. The techniques are in the environmental staging section.
Z-fighting: when two or more 3D primitives occupy nearly the same depth, so the renderer cannot consistently determine which one is closer to the camera, causing flickering.
Avoid overlapping faces/meshes:
Passthrough: Since the horizontal part is clipping into the vertical part it will cause z-fighting.
Cutout: This is a good technique as it creates an offset distance for the vertical part to stay in between the two horizontal parts.
Overlay: A simple technique that involves stacking parts on top of each other.
Manually create joint offset:
As shown in the image above, manually create spaces between two joints. The offset distance may be visible to the user (depending on your design), and this somewhat helps in solving flickering edges in VR. Another way is to union your model, which welds the parts together into a single mesh, although I do not advise this unless you are sure the model will not need modification in the future.
Understand the Concept of Shared Materials
How you use materials is a major determinant of how performant your application will be; your target should be a small number of materials. I have experimented with 10–20 materials and it works fine, but this depends on your rendering technique and shader type.
A good practice is to share one material, with a texture size of 4096 × 4096, among four models, where each model takes one of the four 2048 × 2048 spots on the UV map.
Sometimes a large-area model (e.g. a floor) should have its own single material, giving the user a clearer and crisper view; one or two extra draw calls is a good trade-off compared to a pixelated view.
When you share a material among two or more models, you lose some texel density (the number of texture pixels covering a given unit of model surface) as you gain performance. This texel-density change is less evident in smaller models than in larger ones. Depending on your design, giving a selected larger model its own material solves this at the cost of a draw call (a good trade-off) while making the texture clearer.
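In Unity, sharing works by pointing several renderers at the same material asset via `sharedMaterial` (touching `renderer.material` instead would silently instantiate a copy and defeat the purpose). A minimal sketch, assuming a hypothetical 4096 × 4096 atlas material and renderers whose UVs each occupy a quadrant of it:

```csharp
using UnityEngine;

// Sketch: assigns one atlas material to several renderers so they can
// batch into fewer draw calls.
public class SharedMaterialAssigner : MonoBehaviour
{
    [SerializeField] Material atlasMaterial; // hypothetical 4096x4096 atlas
    [SerializeField] Renderer[] models;      // models mapped to quadrants of the atlas

    void Start()
    {
        foreach (Renderer r in models)
            r.sharedMaterial = atlasMaterial; // sharedMaterial avoids per-object copies
    }
}
```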
Importing your 3D Assets into Unity 3D
Confirm the settings you need, only enable as required.
Model tab: Enable Import BlendShapes if your 3D model uses blend shapes or spline animation. Mesh Compression/Low lowers the file size while retaining the 3D model's vertex information; Mesh Compression/High lowers the file size further but loses vertex information. Enable Generate Lightmap UVs if you want the 3D model included in baked lighting.
Rig tab: Use Animation Type/Generic for models that do not have a custom rig. Properly defining the bones of rigged models helps in solving weird skinned-mesh-renderer movement at runtime.
Animation tab: Disable import animation if the model does not have a custom animation.
Materials tab: Assign your materials to the model here; it saves you iteration time.
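If you import many models, you can apply the tab settings above automatically with an editor `AssetPostprocessor` instead of clicking through each asset. A sketch under assumptions (the `Assets/Models` folder and the exact defaults are hypothetical; flip the booleans for models that actually need blend shapes or animation):

```csharp
using UnityEditor;

// Editor-only sketch: applies VR-friendly import defaults to every
// model placed under a hypothetical Assets/Models folder.
public class VrModelPostprocessor : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.StartsWith("Assets/Models")) return;

        ModelImporter importer = (ModelImporter)assetImporter;
        importer.importBlendShapes = false;  // enable only for blend-shape models
        importer.meshCompression = ModelImporterMeshCompression.Low;
        importer.generateSecondaryUV = true; // lightmap UVs for baked lighting
        importer.importAnimation = false;    // enable only for animated models
    }
}
```

Place the script in an `Editor` folder; it runs on every (re)import, so the defaults stay consistent across the team.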
Now that our assets are properly imported into Unity, environmental staging is key, as it determines how you segment your LODs (levels of detail) and select a good camera rendering distance (near and far clipping planes). The Unity terrain system is not fully performant on mobile VR, so you might want to switch to a plane if you do not have height details.
Understanding your 3D pipeline
Unity comes with some preset 3D pipelines, such as the standard (built-in) pipeline, the Universal Render Pipeline (URP), and the High Definition Render Pipeline (HDRP). Choosing a pipeline depends on your application's functionality (lighting, shadows, physics, shaders, etc.), but the Universal Render Pipeline (URP) at medium quality is a great way to start.
Switching to the Universal Render Pipeline (URP): Unity designed URP with performance in mind, although configuring it for VR can be confusing at first, with some drops in frame rate. URP comes with a BakedLit material that is built specifically for baked lighting and is super performant. It also has Shader Graph and VFX Graph, which you can use to create VR-optimized materials and particle effects. See below for how to configure your URP medium asset.
Universal Render Pipeline (URP) Pipeline Settings
These settings can serve as a good base point for configuring your projects; use baked lighting, which is computed at bake time, and increase your lighting settings as required.
Increasing your render scale: This solves some anti-aliasing issues and makes the rendering clearer but costs performance (you can raise it if you have performance headroom after optimization).
Turning on HDR: Solves color banding and wrong color interpretation but costs performance.
Anti-aliasing (MSAA): Raising it to 8x solves some anti-aliasing issues but costs performance.
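You normally set these on the URP asset in the inspector, but they can also be adjusted from a script, which is handy for per-device quality tiers. A minimal sketch of the trade-offs above (the class name is hypothetical; values are the conservative starting point, not a prescription):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch: reads the active URP asset and applies conservative mobile-VR settings.
public class UrpQualitySetup : MonoBehaviour
{
    void Awake()
    {
        var urp = GraphicsSettings.currentRenderPipeline as UniversalRenderPipelineAsset;
        if (urp == null) return;  // project is not running on URP

        urp.renderScale = 1.0f;   // raise only with spare performance headroom
        urp.msaaSampleCount = 4;  // 8x looks better but costs more
        urp.supportsHDR = false;  // enable to fix color banding, at a cost
    }
}
```

Note that these changes edit the shared pipeline asset, so in the editor they persist after play mode ends.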
Camera Rendering Distance
In VR we want to reduce the camera's near clipping distance to avoid the clipping that occurs when a user moves an object toward their eyes. This is a good practice, but it can also cause severe z-fighting; the right values for the near and far clipping planes depend on your environment and application design.
Near clipping plane value: This value is a big determinant of z-fighting in your application. I found 0.1–0.15 to be a great range, but it totally depends on your design; test these values and determine your best fit.
Far clipping plane value: This should be driven by the size of your environment; take the larger of its length or breadth and assign it to the far clipping plane.
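Applied in code, the two rules above come down to two properties on the camera. A minimal sketch, assuming the main camera is tagged `MainCamera` and the environment size is a hypothetical inspector value:

```csharp
using UnityEngine;

// Sketch: applies the near/far clipping guidance above to the main camera.
public class VrCameraSetup : MonoBehaviour
{
    // Hypothetical: the larger of your environment's length or breadth, in metres.
    [SerializeField] float environmentSize = 100f;

    void Start()
    {
        Camera cam = Camera.main;
        cam.nearClipPlane = 0.1f;            // 0.1-0.15 range suggested above
        cam.farClipPlane = environmentSize;  // no need to render beyond the scene
    }
}
```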
Texture filtering: A technique used to solve aliasing issues on textures.
Material textures: Use trilinear filtering and increase the anisotropic level (how clear a texture is when viewed from an angle) as required.
UI textures: Enabling mipmaps is important for UI textures; it helps in solving flickering edges. If you are using TextMesh Pro, make sure you enable Extra Padding.
Level of Details (LOD) Segmentation Technique
A LOD Group is a Unity component that dynamically switches between versions of a model based on the camera's distance. It renders a lower-resolution model from afar and a higher-resolution one up close.
Combine your objects based on how you segment your VR application and give each segment a LOD Group; this helps in reducing polygon rendering outside the camera's sight.
When you combine scene objects (using the same material), you reduce draw calls, but the camera then renders the combined mesh as a whole, with its total triangle count. Combining wrongly will result in unnecessary polygon rendering, which can make the application less performant.
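A segment's LOD Group can be configured in the inspector or built from a script. A minimal two-level sketch, assuming hypothetical high- and low-detail renderers assigned in the inspector (the thresholds are illustrative; tune them per segment):

```csharp
using UnityEngine;

// Sketch: builds a two-level LOD group for one environment segment.
public class SegmentLodSetup : MonoBehaviour
{
    [SerializeField] Renderer highDetail; // hypothetical high-poly version
    [SerializeField] Renderer lowDetail;  // hypothetical low-poly version

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();
        LOD[] lods = new LOD[2];
        // Threshold = fraction of screen height at which this level hands over.
        lods[0] = new LOD(0.5f,  new[] { highDetail }); // above 50%: high detail
        lods[1] = new LOD(0.05f, new[] { lowDetail });  // 5-50%: low detail; culled below 5%
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```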
Probes and Materials
Light probes: If you are using baked lighting (performant), light probes are an efficient way to dynamically light moving objects.
Reflection probes: Add reflection probes to models that have specular reflective materials.
Materials: Use more URP/BakedLit materials.
Static Flag and Occlusion
Enabling the static flag: Enable the Static checkmark on assets that will not move during runtime; Unity combines these objects to lower draw calls.
Bake Occlusion Data:
The Unity camera renders anything in its view; with occlusion culling you can stop the rendering of smaller objects obscured by a larger object. Navigate to Window/Rendering/Occlusion Culling to open the window.
Canvas graphics raycaster: The Unity canvas is quite expensive; completely disable canvases that are not in use.
Image raycast target: Disable Raycast Target on child images and icons; mostly, only buttons and background images should have Raycast Target on.
Unity TextMesh Pro raycast target: Disable Raycast Target if not needed, and turn on Extra Padding (it makes the text clearer).
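Disabling raycast targets by hand is tedious on large canvases, so this can be scripted. A sketch under the rule of thumb above (keep raycasts on graphics that belong to buttons, turn them off everywhere else; background images you still want clickable would need to be excluded by hand):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: turns off raycastTarget on every Graphic under a canvas
// except graphics that belong to a Button.
public static class RaycastTargetTrimmer
{
    public static void Trim(Canvas canvas)
    {
        foreach (Graphic g in canvas.GetComponentsInChildren<Graphic>(true))
        {
            // A Button's own image, or an icon/label nested under a Button, stays clickable.
            bool onButton = g.GetComponentInParent<Button>() != null;
            g.raycastTarget = onButton;
        }
    }
}
```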
Limit your Start() and Awake() function calls; they cause a lag when loading between two scenes. Most times we use Start() and Awake() to get object references; OnValidate() (called in the editor whenever a value in the inspector changes) is a great way to get those references instead.
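The OnValidate() trick looks like this in practice. The component name is hypothetical; the point is that the reference is resolved in the editor and serialized with the scene, so no GetComponent call runs at scene load:

```csharp
using UnityEngine;

// Sketch: cache a reference via OnValidate instead of looking it up in Awake().
public class DroneController : MonoBehaviour // hypothetical component
{
    [SerializeField] Rigidbody body;

    void OnValidate()
    {
        // Runs in the editor whenever an inspector value changes; the
        // serialized reference is then ready at runtime with zero lookup cost.
        if (body == null) body = GetComponent<Rigidbody>();
    }
}
```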
Knowing when to call Update()
Use OnTriggerEnter() and OnCollisionEnter() for logic where possible, and call Update() only when required.
Fixed time step
Edit/Project Settings/Time/Fixed Timestep: The fixed timestep value should be 1/[VR hardware refresh rate]
Oculus Quest = 1/60 or 1/72 = [0.0167 or 0.01389]
Oculus Quest 2 = 1/60, 1/72 or 1/90 = [0.0167, 0.01389 or 0.0111]
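The same formula can be applied from a script, which keeps the timestep correct if you target multiple headsets. A minimal sketch; the 72 Hz value is an assumption for Quest/Quest 2, and in a real project you would query the refresh rate from your XR plugin instead of hard-coding it:

```csharp
using UnityEngine;

// Sketch: matches the physics timestep to the headset refresh rate.
public class FixedTimestepSetup : MonoBehaviour
{
    void Awake()
    {
        float refreshRate = 72f;                 // assumption: Quest/Quest 2 at 72 Hz
        Time.fixedDeltaTime = 1f / refreshRate;  // 1/72 = 0.01389
    }
}
```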
Maximum angular velocity: The default value in Unity is 7; increase it to 20. You can change it under Edit/Project Settings/Physics, but that affects all rigidbodies; a better way is to get the rigidbody reference via a script and set it to 20.
Collision detection: Use Continuous, or Continuous Speculative if the rigidbody is kinematic; this allows for proper collisions and prevents objects from clipping into one another.
Interpolate: Change from None to Interpolate for smoother movement.
Physics materials: Assign physics materials to all interacting rigidbodies; this helps reduce weird behavior during gameplay.
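The per-rigidbody settings above can be bundled into one component attached to each interactable object. A minimal sketch (the class name is hypothetical):

```csharp
using UnityEngine;

// Sketch: applies the recommended rigidbody settings to an interactable object.
[RequireComponent(typeof(Rigidbody))]
public class InteractableBody : MonoBehaviour
{
    void Awake()
    {
        Rigidbody body = GetComponent<Rigidbody>();

        body.maxAngularVelocity = 20f; // Unity's default is 7
        // Kinematic bodies cannot use Continuous; Speculative is the kinematic-safe mode.
        body.collisionDetectionMode = body.isKinematic
            ? CollisionDetectionMode.ContinuousSpeculative
            : CollisionDetectionMode.Continuous;
        body.interpolation = RigidbodyInterpolation.Interpolate;
    }
}
```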
Build Out Frequently
Do frequent build-outs to the Quest/Quest 2 headset; editor testing runs on your PC's resources, which are completely different from the chipset in a mobile VR headset.
Use the profiler to find bottlenecks in rendering, physics, and scripting.
OVR Metrics Tool
The Oculus OVR Metrics Tool is fantastic, giving you a real-time analysis of your VR application.
Give me the TL;DR
- Reduce your tris count [delete unused faces, use cubes rather than cylinders].
- Prepare your models for z-fighting issues [avoid overlapping meshes, manually create spaces between joints]
- Anti-aliasing is a general issue, so do not overwork yourself.
- Switch to unity universal render pipeline (URP) and use medium quality.
- Properly segment your environmental LODs to avoid extra polygon rendering.
- Replace your terrain with a plane, if you do not have height data.
- Use 4x MSAA, turn off HDR and use a render scale of 1.
- Enable static flag for non-moving objects and bake occlusion data.
- Enable mipmaps for UI textures and use 2x trilinear filtering on all textures.
- Disable raycast target option on images and text, limit it to background images and buttons.
- Enable extra padding on TextMesh pro text.
- Use baked lighting, probes, and the URP/BakedLit material.
- Change your fixed timestep to 1/[VR hardware refresh rate], e.g. 1/72 (0.01389) for Oculus Quest 2.
- Increase interacting rigidbodies maximum angular velocity from 7 [default] to 20.
- Change rigidbody collision detection to continuous or continuous speculative (if IsKinematic) and Interpolate from none to interpolate.
- Make sure you assign physics materials to all interacting colliders.
- Build out frequently, editor performance is different from mobile VR headset performance.
- Use unity profiler and OVR metrics tool to determine performance bottlenecks.
Mobile VR is paving the way for virtual reality adoption, but with current hardware capabilities it is hard to deploy a performant VR application. I have covered some of these issues in this article, and I hope it helps you achieve your target frame rate.