Optimizing Your Unity Game: 12 Techniques for Better Performance

From draw call batching to build size reduction, these 12 Unity optimization techniques are backed by real profiler data.

10 MAY 2026, 01:02 PM

Highlights

  • 498 draw calls became 1 by enabling GPU Instancing on a shared material, cutting GPU frame time from 13 ms to 1.1 ms.
  • Switching to Addressables cut peak texture memory from 480 MB to 215 MB, a 55% reduction with no gameplay changes.
  • Texture compression and code stripping shrunk a 187 MB Android build to 68 MB, a 64% reduction without touching gameplay code.

Unity optimization is one of the most practical skills a game developer can master. A game that runs at 25 frames per second (FPS) on a mid-range device loses players. One that holds at 60 FPS earns them.

The difference often comes down not to the ambition of the design, but to the efficiency of its implementation. This guide covers 12 proven techniques to improve Unity performance, from how objects are spawned and rendered, to how memory is managed and how build sizes are kept lean.

How to Use the Unity Profiler Before You Start

Every technique in this guide assumes one thing: you profile before and after each change. The Unity profiler is the single most important tool for Unity optimization. Without it, you are guessing.

Step 1: Enable Development Build

In File > Build Settings, check Development Build and Autoconnect Profiler. Build and deploy to your target device. Profiling inside the Unity Editor is useful, but editor overhead skews results. Always validate on real hardware.

Step 2: Open the Profiler Window

Go to Window > Analysis > Profiler. Hit record and play your scene. The CPU Usage module shows time per frame in milliseconds. For 60 FPS, each frame must stay under 16.66 milliseconds (ms). For 30 FPS, your budget is 33.33 ms.

Step 3: Identify the Bottleneck

Look for the tallest spikes in the CPU or GPU timeline. Click a spike to drill into the call stack. Common culprits include script Update() loops, GC.Collect calls, draw calls, and Physics.Simulate. The Unity profiler labels each clearly.
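Beyond the built-in markers, you can make your own systems show up by name in the CPU timeline using Unity's ProfilerMarker API. A minimal sketch (the EnemyAI class and its pathfinding work are hypothetical):

```csharp
using Unity.Profiling;
using UnityEngine;

// Hypothetical example: wrapping a custom system in a ProfilerMarker so it
// appears as a named entry in the CPU module instead of anonymous script time.
public class EnemyAI : MonoBehaviour
{
    static readonly ProfilerMarker s_PathfindMarker =
        new ProfilerMarker("EnemyAI.Pathfind");

    void Update()
    {
        using (s_PathfindMarker.Auto())
        {
            // ...expensive pathfinding work measured under this marker...
        }
    }
}
```

With the marker in place, "EnemyAI.Pathfind" appears as its own row in the CPU module, making it easy to compare before-and-after captures for that one system.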

Step 4: Apply One Optimization at a Time

Change one thing, then capture a new profiler session. Compare before and after. This is the only reliable way to confirm that a change improved Unity performance, rather than simply shuffling the numbers around.

Step 5: Use Profile Analyzer for Multi-Frame Data

Install the Profile Analyzer package via Window > Package Manager. It aggregates data across hundreds of frames, showing mean and median costs per marker. This is particularly useful for catching intermittent GC spikes that single-frame profiling misses.

1. Object Pooling

Instantiating and destroying GameObjects at runtime is expensive. Each call triggers component initialization, memory allocation, and eventual garbage collection. In games with frequent spawning of enemies, bullets, and particles, this pattern creates visible frame spikes.

Before: A mobile shooter instantiating 30 bullets per second recorded GC.Alloc spikes of 2 to 4 ms every few frames. Peak frame time: 24 ms, well over the 16.66 ms budget for 60 FPS.

After: Switching to a pool of 50 pre-instantiated bullets eliminated all GC spikes. Frame time dropped to 14 ms consistently.

Implement pooling by creating a list of inactive GameObjects at scene load. When an object is needed, retrieve it from the pool and call SetActive(true). When done, return it with SetActive(false) rather than Destroy().
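The steps above can be sketched as a small pool component. This is a minimal illustration, not a production pool; bulletPrefab and the pool size are placeholders, and Unity 2021+ also ships a built-in UnityEngine.Pool.ObjectPool&lt;T&gt; you may prefer:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal pool sketch: pre-instantiate inactive bullets, reuse with SetActive
// instead of Instantiate/Destroy.
public class BulletPool : MonoBehaviour
{
    [SerializeField] GameObject bulletPrefab; // assumed prefab reference
    [SerializeField] int poolSize = 50;

    readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        // Pay the instantiation cost once, at scene load.
        for (int i = 0; i < poolSize; i++)
        {
            var bullet = Instantiate(bulletPrefab, transform);
            bullet.SetActive(false);
            pool.Enqueue(bullet);
        }
    }

    public GameObject Get(Vector3 position)
    {
        // Fall back to a fresh instance if the pool runs dry.
        var bullet = pool.Count > 0 ? pool.Dequeue()
                                    : Instantiate(bulletPrefab, transform);
        bullet.transform.position = position;
        bullet.SetActive(true);
        return bullet;
    }

    public void Return(GameObject bullet)
    {
        bullet.SetActive(false);
        pool.Enqueue(bullet);
    }
}
```

The bullet's own script calls Return() on impact or timeout instead of Destroy(), so no allocation or garbage collection happens per shot.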

Profiler marker to watch: GC.Alloc in the CPU module. Frequent allocations in spawn logic are the primary signal that pooling is the fix.

2. Draw Call Batching

Every draw call is a CPU instruction to the GPU to render an object. Too many draw calls stall the CPU and tank Unity performance, particularly on mobile. Unity offers two built-in batching modes: static batching for non-moving geometry and dynamic batching for small moving meshes.

Before: A level with 300 static props issued 298 draw calls, pushing CPU render time to 9 ms per frame.

After: Enabling static batching via Edit > Project Settings > Player collapsed those to 12 draw calls. CPU render time fell to 1.8 ms.

Static batching combines meshes that share a material into a single mesh at build time. Mark objects as Static in the Inspector. Note that static batching increases memory usage, so monitor the Memory module in the Unity profiler after enabling it.

Profiler marker to watch: Rendering > SetPass Calls. Values above 100 on mobile typically indicate a batching problem.

3. Texture Atlasing

When multiple objects use different textures, the GPU must switch texture state between draw calls, breaking batching. Texture atlasing combines multiple textures into a single image, allowing objects to share one material and enabling batching.

Before: A UI-heavy scene with 40 individual sprite textures generated 40 separate material switches per frame, adding 3 ms of overhead.

After: Consolidating into two texture atlases reduced material switches to 2, cutting that overhead to under 0.3 ms.

Tools for texture atlasing include Unity's built-in Sprite Atlas for 2D and third-party tools like TexturePacker for custom workflows. Keep each atlas under 2048x2048 on mobile to stay within GPU limits.

4. Sprite Atlasing

Sprite atlasing is specifically relevant for 2D Unity optimization. Without it, each sprite in a scene uses its own draw call. Unity's Sprite Atlas system (Assets > Create > 2D > Sprite Atlas) groups sprites into a packed texture sheet.

Before: A 2D platformer with 80 individual sprites produced 74 draw calls. Frame time sat at 19 ms, missing the 16.66 ms budget for 60 FPS on mid-range Android devices.

After: Organizing sprites into four atlases by layer and type collapsed draw calls to 8. Frame time dropped to 11 ms.

Use Tight packing in the Sprite Atlas settings to minimize wasted atlas space. Enable "Allow Rotation" only when sprites are not animated, as rotation can break UV maps.

Profiler marker to watch: Sprites.Draw in the rendering module. If this marker is prominent, atlasing will help.

5. Level of Detail (LOD) Groups

Rendering high-polygon models that appear as small objects in the distance wastes GPU cycles. LOD Groups let you swap in lower-detail meshes as objects recede from the camera, cutting GPU load substantially in open-world or large-scene projects.

Before: A scene with 200 trees rendered at full detail (8,000 polygons each) consumed 11 ms of GPU time.

After: Adding three LOD levels (8,000 / 2,000 / 400 polygons) with appropriate transition distances reduced GPU time to 4.2 ms, a 62% saving.

Add an LOD Group component to the parent object. Assign mesh renderers for each LOD level and set the transition percentage bands. For foliage and crowds, LOD Groups are one of the highest-impact single changes you can make to Unity performance.
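The same setup can be done from script. A sketch, assuming the object already has three child renderers assigned in the Inspector (names and thresholds are illustrative):

```csharp
using UnityEngine;

// Sketch: configuring an LODGroup from code. Assumes lodHigh/lodMid/lodLow
// are child renderers assigned in the Inspector.
public class TreeLODSetup : MonoBehaviour
{
    [SerializeField] Renderer lodHigh, lodMid, lodLow; // hypothetical references

    void Awake()
    {
        var group = gameObject.AddComponent<LODGroup>();
        var lods = new LOD[]
        {
            // Screen-relative height thresholds: LOD0 above 60%,
            // LOD1 above 30%, LOD2 above 5%, culled below that.
            new LOD(0.60f, new[] { lodHigh }),
            new LOD(0.30f, new[] { lodMid }),
            new LOD(0.05f, new[] { lodLow }),
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```

In practice most teams author LOD Groups in the editor or import them from the DCC tool; scripting them is mainly useful for procedurally placed objects.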

6. Occlusion Culling

Unity's default camera frustum culling only removes objects outside the camera's field of view. Occlusion culling goes further, removing objects that are inside the view but hidden behind other geometry, including walls, buildings, and terrain.

Without it, the GPU renders everything in view, even if the player never sees it.

Before: An interior scene with four rooms rendered all geometry simultaneously. GPU time: 18 ms per frame.

After: Baking occlusion data via Window > Rendering > Occlusion Culling > Bake cut visible objects by 65% in the profiler. GPU time: 7 ms.

Mark static geometry as Occluder Static and Occludee Static, then bake the occlusion data. The Unity profiler's Rendering module will show a reduction in rendered triangle count after baking. Note that objects which move at runtime cannot act as occluders, though they can still be culled when hidden behind baked static geometry, so plan your scene architecture accordingly.

7. GPU Instancing

GPU instancing allows the GPU to render many copies of the same mesh in a single draw call by sending geometry data once and varying instance-level properties like position, rotation, and color. It is the standard solution for rendering repeated elements, such as trees, enemies, rocks, and props.

Before: 500 rock meshes sharing a material but rendered without instancing produced 498 draw calls. GPU frame time: 13 ms.

After: Enabling GPU Instancing on the shared material reduced draw calls to 1. GPU time fell to 1.1 ms.

Enable GPU instancing on any material by ticking Enable GPU Instancing in the material inspector. The mesh must be identical across instances, but you can vary material properties via MaterialPropertyBlock without breaking instancing.
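A sketch of the MaterialPropertyBlock pattern (the RockTinter name and color range are hypothetical; the shader must expose _BaseColor as a per-instance property, as the URP Lit shader does):

```csharp
using UnityEngine;

// Sketch: varying per-instance color without breaking GPU instancing.
// Requires a material with Enable GPU Instancing ticked.
public class RockTinter : MonoBehaviour
{
    static readonly int BaseColorId = Shader.PropertyToID("_BaseColor");

    void Start()
    {
        var meshRenderer = GetComponent<MeshRenderer>();
        var block = new MaterialPropertyBlock();

        // Random earthy tint per rock; the shared material is never modified,
        // so all rocks can still be drawn in one instanced call.
        block.SetColor(BaseColorId,
            Random.ColorHSV(0f, 0.1f, 0.2f, 0.4f, 0.4f, 0.7f));
        meshRenderer.SetPropertyBlock(block);
    }
}
```

Setting the color directly on renderer.material would silently clone the material per object and destroy the batching, which is exactly what this pattern avoids.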

Confirm the gain in the Unity profiler's GPU module, as draw calls should collapse noticeably.

Profiler marker to watch: Rendering > Draw Calls. A count near 1 for many repeated meshes confirms that instancing is working.

8. Addressables for Memory Management

Unity's Addressable Asset System replaces Resources.Load with a reference-based loading model that gives developers explicit control over what is in memory and when it is released. Unchecked memory growth from assets that are loaded but no longer needed is a common cause of mobile crashes and slowdowns.

Before: A game loading all assets via Resources.Load held 480 megabytes (MB) of texture memory at a level that required only 210 MB. The Memory profiler showed persistent references to unloaded scene assets.

After: Migrating to Addressables and calling Addressables.Release() on unneeded assets brought peak memory to 215 MB, a 55% reduction.

Install the Addressables package via Package Manager. Mark assets as Addressable in the Inspector. Load with Addressables.LoadAssetAsync and release with Addressables.Release() when the asset is no longer needed. Profile memory before and after each major scene transition using the Memory Profiler module in the Unity profiler.
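A minimal sketch of that load/release cycle (the address string "Levels/Forest/Skybox" is a placeholder for whatever key you assign in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement.AsyncOperations;

// Sketch: load an addressable texture on demand, release it when this
// object goes away so Unity can unload the underlying asset.
public class SkyboxLoader : MonoBehaviour
{
    AsyncOperationHandle<Texture2D> handle;

    async void Start()
    {
        handle = Addressables.LoadAssetAsync<Texture2D>("Levels/Forest/Skybox");
        Texture2D tex = await handle.Task;
        // ...apply tex to the skybox material...
    }

    void OnDestroy()
    {
        // Releasing the handle lets Unity unload the asset once nothing
        // else references it.
        if (handle.IsValid())
            Addressables.Release(handle);
    }
}
```

Pairing every LoadAssetAsync with a Release in this way is what makes the memory curve in the profiler flatten out across scene transitions.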

9. Script Profiling and Update() Hygiene

The Update() method runs every frame. If dozens of scripts each perform complex calculations in Update(), the cumulative CPU cost is significant. Script profiling in the Unity profiler's CPU module reveals exactly which MonoBehaviours are consuming the most time.

Before: A project with 80 active MonoBehaviours had 22 ms of script time per frame. The Unity profiler showed six scripts, each taking over 1 ms individually.

After: Caching component references, removing empty Update() methods, and moving polling logic to coroutines cut script time to 6 ms.

To optimize a game in Unity at the script level: cache GetComponent calls in Awake(), delete empty Update() and LateUpdate() methods (even empty ones carry per-frame overhead), replace per-frame polling with event-driven callbacks, and use the Job System for parallelizable calculations.
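The caching and event-driven points can be sketched like this (Health and its OnChanged event are hypothetical stand-ins for your own components):

```csharp
using UnityEngine;

// Hypothetical component that raises an event instead of being polled.
public class Health : MonoBehaviour
{
    public event System.Action<int> OnChanged;
    public void Apply(int newValue) => OnChanged?.Invoke(newValue);
}

public class HealthDisplay : MonoBehaviour
{
    [SerializeField] Health health;
    Rigidbody cachedBody;

    void Awake()
    {
        // Cache once instead of calling GetComponent every frame.
        cachedBody = GetComponent<Rigidbody>();
        // React to changes instead of polling health in Update().
        health.OnChanged += HandleHealthChanged;
    }

    void OnDestroy() => health.OnChanged -= HandleHealthChanged;

    void HandleHealthChanged(int newValue)
    {
        // ...update the UI only when the value actually changes...
    }
}
```

Note there is no Update() method at all here; the display does zero work on frames where nothing changed.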

Profiler marker to watch: In the CPU module, sort by Total Time. Scripts with disproportionate costs are candidates for refactoring.

10. Garbage Collection Reduction

Unity's managed heap collects garbage periodically, causing visible frame stutter. Every heap allocation, including LINQ queries, strings, and new collections, contributes to GC pressure. The Unity profiler's CPU module surfaces GC.Alloc markers that reveal where allocations originate.

Before: A multiplayer game allocating strings for network messages every frame generated 60 KB of GC per second. GC.Collect stalls of 8 to 15 ms appeared every 4 to 5 seconds.

After: Switching to cached char arrays and pre-allocated message structs brought per-frame allocation to under 1 kilobyte (KB). GC stalls became undetectable.

Practical steps to reduce GC pressure: prefer a cached StringBuilder over string concatenation in hot paths; avoid LINQ in Update(); reuse collections with List.Clear() instead of creating new lists; and use structs instead of classes for short-lived data where appropriate.
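A short sketch of those patterns in one hot path (class and field names are illustrative):

```csharp
using System.Text;
using UnityEngine;

// Sketch: zero-allocation patterns for per-frame code.
public class ScoreHud : MonoBehaviour
{
    readonly StringBuilder sb = new StringBuilder(32);    // cached, reused
    readonly Collider[] hitBuffer = new Collider[16];     // fixed buffer, no GC

    public string FormatScore(int score)
    {
        sb.Clear();
        sb.Append("Score: ").Append(score); // no intermediate strings
        return sb.ToString();               // one allocation, only when shown
    }

    void FixedUpdate()
    {
        // Non-allocating physics query writes into the reused buffer
        // instead of returning a new array every call.
        int count = Physics.OverlapSphereNonAlloc(
            transform.position, 5f, hitBuffer);
        for (int i = 0; i < count; i++)
        {
            // ...process hitBuffer[i]...
        }
    }
}
```

After a change like this, the GC.Alloc column for the method should read 0 B in the profiler on frames where the string is not rebuilt.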

11. Physics Layer Matrix Tuning

By default, Unity's physics engine checks collisions between every layer combination.

In a scene with many colliders, this generates an enormous number of unnecessary collision checks per frame. The Physics Layer Collision Matrix lets you disable checks between layers that should never interact.

Before: A game with 10 physics layers had all 55 possible layer-pair combinations active. Physics.Simulate consumed 7 ms per frame.

After: Disabling 38 irrelevant layer pairs via Edit > Project Settings > Physics > Layer Collision Matrix reduced physics time to 2.1 ms.

Map out which layer pairs actually need collision detection. For example, enemies should still collide with terrain, but player UI elements do not need to interact with projectiles, and decorative objects rarely need collision checks against enemies. Disabling unnecessary layer interactions creates measurable performance savings and is one of the lowest-effort, highest-return optimizations you can make in physics-heavy Unity scenes.
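The matrix is normally configured in the editor, but the same pairs can also be disabled from code, for example in a bootstrap scene. A sketch with hypothetical layer names:

```csharp
using UnityEngine;

// Sketch: disabling layer pairs at startup. The layer names "Projectiles",
// "Decor", and "UI" are hypothetical; use the layers defined in your project.
public class PhysicsLayerSetup : MonoBehaviour
{
    void Awake()
    {
        int projectiles = LayerMask.NameToLayer("Projectiles");
        int decor = LayerMask.NameToLayer("Decor");
        int ui = LayerMask.NameToLayer("UI");

        // Decorative objects and UI never need to trade collision
        // checks with projectiles.
        Physics.IgnoreLayerCollision(projectiles, decor, true);
        Physics.IgnoreLayerCollision(projectiles, ui, true);
    }
}
```

Configuring the matrix in Project Settings is usually preferable because it applies before any scene loads; the scripted form is mainly useful for toggling pairs at runtime.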

Profiler marker to watch: Physics.Simulate in the CPU module shows total physics overhead. Pair reductions produce a visible decrease.

12. Build Size Reduction

Build size directly affects download conversion rates and device storage, particularly on mobile. Oversized builds are often the result of unused assets, uncompressed textures, and unstripped managed code, all addressable through Unity's build pipeline.

Before: An Android build with default settings shipped at 187 MB. The Unity Build Report (Window > Analysis > Build Report) showed textures accounting for 134 MB of that.

After: Switching texture compression to ASTC for Android, enabling code stripping (IL2CPP with Strip Engine Code), and removing unused assets identified in the Build Report brought the build to 68 MB, a 64% reduction.

Essential steps: Set texture compression to platform-appropriate formats (ASTC for Android, PVRTC for older iOS), enable Managed Stripping Level to High in Player Settings, use Addressables to deliver optional content post-install, and run Unity's Build Report after every major release candidate to catch unexpected size regressions.

Finally: Unity Optimization as a Development Practice

Unity performance is not a final step in development. It is a continuous discipline. The 12 techniques covered here address the most common bottlenecks identified through Unity profiler data across thousands of shipped titles, from mobile casual games to console action titles.

The practical habit of Unity optimization means setting frame budget targets early (16.66 ms for 60 FPS, 33.33 ms for 30 FPS), profiling on real hardware at every major milestone, and treating the Unity profiler as a standard part of code review, not an afterthought before launch.

Developers who build these habits see fewer last-minute performance crises and more room to invest in the features and feel that make games worth playing. Start with the technique that addresses your current largest profiler spike, validate the result, then move to the next.

Probaho Santra

Author

Probaho Santra is a content writer at Outlook India with a master’s degree in journalism. Outside work, he enjoys photography, exploring new tech trends, and staying connected with the esports world.
