Nav Mesh sample overview

Updated: May 7, 2026

Overview

This sample demonstrates using the Mixed Reality Utility Kit’s (MRUK) SceneNavigation component with Unity’s NavMesh system to enable AI character navigation within a user’s physical room. An animated character named Breeze navigates to randomly placed apple targets on the generated NavMesh, picks them up, and eats them — all while pathfinding around real-world furniture.

What you will learn

  • Generate a Unity NavMesh at runtime from MRUK scene data
  • Configure navigable surfaces and obstacles using physical room elements
  • Validate navigation points against room boundaries
  • Toggle between room-mesh and global-mesh navigation modes
  • Implement area-weighted random point generation on a NavMesh

Requirements

  • Meta Quest 3 or Quest 3S with Horizon OS
  • Unity development environment configured for Meta Quest development
  • For detailed setup instructions, see Setting up your environment

Get started

Clone or download the Unity MRUK Sample repository from GitHub. Open the project in Unity and navigate to Assets/MRUKSamples/NavMesh/NavMesh.unity to open the sample scene. Build and deploy to your Meta Quest device following the instructions in the sample README. After launching, Breeze begins navigating to apple targets that spawn on your room’s floor.

Explore the sample

The following list describes the key files and what each demonstrates:

  • Scripts/NavMeshSampleController.cs: Scene controller with global mesh toggle. Key concepts: hand trigger input, navigation mode switching.
  • NavMeshComponents/ToolkitScripts/NavMeshAgentController.cs: AI agent behavior and random target generation. Key concepts: area-weighted random point selection, room boundary validation.
  • Scripts/NavMeshCharacterController.cs: Character animation and eating behavior. Key concepts: animation state machine, velocity-based animation.
  • prefabs/SceneNavigation.prefab: MRUK SceneNavigation configuration. Key concepts: floor-based navigation, furniture as obstacles.
  • prefabs/AppleWithCollider.prefab: Interactive target object. Key concepts: collision detection for pickup.
  • Scripts/WireCubeOnBoundingBoxes.cs: Debug visualization for EffectMesh bounds. Key concepts: EffectMesh object iteration, LineRenderer integration.

Runtime behavior

When you run the sample, MRUK scans your physical room and generates a NavMesh on floor surfaces, treating furniture and other obstacles as non-navigable areas. The character Breeze spawns and navigates to a randomly generated apple target on the floor, avoiding real-world furniture. After reaching the apple, Breeze plays a pickup animation, eats the apple with a particle effect, and a new target spawns. Press either hand trigger to toggle between room-mesh mode (navigation constrained to the current room) and global-mesh mode (navigation across multiple rooms if available). Press A, X, or Space (in editor) to force generation of a new target position.

Key concepts

Runtime NavMesh from scene data

The SceneNavigation component builds a Unity NavMesh automatically when MRUK finishes loading room data. The sample configures floor surfaces as navigable and furniture as obstacles in the SceneNavigation.prefab, eliminating the need for pre-baked navigation data. See the SceneNavigation component reference for configuration details.
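The underlying pattern can be sketched as follows. This is an illustrative sketch, not the sample's code: it assumes MRUK exposes a scene-loaded event (the exact event name can vary between MRUK versions) and uses the NavMeshSurface component from Unity's com.unity.ai.navigation package, which SceneNavigation wraps for you.

```csharp
using Meta.XR.MRUtilityKit;
using Unity.AI.Navigation; // NavMeshSurface (com.unity.ai.navigation package)
using UnityEngine;

// Sketch: wait for MRUK to finish loading room data, then build the
// NavMesh at runtime. SceneNavigation performs this automatically;
// the event name here is an assumption about the MRUK API surface.
public class RuntimeNavMeshBuilder : MonoBehaviour
{
    [SerializeField] NavMeshSurface surface;

    void Start()
    {
        // Fires once scene (room) data is available.
        MRUK.Instance.SceneLoadedEvent.AddListener(() => surface.BuildNavMesh());
    }
}
```

In the sample itself you never write this code; you only configure which MRUK surface labels are navigable on the SceneNavigation component.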

Area-weighted random point generation

The sample generates random navigation targets using triangle-area weighting to ensure uniform spatial distribution across the NavMesh. NavMeshAgentController.cs calls NavMesh.CalculateTriangulation() to retrieve mesh geometry, computes the area of each triangle, selects a triangle weighted by area, and generates a random point using barycentric coordinates. This approach avoids clustering targets in small NavMesh regions.
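The technique can be sketched as below. The class and method names are illustrative; the sample's actual implementation in NavMeshAgentController.cs may differ in structure, but the algorithm (cumulative areas, weighted triangle pick, barycentric point) is the same.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch of area-weighted uniform sampling on the active NavMesh.
public static class NavMeshRandomPoint
{
    public static Vector3 Sample()
    {
        NavMeshTriangulation tri = NavMesh.CalculateTriangulation();
        int triCount = tri.indices.Length / 3;

        // Cumulative triangle areas, so larger triangles are chosen
        // proportionally more often.
        float[] cumulative = new float[triCount];
        float total = 0f;
        for (int i = 0; i < triCount; i++)
        {
            Vector3 a = tri.vertices[tri.indices[3 * i]];
            Vector3 b = tri.vertices[tri.indices[3 * i + 1]];
            Vector3 c = tri.vertices[tri.indices[3 * i + 2]];
            total += 0.5f * Vector3.Cross(b - a, c - a).magnitude;
            cumulative[i] = total;
        }

        // Pick a triangle by area.
        float r = Random.value * total;
        int chosen = System.Array.FindIndex(cumulative, x => x >= r);
        if (chosen < 0) chosen = triCount - 1; // guard against rounding

        // Uniform point inside the triangle via barycentric coordinates;
        // reflecting (u, v) keeps the point inside the triangle.
        Vector3 p0 = tri.vertices[tri.indices[3 * chosen]];
        Vector3 p1 = tri.vertices[tri.indices[3 * chosen + 1]];
        Vector3 p2 = tri.vertices[tri.indices[3 * chosen + 2]];
        float u = Random.value, v = Random.value;
        if (u + v > 1f) { u = 1f - u; v = 1f - v; }
        return p0 + u * (p1 - p0) + v * (p2 - p0);
    }
}
```

Sampling a triangle uniformly at random (without area weighting) would bias targets toward regions tessellated into many small triangles, which is exactly the clustering this approach avoids.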

Room boundary validation

Because Unity’s NavMesh can extend slightly beyond room bounds, the sample validates generated positions before setting them as destinations. The SetNewDestination() method in NavMeshAgentController.cs calls room.IsPositionInRoom(newPos, false) to confirm the target lies within physical room boundaries. If validation fails, the position resets to Vector3.zero and the agent navigates to the room origin instead.
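A minimal sketch of this validation step follows. The field names and the surrounding class are illustrative (the real logic lives in NavMeshAgentController.SetNewDestination()); it assumes the current room is obtained via MRUK.Instance.GetCurrentRoom().

```csharp
using Meta.XR.MRUtilityKit;
using UnityEngine;
using UnityEngine.AI;

// Sketch: reject destinations that fall outside the physical room.
public class BoundedDestinationSetter : MonoBehaviour
{
    [SerializeField] NavMeshAgent agent;

    public void SetNewDestination(Vector3 newPos)
    {
        MRUKRoom room = MRUK.Instance.GetCurrentRoom();

        // The generated NavMesh can overhang the walls slightly, so
        // validate against the room polygon. The second argument (false)
        // skips vertical-bounds testing, matching the sample's call.
        if (!room.IsPositionInRoom(newPos, false))
        {
            newPos = Vector3.zero; // fall back to the room origin
        }
        agent.SetDestination(newPos);
    }
}
```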

Runtime navigation mode switching

The sample supports switching between room-mesh and global-mesh navigation modes at runtime. When you press a hand trigger, NavMeshSampleController.cs calls SceneNavigation.ToggleGlobalMeshNavigation(useGlobalMesh) to switch modes and refreshes the debug visualization accordingly. Global-mesh mode enables navigation across multiple rooms when connectRoomsInNavMesh is enabled on the SceneNavigation component.
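The trigger-driven toggle can be sketched like this. The class is illustrative, not the sample's controller; it assumes OVRInput hand-trigger buttons and the ToggleGlobalMeshNavigation call named above.

```csharp
using Meta.XR.MRUtilityKit;
using UnityEngine;

// Sketch: flip between room-mesh and global-mesh navigation on either
// hand trigger. Debug-visualization refresh is omitted for brevity.
public class NavModeToggle : MonoBehaviour
{
    [SerializeField] SceneNavigation sceneNavigation;
    bool useGlobalMesh;

    void Update()
    {
        if (OVRInput.GetDown(OVRInput.Button.PrimaryHandTrigger) ||
            OVRInput.GetDown(OVRInput.Button.SecondaryHandTrigger))
        {
            useGlobalMesh = !useGlobalMesh;
            sceneNavigation.ToggleGlobalMeshNavigation(useGlobalMesh);
        }
    }
}
```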

Animation-event-driven character behavior

The character uses Unity animation events to trigger pickup and eating actions at precise moments in the animation timeline. The PickUp() and EatObject() methods in NavMeshCharacterController.cs are invoked by animation events rather than script polling. The character’s movement speed (NavMeshAgent.velocity.magnitude) drives the “Moving” animator parameter for smooth walk-to-idle transitions.
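The pattern can be sketched as follows. PickUp() and EatObject() match the method names cited above; the field names, parameter type, and method bodies are illustrative assumptions, not the sample's code.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch of animation-event-driven behavior: the timeline calls back
// into script at authored frames instead of the script polling state.
public class CharacterAnimationDriver : MonoBehaviour
{
    [SerializeField] Animator animator;
    [SerializeField] NavMeshAgent agent;

    void Update()
    {
        // Drive the "Moving" parameter from the agent's actual speed so
        // walk/idle blending follows pathfinding, not scripted state.
        animator.SetFloat("Moving", agent.velocity.magnitude);
    }

    // Invoked by an animation event on the pickup clip.
    public void PickUp() { /* e.g. parent the apple to the hand bone */ }

    // Invoked by an animation event at the bite frame of the eating clip.
    public void EatObject() { /* e.g. play particles, destroy the apple */ }
}
```

Because the events are authored on the clips themselves, the pickup always lines up with the animation even if clip timing changes later.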

Extend the sample

  • Enable multi-room navigation: Set connectRoomsInNavMesh to true on the SceneNavigation component in SceneNavigation.prefab to allow Breeze to navigate across multiple connected rooms in your space.
  • Add multiple agents: Instantiate additional character prefabs with their own NavMeshAgent components to create multiple independent AI characters that navigate simultaneously and avoid each other using Unity’s NavMesh avoidance system.
  • Implement interactive target placement: Replace random target generation with controller-based ray casting and NavMesh.SamplePosition() to let users manually place apples on valid NavMesh surfaces.
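For the interactive-placement extension, a starting point might look like the sketch below. The controller anchor, prefab field, and distance values are assumptions; NavMesh.SamplePosition() snaps an arbitrary world point onto the nearest NavMesh location within a search radius.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch: ray cast from the right controller, then snap the hit point
// onto the NavMesh before spawning an apple there.
public class ApplePlacer : MonoBehaviour
{
    [SerializeField] Transform rightControllerAnchor; // e.g. from OVRCameraRig
    [SerializeField] GameObject applePrefab;

    void Update()
    {
        if (!OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger)) return;

        var ray = new Ray(rightControllerAnchor.position, rightControllerAnchor.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, 10f) &&
            NavMesh.SamplePosition(hit.point, out NavMeshHit navHit, 0.5f, NavMesh.AllAreas))
        {
            // navHit.position is guaranteed to lie on the NavMesh, so the
            // agent can always path to the spawned apple.
            Instantiate(applePrefab, navHit.position, Quaternion.identity);
        }
    }
}
```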