Health and Safety Recommendation: While building mixed reality experiences, we highly recommend evaluating your content to offer your users a comfortable and safe experience. Please refer to the Health and Safety and Design guidelines before designing and developing your app using Scene.
Overview
Mixed Reality Utility Kit provides a set of utilities and tools on top of Scene API (not to be confused with Spatial SDK Scene) to perform common operations when building spatial apps. This makes it easier to program against the physical world, and allows you to focus on what makes your app unique.
How does Scene work?
Scene model
Scene model is a comprehensive, current representation of the physical world that can be indexed and queried. It provides a geometric and semantic representation of the user’s space, allowing you to build mixed reality experiences. You can think of it as a scene graph for the physical world.
The main use cases include physics, static occlusion, and navigation in the physical world. For example, you can attach a virtual screen to the user’s wall or have a virtual character navigate the floor with realistic occlusion.
Scene model is managed and persisted by the Meta Quest operating system. All apps can access scene model. You can use the entire scene model or query the model for specific elements.
As the scene model contains information about the user’s space, you must request the app-specific runtime permission for spatial data in order to access the data. See Spatial Data Permission for more information.
Space setup
Space setup is a system flow that generates a scene model. Users can navigate to Settings > Environment Setup > Space Setup to capture their scene. The system will assist the user in capturing their environment. It also provides a manual capture experience as a fallback. In your app, you can query the system to check whether a scene model of the user’s space exists. You can also invoke space setup as needed. See Requesting Space Setup for more information.
You cannot perform space setup over Link. You must perform space setup on-device prior to loading the scene model over Link.
Scene anchors
The scene anchor is the fundamental element of a scene model. Each scene anchor has semantic labels and geometric components attached. For example, the system organizes a user’s living room around individual anchors with semantic labels, such as the floor, the ceiling, walls, a table, and a couch. Many anchors are also associated with a geometric representation: a 2D functional surface, a 3D bounding box, or both. A scene mesh is another form of geometric representation, exposed as a component on a scene anchor.
Scene model can be considered a collection of scene anchors. Each scene anchor has any number of components that provide further information, such as whether the anchor is a plane, whether it is a volume, or whether it has a mesh. Anchors are generic objects; you query the components on an anchor to find out what information it contains.
For example, if you have a scene model with four walls, you have four scene anchors. Each anchor will have a semantic classification of WALL and a plane that holds its dimensions.
Differences between spatial and scene anchors
Scene anchors are created by Meta Horizon OS during space setup, while spatial anchors are created by your application. Scene anchors also carry scene-specific information, such as their semantic labels and geometric components. Finally, your app can only create spatial anchors, but it can query scene anchors.
Multiple spaces
Space setup allows the user to scan and maintain multiple rooms (spaces), not just one. The user can scan a new room without erasing a previous room. The OS can maintain up to 15 rooms, and may locate some or all of the rooms depending on the user’s current location.
Getting started
Prerequisites
Include the meta-spatial-sdk-mruk.aar and org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.3 dependencies in your project as described in the setup tutorial. If you are using one of the sample projects, this step is already completed for you.
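For reference, with the Gradle Kotlin DSL the dependency block might look like the sketch below. The Maven coordinates follow the Meta Spatial SDK package naming, and the SDK version is a placeholder; use the exact coordinates and versions from the setup tutorial.
dependencies {
  // Placeholder: set this to the Meta Spatial SDK version from the setup tutorial
  val metaSpatialSdkVersion = "X.Y.Z"
  implementation("com.meta.spatial:meta-spatial-sdk-mruk:$metaSpatialSdkVersion")
  // Required by MRUK for scene JSON (de)serialization
  implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.3")
}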
Adding the MRUK feature
MR Utility Kit is provided as a SpatialFeature. To enable it in your app, add MRUKFeature to the list returned by the registerFeatures function, as shown in the sketch below.
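A minimal sketch, assuming an AppSystemActivity subclass and the MRUKFeature constructor used in the sample projects; VRFeature stands in for whatever other features your app registers:
class SampleActivity : AppSystemActivity() {

  private lateinit var mrukFeature: MRUKFeature

  override fun registerFeatures(): List<SpatialFeature> {
    // Keep a reference so the feature can be used later (e.g., to load scene data)
    mrukFeature = MRUKFeature(this, systemManager)
    return listOf(VRFeature(this), mrukFeature)
  }
}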
MR Utility Kit requires the USE_ANCHOR_API and USE_SCENE permissions to be enabled to access scene data from the device. Open projects/android/AndroidManifest.xml and add these permissions:
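<uses-permission android:name="com.oculus.permission.USE_ANCHOR_API" />
<uses-permission android:name="com.oculus.permission.USE_SCENE" />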
USE_SCENE is a runtime permission. In addition to declaring it in the AndroidManifest.xml, you must also add code to your app to prompt the user for permission.
Here is an example:
companion object {
  const val TAG = "SampleActivity"
  const val PERMISSION_USE_SCENE: String = "com.oculus.permission.USE_SCENE"
  const val REQUEST_CODE_PERMISSION_USE_SCENE: Int = 1
}

override fun onCreate(savedInstanceState: Bundle?) {
  super.onCreate(savedInstanceState)
  if (checkSelfPermission(PERMISSION_USE_SCENE) != PackageManager.PERMISSION_GRANTED) {
    Log.i(TAG, "Scene permission has not been granted, requesting $PERMISSION_USE_SCENE")
    requestPermissions(arrayOf(PERMISSION_USE_SCENE), REQUEST_CODE_PERMISSION_USE_SCENE)
  } else {
    // Scene permission already granted, safe to access scene data
  }
}
override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<out String>,
    grantResults: IntArray
) {
  if (requestCode == REQUEST_CODE_PERMISSION_USE_SCENE &&
      permissions.size == 1 &&
      permissions[0] == PERMISSION_USE_SCENE) {
    // grantResults is empty if the request was interrupted; treat that as not granted
    val granted =
        grantResults.isNotEmpty() && grantResults[0] == PackageManager.PERMISSION_GRANTED
    if (granted) {
      Log.i(TAG, "Use scene permission has been granted")
      // Safe to access scene data
    } else {
      Log.i(TAG, "Use scene permission was DENIED!")
      // Scene data not accessible
    }
  }
}
Because the permission request is asynchronous, wait until the onRequestPermissionsResult callback is received before attempting to load scene data. Handle the case where the user denies permission by implementing a suitable fallback.
Loading scene data
Once you have permission to access scene data, you can call loadSceneFromDevice from the MRUKFeature class:
val future = mrukFeature.loadSceneFromDevice()
future.whenComplete { result: MRUKLoadDeviceResult, _ ->
  Log.i(TAG, "Load scene from device result: $result")
}
loadSceneFromDevice returns a CompletableFuture. This is an asynchronous operation: you must wait until it completes before attempting to access the data. Use the whenComplete method to do this.
JSON data
You can load scene data from a JSON string, in addition to loading it from your device. This can be useful for testing how your app will behave in a variety of different rooms without being physically present.
Here is an example:
val file = applicationContext.assets.open("scene.json")
val text = file.bufferedReader().use { it.readText() }
mrukFeature.loadSceneFromJsonString(text)
Unlike loadSceneFromDevice, this is a synchronous operation. You can access the data immediately after calling it.
Accessing the scene data
Once you have loaded the scene data (either from device or from JSON), you can access it through the rooms property on the MRUKFeature class. This provides a list of MRUKRoom instances. Each room has an anchors property, which is a list of entities. Each entity has a Transform and MRUKAnchor component associated with it. It optionally has a MRUKPlane and/or MRUKVolume component.
Here is an example of how to iterate over the data and print it out:
for (room in mrukFeature.rooms) {
  Log.d("MRUK", "Room ${room.anchor}")
  for (anchorEntity in room.anchors) {
    val anchor = anchorEntity.getComponent<MRUKAnchor>()
    Log.d("MRUK", "Anchor: ${anchor.uuid}, labels: ${anchor.labels}")
    val transform = anchorEntity.getComponent<Transform>()
    Log.d("MRUK", "Transform: ${transform.transform}")
    val plane = anchorEntity.tryGetComponent<MRUKPlane>()
    if (plane != null) {
      Log.d("MRUK", "Plane min: ${plane.min}, max: ${plane.max}, boundary: ${plane.boundary}")
    }
    val volume = anchorEntity.tryGetComponent<MRUKVolume>()
    if (volume != null) {
      Log.d("MRUK", "Volume min: ${volume.min}, max: ${volume.max}")
    }
  }
}
You can use Query to find entities without going through the MRUKFeature class. This is described in the ECS documentation.
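For instance, here is a sketch assuming the Query API from the ECS documentation (queries are typically evaluated from within a system):
// Find every entity that carries an MRUKAnchor component, regardless of room
val anchorQuery = Query.where { has(MRUKAnchor.id) }
for (entity in anchorQuery.eval()) {
  val anchor = entity.getComponent<MRUKAnchor>()
  Log.d("MRUK", "Anchor ${anchor.uuid}, labels: ${anchor.labels}")
}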
AnchorMeshSpawner
AnchorMeshSpawner provides a convenient way to spawn glTF meshes that are scaled and positioned to match the bounds of an MRUKVolume or MRUKPlane. This allows you to create virtual representations of the user’s furniture and have them appear in the same location as the physical furniture.
Here is an example:
val meshSpawner: AnchorMeshSpawner =
    AnchorMeshSpawner(
        mrukFeature,
        mapOf(
            MRUKLabel.TABLE to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Table.glb")),
            MRUKLabel.COUCH to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Couch.glb")),
            MRUKLabel.WINDOW_FRAME to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Window.glb")),
            MRUKLabel.DOOR_FRAME to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Door.glb")),
            MRUKLabel.OTHER to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/BoxCardBoard.glb")),
            MRUKLabel.STORAGE to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Storage.glb")),
            MRUKLabel.BED to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/TwinBed.glb")),
            MRUKLabel.SCREEN to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/ComputerScreen.glb")),
            MRUKLabel.LAMP to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/Lamp.glb")),
            MRUKLabel.PLANT to
                AnchorMeshSpawner.AnchorMeshGroup(
                    listOf(
                        "Furniture/Plant1.glb",
                        "Furniture/Plant2.glb",
                        "Furniture/Plant3.glb",
                        "Furniture/Plant4.glb")),
            MRUKLabel.WALL_ART to AnchorMeshSpawner.AnchorMeshGroup(listOf("Furniture/WallArt.glb")),
        ))

meshSpawner.spawnMeshes(room)
AnchorProceduralMesh
AnchorProceduralMesh provides a way to create a procedural mesh that matches the 2D plane boundary of an anchor. This is useful for creating meshes for the floor, ceiling, and walls. You can supply a custom material.
Here is an example:
val floorMaterial =
    Material().apply { baseTextureAndroidResourceId = R.drawable.carpet_texture }
val wallMaterial = Material().apply { baseTextureAndroidResourceId = R.drawable.wall_texture }
val procMeshSpawner: AnchorProceduralMesh =
    AnchorProceduralMesh(
        mrukFeature,
        mapOf(
            MRUKLabel.FLOOR to AnchorProceduralMeshConfig(floorMaterial, false),
            MRUKLabel.CEILING to AnchorProceduralMeshConfig(wallMaterial, false),
            MRUKLabel.WALL_FACE to AnchorProceduralMeshConfig(wallMaterial, false),
        ))

procMeshSpawner.spawnMeshes(room)
Raycasting
The MRUKFeature class allows raycasting against a room. You can query for hits starting from a specified origin and direction, typically a head or controller pose. The raycastRoom function returns the first hit encountered within the room, or null if no hit is found. The raycastRoomAll function returns all hits as a collection. Each result is of type MRUKHit and provides the distance, position, and normal of the hit.
Here’s an example of querying for all hits from the right hand within the current room:
val hits =
    mrukFeature.raycastRoomAll(
        currentRoom.anchor.uuid,
        rightHandPose.t,
        rightHandDirection,
        maxDistance,
        surfaceMask)
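If you only need the closest hit, here is a sketch using raycastRoom with the same parameters. The hitDistance, hitPosition, and hitNormal property names are assumptions based on the description above; check the MRUKHit class for the exact names.
val hit: MRUKHit? =
    mrukFeature.raycastRoom(
        currentRoom.anchor.uuid,
        rightHandPose.t,
        rightHandDirection,
        maxDistance,
        surfaceMask)
if (hit != null) {
  // Property names assumed from the distance/position/normal description above
  Log.d("MRUK", "Hit at ${hit.hitPosition}, distance ${hit.hitDistance}, normal ${hit.hitNormal}")
}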