The IWSDK provides a comprehensive scene understanding system that enables AR/VR applications to detect and interact with real-world geometry. This chapter covers everything you need to know about implementing plane detection, mesh detection, and anchoring in your WebXR applications.
What You’ll Build
By the end of this chapter, you’ll be able to:
Set up scene understanding systems for plane and mesh detection
Automatically detect flat surfaces like floors, walls, and ceilings
Detect complex 3D geometry including furniture and room structure
Create stable anchor points that persist across tracking loss
Build semantic-aware interactions based on detected object types
Place virtual content accurately on real-world surfaces
Overview
The scene understanding system leverages WebXR’s scene understanding capabilities to provide:
Plane Detection - Automatically detect flat surfaces like floors, walls, and ceilings
Mesh Detection - Detect complex 3D geometry including furniture and room structure
Anchoring - Create stable reference points that persist across tracking loss
Automatic Entity Management - Real-world geometry is automatically converted to ECS entities
The XRAnchor component anchors objects to stable real-world positions. Add this component to an entity to anchor it.
// Create an anchored object
const hologram = world.createTransformEntity(hologramMesh);
hologram.addComponent(XRAnchor);
// The system will automatically:
// 1. Create a stable anchor at the current world position
// 2. Attach the object to the anchored reference frame
// 3. Maintain stable positioning across tracking loss
Anchor Properties
attached - Whether the entity has been attached to the anchor group (managed by system)
Plane Detection
Understanding Plane Detection
WebXR plane detection identifies flat surfaces in the real world:
Horizontal planes: Floors, tables, desks
Vertical planes: Walls, doors, windows
Arbitrary planes: Angled surfaces like ramps
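The three plane categories above can be distinguished with a small amount of logic. Here is a minimal sketch of such a classifier; the function name and the height thresholds are illustrative assumptions, not part of the IWSDK API, and it relies only on the WebXR plane `orientation` values (`'horizontal'` / `'vertical'`) plus the plane's height above the floor.

```javascript
// Hypothetical helper: classify a detected plane by its WebXR orientation
// and its world-space height in meters. Thresholds are illustrative.
function classifyPlane(orientation, heightY) {
  if (orientation === 'horizontal') {
    if (heightY < 0.3) return 'floor'; // near ground level
    if (heightY > 2.0) return 'ceiling'; // near the top of the room
    return 'table'; // mid-height horizontal surface
  }
  if (orientation === 'vertical') return 'wall';
  return 'arbitrary'; // angled surfaces like ramps
}
```

A classifier like this is useful for deciding what kind of content to place on a newly detected plane, as shown in the examples that follow.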
Working with Detected Planes
// React to new planes
class PlaneProcessSystem extends createSystem({
planeEntities: { required: [XRPlane] },
}) {
init() {
this.queries.planeEntities.subscribe('qualify', (entity) => {
const planeObject = entity.object3D;
const position = planeObject.position;
const rotation = planeObject.quaternion;
console.log('Plane detected at:', position);
// Place content on the plane
const marker = createMarkerMesh();
marker.position.copy(position);
marker.position.y += 0.1; // Slightly above the plane
scene.add(marker);
});
this.queries.planeEntities.subscribe('disqualify', (entity) => {
console.log('Plane lost:', entity.object3D?.position);
// Clean up any content placed on this plane
});
}
}
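The `disqualify` handler above notes that placed content should be cleaned up, but does not show how. One way to do this is to keep a map from plane entity to placed content; the sketch below is plain JavaScript bookkeeping (the `PlaneContentTracker` name is illustrative), and only `removeFromParent()` assumes a Three.js-style scene-graph object.

```javascript
// Track one piece of placed content per detected plane so it can be
// removed when the plane is lost.
class PlaneContentTracker {
  constructor() {
    this.markers = new Map(); // plane entity -> placed marker
  }
  add(entity, marker) {
    this.markers.set(entity, marker);
  }
  remove(entity) {
    const marker = this.markers.get(entity);
    if (marker) {
      marker.removeFromParent?.(); // detach from the scene graph if attached
      this.markers.delete(entity);
    }
  }
}
```

In the system above, you would call `tracker.add(entity, marker)` inside the `qualify` handler and `tracker.remove(entity)` inside the `disqualify` handler.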
Plane-based Content Placement
// Place objects on detected floor planes
class PlaneProcessSystem extends createSystem({
planeEntities: { required: [XRPlane] },
}) {
init() {
this.queries.planeEntities.subscribe('qualify', (entity) => {
const plane = entity.getValue(XRPlane, '_plane');
// Check if it's a horizontal plane (floor-like)
if (plane.orientation === 'horizontal') {
const planePosition = entity.object3D.position;
// Place a virtual object on the floor
const virtualObject = createVirtualObject();
virtualObject.position.copy(planePosition);
virtualObject.position.y += 0.5; // Above the floor
scene.add(virtualObject);
}
});
}
}
Mesh Detection
Understanding Mesh Detection
WebXR mesh detection provides detailed 3D geometry:
Bounded meshes: Individual objects with semantic labels
Global mesh: Generic scene mesh objects
Semantic classification: The label of the detected mesh object if it’s a bounded 3D mesh. Available semantic label values are listed here.
Working with Detected Meshes
class MeshProcessSystem extends createSystem({
meshEntities: { required: [XRMesh] },
}) {
init() {
this.queries.meshEntities.subscribe('qualify', (entity) => {
const isBounded = entity.getValue(XRMesh, 'isBounded3D');
const semanticLabel = entity.getValue(XRMesh, 'semanticLabel');
if (isBounded) {
handleBoundedObject(entity, semanticLabel);
} else {
handleGlobalMesh(entity);
}
});
}
}
function handleBoundedObject(entity, semanticLabel) {
const dimensions = entity.getValue(XRMesh, 'dimensions');
const position = entity.object3D.position;
switch (semanticLabel) {
case 'table':
console.log(`Table detected: ${dimensions[0]}x${dimensions[2]}m`);
break;
case 'chair':
console.log('Chair detected');
// Maybe spawn a virtual cushion or highlight
break;
case 'wall':
console.log('Wall detected');
// Could place artwork or UI panels
break;
}
}
function handleGlobalMesh(entity) {
console.log('Room structure detected');
// Could use for occlusion, physics, or environmental effects
}
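For cases like the `table` branch above, it is often useful to turn a bounded mesh's `dimensions` into a placement surface. The sketch below assumes `dimensions` is a `[width, height, depth]` array centered on the mesh's position (consistent with the `dimensions[0]x${'{'}dimensions[2]{'}'} ` width-by-depth logging in `handleBoundedObject`); the helper name is illustrative.

```javascript
// Hypothetical helper: compute the top face of a bounded mesh from its
// center position and [width, height, depth] dimensions.
function topSurface(position, dimensions) {
  const [width, height, depth] = dimensions;
  return {
    y: position.y + height / 2, // world-space height of the top face
    minX: position.x - width / 2,
    maxX: position.x + width / 2,
    minZ: position.z - depth / 2,
    maxZ: position.z + depth / 2,
  };
}

// A table centered at y = 0.4 with height 0.8 has its top face at y = 0.8
const surface = topSurface({ x: 0, y: 0.4, z: 1 }, [1.2, 0.8, 0.6]);
```

The returned bounds can then be used to clamp virtual content so it stays on the detected surface.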
Semantic-based Interactions
// Example: Interactive table system
class TableInteractionSystem extends createSystem({
  tables: { required: [XRMesh], where: [eq(XRMesh, 'semanticLabel', 'table')] },
}) {
  init() {
    // Create the interface once per table, when it first matches the query
    this.queries.tables.subscribe('qualify', (entity) => {
      const dimensions = entity.getValue(XRMesh, 'dimensions');
      const position = entity.object3D.position;
      // Create interactive surface
      this.createTableInterface(position, dimensions);
    });
  }
  createTableInterface(position, dimensions) {
    // Create UI or interactive elements for the table
    // ('interface' is a reserved word in strict mode, so use another name)
    const tableUI = createTableUI(dimensions);
    tableUI.position.copy(position);
    tableUI.position.y += 0.01; // Just above table surface
    this.scene.add(tableUI);
  }
}
Anchoring
Understanding Anchors
Anchors provide stable positioning: an anchored entity keeps its real-world location even when tracking is temporarily lost or the view is recentered.
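The effect of recentering can be illustrated with a toy model (this is not IWSDK API; all names here are made up for illustration): when the reference space shifts by some offset, unanchored content moves with the coordinate system, while anchored content is re-resolved to the same real-world spot.

```javascript
// Toy illustration: how an entity's apparent position changes when the
// reference space is recentered by `offset`.
function positionAfterRecenter(position, offset, isAnchored) {
  if (isAnchored) {
    // The anchor re-expresses the entity at its original world location
    return { ...position };
  }
  // Unanchored content is defined relative to the (now shifted) origin
  return {
    x: position.x + offset.x,
    y: position.y + offset.y,
    z: position.z + offset.z,
  };
}
```

In other words, a hologram with the XRAnchor component stays where the user placed it in the room, while one without the component drifts by the recentering offset.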