Build Apps That See, Hear, and Respond: Explore What’s Possible With the Wearables Device Access Toolkit
Building hands-free experiences has historically meant stitching together camera, audio, and inference systems, often with inconsistent and unpredictable real-world behavior.
With the Wearables Device Access Toolkit, developers can now directly access these signals to start building experiences that respond in real time to what users see, hear, and do, using the sensors on AI glasses from Meta. These hands-free, contextual interactions open up new ways to integrate existing mobile apps into everyday interactions, with no need for a screen or manual input.
It’s still early, but select partners are already prototyping and shipping initial integrations.
Inspiring real-world integrations
The Wearables Device Access Toolkit lets developers integrate glasses-based camera and audio inputs into their mobile apps, opening the door to experiences that weren’t previously possible.
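The toolkit’s actual API isn’t reproduced here, but a minimal Swift sketch can illustrate the shape such an integration might take. Every name below (`WearableDevice`, `openSession`, `requestPermission`, `cameraFrames`, `speak`) is a hypothetical stand-in rather than the toolkit’s real interface; the assumed flow is: connect, request permission, consume the point-of-view camera stream, and respond through the speakers.

```swift
import Foundation

// Hypothetical sketch: none of these type or method names come from the
// toolkit's published API. They only illustrate the assumed shape of an
// integration: open a session with the glasses, ask the wearer for camera
// permission, consume the point-of-view stream, and answer via the speakers.
final class GlassesIntegration {
    private let device: WearableDevice     // assumed device abstraction

    init(device: WearableDevice) {
        self.device = device
    }

    func start() async throws {
        let session = try await device.openSession()         // assumed connection call

        // The wearer approves camera access on-device (assumed permission model).
        guard try await session.requestPermission(.camera) == .granted else { return }

        // Consume live frames and route spoken feedback back to the glasses.
        for await frame in session.cameraFrames() {           // assumed async frame stream
            let caption = describe(frame)                     // app-specific inference
            try await session.speak(caption)                  // assumed audio-out helper
        }
    }

    private func describe(_ frame: CameraFrame) -> String {
        // Placeholder for on-device or cloud analysis of the frame.
        "a short description of what the camera sees"
    }
}
```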
The first wave of third-party integrations shows how developers are using the camera, microphones, and speakers to extend their apps into hands-free, real-world scenarios. Teams have been exploring a wide range of use cases, from POV livestreaming to industrial assistance tools, and from navigation to new experiences we couldn’t have predicted.
To ensure our platform supports the needs of developers and AI glasses users, publishing is currently limited to select partners. Now is the perfect time to explore the Device Access Toolkit and get inspired by what those partners have been developing. Here are just a few of the ways teams are approaching their builds.
Simply Draw
Simply Draw uses camera-based capture and audio with AI glasses so users can record time-lapse videos of their drawing process and receive step-by-step instruction without needing to reference a separate screen. This keeps the focus on the physical act of drawing while the app provides audio guidance in the background.
Planta
Planta brings plant identification and care into a hands-free workflow. Available on iOS, the app uses the AI glasses’ camera to identify plant species and provides recommendations based on real-world conditions, including lighting, pot size, and soil type. This lets users get real-time guidance, particularly in situations where manual interaction isn’t practical.
Three smartphone screens showing Planta analyzing camera input to identify a houseplant.
OOrion
OOrion is building accessibility-focused experiences for blind and low-vision users. Available on iOS, the app combines voice activation, camera input, object recognition, and text recognition, using spatial audio to help users better understand their surroundings: finding objects such as keys, locating text like an airport gate number or food-packaging information, or getting a description of whatever is nearby. Being hands-free makes these interactions seamless and intuitive for vision-impaired users in everyday situations.
Mockup of POV image via AI glasses of OOrion app identifying a park bench. Image of man from behind using AI glasses and OOrion app to identify park bench.
Aira
Aira integrates real-time visual interpretation with AI glasses by connecting its service for vision-impaired users directly to the user’s point of view. Using the low-latency camera feed, professional interpreters can provide live verbal guidance based on what the user is seeing, enabling hands-free support for navigation and everyday tasks. This approach allows Aira to extend its existing service into more immediate, in-context interactions without requiring additional devices or input.
Some of these apps will begin shipping today, and they’re not the only development underway: many of our community members have been hard at work on their own prototypes, like these:
MultiSet AI
MultiSet AI is using the AI glasses’ low-latency camera feed to stream frames to its visual positioning system (VPS), enabling real-time location tracking even without full 6DoF:
Splitscreen video showing 1st and 3rd-person point of view of man using MultiSet AI app with AI glasses to enable real-time location tracking.
DB Creations
Meanwhile, DB Creations built Hide n’Seek, combining video recording on AI glasses with real-world gameplay, giving parents the ability to relive and share playful moments with their children.
Video showing 1st and 3rd-person perspective of a woman using Hide n’Seek app with AI glasses to record a game of Hide and Seek with two young children.
SmartSight
SmartSight built Luna, a study coach app that uses the camera and contextual analysis to automatically track what students work on, analyze learning quality, and log time, eliminating manual input so students can stay fully focused.
Video showing student using Luna app with AI glasses to automatically track and log study activity.
Getting started with the Wearables Device Access Toolkit
If you’re a developer interested in building hands-free experiences, you can start exploring the Wearables Device Access Toolkit today. The SDK is still under development (voice invocation and Wi-Fi Direct support are in the works), and you don’t even need access to the hardware: the Mock Device Kit helps you build and test integrations against a simulated device that mirrors the behavior of AI glasses, including media streaming, permissions, and device state changes.
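As a rough sketch of what testing against a simulated device could look like, the example below reuses the hypothetical `GlassesIntegration` class from earlier. `MockDevice` and its members (`grantPermission`, `loadSampleVideo`, `setState`, `spokenPhrases`) are likewise assumptions for illustration, not the Mock Device Kit’s actual API; the idea is simply to script the simulated device through permissions, media streaming, and state changes, then assert on app behavior.

```swift
import XCTest

// Hypothetical sketch: MockDevice and its members are illustrative stand-ins
// for the Mock Device Kit, which simulates media streaming, permissions, and
// device state changes without physical AI glasses. Here MockDevice is assumed
// to conform to the same WearableDevice abstraction used by the app code.
final class GlassesIntegrationTests: XCTestCase {

    func testSpeaksCaptionsForStreamedFrames() async throws {
        let mock = MockDevice()                      // assumed simulated-device entry point
        mock.grantPermission(.camera)                // assumed: pre-approve camera access
        mock.loadSampleVideo(named: "workbench")     // assumed: feed a short canned clip

        let integration = GlassesIntegration(device: mock)
        try await integration.start()                // returns once the canned stream ends

        // Assumed inspection hook: the mock records audio routed to the speakers.
        XCTAssertFalse(mock.spokenPhrases.isEmpty)
    }

    func testStreamEndsOnDisconnect() async throws {
        let mock = MockDevice()
        mock.grantPermission(.camera)
        mock.loadSampleVideo(named: "workbench")

        let integration = GlassesIntegration(device: mock)
        let streaming = Task { try await integration.start() }

        mock.setState(.disconnected)                 // assumed: simulate glasses powering off

        // The frame stream should terminate rather than hang the app.
        _ = try? await streaming.value
    }
}
```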
The Wearables Device Access Toolkit makes it possible to extend your existing iOS or Android app to work with AI glasses, opening up new interaction models that go beyond the screen.
We’re just starting to see what’s possible, and we’re excited to see what the developer community builds next.