| File / Scene | What it demonstrates | Key concepts |
|---|---|---|
| Loader.unity | Persistent voice infrastructure | AppVoiceExperience, SpeakGestureWatcher, LevelLoader singleton |
| Level_1.unity + BaseLayout_Day.unity | Tutorial level with progressive object enablement | Level_1_Manager, coroutine-driven narrative, highlight-only mode |
| Level_2.unity + BaseLayout_Day.unity | Multi-puzzle voice interactions | Radio, treasure chest, water faucet, HeroPlant movement |
| Level_3.unity + BaseLayout_Night.unity | Advanced entity extraction and state | Drawer puzzles, color selection via enum entities, PlayerPrefs persistence |
| Scripts/Voice/Listenable.cs | Abstract base class for voice-interactive objects | Event subscription, shimmer effect, timeout handling |
| Listenable Objects/ForceMovable.cs | Physics-based voice commands | Manual entity extraction, force application from direction entities |
| Listenable Objects/HaroldTheBird.cs | NPC with TTS responses | Multiple intents, TTSSpeaker.SpeakQueued(), animation sync |
| Scripts/Voice/SpeakGestureWatcher.cs | Gesture-based mic activation | Hand position tracking, cone-cast object selection, AppVoiceExperience lifecycle |
| Scripts/Voice/VoiceUI.cs | Per-object voice feedback UI | Transcription display, status icons, mic level visualization |
SpeakGestureWatcher measures the distance from both hand transforms to a reference point, then cone-casts from the headset to find Listenable objects in view:

```csharp
var havePose = leftDist < _handsDistThresh && rightDist < _handsDistThresh;
if (!_allowSpeak || !_hands.SpeakHandsReady) havePose = false;
```
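The cone-cast itself is not shown in the snippet above. A minimal sketch, assuming selection just picks the Listenable closest to the headset's forward axis (the helper name and the angle/distance thresholds are hypothetical, not values from the project):

```csharp
// Hypothetical sketch of cone-cast selection (assumes `using UnityEngine;`):
// pick the Listenable nearest the headset's view axis, within a cone and range.
Listenable FindListenableInCone(Transform headset, float maxAngleDeg, float maxDist)
{
    Listenable best = null;
    var bestAngle = maxAngleDeg;
    foreach (var candidate in Object.FindObjectsOfType<Listenable>())
    {
        var toObject = candidate.transform.position - headset.position;
        if (toObject.magnitude > maxDist) continue; // outside range

        var angle = Vector3.Angle(headset.forward, toObject);
        if (angle < bestAngle) // inside cone and closest to center so far
        {
            bestAngle = angle;
            best = candidate;
        }
    }
    return best;
}
```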
When the pose is detected, AppVoiceExperience.Activate() starts the microphone. See SpeakGestureWatcher.cs for the full implementation.

Conduit uses [MatchIntent] attributes to auto-route intents to handler functions. It supports zero-parameter handlers, WitResponseNode injection (raw response data from Wit.ai), and auto-mapped enum parameters:

```csharp
[MatchIntent("move")]
public void Move(ForceDirection direction, WitResponseNode node)
{
    ForceMove(direction, node);
}
```
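ForceMovable takes the manual route instead, reading entities straight from the response node. A hedged sketch of that pattern (the entity name and the ApplyForce helper are hypothetical, though GetFirstEntityValue is the same accessor used elsewhere in this project):

```csharp
[MatchIntent("move")]
public void Move(WitResponseNode node)
{
    // Pull the direction entity out of the raw Wit.ai response by hand.
    // The entity name is hypothetical; it must match your Wit.ai app.
    var direction = node.GetFirstEntityValue("direction:direction");
    if (string.IsNullOrEmpty(direction)) return;

    ApplyForce(direction); // hypothetical helper mapping the string to a force
}
```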
The ForceDirection enum is automatically populated from Wit.ai entities. See HeroPlant.cs for enum usage and ForceMovable.cs for manual entity extraction patterns.

Every voice-interactive object derives from the abstract Listenable class, which manages Voice SDK event subscription, response handling, and visual feedback:

```csharp
_appVoiceExperience?.VoiceEvents.OnResponse.AddListener(HandleResponse);
_appVoiceExperience?.VoiceEvents.OnError.AddListener(HandleWitFailOnError);
```
Concrete subclasses include ForceMovable (physics commands), Openable (open/close), Radio (on/off/station), and HaroldTheBird (NPC dialogue). See Listenable.cs for the full hierarchy.

HaroldTheBird uses a TTSSpeaker to speak responses, and animation events sync with TTS playback:

```csharp
var whatToSay = witResponse.GetFirstEntityValue("something:something");
_speaker.SpeakQueued(whatToSay);
```
OnTextPlaybackStart triggers the “Talk” animation, and OnTextPlaybackStop returns to “Idle_Alt”. See HaroldTheBird.cs for the complete pattern.

Each Listenable gets a VoiceUI instance showing real-time transcription, mic levels, and status icons (listening/thinking/success/fail/error). The UI is instantiated by LevelManager.Start() and positioned above the object. See VoiceUI.cs for UI state management.

To add a new voice-interactive object, subclass Listenable, decorate methods with [MatchIntent], and add corresponding intents to your Wit.ai app. Reference Radio.cs for simple on/off patterns or ForceMovable.cs for entity extraction.

To add a new level, create a LevelManager subclass with coroutine-driven narrative beats. Register your scenes in LevelLoader and use progressive object enablement to gate progression.
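The new-object recipe could look roughly like this sketch (the class name, intent names, and using directives are hypothetical and must match your Wit.ai app and SDK version; Listenable's actual hooks are in Scripts/Voice/Listenable.cs):

```csharp
using UnityEngine;
using Meta.WitAi.Json; // WitResponseNode; exact namespace may vary by SDK version

// Hypothetical example subclass: a lamp toggled by voice.
public class VoiceLamp : Listenable
{
    [SerializeField] private Light _light;

    [MatchIntent("turn_on")] // this intent must exist in your Wit.ai app
    public void TurnOn(WitResponseNode node)
    {
        _light.enabled = true;
    }

    [MatchIntent("turn_off")]
    public void TurnOff(WitResponseNode node)
    {
        _light.enabled = false;
    }
}
```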