Use Oculus Lipsync for Unreal
Updated: Apr 17, 2026
End-of-Life Notice for Oculus Lipsync Plugin
The Oculus Lipsync Plugin has reached its end-of-life stage and will not receive further updates or support. For audio-driven lip sync, the
Movement SDK provides equivalent viseme functionality through audio-based face tracking, delivering the same 15 visemes via the
XR_META_face_tracking_visemes OpenXR extension.
The OculusXRMovement module in the Movement SDK supports both visual-based face tracking (Meta Quest Pro) and audio-based face tracking (Meta Quest 2 and later). Audio-based face tracking generates visemes from audio input, providing a migration path from the Oculus Lipsync Plugin.
This documentation is no longer being updated and is subject to removal.
This guide describes how to use the Oculus Lipsync Plugin in your own Unreal projects. You may find it helpful to use the demo project as a reference. Complete the
download and setup steps to add Lipsync to your Unreal project before continuing.
Using the OVRLipSync Actor Component
To use Lipsync in live mode:
Add the OVRLipSync Actor component to each Actor that has morph targets you want to control. Select the Actor you want to use to drive lip animation, choose Add Component, and add the OVRLipSync Actor component. The following image shows an example.

- The OVRLipSync Actor component provides the following options:
  - Provider Kind specifies the type of lipsync provider to use. Available options are:
    - Original
    - Enhanced
    - Enhanced with Laughter
  - Sample Rate specifies the sample rate of the input audio stream.
  - Enable Hardware Acceleration specifies whether DSP acceleration should be used on supported platforms. The following image shows an example.
- In the Actor or Level Blueprint, read the visemes and update the appropriate morph targets in the On Visemes Ready event.
- Start live capture by calling the component's Start function.
When a prediction is ready, the OVRLipSync Actor component will trigger the On Visemes Ready event.
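To illustrate the kind of logic an On Visemes Ready handler performs, the sketch below takes an array of per-viseme probabilities (the kind of array GetVisemes returns) and finds the strongest viseme for the current frame. This is plain, engine-free C++: the `kVisemeNames` subset and the `StrongestViseme` helper are illustrative only, not part of the plugin API, and in a real project this logic would live in your Actor's Blueprint or C++ event handler.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <string>

// Illustrative subset of the 15 Oculus visemes ("sil" = silence).
// In the plugin, the real names come from GetVisemeNames; these are
// hard-coded here so the sketch is self-contained.
static const std::array<std::string, 5> kVisemeNames = {
    "sil", "PP", "FF", "TH", "DD"};

// Given per-viseme probabilities for one audio frame, return the
// index of the strongest viseme.
std::size_t StrongestViseme(const std::array<float, 5>& probabilities) {
    return static_cast<std::size_t>(
        std::max_element(probabilities.begin(), probabilities.end()) -
        probabilities.begin());
}
```

In practice you usually do not pick a single winner: instead, each probability is written into the matching morph target weight so the mouth shape blends smoothly, which is what the Assign Visemes To Morph Targets function does for you.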
Driving Your Actor Lip Animations with Lipsync
The OVRLipSync Actor component also defines the following Blueprint functions to drive your Actor's lip animations:
| Function/Method | Result |
|---|---|
| GetVisemes | Returns the current array of viseme probabilities. |
| GetVisemeNames | Returns the default list of viseme names. |
| GetLaughterScore | Returns the laughter probability of the current audio frame (non-zero only when the component is configured to use the Enhanced with Laughter provider). |
| FeedAudio | Feeds audio data (packed as a mono, 16-bit signed integer audio stream at the specified sample rate) into the Oculus Lipsync engine. |
| Assign Visemes To Morph Targets | Takes an array of Morph Target names and a Skeletal Mesh component, and assigns the current viseme weights to those targets. |
| Start | Starts live processing of the audio stream. |
| Stop | Stops live processing of the audio stream. |
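FeedAudio expects mono, 16-bit signed integer samples at the component's configured sample rate, so audio captured or generated as floats must be converted first. The engine-free sketch below shows one way to do that conversion; the `PackFloatToPcm16` helper is illustrative, not part of the plugin API.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert float audio in [-1.0, 1.0] to the mono, 16-bit signed
// integer format that FeedAudio expects. Input samples are clamped
// so out-of-range values cannot overflow the 16-bit range.
std::vector<int16_t> PackFloatToPcm16(const std::vector<float>& samples) {
    std::vector<int16_t> packed;
    packed.reserve(samples.size());
    for (float s : samples) {
        const float clamped = std::max(-1.0f, std::min(1.0f, s));
        packed.push_back(static_cast<int16_t>(clamped * 32767.0f));
    }
    return packed;
}
```

The resulting buffer would then be passed to FeedAudio in chunks; make sure the sample rate of the source audio matches the Sample Rate option set on the component, or the visemes will drift out of sync with the speech.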