
Oculus Lipsync for Unreal Engine

Updated: Jul 1, 2020
End-of-Life Notice for Oculus Spatializer Plugin
The Oculus Spatializer Plugin has been replaced by the Meta XR Audio SDK and is now in end-of-life stage. It will not receive any further support beyond v47. We strongly discourage its use. Please navigate to the Meta XR Audio SDK documentation for your specific engine:
- Meta XR Audio SDK for Unity Native
- Meta XR Audio SDK for FMOD and Unity
- Meta XR Audio SDK for Wwise and Unity
- Meta XR Audio SDK for Unreal Native
- Meta XR Audio SDK for FMOD and Unreal
- Meta XR Audio SDK for Wwise and Unreal
This documentation is no longer being updated and is subject to removal.
Oculus Lipsync offers an Unreal Engine plugin, for use on Windows or macOS, that syncs avatar lip movements to speech sounds and laughter. Lipsync analyzes an audio input stream from a microphone or an audio file and predicts a set of values called visemes: gestures or expressions of the lips and face that each correspond to a particular speech sound. The term viseme comes from lip reading, where it is a basic visual unit of intelligibility. In computer animation, visemes can be used to animate avatars so that they appear to be speaking.
Lipsync uses a repertoire of visemes to modify avatars based on a specified audio input stream. Each viseme targets a specified geometry morph target in an avatar to influence the amount that target will be expressed on the model. With Lipsync we can generate realistic lip movement in sync with what is being spoken or heard. This enhances the visual cues that one can use when populating an application with avatars, whether the character is controlled by the user or is a non-playable character (NPC).
The Lipsync system maps to 15 separate viseme targets: sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, and ou. Each viseme describes the facial expression produced when uttering the corresponding speech sound. For example, the viseme sil corresponds to a silent/neutral expression, PP corresponds to pronouncing the first syllable of “popcorn”, and FF to the first syllable of “fish”. See the Viseme Reference Images for images that represent each viseme.
These 15 visemes have been selected to give the maximum range of lip movement, and are agnostic to language. For more information, see the Viseme MPEG-4 Standard.
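To make the viseme-to-morph-target mapping concrete, here is a minimal, self-contained C++ sketch. This is not the plugin's actual API; the helper name and data layout are illustrative. It shows the core idea: each analysis frame yields one weight per viseme, and each weight drives the same-named morph target on the avatar.

```cpp
#include <array>
#include <map>
#include <string>

// The 15 viseme names used by Oculus Lipsync, in canonical order.
static const std::array<std::string, 15> kVisemeNames = {
    "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
    "nn", "RR", "aa", "E", "ih", "oh", "ou"};

// Hypothetical helper: copy per-viseme weights (each in the 0.0-1.0 range)
// onto a map of morph-target influences, so the strongest predicted viseme
// shapes the mouth the most on that frame.
std::map<std::string, float> ApplyVisemeWeights(
    const std::array<float, 15>& weights) {
  std::map<std::string, float> morphTargets;
  for (std::size_t i = 0; i < kVisemeNames.size(); ++i) {
    morphTargets[kVisemeNames[i]] = weights[i];
  }
  return morphTargets;
}
```

In the real plugin, these influences would be applied to the corresponding morph targets on a skeletal mesh each frame; blending between frames keeps the motion smooth.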

Animated Lipsync Example

The following animated image shows Lipsync driving an avatar speaking the phrase “Welcome to the Oculus Lipsync demo.”

Laughter Detection

Lipsync version 1.30.0 and newer supports laughter detection, which can help add more character and emotion to your avatars.
The following animation shows an example of laughter detection.
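One way to combine laughter detection with the viseme output is to let a laughter score override the viseme-driven mouth shape when laughter dominates. The sketch below is a hypothetical, self-contained helper, not the plugin API; the threshold and names are illustrative assumptions.

```cpp
#include <array>
#include <string>

// Hypothetical helper (not the plugin API): given the 15 viseme weights and
// a laughter score, all in the 0.0-1.0 range, pick the expression to show.
std::string DominantExpression(const std::array<float, 15>& visemes,
                               float laughterScore,
                               float laughterThreshold = 0.5f) {
  static const std::array<std::string, 15> kNames = {
      "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
      "nn", "RR", "aa", "E", "ih", "oh", "ou"};
  // When laughter dominates, override the viseme-driven mouth shape.
  if (laughterScore > laughterThreshold) {
    return "laughter";
  }
  // Otherwise report the strongest viseme.
  std::size_t best = 0;
  for (std::size_t i = 1; i < visemes.size(); ++i) {
    if (visemes[i] > visemes[best]) {
      best = i;
    }
  }
  return kNames[best];
}
```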

Requirements

The Oculus Lipsync Unreal plugin is compatible with Unreal Engine 4.20 or later, targeting Android, Windows, and macOS platforms. See the Unreal Engine guide for more details on the recommended versions.

Download and Setup

To start using Lipsync in your Unreal project:
  • Download the Oculus Lipsync Unreal package from the downloads page.
  • Extract the zip archive.
  • Copy the OVRLipSync folder, which contains OVRLipSync.uproject, into your Unreal Engine plugins folder.
    • Find the OVRLipSync folder at the following location: [download-dir]\LipSync\UnrealPlugin\OVRLipSyncDemo\Plugins.
    • You can typically find the Unreal plugins folder under [Install-Directory]\Epic Games\UE_x.xx\Engine\Plugins. For example, for Unreal Engine 4.20 on Windows, this is usually C:\Program Files\Epic Games\UE_4.20\Engine\Plugins.
  • Create a new project or open an existing project in Unreal Engine. From the Edit menu, select Plugins and then Audio. You should see the Oculus Lipsync plugin as one of the options. Select Enabled to enable the plugin for your project.
    The following image shows an example.
  • Alternatively, you can open the OVRLipSync.uproject with Unreal Engine.
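The copy step above amounts to placing one folder inside the engine's Plugins directory. The shell sketch below illustrates this with mock paths (substitute your actual download and install locations; on Windows you would do the equivalent in File Explorer). The mock-directory setup only simulates the extracted archive for demonstration.

```shell
# Mock paths for illustration; use your real download and engine locations.
DOWNLOAD_DIR="$PWD/mock-download/LipSync"
ENGINE_PLUGINS="$PWD/mock-engine/UE_4.20/Engine/Plugins"

# Simulate the extracted download (a real archive already contains this tree).
mkdir -p "$DOWNLOAD_DIR/UnrealPlugin/OVRLipSyncDemo/Plugins/OVRLipSync"
touch "$DOWNLOAD_DIR/UnrealPlugin/OVRLipSyncDemo/Plugins/OVRLipSync/OVRLipSync.uproject"

# The actual step: copy the OVRLipSync folder into the engine's Plugins folder.
mkdir -p "$ENGINE_PLUGINS"
cp -R "$DOWNLOAD_DIR/UnrealPlugin/OVRLipSyncDemo/Plugins/OVRLipSync" "$ENGINE_PLUGINS/"
```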

Topic Guide

- Using Oculus Lipsync
- Use precomputed visemes to improve performance
- Lipsync Sample
- Viseme reference images