Precompute Visemes to Save CPU Processing with Unreal
Updated: Sep 14, 2023
End-of-Life Notice for Oculus Spatializer Plugin
This documentation is no longer being updated and is subject to removal.
You can save significant processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is particularly useful for lip-synced animations on non-player characters, or in mobile apps where less processing power is available.
To generate a LipSync sequence:
- Import an audio file to your Unreal project
- Right-click the audio file and choose Generate LipSyncSequence; a sketch of referencing the generated asset from C++ follows this list.
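The generated sequence asset can also be referenced from code rather than picked in the editor. The snippet below is a minimal sketch, assuming the generated asset is of type UOVRLipSyncFrameSequence with a matching header name (taken from the OVRLipSync plugin source and possibly different in your plugin version); the function name LoadPrecomputedSequence and the asset path are purely hypothetical.

```cpp
// Minimal sketch: resolve a precomputed LipSync sequence asset at runtime.
// Assumptions: the "Generate LipSyncSequence" action produces a
// UOVRLipSyncFrameSequence asset and the header name below matches your
// plugin version. The asset path is hypothetical -- substitute your own.
#include "OVRLipSyncFrameSequence.h" // assumed plugin header name
#include "UObject/SoftObjectPath.h"
#include "UObject/SoftObjectPtr.h"

UOVRLipSyncFrameSequence* LoadPrecomputedSequence()
{
	// A soft reference avoids hard-linking the asset until it is needed.
	TSoftObjectPtr<UOVRLipSyncFrameSequence> SequenceRef(
		FSoftObjectPath(TEXT("/Game/Audio/MyLine_LipSyncSequence.MyLine_LipSyncSequence")));

	// LoadSynchronous blocks until the asset is loaded (fine for small assets).
	return SequenceRef.LoadSynchronous();
}
```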
- Add an OVRLipSyncPlaybackActor component to your scene. The OVRLipSyncPlaybackActor component works the same way as an OVRLipSync actor component, but reads the visemes from a precomputed sequence asset instead of generating them in real time.
- Set the Sequence property of the OVRLipSyncPlaybackActor component to the previously precomputed LipSync sequence asset, as shown in the sketch after this list.

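If you prefer to wire this up from C++ instead of the editor, the following is a minimal sketch of an actor that pairs an audio component with a playback component and feeds it the precomputed sequence. The class, header, and property names UOVRLipSyncPlaybackActorComponent, UOVRLipSyncFrameSequence, and Sequence are taken from the OVRLipSync plugin source and may differ in your plugin version; APrecomputedLipSyncActor, VoiceLine, and PrecomputedSequence are hypothetical names introduced here. The exact call that starts sequence playback also varies by plugin version, so check OVRLipSyncPlaybackActorComponent.h.

```cpp
// PrecomputedLipSyncActor.h -- minimal sketch of an actor that plays recorded
// audio alongside a precomputed viseme sequence. Plugin class/property names
// are assumptions; verify them against your OVRLipSync plugin version.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"
#include "Sound/SoundBase.h"
#include "OVRLipSyncPlaybackActorComponent.h" // assumed plugin header name
#include "OVRLipSyncFrameSequence.h"          // assumed plugin header name
#include "PrecomputedLipSyncActor.generated.h"

UCLASS()
class APrecomputedLipSyncActor : public AActor
{
	GENERATED_BODY()

public:
	APrecomputedLipSyncActor()
	{
		AudioComponent = CreateDefaultSubobject<UAudioComponent>(TEXT("Audio"));
		LipSyncPlayback = CreateDefaultSubobject<UOVRLipSyncPlaybackActorComponent>(TEXT("LipSyncPlayback"));
	}

	// The recorded voice line; assigned in the editor.
	UPROPERTY(EditAnywhere, Category = "LipSync")
	USoundBase* VoiceLine = nullptr;

	// The sequence generated from the voice line via "Generate LipSyncSequence";
	// assigned in the editor.
	UPROPERTY(EditAnywhere, Category = "LipSync")
	UOVRLipSyncFrameSequence* PrecomputedSequence = nullptr;

protected:
	virtual void BeginPlay() override
	{
		Super::BeginPlay();

		// Feed the precomputed sequence to the playback component instead of
		// letting an OVRLipSync actor component analyze audio in real time.
		// (Assumes the component exposes a public Sequence property.)
		LipSyncPlayback->Sequence = PrecomputedSequence;

		// Start the voice line; the playback component reads visemes from its
		// Sequence asset. Depending on your plugin version you may also need to
		// call a start/stop function on the playback component -- check
		// OVRLipSyncPlaybackActorComponent.h for the exact API.
		AudioComponent->SetSound(VoiceLine);
		AudioComponent->Play();
	}

	UPROPERTY(VisibleAnywhere)
	UAudioComponent* AudioComponent = nullptr;

	UPROPERTY(VisibleAnywhere)
	UOVRLipSyncPlaybackActorComponent* LipSyncPlayback = nullptr;
};
```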