Fixed foveated rendering (FFR)

Updated: Apr 7, 2026
This guide covers the technical details of fixed foveated rendering (FFR) and its implementation in the Quest operating system. To learn how to implement FFR in your application, go to the Unreal, Unity, or Native documentation pages.
Meta Quest devices support fixed foveated rendering (FFR). FFR enables the edges of an application-generated frame to be rendered at a lower resolution than the center portion of the frame. This lowers the fidelity of the scene in the viewer’s peripheral vision, where the reduction is unlikely to be noticed.
This reduction in rendered pixels can provide several benefits:
  • Improves framerate in applications with GPU fill bottlenecks.
  • Reduces power consumption, and thereby reduces heat and increases battery life.
  • Enables applications to increase the resolution of eye textures, which improves the viewing experience, while maintaining performance and power consumption levels.
Note: FFR does not rely on eye-tracking. Rather, the high-resolution pixels (the fovea) are fixed in the center of the frame. However, the Meta Quest Pro does have eye-tracking cameras, which are required to place high-resolution pixels where the eye is looking. For more information about this, see Eye Tracked Foveated Rendering.
FFR has some tradeoffs:
  • FFR may not improve performance in applications with simple shaders.
  • Applications using FFR should aim to place high-contrast items, such as text, in the center of the frame. Applications that encourage players to look at the edges of the screen (for example, by placing user interface elements on the avatar’s belt) will cause users to notice the degraded image quality.

FFR in more detail

The image below shows a user’s perception of a 135° field of view (hemisphere), with two 20° arcs highlighted. The 2D plane of the screen that renders this view (horizontal line) is overlaid on top. Notice how, when comparing the 20° arc at the edge of the field of view with the 20° arc at the center, the arc at the edge takes up much more of the screen. This distortion is an unavoidable part of rendering a 3D world on a screen.
More pixels are required to create the post-distortion areas at the edge of the FOV than at the center, resulting in a higher pixel density at the edge of the FOV than in the middle. This is highly counterproductive, since users generally look toward the center of the screen. On top of that, lenses blur the edge of the field of view, so even though many pixels have been rendered in that part of the eye texture, the sharpness of the image is lost. In short, the GPU spends a lot of time rendering pixels at the edge of the FOV that can’t be clearly seen. FFR reclaims these wasted GPU resources by lowering the resolution of those screen portions.
Like most mobile computers, Meta Quest headsets use tiled rendering, in which a frame is split into dozens of “tiles” of clustered pixels, and each tile is rendered separately. FFR is implemented by controlling the resolution of individual render tiles on the GPU. When FFR is enabled, tiles that are closer to the edges of the eye buffer are rendered at a lower resolution than tiles closer to the center.
The gains (or losses) provided by FFR typically depend on your application’s pixel shader costs. FFR can result in a 25% gain in performance with pixel-intensive applications. On the other hand, applications with very simple shaders, which are not bound on GPU fill, will likely not see a significant improvement from FFR. A highly ALU-bound application benefits the most, as shown in the graph below, which plots GPU utilization for a test scene. Given that 16% of GPU utilization comes from timewarp (and is therefore not affected by FFR), the graph shows a 6.5% performance improvement from the low setting, an 11.5% improvement from the medium setting, and a 21% improvement from the high setting.
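The relationship between timewarp overhead and the improvement numbers above can be sketched as simple arithmetic. The helper below is illustrative (not part of any SDK): it applies a fractional saving only to the application's share of GPU utilization, since the timewarp share is unaffected by FFR.

```c
/* Illustrative arithmetic, not SDK code: given the total GPU utilization
 * (as a fraction), the portion consumed by timewarp (0.16 in the text),
 * and the fractional saving FFR applies to the app's rendering work,
 * return the overall GPU utilization after FFR. Only the app portion
 * shrinks; the timewarp portion is untouched. */
static double gpu_after_ffr(double total, double timewarp, double saving) {
    double app = total - timewarp;          /* work that FFR can reduce */
    return timewarp + app * (1.0 - saving); /* timewarp cost is fixed   */
}
```

For example, if an app fully saturates the GPU (1.0) with 0.16 of that coming from timewarp, a 25% saving on the app's rendering work brings overall utilization down to 0.79, not 0.75.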
This demonstrates a best-case scenario for using FFR. If you perform the same test on an application with very simple pixel shaders, the low setting can actually produce a net loss, because the fixed overhead of using FFR can exceed the rendering savings on a relatively small number of pixels. In this situation you might still see a slight gain with the high setting, but it won’t be worth the image quality loss.

FFR render examples

The images below show the FFR resolution multiplier map (also called the fragment density map) for the left eye, and an example tilemap for a frame for the left eye, on a Meta Quest 3, at each FFR setting. Note that the positions and sizes of tiles can change depending on your headset and render settings, but will always approximate the FFR resolution multiplier map. The colors represent the following resolution levels:
  • White = Full resolution: This is the center of the FOV. Every pixel of the texture is computed independently by the GPU.
  • Green = 1/4 resolution: Only one quarter of the pixels are calculated by the GPU.
  • Black = No resolution: The GPU does not render anything in this area.
FFR Setting | Left Eye Resolution Multiplier Map | Example Left Eye Tile Map
Low | Low FFR Map | Low FFR Tiles
Medium | Medium FFR Map | Medium FFR Tiles
High | High FFR Map | High FFR Tiles
High Top (VrApi only) | High Top FFR Map | High Top FFR Tiles
Note: The un-calculated (black) pixels on the right edge of the left eye resolution multiplier map are due to rendering with Symmetric Field of View enabled. This is a recommended optimization for Meta Quest devices, in which the left and right eye render at the same field of view, and pixels on the far edge for each eye are not rendered. This results in the same output image as an asymmetric field of view.
Note: The resolution multiplier maps for the right eye are just horizontal mirrors of the displayed left-eye maps.
Note: High Top is not available when using the OpenXR interface (see OpenXR, VrApi, and LibOVR); setting FFR to High Top will be treated as High. In OpenXR, High Top FFR can be reproduced by using the verticalOffset parameter with the High FFR level (see the OpenXR docs).
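As a rough illustration of how a multiplier map translates into shading work, the helper below estimates the fraction of eye-buffer pixels actually shaded, given the share of the buffer covered by each level in the legend above (full resolution, 1/4 resolution, and unrendered black). The area fractions passed in are made-up inputs for illustration, not measured Quest 3 map data.

```c
/* Illustrative only: estimate the fraction of eye-buffer pixels the GPU
 * shades, given the area fractions covered by each multiplier from the
 * legend (white = full, green = 1/4, black = not rendered). The input
 * fractions are hypothetical, not real device map data. */
static double shaded_fraction(double full_area, double quarter_area,
                              double black_area) {
    return full_area * 1.0      /* every pixel computed        */
         + quarter_area * 0.25  /* one quarter of pixels       */
         + black_area * 0.0;    /* nothing rendered            */
}
```

For instance, a map that is 60% full resolution, 30% quarter resolution, and 10% black would shade only about 67.5% of the buffer's pixels.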

Tips and best practices

Many developers simply use the high FFR setting for their entire app as a general performance solution, but this can have a very noticeable impact on visual quality. Here are some tips and best practices for tuning FFR settings in-game:
  • FFR levels can be changed on a per-frame basis and should be tuned to the content being displayed. Starting a new level, opening or closing menus, and entering new map areas are generally good points at which to consider changing the FFR setting. However, avoid changing FFR levels frequently within the same scene without another transition, as the jump between FFR levels can be fairly noticeable.
    • A simple, effective example is to turn off FFR on menu screens, where there is performance headroom to spare and a lot of text elements, and then turn it on after loading into the game, where the performance is needed.
    • Another example is to set FFR to high in a complex outdoor scene, then turn it off in a simpler indoor level.
  • Applications can use FFR for their in-world scene, but preserve the pixel density of user interfaces on the edge of the frame, by placing user interfaces on Compositor Layers. Compositor layers are not affected by FFR settings.
  • The system property debug.oculus.foveation.level is a system-wide FFR setting override that can be used to quickly test different FFR settings without changing, reinstalling, or restarting the app, with 0 = Off, 1 = Low, 2 = Medium, 3 = High, 4 = High Top. For example, adb shell setprop debug.oculus.foveation.level 2 sets the FFR level to medium.
    • When using this system property for debugging, make sure dynamic foveation is disabled (adb shell setprop debug.oculus.foveation.dynamic 0). Otherwise, debug.oculus.foveation.level only controls the maximum foveation level, and you won’t be able to set the level directly.
  • Use the VrApi logcat outputs to determine suitable FFR levels. VrApi generates performance information in logcat every second, including FFR level, GPU rendering times, and average FPS. For instance, a VrApi log that shows an app running at 72 fps (FPS=72), with the app’s GPU rendering time at 5.51ms (App=5.51ms), and FFR on high (Fov=3), would indicate more than enough room to turn the FFR level down for a general improvement in the visual quality.
    • When dynamic foveation is enabled, the level is reported with a D suffix, such as Fov=3D, in the VrApi logcat output.
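The log fields mentioned above can be pulled out programmatically when automating performance sweeps. The sketch below scans a logcat line for the FPS=, App=, and Fov= tokens; the exact layout of VrApi log lines varies between SDK versions, so this is a best-effort extractor over the tokens named in the text, not a full parser of the real format.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Best-effort extraction of the FPS, App GPU time, and FFR level fields
 * from a VrApi logcat line. The full line format varies by SDK version,
 * so we only scan for the "FPS=", "App=", and "Fov=" tokens mentioned
 * in the docs. Returns false if any token is missing or unparsable. */
static bool parse_vrapi_line(const char *line, int *fps,
                             double *app_ms, int *fov, bool *dynamic) {
    const char *p;
    char suffix = '\0';
    if ((p = strstr(line, "FPS=")) == NULL ||
        sscanf(p, "FPS=%d", fps) != 1)
        return false;
    if ((p = strstr(line, "App=")) == NULL ||
        sscanf(p, "App=%lfms", app_ms) != 1)
        return false;
    if ((p = strstr(line, "Fov=")) == NULL ||
        sscanf(p, "Fov=%d%c", fov, &suffix) < 1)
        return false;
    *dynamic = (suffix == 'D');  /* "Fov=3D" marks dynamic foveation */
    return true;
}
```

A caller could then apply the rule of thumb from the text: at 72 FPS with App well under the ~13.8 ms frame budget and Fov=3, there is headroom to lower the FFR level.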

Implementing FFR in native OpenXR (Vulkan)

Native OpenXR applications use the XR_FB_foveation family of extensions to configure FFR and retrieve the fragment density map for Vulkan rendering.

Required OpenXR extensions

Enable the following extensions at instance creation:
Extension | Purpose
XR_FB_swapchain_update_state | Required dependency; provides xrUpdateSwapchainFB
XR_FB_foveation | Core foveation API: profile creation and swapchain foveation state
XR_FB_foveation_configuration | Foveation level configuration (none/low/medium/high, dynamic toggle)
XR_FB_foveation_vulkan | Vulkan-specific: retrieves fragment density map VkImage handles
XR_META_vulkan_swapchain_create_info | Passes Vulkan-specific create flags (for example, the subsampled bit) to the runtime

Required Vulkan device extensions

Enable these Vulkan device extensions on your VkDevice:
  • VK_EXT_fragment_density_map — Core fragment density map support
  • VK_EXT_fragment_density_map2 — Improved latency for reading fragment density maps on Qualcomm hardware

Step 1: Create a swapchain with foveation enabled

Chain XrSwapchainCreateInfoFoveationFB into your XrSwapchainCreateInfo to request fragment density map–based foveation:
XrSwapchainCreateInfoFoveationFB foveationCreateInfo = {
    .type = XR_TYPE_SWAPCHAIN_CREATE_INFO_FOVEATION_FB,
    .flags = XR_SWAPCHAIN_CREATE_FOVEATION_FRAGMENT_DENSITY_MAP_BIT_FB,
};

XrSwapchainCreateInfo swapchainCreateInfo = {
    .type = XR_TYPE_SWAPCHAIN_CREATE_INFO,
    .next = &foveationCreateInfo,
    .usageFlags = XR_SWAPCHAIN_USAGE_SAMPLED_BIT | XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT,
    .format = vulkanColorFormat,
    .sampleCount = 1,
    .width = width,
    .height = height,
    .faceCount = 1,
    .arraySize = 2,  // multiview
    .mipCount = 1,
};

XrSwapchain swapchain;
xrCreateSwapchain(session, &swapchainCreateInfo, &swapchain);

Step 2: Retrieve fragment density map images

When enumerating swapchain images, chain XrSwapchainImageFoveationVulkanFB to receive the fragment density map VkImage for each swapchain buffer:
XrSwapchainImageVulkanKHR colorImages[bufferCount];
XrSwapchainImageFoveationVulkanFB fdmImages[bufferCount];

for (uint32_t i = 0; i < bufferCount; i++) {
    fdmImages[i].type = XR_TYPE_SWAPCHAIN_IMAGE_FOVEATION_VULKAN_FB;
    fdmImages[i].next = NULL;
    fdmImages[i].image = VK_NULL_HANDLE;

    colorImages[i].type = XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR;
    colorImages[i].next = &fdmImages[i];
}

xrEnumerateSwapchainImages(swapchain, bufferCount, &bufferCount,
    (XrSwapchainImageBaseHeader*)colorImages);

// fdmImages[i].image now contains the VkImage for the fragment density map
// fdmImages[i].width and fdmImages[i].height contain the FDM dimensions
Use the returned fdmImages[i].image as the fragment density map attachment in your VkRenderPassFragmentDensityMapCreateInfoEXT.
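The fragment density map is much smaller than the eye buffer: each FDM texel covers a block of framebuffer pixels, with the block size reported by the Vulkan implementation in VkPhysicalDeviceFragmentDensityMapPropertiesEXT (minFragmentDensityTexelSize / maxFragmentDensityTexelSize). As a sanity check on the fdmImages[i].width/height values returned above, the expected extent is a rounded-up division; the 32-pixel texel size in the example below is an illustrative value, not a guaranteed hardware constant.

```c
#include <stdint.h>

/* Expected fragment density map extent for one axis: each FDM texel
 * covers `texel_size` framebuffer pixels, so the map extent is the
 * eye-buffer extent divided by the texel size, rounded up. */
static uint32_t fdm_extent(uint32_t framebuffer_extent, uint32_t texel_size) {
    return (framebuffer_extent + texel_size - 1) / texel_size;
}
```

For example, with a hypothetical 32-pixel texel size, a 1680-pixel-wide eye buffer maps to a 53-texel-wide FDM.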

Step 3: Create and apply a foveation profile

// Get function pointers
PFN_xrCreateFoveationProfileFB xrCreateFoveationProfileFB;
PFN_xrDestroyFoveationProfileFB xrDestroyFoveationProfileFB;
PFN_xrUpdateSwapchainFB xrUpdateSwapchainFB;
xrGetInstanceProcAddr(instance, "xrCreateFoveationProfileFB",
    (PFN_xrVoidFunction*)&xrCreateFoveationProfileFB);
xrGetInstanceProcAddr(instance, "xrDestroyFoveationProfileFB",
    (PFN_xrVoidFunction*)&xrDestroyFoveationProfileFB);
xrGetInstanceProcAddr(instance, "xrUpdateSwapchainFB",
    (PFN_xrVoidFunction*)&xrUpdateSwapchainFB);

// Create a foveation level profile
XrFoveationLevelProfileCreateInfoFB levelProfile = {
    .type = XR_TYPE_FOVEATION_LEVEL_PROFILE_CREATE_INFO_FB,
    .level = XR_FOVEATION_LEVEL_HIGH_FB,  // NONE, LOW, MEDIUM, or HIGH
    .verticalOffset = 0.0f,
    .dynamic = XR_FOVEATION_DYNAMIC_LEVEL_ENABLED_FB,  // or DISABLED
};

XrFoveationProfileCreateInfoFB profileCreateInfo = {
    .type = XR_TYPE_FOVEATION_PROFILE_CREATE_INFO_FB,
    .next = &levelProfile,
};

XrFoveationProfileFB foveationProfile;
xrCreateFoveationProfileFB(session, &profileCreateInfo, &foveationProfile);

// Apply the profile to the swapchain
XrSwapchainStateFoveationFB foveationState = {
    .type = XR_TYPE_SWAPCHAIN_STATE_FOVEATION_FB,
    .profile = foveationProfile,
};
xrUpdateSwapchainFB(swapchain, (XrSwapchainStateBaseHeaderFB*)&foveationState);
The foveation profile can be updated on a per-frame basis by calling xrUpdateSwapchainFB with a new profile.
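One way to drive those per-frame updates is to pick the level from the app's measured GPU time relative to the frame budget, echoing the tuning advice earlier in this guide. The thresholds below are an illustrative heuristic, not values from the SDK; the 0..3 result follows the NONE/LOW/MEDIUM/HIGH ordering of XR_FOVEATION_LEVEL_*_FB.

```c
/* Illustrative heuristic (not SDK code): map the app's GPU frame time,
 * relative to the frame budget (e.g. ~13.8 ms at 72 Hz), to a foveation
 * level 0..3 in the NONE/LOW/MEDIUM/HIGH order of XrFoveationLevelFB.
 * Thresholds are made up for illustration; tune them per app. */
static int pick_foveation_level(double app_gpu_ms, double budget_ms) {
    double load = app_gpu_ms / budget_ms;
    if (load < 0.70) return 0;  /* plenty of headroom: no foveation */
    if (load < 0.85) return 1;  /* LOW    */
    if (load < 0.95) return 2;  /* MEDIUM */
    return 3;                   /* HIGH   */
}
```

The chosen level would then be fed into a new XrFoveationLevelProfileCreateInfoFB and applied with xrUpdateSwapchainFB, ideally only at scene transitions so the level change isn't noticeable.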

Enabling subsampled layout in native OpenXR (Vulkan)

Subsampled layout is a Vulkan feature that works with foveated rendering to avoid wasting memory bandwidth. When FFR reduces the rendering resolution in peripheral regions, the rendered content is kept at the resolution at which it was rendered, within a subregion of the image buffer, instead of being stretched to fill the full buffer.
This provides two benefits:
  • Performance: Reduces the amount of data that must be copied from GPU memory to main memory, saving bandwidth.
  • Visual quality: Sampling at the original render resolution produces a less aliased result — bilinear filtering works on the actual rendered pixels rather than stretched ones, eliminating the blocky artifacts visible in the peripheral regions without subsampled layout.
Subsampled layout is available on all Quest devices and is strongly recommended for any application using FFR with Vulkan. It requires the VK_EXT_fragment_density_map2 Vulkan device extension (in addition to VK_EXT_fragment_density_map).
If your application uses post-processing effects, subsampled layout may hurt performance because intermediate render passes will not have foveation applied. Test with and without subsampled layout enabled when using post-processing pipelines.

How to enable subsampled layout

To enable subsampled layout, chain XrVulkanSwapchainCreateInfoMETA into your swapchain creation with the VK_IMAGE_CREATE_SUBSAMPLED_BIT_EXT flag, in addition to the foveation create info:
// Request foveation with fragment density map
XrSwapchainCreateInfoFoveationFB foveationCreateInfo = {
    .type = XR_TYPE_SWAPCHAIN_CREATE_INFO_FOVEATION_FB,
    .flags = XR_SWAPCHAIN_CREATE_FOVEATION_FRAGMENT_DENSITY_MAP_BIT_FB,
};

// Pass the subsampled bit to the Vulkan image
XrVulkanSwapchainCreateInfoMETA vulkanCreateInfo = {
    .type = XR_TYPE_VULKAN_SWAPCHAIN_CREATE_INFO_META,
    .additionalCreateFlags = VK_IMAGE_CREATE_SUBSAMPLED_BIT_EXT,
    .additionalUsageFlags = 0,
};

// Chain: swapchainCreateInfo -> foveationCreateInfo -> vulkanCreateInfo
foveationCreateInfo.next = &vulkanCreateInfo;

XrSwapchainCreateInfo swapchainCreateInfo = {
    .type = XR_TYPE_SWAPCHAIN_CREATE_INFO,
    .next = &foveationCreateInfo,
    .usageFlags = XR_SWAPCHAIN_USAGE_SAMPLED_BIT | XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT,
    .format = vulkanColorFormat,
    .sampleCount = 1,
    .width = width,
    .height = height,
    .faceCount = 1,
    .arraySize = 2,
    .mipCount = 1,  // must be 1 for subsampled images
};

XrSwapchain swapchain;
xrCreateSwapchain(session, &swapchainCreateInfo, &swapchain);

Important constraints for subsampled images

  • Mip count must be 1: Subsampled images do not support multiple mipmap levels.
  • Immutable samplers: Samplers used with subsampled images must be immutable samplers bound at descriptor set layout creation time via pImmutableSamplers. Create the sampler with VK_SAMPLER_CREATE_SUBSAMPLED_BIT_EXT.
  • Sampler configuration: Use VK_FILTER_LINEAR, VK_SAMPLER_MIPMAP_MODE_NEAREST, and clamp LOD to [0, 0].
  • Device support: The runtime checks device capability at swapchain creation. If the device does not support subsampled layout, the flag is silently ignored.
For full details on subsampled image constraints, see the Vulkan specification for fragment density map.

Debugging subsampled layout

Use the following system property to force subsampled layout on or off for testing:
adb shell setprop debug.oculus.foveation.subsampled 1  # force on
adb shell setprop debug.oculus.foveation.subsampled 0  # force off