Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
FSR2 uses temporal feedback to reconstruct high-resolution images while maintaining and even improving image quality compared to native rendering.
FSR2 can enable “practical performance” for costly render operations, such as hardware ray tracing.
| Shading language | Shader model |
|---|---|
| HLSL | `CS_6_2`, `CS_6_6`* |

\* `CS_6_6` is used on some hardware which supports 64-wide wavefronts.
To use FSR2 you should follow the steps below:

1. Double click `GenerateSolutions.bat` in the `build` directory.
2. Open the solution matching your API, and build the solution.
3. Copy the API library from `bin/ffx_fsr2_api` into the folder in your project which contains third-party libraries.
4. Copy the library matching the FSR2 backend you want to use, e.g.: `bin/ffx_fsr2_api/ffx_fsr2_api_dx12_x64.lib` for DirectX12.
5. Copy the following core API header files from `src/ffx-fsr2-api` into your project: `ffx_fsr2.h`, `ffx_types.h`, `ffx_error.h`, `ffx_fsr2_interface.h`, `ffx_util.h`, `shaders/ffx_fsr2_common.h`, and `shaders/ffx_fsr2_resources.h`. Care should be taken to maintain the relative directory structure at the destination of the file copying.
6. Copy the header files for the API backend of your choice, e.g. for DirectX12 you would copy `dx12/ffx_fsr2_dx12.h` and `dx12/shaders/ffx_fsr2_shaders_dx12.h`. Care should be taken to maintain the relative directory structure at the destination of the file copying.
7. Include the `ffx_fsr2.h` header file in your codebase where you wish to interact with FSR2.
8. Create a backend for your target API. E.g. for DirectX12 you should call `ffxFsr2GetInterfaceDX12`. A scratch buffer should be allocated of the size returned by calling `ffxFsr2GetScratchMemorySizeDX12`, and the pointer to that buffer passed to `ffxFsr2GetInterfaceDX12`. (Steps 8 through 11 are sketched in code after this list.)
9. Create an FSR2 context by calling `ffxFsr2ContextCreate`. The parameters structure should be filled out matching the configuration of your application. See the API reference documentation for more details.
10. Each frame you should call `ffxFsr2ContextDispatch` to launch FSR2 workloads. The parameters structure should be filled out matching the configuration of your application; see the API reference documentation for more details, and ensure the `frameTimeDelta` field is provided in milliseconds.
11. When your application is terminating (or you wish to destroy the context for another reason) you should call `ffxFsr2ContextDestroy`. The GPU should be idle before calling this function.
12. Sub-pixel jittering should be applied to your application's projection matrix. This should be done when performing the main rendering of your application. You should use the `ffxFsr2GetJitterOffset` function to compute the precise jitter offsets. See the Camera jitter section for more details.
13. For the best upscaling quality it is strongly advised that you populate the Reactive mask and Transparency & composition mask according to our guidelines. You can also use `ffxFsr2ContextGenerateReactiveMask` as a starting point.
14. Applications should expose scaling modes in their user interface in the following order: Quality, Balanced, Performance, and (optionally) Ultra Performance.
15. Applications should also expose a sharpening slider to allow end users to achieve additional quality.
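The sketch below strings steps 8 through 11 together for the DirectX12 backend. It assumes `device` is a valid `ID3D12Device*` and that `renderWidth`, `renderHeight`, `displayWidth`, and `displayHeight` are already known to the application; error handling is reduced to asserts for brevity.

```cpp
#include <cstdlib>
#include "ffx_fsr2.h"
#include "dx12/ffx_fsr2_dx12.h"

FfxFsr2Context context;
FfxFsr2ContextDescription contextDescription = {};

// Step 8: create the DirectX12 backend, backed by a scratch buffer.
const size_t scratchBufferSize = ffxFsr2GetScratchMemorySizeDX12();
void* scratchBuffer = malloc(scratchBufferSize);
FFX_ASSERT(ffxFsr2GetInterfaceDX12(&contextDescription.callbacks, device, scratchBuffer, scratchBufferSize) == FFX_OK);

// Step 9: create the FSR2 context.
contextDescription.device = ffxGetDeviceDX12(device);
contextDescription.maxRenderSize = { renderWidth, renderHeight };
contextDescription.displaySize = { displayWidth, displayHeight };
FFX_ASSERT(ffxFsr2ContextCreate(&context, &contextDescription) == FFX_OK);

// Step 10: each frame, fill out a dispatch description and launch the workloads.
FfxFsr2DispatchDescription dispatchDescription = {};
// ... fill out inputs, jitter offsets, frameTimeDelta (in milliseconds), etc. ...
FFX_ASSERT(ffxFsr2ContextDispatch(&context, &dispatchDescription) == FFX_OK);

// Step 11: at teardown, with the GPU idle, destroy the context.
FFX_ASSERT(ffxFsr2ContextDestroy(&context) == FFX_OK);
free(scratchBuffer);
```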
For the convenience of end users, the FSR2 API provides a number of named preset scaling ratios.
| Quality mode | Per-dimension scaling factor |
|---|---|
| Quality | 1.5x |
| Balanced | 1.7x |
| Performance | 2.0x |
| Ultra performance | 3.0x |
We strongly recommend that applications adopt consistent naming and scaling ratios in their user interface. This ensures a consistent experience for users who may be familiar with other applications using FSR2.
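For example, the render resolution matching a user-selected preset can be derived with the `ffxFsr2GetRenderResolutionFromQualityMode` helper. A minimal sketch, assuming `displayWidth` and `displayHeight` hold the presentation resolution:

```cpp
uint32_t renderWidth = 0;
uint32_t renderHeight = 0;

// Derive the render resolution for the Balanced (1.7x per dimension) preset.
FfxErrorCode errorCode = ffxFsr2GetRenderResolutionFromQualityMode(
    &renderWidth, &renderHeight,
    displayWidth, displayHeight,
    FFX_FSR2_QUALITY_MODE_BALANCED);
FFX_ASSERT(errorCode == FFX_OK);
```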
Depending on your target hardware and operating configuration, FSR2 will operate at different performance levels.
The table below summarizes the measured performance of FSR2 on a variety of hardware in DX12.
| Target resolution | Quality | RX 7900 XTX | RX 6950 XT | RX 6900 XT | RX 6800 XT | RX 6800 | RX 6700 XT | RX 6650 XT | RX 5700 XT | RX Vega 56 | RX 590 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 3840x2160 | Quality (1.5x) | 0.7ms | 1.1ms | 1.2ms | 1.2ms | 1.4ms | 2.0ms | 2.8ms | 2.4ms | 4.9ms | 5.4ms |
| | Balanced (1.7x) | 0.6ms | 1.0ms | 1.0ms | 1.1ms | 1.4ms | 1.8ms | 2.6ms | 2.2ms | 4.1ms | 4.9ms |
| | Performance (2x) | 0.6ms | 0.9ms | 1.0ms | 1.0ms | 1.3ms | 1.7ms | 2.3ms | 2.0ms | 3.6ms | 4.4ms |
| | Ultra perf. (3x) | 0.5ms | 0.8ms | 0.8ms | 0.9ms | 1.1ms | 1.5ms | 1.8ms | 1.7ms | 2.9ms | 3.7ms |
| 2560x1440 | Quality (1.5x) | 0.3ms | 0.5ms | 0.5ms | 0.5ms | 0.7ms | 0.9ms | 1.2ms | 1.1ms | 1.9ms | 2.3ms |
| | Balanced (1.7x) | 0.3ms | 0.5ms | 0.5ms | 0.5ms | 0.6ms | 0.8ms | 1.1ms | 1.0ms | 1.7ms | 2.1ms |
| | Performance (2x) | 0.3ms | 0.4ms | 0.4ms | 0.4ms | 0.6ms | 0.8ms | 0.9ms | 0.9ms | 1.5ms | 1.9ms |
| | Ultra perf. (3x) | 0.2ms | 0.4ms | 0.4ms | 0.4ms | 0.5ms | 0.7ms | 0.8ms | 0.8ms | 1.2ms | 1.7ms |
| 1920x1080 | Quality (1.5x) | 0.2ms | 0.3ms | 0.3ms | 0.3ms | 0.4ms | 0.5ms | 0.6ms | 0.6ms | 1.0ms | 1.3ms |
| | Balanced (1.7x) | 0.2ms | 0.3ms | 0.3ms | 0.3ms | 0.4ms | 0.5ms | 0.6ms | 0.6ms | 0.9ms | 1.2ms |
| | Performance (2x) | 0.2ms | 0.2ms | 0.2ms | 0.3ms | 0.3ms | 0.5ms | 0.5ms | 0.5ms | 0.8ms | 1.1ms |
| | Ultra perf. (3x) | 0.1ms | 0.2ms | 0.2ms | 0.2ms | 0.3ms | 0.4ms | 0.4ms | 0.4ms | 0.7ms | 0.9ms |
Figures are rounded to the nearest 0.1ms, are measured without additional sharpening, and are subject to change.
Using FSR2 requires some additional GPU local memory to be allocated for consumption by the GPU. When using the FSR2 API, this memory is allocated when the FSR2 context is created, and is done so via the series of callbacks which comprise the backend interface. This memory is used to store intermediate surfaces which are computed by the FSR2 algorithm as well as surfaces which are persistent across many frames of the application. The table below includes the amount of memory used by FSR2 under various operating conditions. The "Working set" column indicates the total amount of memory used by FSR2 as the algorithm is executing on the GPU; this is the amount of memory FSR2 will require to run. The "Persistent memory" column indicates how much of the "Working set" column is required to be left intact for subsequent frames of the application; this memory stores the temporal data consumed by FSR2. The "Aliasable memory" column indicates how much of the "Working set" column may be aliased by surfaces or other resources used by the application outside of the operating boundaries of FSR2.
You can take control of resource creation in FSR2 by overriding the resource creation and destruction parts of the FSR2 backend interface, and forwarding the aliasing flags. This means that for a perfect integration of FSR2, additional memory which is equal to the "Persistent memory" column of the table below is required depending on your operating conditions.
| Resolution | Quality | Working set (MB) | Persistent memory (MB) | Aliasable memory (MB) |
|---|---|---|---|---|
| 3840x2160 | Quality (1.5x) | 448 | 354 | 93 |
| | Balanced (1.7x) | 407 | 330 | 77 |
| | Performance (2x) | 376 | 312 | 63 |
| | Ultra performance (3x) | 323 | 281 | 42 |
| 2560x1440 | Quality (1.5x) | 207 | 164 | 43 |
| | Balanced (1.7x) | 189 | 153 | 36 |
| | Performance (2x) | 172 | 143 | 29 |
| | Ultra performance (3x) | 149 | 130 | 19 |
| 1920x1080 | Quality (1.5x) | 115 | 90 | 24 |
| | Balanced (1.7x) | 105 | 85 | 20 |
| | Performance (2x) | 101 | 83 | 18 |
| | Ultra performance (3x) | 84 | 72 | 11 |
Figures are approximations, rounded up to the nearest MB using an RX 6700 XT GPU in DX12, and are subject to change.
For details on how to manage FSR2's memory requirements please refer to the section of this document dealing with Memory management.
FSR2 is a temporal algorithm, and therefore requires access to data from both the current and previous frame. The following table enumerates all external inputs required by FSR2.
The resolution column indicates if the data should be at 'rendered' resolution or 'presentation' resolution. 'Rendered' resolution indicates that the resource should match the resolution at which the application is performing its rendering. Conversely, 'presentation' indicates that the resolution of the target should match that which is to be presented to the user. All resources are from the current rendered frame; for DirectX(R)12 and Vulkan(R) applications, all input resources should be transitioned to `D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE` and `VK_ACCESS_SHADER_READ_BIT` respectively before calling `ffxFsr2ContextDispatch`.
| Name | Resolution | Format | Type | Notes |
|---|---|---|---|---|
| Color buffer | Render | `APPLICATION SPECIFIED` | Texture | The render resolution color buffer for the current frame provided by the application. If the contents of the color buffer are in high dynamic range (HDR), then the `FFX_FSR2_ENABLE_HIGH_DYNAMIC_RANGE` flag should be set in the `flags` field of the `FfxFsr2ContextDescription` structure. |
| Depth buffer | Render | `APPLICATION SPECIFIED` (1x FLOAT) | Texture | The render resolution depth buffer for the current frame provided by the application. The data should be provided as a single floating point value, the precision of which is under the application's control. The configuration of the depth should be communicated to FSR2 via the `flags` field of the `FfxFsr2ContextDescription` structure when creating the `FfxFsr2Context`. You should set the `FFX_FSR2_ENABLE_DEPTH_INVERTED` flag if your depth buffer is inverted (that is, a [1..0] range), and you should set the `FFX_FSR2_ENABLE_DEPTH_INFINITE` flag if your depth buffer has an infinite far plane. If the application provides the depth buffer in `D32S8` format, then FSR2 will ignore the stencil component of the buffer, and create an `R32_FLOAT` resource to address the depth buffer. On GCN and RDNA hardware, depth buffers are stored separately from stencil buffers. |
| Motion vectors | Render or presentation | `APPLICATION SPECIFIED` (2x FLOAT) | Texture | The 2D motion vectors for the current frame provided by the application in [<-width, -height>..<width, height>] range. If your application renders motion vectors with a different range, you may use the `motionVectorScale` field of the `FfxFsr2DispatchDescription` structure to adjust them to match the expected range for FSR2. Internally, FSR2 uses 16-bit quantities to represent motion vectors in many cases, which means that while motion vectors with greater precision can be provided, FSR2 will not benefit from the increased precision. The resolution of the motion vector buffer should be equal to the render resolution, unless the `FFX_FSR2_ENABLE_DISPLAY_RESOLUTION_MOTION_VECTORS` flag is set in the `flags` field of the `FfxFsr2ContextDescription` structure when creating the `FfxFsr2Context`, in which case it should be equal to the presentation resolution. |
| Reactive mask | Render | `R8_UNORM` | Texture | As some areas of a rendered image do not leave a footprint in the depth buffer or include motion vectors, FSR2 provides support for a reactive mask texture which can be used to indicate to FSR2 where such areas are. Good examples of these are particles, or alpha-blended objects which do not write depth or motion vectors. If this resource is not set, then FSR2's shading change detection logic will handle these cases as best it can, but for optimal results, this resource should be set. For more information on the reactive mask please refer to the Reactive mask section. |
| Exposure | 1x1 | `R32_FLOAT` | Texture | A 1x1 texture containing the exposure value computed for the current frame. This resource is optional, and may be omitted if the `FFX_FSR2_ENABLE_AUTO_EXPOSURE` flag is set in the `flags` field of the `FfxFsr2ContextDescription` structure when creating the `FfxFsr2Context`. |
All inputs that are provided at render resolution, except for motion vectors, should be rendered with jitter. Motion vectors should not have jitter applied, unless the `FFX_FSR2_ENABLE_MOTION_VECTORS_JITTER_CANCELLATION` flag is present.
It is strongly recommended that an inverted, infinite depth buffer is used with FSR2. However, alternative depth buffer configurations are supported. An application should inform the FSR2 API of its depth buffer configuration by setting the appropriate flags during the creation of the `FfxFsr2Context`. The table below contains the appropriate flags.
| FSR2 flag | Note |
|---|---|
| `FFX_FSR2_ENABLE_DEPTH_INVERTED` | A bit indicating that the input depth buffer data provided is inverted [max..0]. |
| `FFX_FSR2_ENABLE_DEPTH_INFINITE` | A bit indicating that the input depth buffer data provided is using an infinite far plane. |
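For example, an application using the recommended inverted, infinite depth buffer would combine both flags. A minimal sketch, assuming a `contextDescription` variable as used in the code examples later in this document:

```cpp
// Declare an inverted ([max..0]) depth buffer with an infinite far plane.
contextDescription.flags |= FFX_FSR2_ENABLE_DEPTH_INVERTED
                          | FFX_FSR2_ENABLE_DEPTH_INFINITE;
```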
A key part of a temporal algorithm (be it antialiasing or upscaling) is the provision of motion vectors. FSR2 accepts motion vectors in 2D which encode the motion from a pixel in the current frame to the position of that same pixel in the previous frame. FSR2 expects that motion vectors are provided by the application in [<-width, -height>..<width, height>] range; this matches screen space. For example, a motion vector for a pixel in the upper-left corner of the screen with a value of <width, height> would represent a motion that traversed the full width and height of the input surfaces, originating from the bottom-right corner.

If your application computes motion vectors in another space - for example normalized device coordinate space - then you may use the `motionVectorScale` field of the `FfxFsr2DispatchDescription` structure to instruct FSR2 to adjust them to match its expected range. The example HLSL and C++ code below illustrates how NDC-space motion vectors can be scaled using the FSR2 host API.
```hlsl
// GPU: Example of application NDC motion vector computation
float2 motionVector = (previousPosition.xy / previousPosition.w) - (currentPosition.xy / currentPosition.w);
```

```cpp
// CPU: Matching FSR 2.0 motionVectorScale configuration
dispatchParameters.motionVectorScale.x = (float)renderWidth;
dispatchParameters.motionVectorScale.y = (float)renderHeight;
```
Internally, FSR2 uses 16-bit quantities to represent motion vectors in many cases, which means that while motion vectors with greater precision can be provided, FSR2 will not currently benefit from the increased precision. The resolution of the motion vector buffer should be equal to the render resolution, unless the `FFX_FSR2_ENABLE_DISPLAY_RESOLUTION_MOTION_VECTORS` flag is set in the `flags` field of the `FfxFsr2ContextDescription` structure when creating the `FfxFsr2Context`, in which case it should be equal to the presentation resolution.
FSR2 will perform better quality upscaling when more objects provide their motion vectors. It is therefore advised that all opaque, alpha-tested and alpha-blended objects should write their motion vectors for all covered pixels. If vertex shader effects are applied - such as scrolling UVs - these calculations should also be factored into the calculation of motion for the best results. For alpha-blended objects it is also strongly advised that the alpha value of each covered pixel is stored to the corresponding pixel in the reactive mask. This will allow FSR2 to perform better handling of alpha-blended objects during upscaling. The reactive mask is especially important for alpha-blended objects where writing motion vectors might be prohibitive, such as particles.
In the context of FSR2, the term "reactivity" means how much influence the samples rendered for the current frame have over the production of the final upscaled image. Typically, samples rendered for the current frame contribute a relatively modest amount to the result computed by FSR2; however, there are exceptions. To produce the best results for fast moving, alpha-blended objects, FSR2 requires the Reproject & accumulate stage to become more reactive for such pixels. As there is no good way to determine from either color, depth or motion vectors which pixels have been rendered using alpha blending, FSR2 performs best when applications explicitly mark such areas.
Therefore, it is strongly encouraged that applications provide a reactive mask to FSR2. The reactive mask guides FSR2 on where it should reduce its reliance on historical information when compositing the current pixel, and instead allow the current frame's samples to contribute more to the final result. The reactive mask allows the application to provide a value from [0.0..1.0] where 0.0 indicates that the pixel is not at all reactive (and should use the default FSR2 composition strategy), and a value of 1.0 indicates the pixel should be fully reactive. This is a floating point range and can be tailored to different situations.
While there are other applications for the reactive mask, its primary use is to improve the quality of upscaling for images which include alpha-blended objects. A good proxy for reactivity is actually the alpha value used when compositing an alpha-blended object into the scene; therefore, applications should write `alpha` to the reactive mask. It should be noted that a reactive value close to 1 is unlikely to ever produce good results, so we recommend clamping the maximum reactive value to around 0.9.
If a Reactive mask is not provided to FSR2 (by setting the `reactive` field of `FfxFsr2DispatchDescription` to `NULL`), then an internally generated 1x1 texture with a cleared reactive value will be used.
To help applications generate the Reactive mask and the Transparency & composition mask, FSR2 provides an optional helper API. Under the hood, the API launches a compute shader which computes these values for each pixel using a luminance-based heuristic.
Applications wishing to do this can call the `ffxFsr2ContextGenerateReactiveMask` function, passing two versions of the color buffer: one containing opaque-only geometry, and the other containing both opaque and alpha-blended objects.
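A sketch of calling the helper is shown below. The field names follow the FSR 2.2 headers and the resource variables (`pCommandList`, `pOpaqueOnlyColor`, `pColorBuffer`, `pReactiveMask`) are illustrative; verify the structure layout against `ffx_fsr2.h` in your tree.

```cpp
// Both color buffers are at render resolution: one containing only opaque
// geometry, and one containing the scene after transparent draws.
FfxFsr2GenerateReactiveDescription generateReactiveParameters = {};
generateReactiveParameters.commandList     = ffxGetCommandListDX12(pCommandList);
generateReactiveParameters.colorOpaqueOnly = ffxGetResourceDX12(&context, pOpaqueOnlyColor);
generateReactiveParameters.colorPreUpscale = ffxGetResourceDX12(&context, pColorBuffer);
generateReactiveParameters.outReactive     = ffxGetResourceDX12(&context, pReactiveMask); // must be UAV-writable
generateReactiveParameters.renderSize      = { renderWidth, renderHeight };
generateReactiveParameters.scale           = 1.0f; // global multiplier on the generated values
generateReactiveParameters.cutoffThreshold = 0.2f; // values above this are snapped to binaryValue
generateReactiveParameters.binaryValue     = 0.9f;
FFX_ASSERT(ffxFsr2ContextGenerateReactiveMask(&context, &generateReactiveParameters) == FFX_OK);
```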
In addition to the Reactive mask, FSR2 provides for the application to denote areas of other specialist rendering which should be accounted for during the upscaling process. Examples of such special rendering include areas of raytraced reflections or animated textures.
While the Reactive mask adjusts the accumulation balance, the Transparency & composition mask adjusts the pixel history protection mechanisms. The mask also removes the effect of the luminance instability factor. A pixel with a value of 0 in the Transparency & composition mask does not perform any additional modification to the lock for that pixel. Conversely, a value of 1 denotes that the lock for that pixel should be completely removed.
If a Transparency & composition mask is not provided to FSR2 (by setting the `transparencyAndComposition` field of `FfxFsr2DispatchDescription` to `NULL`), then an internally generated 1x1 texture with a cleared transparency and composition value will be used.
FSR2.2 includes an experimental feature to generate the Reactive mask and Transparency & composition mask automatically. To enable it, set the `enableAutoReactive` field of `FfxFsr2DispatchDescription` to `true` and provide a copy of the opaque-only portions of the backbuffer in `colorOpaqueOnly`. FSR2 will then generate and use the Reactive mask and Transparency & composition mask internally. The masks are generated in a compute pass by analyzing the difference between the color buffer with and without transparent geometry, as well as comparing it against the previous frame. Based on the result of those computations, each pixel gets assigned Reactive mask and Transparency & composition mask values.
To use autogeneration of the masks, the following four values, which scale and limit the intensity of the masks, must also be provided (the defaults below are suggested starting values, but should be tuned per title):

- `autoTcThreshold`: setting this value too small will cause visual instability, while larger values can cause ghosting. Suggested default: 0.05f.
- `autoTcScale`: smaller values will increase stability at hard edges of translucent objects. Suggested default: 1.0f.
- `autoReactiveScale`: larger values result in more reactive pixels. Suggested default: 5.0f.
- `autoReactiveMax`: the maximum value reactivity can reach. Suggested default: 0.9f.
This feature is intended to help with integrating FSR2.2 into a new engine or title. However, for best quality we still recommend rendering the Reactive mask and Transparency & composition mask yourself, as generating those values based on material properties is expected to be more reliable than autogenerating them from the final image.
Please note that this feature is still in an experimental stage and may change significantly in the future.
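The sketch below shows how the feature might be enabled at dispatch time; the resource variable is illustrative, and the four values are the suggested starting points listed above.

```cpp
// Enable experimental auto-generation of the two masks (FSR 2.2).
dispatchParameters.enableAutoReactive = true;
dispatchParameters.colorOpaqueOnly    = ffxGetResourceDX12(&context, pOpaqueOnlyColor);

// Suggested starting values; tune per title.
dispatchParameters.autoTcThreshold   = 0.05f; // too small: instability, too large: ghosting
dispatchParameters.autoTcScale       = 1.00f;
dispatchParameters.autoReactiveScale = 5.00f; // larger values produce more reactive pixels
dispatchParameters.autoReactiveMax   = 0.90f; // upper clamp on generated reactivity
```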
FSR2 provides two values which control the exposure used when performing upscaling. They are as follows:

1. **Pre-exposure**: a value by which the input signal is divided, to recover the original signal produced by the application before any packing into lower-precision render targets.
2. **Exposure**: a value which is multiplied against the result of the pre-exposed color value.
The exposure value should match that which the application uses during any subsequent tonemapping passes performed by the application. This means FSR2 will operate consistently with what is likely to be visible in the final tonemapped image.
In various stages of the FSR2 algorithm described in this document, FSR2 will compute its own exposure value for internal use. It is worth noting that all outputs from FSR2 will have this internal tonemapping reversed before the final output is written, meaning that FSR2 returns results in the same domain as the original input signal.
Poorly selected exposure values can have a drastic impact on the final quality of FSR2's upscaling. Therefore, it is recommended that `FFX_FSR2_ENABLE_AUTO_EXPOSURE` is used by the application, unless there is a particular reason not to. When `FFX_FSR2_ENABLE_AUTO_EXPOSURE` is set in the `flags` field of the `FfxFsr2ContextDescription` structure, the exposure calculation shown in the HLSL code below is used to compute the exposure value, which matches the exposure response of ISO 100 film stock.
```hlsl
float ComputeAutoExposureFromAverageLog(float averageLogLuminance)
{
    const float averageLuminance = exp(averageLogLuminance);
    const float S = 100.0f; // ISO arithmetic speed
    const float K = 12.5f;
    const float exposureIso100 = log2((averageLuminance * S) / K);
    const float q = 0.65f;
    const float luminanceMax = (78.0f / (q * S)) * pow(2.0f, exposureIso100);
    return 1.0f / luminanceMax;
}
```
The primary goal of FSR2 is to improve application rendering performance by using a temporal upscaling algorithm relying on a number of inputs. Therefore, its placement in the pipeline is key to ensuring the right balance between visual quality and performance.
With any image upscaling approach it is important to understand how to place other image-space algorithms with respect to the upscaling algorithm. Placing these other image-space effects before the upscaling has the advantage that they run at a lower resolution, which of course confers a performance advantage onto the application. However, it may not be appropriate for some classes of image-space techniques. For example, many applications may introduce noise or grain into the final image, perhaps to simulate a physical camera. Doing so before an upscaler might cause the upscaler to amplify the noise, causing undesirable artifacts in the resulting upscaled image. The following table divides common real-time image-space techniques into two columns. 'Post processing A' contains all the techniques which typically would run before FSR2's upscaling, meaning they would all run at render resolution. Conversely, the 'Post processing B' column contains all the techniques which are recommended to run after FSR2, meaning they would run at the larger, presentation resolution.
| Post processing A | Post processing B |
|---|---|
| Screenspace reflections | Film grain |
| Screenspace ambient occlusion | Chromatic aberration |
| Denoisers (shadow, reflections) | Vignette |
| Exposure (optional) | Tonemapping |
| | Bloom |
| | Depth of field |
| | Motion blur |
Please note that the recommendations here are for guidance purposes only and depend on the precise characteristics of your application's implementation.
While it is possible to generate the appropriate intermediate resources, compile the shader code, set the bindings, and submit the dispatches, it is much easier to use the FSR2 host API which is provided.
To use the API, you should link the FSR2 libraries (more on which ones shortly) and include the `ffx_fsr2.h` header file, which in turn has the following header dependencies:

- `ffx_assert.h`
- `ffx_error.h`
- `ffx_fsr2_interface.h`
- `ffx_types.h`
- `ffx_util.h`
To use the FSR2 API, you should link `ffx_fsr2_api_x64.lib`, which provides the symbols for the application-facing APIs. However, FSR2's API has a modular backend, which means that different graphics APIs and platforms may be targeted through the use of a matching backend. Therefore, you should additionally link the backend library matching your requirements, referencing the table below.
| Target | Library name |
|---|---|
| DirectX(R)12 | `ffx_fsr2_api_dx12_x64.lib` |
| Vulkan(R) | `ffx_fsr2_api_vk_x64.lib` |
Please note the modular architecture of the FSR2 API allows for custom backends to be implemented. See the Modular backend section for more details.
To begin using the API, the application should first create a `FfxFsr2Context` structure. This structure should be located somewhere with a lifetime approximately matching that of your backbuffer; somewhere on the application's heap is usually a good choice. By calling `ffxFsr2ContextCreate` the `FfxFsr2Context` structure will be populated with the data it requires. Moreover, a number of calls will be made from `ffxFsr2ContextCreate` to the backend which is provided to `FfxFsr2Context` as part of the `FfxFsr2ContextDescription` structure. These calls will perform such tasks as creating intermediate resources required by FSR2 and setting up shaders and their associated pipeline state. The FSR2 API does not perform any dynamic memory allocation.
Each frame of your application where upscaling is required, you should call `ffxFsr2ContextDispatch`. This function accepts the `FfxFsr2Context` structure that was created earlier in the application's lifetime as well as a description of precisely how upscaling should be performed and on which data. This description is provided by the application filling out a `FfxFsr2DispatchDescription` structure.
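A sketch of filling out the structure for the DirectX12 backend follows. The resource and camera variables are illustrative, and the API reference describes the meaning of every field.

```cpp
FfxFsr2DispatchDescription dispatchParameters = {};
dispatchParameters.commandList   = ffxGetCommandListDX12(pCommandList);
dispatchParameters.color         = ffxGetResourceDX12(&context, pColorBuffer);
dispatchParameters.depth         = ffxGetResourceDX12(&context, pDepthBuffer);
dispatchParameters.motionVectors = ffxGetResourceDX12(&context, pMotionVectors);
dispatchParameters.reactive      = ffxGetResourceDX12(&context, pReactiveMask);
dispatchParameters.output        = ffxGetResourceDX12(&context, pUpscaledColor); // must be UAV-writable

dispatchParameters.jitterOffset.x      = jitterX;
dispatchParameters.jitterOffset.y      = jitterY;
dispatchParameters.motionVectorScale.x = (float)renderWidth;
dispatchParameters.motionVectorScale.y = (float)renderHeight;
dispatchParameters.renderSize          = { renderWidth, renderHeight };
dispatchParameters.enableSharpening    = true;
dispatchParameters.sharpness           = 0.8f;
dispatchParameters.frameTimeDelta      = deltaTimeMilliseconds; // milliseconds, not seconds
dispatchParameters.preExposure         = 1.0f;
dispatchParameters.reset               = false;
dispatchParameters.cameraNear          = cameraNear;
dispatchParameters.cameraFar           = cameraFar;
dispatchParameters.cameraFovAngleVertical = cameraFovY;

FFX_ASSERT(ffxFsr2ContextDispatch(&context, &dispatchParameters) == FFX_OK);
```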
Destroying the context is performed by calling `ffxFsr2ContextDestroy`. Please note that the GPU should be idle before attempting to call `ffxFsr2ContextDestroy`, and the function does not perform implicit synchronization to ensure that resources being accessed by FSR2 are not currently in flight. The reason for this choice is to avoid FSR2 introducing additional GPU flushes for applications which already perform adequate synchronization at the point where they might wish to destroy the `FfxFsr2Context`. This allows an application to perform the most efficient possible creation and teardown of the FSR2 API when required.
There are additional helper functions which are provided as part of the FSR2 API. These helper functions perform tasks like the computation of sub-pixel jittering offsets, as well as the calculation of rendering resolutions based on dispatch resolutions and the default scaling modes provided by FSR2.
For more exhaustive documentation of the FSR2 API, you can refer to the API reference documentation provided.
The design of the FSR2 API means that the core implementation of the FSR2 algorithm is unaware of which rendering API it sits upon. Instead, FSR2 calls functions provided to it through an interface, allowing different backends to be used with FSR2. This design also allows applications integrating FSR2 to provide their own backend implementation, meaning that platforms which FSR2 does not currently support may be targeted by implementing a handful of functions. Moreover, applications which have their own rendering abstractions can also implement their own backend, taking control of all aspects of FSR2's underlying function, including memory management, resource creation, shader compilation, shader resource bindings, and the submission of FSR2 workloads to the graphics device.
Out of the box, the FSR2 API will compile into multiple libraries following the separation already outlined between the core API and the backends. This means that if you wish to use the backends provided with FSR2, you should link both the core FSR2 API library as well as the backend matching your requirements.
The public release of FSR2 comes with DirectX(R)12 and Vulkan(R) backends, but other backends are available upon request. Talk with your AMD Developer Technology representative for more information.
If the FSR2 API is used with one of the supplied backends (e.g: DirectX(R)12 or Vulkan(R)) then all the resources required by FSR2 are created as committed resources directly using the graphics device provided by the host application. However, by overriding the create and destroy family of functions present in the backend interface it is possible for an application to more precisely control the memory management of FSR2.
To do this, you can either provide a full custom backend to FSR2 via the `FfxFsr2ContextDescription` structure passed to the `ffxFsr2ContextCreate` function, or you can retrieve the backend for your desired API and override the resource creation and destruction functions to handle them yourself. To do this, simply overwrite the `fpCreateResource` and `fpDestroyResource` function pointers.
```cpp
// Setup DX12 interface.
const size_t scratchBufferSize = ffxFsr2GetScratchMemorySizeDX12();
void* scratchBuffer = malloc(scratchBufferSize);
FfxErrorCode errorCode = ffxFsr2GetInterfaceDX12(&contextDescription.callbacks, m_pDevice->GetDevice(), scratchBuffer, scratchBufferSize);
FFX_ASSERT(errorCode == FFX_OK);

// Override the resource creation and destruction.
contextDescription.callbacks.fpCreateResource = myCreateResource;
contextDescription.callbacks.fpDestroyResource = myDestroyResource;

// Set up the context description.
contextDescription.device = ffxGetDeviceDX12(m_pDevice->GetDevice());
contextDescription.maxRenderSize.width = renderWidth;
contextDescription.maxRenderSize.height = renderHeight;
contextDescription.displaySize.width = displayWidth;
contextDescription.displaySize.height = displayHeight;
contextDescription.flags = FFX_FSR2_ENABLE_HIGH_DYNAMIC_RANGE
                         | FFX_FSR2_ENABLE_DEPTH_INVERTED
                         | FFX_FSR2_ENABLE_AUTO_EXPOSURE;

// Create the FSR2 context.
errorCode = ffxFsr2ContextCreate(&context, &contextDescription);
FFX_ASSERT(errorCode == FFX_OK);
```
One interesting advantage to an application taking control of the memory management required for FSR2 is that resource aliasing may be performed, which can yield a memory saving. The table present in Memory requirements demonstrates the savings available through using this technique. In order to realize the savings shown in this table, an appropriate area of memory - the contents of which are not required to survive across a call to the FSR2 dispatches - should be found to share with the aliasable resources required for FSR2. Each `FfxFsr2CreateResourceFunc` call made by FSR2's core API through the FSR2 backend interface will contain a set of flags as part of the `FfxCreateResourceDescription` structure. If `FFX_RESOURCE_FLAGS_ALIASABLE` is set in the `flags` field, this indicates that the resource may be safely aliased with other resources in the rendering frame.
Temporal antialiasing (TAA) is a technique which uses the output of previous frames to construct a higher quality output from the current frame. As FSR2 has a similar goal - albeit with the additional goal of also increasing the resolution of the rendered image - there is no longer any need to include a separate TAA pass in your application.
FSR2 relies on the application to apply sub-pixel jittering while rendering - this is typically included in the projection matrix of the camera. To make the application of camera jitter simple, the FSR2 API provides a small set of utility functions which compute the sub-pixel jitter offset for a particular frame within a sequence of separate jitter offsets.
```cpp
int32_t ffxFsr2GetJitterPhaseCount(int32_t renderWidth, int32_t displayWidth);
FfxErrorCode ffxFsr2GetJitterOffset(float* outX, float* outY, int32_t jitterPhase, int32_t sequenceLength);
```
Internally, these functions implement a Halton[2,3] sequence [Halton]. The goal of the Halton sequence is to provide spatially separated points which cover the available space.
It is important to understand that the values returned from the `ffxFsr2GetJitterOffset` function are in unit pixel space, and in order to composite them correctly into a projection matrix we must convert them into projection offsets. The diagram above shows a single pixel in unit pixel space, and in projection space. The code listing below shows how to correctly composite the sub-pixel jitter offset value into a projection matrix.
```cpp
const int32_t jitterPhaseCount = ffxFsr2GetJitterPhaseCount(renderWidth, displayWidth);

float jitterX = 0;
float jitterY = 0;
ffxFsr2GetJitterOffset(&jitterX, &jitterY, index, jitterPhaseCount);

// Calculate the jittered projection matrix.
const float jitterOffsetX = 2.0f * jitterX / (float)renderWidth;
const float jitterOffsetY = -2.0f * jitterY / (float)renderHeight;
const Matrix4 jitterTranslationMatrix = translateMatrix(Matrix3::identity, Vector3(jitterOffsetX, jitterOffsetY, 0));
const Matrix4 jitteredProjectionMatrix = jitterTranslationMatrix * projectionMatrix;
```
Jitter should be applied to all rendering. This includes opaque, alpha transparent, and raytraced objects. For rasterized objects, the sub-pixel jittering values calculated by the `ffxFsr2GetJitterOffset` function can be applied to the camera projection matrix which is ultimately used to perform transformations during vertex shading. For raytraced rendering, the sub-pixel jitter should be applied to the ray's origin - often the camera's position.
Whether you elect to use the recommended `ffxFsr2GetJitterOffset` function or your own sequence generator, you must set the `jitterOffset` field of the `FfxFsr2DispatchDescription` structure to inform FSR2 of the jitter offset that has been applied in order to render each frame. Moreover, if not using the recommended `ffxFsr2GetJitterOffset` function, care should be taken that your jitter sequence never generates a null vector; that is, a value of 0 in both the X and Y dimensions.
The table below shows the jitter sequence length for each of the default quality modes.
| Quality mode | Scaling factor | Sequence length |
|---|---|---|
| Quality | 1.5x (per dimension) | 18 |
| Balanced | 1.7x (per dimension) | 23 |
| Performance | 2.0x (per dimension) | 32 |
| Ultra performance | 3.0x (per dimension) | 72 |
| Custom | [1..n]x (per dimension) | `ceil(8 * n^2)` |
Most applications with real-time rendering have a large degree of temporal consistency between any two consecutive frames. However, there are cases where a change to a camera's transformation might cause an abrupt change in what is rendered. In such cases, FSR2 is unlikely to be able to reuse any data it has accumulated from previous frames, and should clear this data so as to exclude it from consideration in the compositing process. In order to indicate to FSR2 that a jump cut has occurred with the camera, you should set the `reset` field of the `FfxFsr2DispatchDescription` structure to `true` for the first frame of the discontinuous camera transformation.
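A minimal sketch; `cameraCutThisFrame` is an illustrative flag your engine would set when such a discontinuity occurs:

```cpp
// Clear FSR2's accumulated history on the first frame after a camera cut.
dispatchParameters.reset = cameraCutThisFrame;
```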
Rendering performance may be slightly less than typical frame-to-frame operation when using the reset flag, as FSR2 will clear some additional internal resources.
Applying a negative mipmap bias will typically generate an upscaled image with better texture detail. We recommend applying the following formula to your mipmap bias:

```cpp
mipBias = log2(renderResolution / displayResolution) - 1.0;
```
It is suggested that applications adjust the MIP bias for specific high-frequency texture content which is susceptible to showing temporal aliasing issues.
The following table illustrates the mipmap biasing factor which results from evaluating the above pseudocode for the scaling ratios matching the suggested quality modes that applications should expose to end users.
| Quality mode | Scaling factor | Mipmap bias |
|---|---|---|
| Quality | 1.5X (per dimension) | -1.58 |
| Balanced | 1.7X (per dimension) | -1.76 |
| Performance | 2.0X (per dimension) | -2.0 |
| Ultra performance | 3.0X (per dimension) | -2.58 |
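A minimal sketch of evaluating this formula on the CPU; the result would then be applied to your material samplers (for example, via `D3D12_SAMPLER_DESC::MipLODBias` in DirectX12):

```cpp
#include <cmath>

// Returns the suggested mipmap bias for a given scaling configuration.
float ComputeFsr2MipBias(float renderWidth, float displayWidth)
{
    return std::log2(renderWidth / displayWidth) - 1.0f;
}
```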
The FSR2 API requires the `frameTimeDelta` field to be provided by the application through the `FfxFsr2DispatchDescription` structure. This value is in milliseconds: if running at 60fps, the value passed should be around 16.6f. The value is used within the temporal component of the FSR2 auto-exposure feature, allowing the history accumulation to be tuned for quality.
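For example, if your engine's timer reports seconds, the conversion is a single multiply. A minimal sketch, assuming `deltaTimeSeconds` and a `dispatchParameters` variable as in the earlier examples:

```cpp
// frameTimeDelta expects milliseconds, not seconds.
dispatchParameters.frameTimeDelta = deltaTimeSeconds * 1000.0f;
```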
High dynamic range images are supported in FSR2. To enable this, you should set the `FFX_FSR2_ENABLE_HIGH_DYNAMIC_RANGE` bit in the `flags` field of the `FfxFsr2ContextDescription` structure. Images should be provided to FSR2 in linear color space. Support for additional color spaces might be provided in a future revision of FSR2.
FSR2 was designed to take advantage of half precision (FP16) hardware acceleration to achieve the highest possible performance. However, to provide the maximum level of compatibility and flexibility for applications, FSR2 also includes the ability to compile the shaders using full precision (FP32) operations.

It is recommended to use the FP16 version of FSR2 on all hardware which supports it. You can query your graphics card's level of support for FP16 by querying the `D3D12_FEATURE_DATA_SHADER_MIN_PRECISION_SUPPORT` capability in DirectX(R)12 - you should check that `D3D[11/12]_SHADER_MIN_PRECISION_16_BIT` is set, and if it is not, fall back to the FP32 version of FSR2. For Vulkan, if `VkPhysicalDeviceFloat16Int8FeaturesKHR::shaderFloat16` is not set, then you should fall back to the FP32 version of FSR2. Similarly, if `VkPhysicalDevice16BitStorageFeatures::storageBuffer16BitAccess` is not set, you should also fall back to the FP32 version of FSR2.
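A minimal sketch of the DirectX12 query, assuming a valid `ID3D12Device*` named `device`:

```cpp
// Query 16-bit shader support before selecting the FP16 permutations.
D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
bool useFp16 = false;
if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
{
    useFp16 = (options.MinPrecisionSupport & D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT) != 0;
}
```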
To enable the FP16 path in the FSR2 shader source code, you should define `FFX_HALF` to be `1`. In order to share the majority of the algorithm's source code between both FP16 and FP32 (ensuring a high level of code sharing to support ongoing maintenance), you will notice that the FSR2 shader source code uses a set of type macros which facilitate easy switching between 16-bit and 32-bit base types in the shader source.
| FidelityFX type | FP32 | FP16 |
|---|---|---|
| `FFX_MIN16_F` | `float` | `min16float` |
| `FFX_MIN16_F2` | `float2` | `min16float2` |
| `FFX_MIN16_F3` | `float3` | `min16float3` |
| `FFX_MIN16_F4` | `float4` | `min16float4` |
The table above enumerates the mappings between the abstract FidelityFX SDK types and the underlying intrinsic type which will be substituted depending on the configuration of the shader source during compilation.
Modern GPUs execute collections of threads - called wavefronts - together in a SIMT fashion. The precise number of threads which constitute a single wavefront is a hardware-specific quantity. Some hardware, such as AMD's GCN and RDNA-based GPUs, supports collecting 64 threads together into a single wavefront. Depending on the precise characteristics of an algorithm's execution, it may be more or less advantageous to prefer a specific wavefront width. With the introduction of Shader Model 6.6, Microsoft added the ability to specify the width of a wavefront via HLSL. For hardware, such as RDNA, which supports both 32- and 64-wide wavefront widths, this is a very useful tool for optimization purposes, as it provides a clean and portable way to ask the driver software stack to execute a wavefront with a specific width.
For DirectX(R)12 based applications which are running on RDNA and RDNA2-based GPUs and using the Microsoft Agility SDK, the FSR2 host API will select a 64-wide wavefront width.
The context description structure can be provided with a callback function for passing textual warnings from the FSR2 runtime to the underlying application. The `fpMessage` member of the description is of type `FfxFsr2Message`, which is a function pointer for passing string messages of various types. Assigning this variable to a suitable function and passing the `FFX_FSR2_ENABLE_DEBUG_CHECKING` flag within the `flags` member of `FfxFsr2ContextDescription` will enable the feature. It is recommended that this is enabled only in debug development builds.
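A sketch of wiring up the callback during context setup; the `FfxFsr2MsgType` values shown are taken from the FSR2 headers and should be verified for your version.

```cpp
#include <cstdio>

// Route FSR2 runtime warnings and errors to stderr.
static void Fsr2MessageCallback(FfxFsr2MsgType type, const wchar_t* message)
{
    fwprintf(stderr, L"[FSR2 %ls] %ls\n",
             (type == FFX_FSR2_MESSAGE_TYPE_ERROR) ? L"ERROR" : L"WARNING", message);
}

void EnableFsr2DebugChecking(FfxFsr2ContextDescription& contextDescription)
{
    contextDescription.fpMessage = &Fsr2MessageCallback;
    contextDescription.flags    |= FFX_FSR2_ENABLE_DEBUG_CHECKING;
}
```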
An example of the kind of output that can occur when the checker observes possible issues is below:
```
FSR2_API_DEBUG_WARNING: FFX_FSR2_ENABLE_DEPTH_INFINITE and FFX_FSR2_ENABLE_DEPTH_INVERTED present, cameraFar value is very low which may result in depth separation artefacting

FSR2_API_DEBUG_WARNING: frameTimeDelta is less than 1.0f - this value should be milliseconds (~16.6f for 60fps)
```
The FSR2 algorithm is implemented as a series of stages. Each stage is laid out in the sections following this one, and the data flow for the complete FSR2 algorithm is shown in the diagram below.
The compute luminance pyramid stage has two responsibilities:

1. To produce a lower-resolution representation of the input color's luminance, which is consumed by later stages of the algorithm.
2. To produce the 1x1 exposure texture used by subsequent exposure calculations when `FFX_FSR2_ENABLE_AUTO_EXPOSURE` is set.
The following table contains all resources consumed by the Compute luminance pyramid stage.
The temporal layer indicates which frame the data should be sourced from. 'Current frame' means that the data should be sourced from resources created for the frame that is to be presented next. 'Previous frame' indicates that the data should be sourced from resources which were created for the frame that has just presented. The resolution column indicates if the data should be at 'rendered' resolution or 'presentation' resolution. 'Rendered' resolution indicates that the resource should match the resolution at which the application is performing its rendering. Conversely, 'presentation' indicates that the resolution of the target should match that which is to be presented to the user.
| Name | Temporal layer | Resolution | Format | Type | Notes |
|---|---|---|---|---|---|
| Color buffer | Current frame | Render | `APPLICATION SPECIFIED` | Texture | The render resolution color buffer for the current frame provided by the application. If the contents of the color buffer are in high dynamic range (HDR), then the `FFX_FSR2_ENABLE_HIGH_DYNAMIC_RANGE` flag should be set in the `flags` field of the `FfxFsr2ContextDescription` structure. |
The following table contains all resources produced or modified by the Compute luminance pyramid stage.
The temporal layer indicates which frame the data should be sourced from. 'Current frame' means that the data should be sourced from resources created for the frame that is to be presented next. 'Previous frame' indicates that the data should be sourced from resources which were created for the frame that has just presented. The resolution column indicates if the data should be at 'rendered' resolution or 'presentation' resolution. 'Rendered' resolution indicates that the resource should match the resolution at which the application is performing its rendering. Conversely, 'presentation' indicates that the resolution of the target should match that which is to be presented to the user.
| Name | Temporal layer | Resolution | Format | Type | Notes |
|---|---|---|---|---|---|
| Exposure | Current frame | 1x1 | `R32_FLOAT` | Texture | A 1x1 texture containing the exposure value computed for the current frame. This resource is optional, and may be omitted if the `FFX_FSR2_ENABLE_AUTO_EXPOSURE` flag is set in the `flags` field of the `FfxFsr2ContextDescription` structure when creating the `FfxFsr2Context`. |
| Current luminance | Current frame | `Render * 0.5` + MipChain | `R16_FLOAT` | Texture | A texture at 50% of render resolution which contains the luminance of the current frame. A full mip chain is allocated. |
The Compute luminance pyramid stage is implemented using FidelityFX Single Pass Downsampler (SPD).