
Proposed RFC Feature: Support for multiple Material-Types in Raytracing Hit Shaders #152

Open
kh-huawei opened this issue Sep 20, 2023 · 8 comments
Labels
rfc-feature Request for Comments for a Feature

Comments

kh-huawei commented Sep 20, 2023

Summary:

During raytracing, the hit shaders need access to all materials in the scene at once, which clashes with the current setup of the Forward+ pipeline, where a shader only needs access to a single material instance at any given time.
The current raytracing solution is somewhat unfinished: it assumes that only one material type exists, collects all material instances in a single RayTracingMaterialSrg, and uses a single hit shader that re-implements an approximation of the BasePBR material type.

What is the relevance of this feature?

The new Material Pipeline and the Material Canvas allow users to easily create their own material types, but that has no influence on the materials used during raytracing. This feature would remedy that and enable user-created material types during raytracing.

Generally, any meaningful raytracing pipeline needs access to the geometry and materials of the entire scene in a single shader / render pass.
In addition, the current material shaders place the light assignment and the shadow sampling inside the EvaluateLighting function, which implicitly makes them the responsibility of the material type. In other words, using the BasePBR material type (or any material type derived from it) currently requires a view-space tile-based light assignment and, for some lights, cascaded shadow maps, which doesn't work well for a raytracing pipeline that can intersect offscreen objects.

Incidentally, all of this would also make implementing a deferred pipeline easier.

Feature design description:

Each material instance creates and fills a MaterialParameters ByteAddressBuffer that contains the material parameters, texture-use flags, and bindless texture read indices needed while rendering the material.
The MeshFeatureProcessor creates and maintains a single SceneMaterialSrg that holds one buffer with the bindless read indices of the MaterialParameters buffers, with one entry for each separate material instance (i.e. each sub-mesh), plus the texture samplers required by all materials.
Alternatively, if bindless SRG support is not good enough, this SRG can also hold an array of all required material textures (deduplicated) and an array of the MaterialParameters buffers, with the read index then referring to an entry in these arrays instead of the bindless SRG. The first value in each MaterialParameters ByteAddressBuffer should also be the Material Type Id, so the shader can perform a sanity check and avoid parsing parameters from a material instance of the wrong type.

The Material Type Id should be calculated from the material type and the shader options of the material, since the shader options can't live in the SceneMaterialSrg that is shared between shaders.
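As a rough illustration of that last point, a Material Type Id could be derived by hashing the material type's name together with the concrete shader option values, so that the same type with different option permutations still gets a distinct id. This is only a sketch under assumed names (`HashMaterialTypeId` and its parameters are hypothetical, not existing O3DE API); any stable hash would do:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch: combine the material type name and the shader option
// values into one 32-bit Material Type Id using an FNV-1a hash.
uint32_t HashMaterialTypeId(const std::string& materialTypeName,
                            const std::vector<uint32_t>& shaderOptionValues)
{
    uint32_t hash = 2166136261u; // FNV-1a offset basis
    auto mix = [&hash](uint8_t byte)
    {
        hash ^= byte;
        hash *= 16777619u; // FNV-1a prime
    };
    for (char c : materialTypeName)
    {
        mix(static_cast<uint8_t>(c));
    }
    for (uint32_t value : shaderOptionValues)
    {
        for (int i = 0; i < 4; ++i)
        {
            mix(static_cast<uint8_t>(value >> (8 * i)));
        }
    }
    return hash;
}
```

The id is deterministic for a given type + option combination, which is what the hit shader's sanity check needs.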

Technical design description and implementation steps:

Rough draft of the required steps:

  • Extend the RPI::Material class to create and fill one MaterialParameters ByteAddressBuffer for each material instance, holding the parameter values and the bindless read indices of the material textures.
  • Extend the MeshFeatureProcessor to calculate a unique Material Instance Id for each sub-mesh, and a unique Material Type Id for the material type and shader option values of each loaded mesh.
  • Move the logic for the material SRG from the RayTracingFeatureProcessor to the MeshFeatureProcessor to create and maintain a SceneMaterialSrg.
    • Alternatively, add it to the SceneSrg directly.
  • Add the Material Instance Id to the MeshInfo struct of the RayTracingSceneSrg.
  • Add the Material Instance Id to the ObjectSrg of each mesh's DrawItems.
  • Refactor the existing material shader code and move the light-assignment loop and the shadow sampling out of the EvaluateLighting function.
  • Modify the existing materials to read the material parameters from a MaterialParameters struct and texture indices from an array instead of accessing the MaterialSrg directly.
  • Create a shader function that provides access to a MaterialParameters struct based only on the Material Instance Id.
  • Consolidate the light-type structures of the RayTracingSceneSrg and the SceneSrg (e.g. RayTracingSceneSrg::DirectionalLight and SceneSrg::DirectionalLight) into a single azsli file.
  • Support multiple hit shaders for each raytracing pass.
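The first step above (filling a MaterialParameters ByteAddressBuffer on the CPU) can be sketched as follows. This is an assumed layout, not the actual RPI implementation: the first 32-bit word is the Material Type Id for the hit shader's sanity check, followed by parameter values and bindless texture read indices; all names are hypothetical:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical CPU-side writer for one MaterialParameters buffer: appends
// raw 32-bit words, matching what a ByteAddressBuffer.Load would read back.
struct MaterialParametersWriter
{
    std::vector<uint8_t> m_bytes;

    void WriteUint(uint32_t value)
    {
        const size_t offset = m_bytes.size();
        m_bytes.resize(offset + sizeof(value));
        std::memcpy(m_bytes.data() + offset, &value, sizeof(value));
    }

    void WriteFloat(float value)
    {
        const size_t offset = m_bytes.size();
        m_bytes.resize(offset + sizeof(value));
        std::memcpy(m_bytes.data() + offset, &value, sizeof(value));
    }
};

std::vector<uint8_t> BuildMaterialParameters(
    uint32_t materialTypeId, float baseColorFactor, uint32_t baseColorMapIndex)
{
    MaterialParametersWriter writer;
    writer.WriteUint(materialTypeId);    // word 0: Material Type Id (sanity check)
    writer.WriteFloat(baseColorFactor);  // example parameter value
    writer.WriteUint(baseColorMapIndex); // bindless read index of a texture
    return writer.m_bytes;
}
```

The shader side would read word 0 first, compare it against its own Material Type Id, and only then parse the remaining words.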

What are the advantages of the feature?

Raytracing would actually support user-provided materials.

What are the disadvantages of the feature?

Breaking changes to existing (old) materials.

Are there any alternatives to this feature?

There are a few options for how and when the material parameters could be stored and updated. The current solution is also an alternative in that it pretends there is only one PBR material type in the scene, which does cover many use cases.

How will users learn this feature?

  • There will be a breaking change for user-defined materials that should be announced on Discord, but most changes will happen in the Atom renderer, without direct user interaction.

Are there any open questions?

  • To further support deferred pipelines, we could move the logic of both the scene geometry (i.e. the MeshInfo struct) and the material parameters out of the RayTracingFeatureProcessor and have the MeshFeatureProcessor simply place them in the SceneSrg. This would require raytracing passes to bind both the SceneSrg and the RayTracingSceneSrg, and some sort of mapping from an RTAS instance id to the sub-mesh, but that should not be an issue.
  • We could create one MaterialSrg per material type containing a StructuredBuffer with a defined MaterialParameters struct, to avoid using a ByteAddressBuffer, but I'm not sure whether multiple hit shaders can work with different SRGs in one raytracing pass.
  • The texture samplers in the SceneMaterialSrg limit the materials to the sampler configurations we explicitly provide there, but I don't have a better idea for that.
  • Should we move the light types from the ViewSrg / RayTracingSceneSrg to the SceneSrg? As far as I can tell, the contents of these buffers are the same.
@kh-huawei kh-huawei added the rfc-feature Request for Comments for a Feature label Sep 20, 2023
adiblev commented Sep 20, 2023

How would this cope with the exposure of textures and handling them via the definition of a surface? Was this taken into account?

kh-huawei commented Sep 21, 2023

Like the RayTracingMaterialSrg does now: the MaterialInfo struct holds the indices of the texture views relevant for the material instance, and the textures are then accessed via the bindless SRG.

There are actually two indirections going on currently: the MaterialInfo holds one start index into m_materialTextureIndices, and that buffer holds the index of the bindless texture view (either into the bindless m_materialTextures array in the same SRG, or directly into the Bindless_Srg). This seems to be an optimization so that the MaterialInfo struct only needs one texture index, and/or so that the texture views for one material instance are sequential in memory, but I'm not sure how useful that actually is.
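For clarity, the double indirection described above can be sketched like this (CPU-side C++ standing in for the shader logic; struct and function names are assumptions, not the actual engine code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical mirror of the current scheme: MaterialInfo stores one start
// index into m_materialTextureIndices, and that buffer stores the actual
// bindless texture view indices, so all texture indices belonging to one
// material instance sit sequentially in memory.
struct MaterialInfo
{
    uint32_t m_textureStartIndex; // first entry in m_materialTextureIndices
    uint32_t m_textureCount;      // number of textures for this instance
};

uint32_t GetBindlessTextureIndex(
    const MaterialInfo& info,
    const std::vector<uint32_t>& materialTextureIndices,
    uint32_t textureSlot) // e.g. 0 = base color, 1 = normal map
{
    assert(textureSlot < info.m_textureCount);
    // First indirection: start index + slot selects the entry;
    // second indirection: that entry is the bindless texture view index.
    return materialTextureIndices[info.m_textureStartIndex + textureSlot];
}
```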

@kh-huawei kh-huawei changed the title Draft: Proposed RFC Feature: Support for multiple Material-Types in Raytracing Hit Shaders Proposed RFC Feature: Support for multiple Material-Types in Raytracing Hit Shaders Sep 12, 2024
antonmic (Contributor) commented:
Sounds like a good set of changes overall, and will make ray-tracing in O3DE competitive with other engines.

  1. Can you elaborate on "Refactor the existing material shader code and move the LightAssignment loop and the shadow sampling out of the EvaluateLighting function"? Specifically, can you give more details on what that refactoring would look like, perhaps with some sample code?
  2. Could you also explain a bit more what the "breaking changes with existing (old) materials" would be, and any potential solutions (running old materials through an update script?).

antonmic (Contributor) commented:
  1. What would be the replacement light culling strategy given the current one is view-dependent?

kh-huawei (Author) commented:
1.) Refactor the existing material shader code and move the light-assignment loop and the shadow sampling out of the EvaluateLighting function:

Current Status

  • EvaluateLighting, LightingData -> Defined by material, e.g. EvaluateLighting_BasePBR and LightingData_BasePBR:
  • Essentially does the following (for BasePBR, somewhat simplified):
EvaluateLighting_BasePBR()
    lightingData = InitializeLightingData_StandardPBR()
    CalculateLighting_BasePBR(lightingData)
            ApplyDecals(lightingData.tileIterator, surface);
            ApplyDirectLighting(surface, lightingData, screenPosition);
                ApplyPointLights(surface, lightingData);
                    For each light in view-space tile: (LightAssignment)
                        Check Lighting Channel
                        ApplyPointLight(lightIndex, surface, lightingData)
                            calculate falloff
                            sample Point light shadow cubemap
                            calculate diffuse / translucent / specular lighting
            ApplyIblForward(surface, lightingData);
            lightingData.FinalizeLighting(surface);

Proposal

This doesn't change the structure much, but makes the responsibilities a bit clearer:

  • EvaluateLighting -> Defined by pipeline, e.g. EvaluateLighting_ForwardPlus
  • LightingData -> Still defined by material, e.g. LightingData_BasePBR, but it also exposes an InitializeLightingData function
  • ApplyPointLight -> New and defined by material, e.g. ApplyPointLight_BasePBR
lightingData = InitializeLightingData()
EvaluateLighting_ForwardPlus(lightingData)
       ApplyDecals(lightingData.tileIterator, surface);  
       ApplyDirectLighting(surface, lightingData, screenPosition);
             ApplyPointLights(surface, lightingData);
                 For each light in view-space tile: (LightAssignment)
                     Check Lighting Channel
                     ApplyPointLight(lightIndex, surface, lightingData)
                         calculate falloff
                         sample Point light shadow cubemap
                         Call material-defined ApplyPointLight()  <- new!
       ApplyIblForward(surface, lightingData);
       lightingData.FinalizeLighting(surface);

This would allow me to then implement a CalculateLighting_RayTracing with a different light assignment and shadow method, while still supporting the BasePBR material essentially unchanged.
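The proposed split can be sketched in host-side C++ (the real code would be AZSL; all names here are assumptions): the pipeline owns light assignment, falloff, and shadow sampling, while the material type only contributes the per-light shading term through a material-defined callback:

```cpp
#include <functional>
#include <vector>

// Hypothetical stand-ins for the shader-side structs.
struct Surface { float albedo; };
struct LightingData { float diffuse = 0.0f; };
struct PointLight { float intensity; float shadowFactor; };

// Material responsibility: the actual BRDF evaluation for one light.
using MaterialApplyPointLight = std::function<void(
    const Surface&, const PointLight&, float /*attenuation*/, LightingData&)>;

// Pipeline responsibility: iterate the lights assigned by the pipeline's own
// culling scheme (view-space tiles for Forward+, a world-space grid for RT),
// compute falloff and shadowing, then delegate shading to the material.
void ApplyPointLights(const Surface& surface,
                      const std::vector<PointLight>& assignedLights,
                      const MaterialApplyPointLight& materialApplyPointLight,
                      LightingData& lightingData)
{
    for (const PointLight& light : assignedLights)
    {
        // Pipeline side: falloff and shadow sampling (simplified here).
        const float attenuation = light.intensity * light.shadowFactor;
        // Material side: the new material-defined ApplyPointLight hook.
        materialApplyPointLight(surface, light, attenuation, lightingData);
    }
}
```

With this shape, a raytracing pipeline swaps out only the loop and the shadow method; the material callback stays byte-for-byte the same.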

kh-huawei (Author) commented:
2.) Breaking changes: these would mostly affect the EvaluateSurface function, and look roughly like this:

Current Status

Surface EvaluateSurface_BasePBR(...)
{
    Surface surface;

    ...

    float2 baseColorUv = uvs[MaterialSrg::m_baseColorMapUvIndex];
    real3 sampledColor = GetBaseColorInput(MaterialSrg::m_baseColorMap, MaterialSrg::m_sampler, baseColorUv, real3(MaterialSrg::m_baseColor.rgb), o_baseColor_useTexture, uvDxDy, customDerivatives);
    real3 baseColor = BlendBaseColor(sampledColor, real3(MaterialSrg::m_baseColor.rgb), real(MaterialSrg::m_baseColorFactor), o_baseColorTextureBlendMode, o_baseColor_useTexture);
    baseColor = BlendVertexColor(baseColor, real3(vertexColor.rgb));

    ...
}

Proposal

Surface EvaluateSurface_BasePBR(MaterialParameters params, ...)
{
    Surface surface;

    ...

    float2 baseColorUv = uvs[params.m_baseColorMapUvIndex];
    real3 sampledColor = GetBaseColorInput(params.m_baseColorMap, SceneMaterialSrg::m_defaultSampler, baseColorUv, real3(params.m_baseColor.rgb), o_baseColor_useTexture, uvDxDy, customDerivatives);
    real3 baseColor = BlendBaseColor(sampledColor, real3(params.m_baseColor.rgb), real(params.m_baseColorFactor), o_baseColorTextureBlendMode, o_baseColor_useTexture);
    baseColor = BlendVertexColor(baseColor, real3(vertexColor.rgb));

    ...
}

This does mean every (user-created) material needs to be changed, since the MaterialSrg won't exist anymore.
We could keep the MaterialSrgs around to stay somewhat backwards-compatible, but that would mean binding both the SceneMaterialSrg and the MaterialSrg; any material that does this won't work properly with raytracing, and we'd have to recognize that somehow when creating the hit shader table.

kh-huawei commented Sep 13, 2024

3.) In our raytracing pipeline we are currently using a naïve fixed-size world-space grid, with a TODO to make it less naïve, but so far neither the performance nor the memory impact has been enough of an issue to raise the priority of that TODO. I'd go with something similar, unless someone has a better suggestion.
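The naïve world-space grid can be sketched as follows (all constants and names are hypothetical): lights are binned into fixed-size cells, and a hit shader maps its world-space hit position to a cell index to fetch the assigned lights:

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical grid parameters; a real implementation would size these
// from the scene bounds.
constexpr float kGridOrigin = -512.0f;  // world-space minimum of the grid
constexpr float kCellSize = 8.0f;       // cell edge length in meters
constexpr uint32_t kCellsPerAxis = 128; // 128^3 cells covering the grid

// Maps a world-space position to a flat cell index; positions outside the
// grid are clamped into the border cells.
uint32_t WorldPositionToCellIndex(float x, float y, float z)
{
    auto toCell = [](float v) -> uint32_t
    {
        float cell = std::floor((v - kGridOrigin) / kCellSize);
        if (cell < 0.0f) cell = 0.0f;
        if (cell > kCellsPerAxis - 1) cell = kCellsPerAxis - 1;
        return static_cast<uint32_t>(cell);
    };
    return toCell(x) + toCell(y) * kCellsPerAxis +
           toCell(z) * kCellsPerAxis * kCellsPerAxis;
}
```

Unlike view-space tiles, this lookup works for any hit position, including offscreen geometry, at the cost of memory for empty cells.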

moudgils (Contributor) commented Oct 7, 2024

I am supportive of these changes. To be honest, I like the bindless approach, although given that this change will modify the render pipelines needed for mobile hardware, we will need the non-bindless approach to stick around. Could we have a setup whereby the RT shaders can assume the bindless approach (RT-capable hardware is also bindless-capable), but the non-RT shaders assume non-bindless at the start, and we add bindless support later on?

ShaderResourceGroup SceneMaterialSrg
{
    // Non-bindless layout: one structured buffer per material instance
    StructuredBuffer<MaterialParameters> m_materialParametersArray[];

#if USE_BINDLESS_SRG
    // Bindless layout: one buffer, indexed by Material Instance Id
    StructuredBuffer<MaterialParameters> m_materialParameters;
#endif
}

So the SceneMaterialSrg will contain two different layouts separated via the macro USE_BINDLESS_SRG (as shown above); the runtime C++ code also uses the same macro and handles binding for both layouts, but shaders that use the SceneMaterialSrg can assume one implementation and pick only that one, to help lower the maintenance burden. The disadvantage is that on PC you are paying the cost (perf + mem) of both layouts.

Eventually I would like to see the engine transition to bindless support globally, but that is not possible in one go and will have to go through a transition where we support two implementations and then eventually drop one over time. Once we drop the mobile min spec to, let's say, iPhone 11 and equivalent, we could in theory switch to bindless globally, but that may not happen for another few years at the very least.
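The shared-macro idea can be illustrated on the C++ side like this (a sketch only; the binding type and function are invented for illustration): the same USE_BINDLESS_SRG define selects which of the two layouts the runtime populates, mirroring the shader's choice:

```cpp
#include <cstdint>
#include <vector>

// Assumed to match the shader-side macro; here hardcoded for illustration.
#define USE_BINDLESS_SRG 1

// Hypothetical per-instance binding record.
struct MaterialBinding { uint32_t bufferId; };

uint32_t ResolveMaterialBuffer(
    const std::vector<MaterialBinding>& perInstanceBuffers,
    uint32_t materialInstanceId)
{
#if USE_BINDLESS_SRG
    // Bindless path: the instance id indexes a flat table whose entries are
    // bindless read indices; the shader fetches the buffer via Bindless_Srg.
    return perInstanceBuffers[materialInstanceId].bufferId;
#else
    // Non-bindless path: the instance id directly selects one of the
    // explicitly bound StructuredBuffers in the SceneMaterialSrg array.
    return materialInstanceId;
#endif
}
```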
