Microsoft has released a preview of the next step in DirectX Raytracing to its Windows Insider Program, meaning developers can start to play with new performance-enhancing features to deliver a better ray traced experience. By the time Cyberpunk 2077 rolls around, the DirectX 12 API might actually have reached the stage where we can get great ray tracing performance outside of a $1,200 RTX 2080 Ti…
The company has so far put together a dev preview of the new DXR Tier 1.1 features, and let’s just say they require a little coding/developer knowledge to parse. From our limited understanding, though, the ExecuteIndirect feature looks the most promising, and the one that could deliver the biggest performance improvements.
ExecuteIndirect for Raytracing is described as enabling “adaptive algorithms where the number of rays is decided on the GPU execution timeline.” That’s pretty sparse info, but it looks like it addresses one of the biggest concerns regarding ray tracing: the sheer number of rays bouncing off objects in a scene. It also looks like it’s taking a leaf out of the path tracing book.
Our Jacob recently went to check out Minecraft RTX, and fell in love with the incredible visuals. And he makes a good case for it being a far superior showcase for the visual power of ray tracing than something like Cyberpunk. Like Quake II RTX before it, Minecraft RTX uses a more lightweight form of ray tracing called path tracing.
It can be a little noisier than proper ray tracing, but reduces the computational load because it doesn’t multiply rays as they disperse from every bounce. As our Jacob says, with ray tracing “one ray can become 10, 100, 1,000, and so on and so forth until your GPU is just a puddle of wet sand.” Instead, path tracing uses a random sampling algorithm.
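The difference in cost is easy to see in a toy sketch. This is not renderer code, just back-of-the-envelope arithmetic (function names are our own): if every bounce spawns 10 new rays, the count explodes exponentially, while a path tracer that follows one randomly sampled direction per bounce pays a cost that only grows linearly with bounce depth.

```python
def branching_ray_count(bounces, rays_per_bounce=10):
    """Rays needed when every hit spawns rays_per_bounce new rays
    (the 'one ray becomes 10, 100, 1,000' scenario)."""
    return rays_per_bounce ** bounces

def path_traced_ray_count(bounces, samples_per_pixel=1):
    """A path tracer follows one randomly sampled direction per bounce,
    so cost grows linearly with bounce depth, not exponentially."""
    return samples_per_pixel * bounces

print(branching_ray_count(3))    # 1000 rays after three bounces
print(path_traced_ray_count(3))  # 3 rays for a single sampled path
```

The trade-off is exactly the noise mentioned above: one random path per pixel is a rough estimate of the lighting, which is why path-traced games lean so heavily on denoising.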
What ExecuteIndirect seems to be doing is introducing a similar algorithmic approach, giving developers the tools to reduce the number of calculated rays while still achieving a great-looking scene.
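Here’s a hedged guess at what “the number of rays is decided on the GPU execution timeline” could look like in practice, as a CPU toy (the function and its parameters are ours, not part of the DXR API): a cheap first pass estimates how noisy each pixel is, and a per-pixel ray budget is derived from that. On real hardware, a compute pass could write such counts into an argument buffer that ExecuteIndirect consumes, with no CPU round trip.

```python
def adaptive_ray_budget(noise_estimates, base_rays=1, max_rays=8):
    """Toy adaptive scheme: noisier pixels get more rays.
    noise_estimates holds per-pixel values in [0, 1]."""
    budgets = []
    for noise in noise_estimates:
        extra = round(noise * (max_rays - base_rays))
        budgets.append(base_rays + extra)
    return budgets

# A smooth wall (low noise) vs. a glossy, high-variance surface:
print(adaptive_ray_budget([0.0, 0.2, 1.0]))  # [1, 2, 8]
```

The point is that the ray count becomes data computed on the GPU mid-frame rather than a number the CPU had to commit to up front.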
The DirectX 12 blog post (via Tom’s Hardware) also details two other features coming to DXR, while saying that Microsoft continues “to work with both GPU vendors and game developers to better expose hardware capabilities and to better address adoption pain points.”
Hopefully that means it’s working with AMD and not just Nvidia, because the new Xbox is going to need some DirectX Raytracing help… Anyway, these are the other two new features:
- Support for adding extra shaders to an existing Raytracing PSO, which greatly increases efficiency of dynamic PSO additions.
- Introduction of Inline Raytracing, which provides more direct control of the ray traversal algorithm and shader scheduling, a less complex alternative when the full shader-based raytracing system is overkill, and more flexibility since RayQuery can be called from every shader stage. It also opens new DXR use cases, especially in compute: culling, physics, occlusion queries, and so on.
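The occlusion-query use case gives a feel for the inline call pattern. In HLSL this would be a RayQuery loop; below is a CPU toy in Python (everything here is our own illustrative code, not DXR API) using a standard slab-test ray/box intersection. The shape of it is what matters: the shader-side code calls traversal directly, gets a yes/no answer back in-line, and branches on it, with no separate hit or miss shaders to schedule.

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab-test ray/AABB intersection -- the kind of traversal step an
    inline ray query performs. Returns True if the ray hits the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray is parallel to this slab: miss if origin lies outside it.
            if o < lo or o > hi:
                return False
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:
                return False
    return True

def occluded(origin, direction, boxes):
    """Inline-style query: calling code gets the answer back directly
    and can branch on it, with no separately scheduled shaders."""
    return any(ray_hits_aabb(origin, direction, lo, hi) for lo, hi in boxes)

# Ray along +x from the origin: hits a box spanning x in [2, 3],
# misses a box sitting off to the side at y in [5, 6].
print(occluded((0, 0, 0), (1, 0, 0), [((2, -1, -1), (3, 1, 1))]))   # True
print(occluded((0, 0, 0), (1, 0, 0), [((-1, 5, -1), (1, 6, 1))]))   # False
```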
DirectX 12 has also introduced the DirectX Mesh Shader. Nvidia has been talking about mesh shaders since it introduced the Turing architecture last year, promising that it would allow for visually complex and detailed scenes to be rendered without an undue toll on the graphics hardware.
It would also reduce the reliance on inefficient fixed-function hardware, such as geometry shaders and tessellation shaders, and allow for greater flexibility.
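The core idea behind mesh shaders is that geometry is fed to the GPU in small fixed-size chunks ("meshlets") that threadgroups can cull and process in parallel, instead of a serial vertex stream. A minimal CPU sketch of carving a mesh into meshlets (the function is ours; 126 triangles is a commonly cited per-meshlet limit from Nvidia's Turing material, and the exact number is tunable):

```python
def build_meshlets(triangle_indices, max_tris=126):
    """Toy meshlet builder: split a mesh's triangle list into chunks of
    at most max_tris triangles. On real hardware, each meshlet would be
    handed to a mesh shader threadgroup, which can be culled or LOD-swapped
    independently of the rest of the mesh."""
    return [triangle_indices[i:i + max_tris]
            for i in range(0, len(triangle_indices), max_tris)]

tris = list(range(300))            # 300 triangles in a mesh
meshlets = build_meshlets(tris)
print([len(m) for m in meshlets])  # [126, 126, 48]
```

Real meshlet builders also try to keep each chunk's vertices spatially coherent so per-meshlet bounding volumes stay tight; this sketch only shows the chunking itself.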
Until now Turing’s mesh shaders were only accessible through extensions in OpenGL and Vulkan, as well as Nvidia’s own NVAPI in DirectX 12. Now Microsoft has created its own Mesh Shader API, simplifying the pipeline and potentially boosting performance. Maybe Nvidia GPUs will actually start functioning on DX12 now…
“The flexibility and high performance of the mesh shader programming model,” says Jianye on the Microsoft Blog, “will allow game developers to increase geometric detail, rendering more complex scenes without sacrificing framerate.”