I posted the relevant quote a few posts up from this link. Second paragraph.
https://devblogs.microsoft.com/directx/announcing-directx-12-ultimate/
PC hardware features will of course move past Xbox and DX12 Ultimate. No surprise there. I am taking Microsoft's guarantee literally: the upcoming Xbox and DX12 Ultimate will be feature-equivalent for the entire generation. I don't see how else you could interpret it.
Thanks, though I interpret this as a common baseline of Turing and XBSX, not as confirmation that the Series X does not have more features. (And who knows, maybe AMD partially emulates a certain Turing HW feature.)
What still gives me hope for more is this:
[Attachment: a quote mentioning 'inside' traversal]
'Inside' traversal can only mean traversal shaders. It can't be confused with inline tracing, and it fits AMD's TMU patent.
Unfortunately that's all. They did not further comment.
I'm personally referring to this: "When gamers purchase PC graphics hardware with the DX12 Ultimate logo or an Xbox Series X, they can do so with the confidence that their hardware is guaranteed to support ALL next generation graphics hardware features, including DirectX Raytracing, Variable Rate Shading, Mesh Shaders and Sampler Feedback. This mark of quality ensures stellar “future-proof” feature support for next generation games!" GPU's and the XSX will support all of the DX12U feature set.

I don't know how anyone can interpret "all" to mean "baseline". Microsoft even put the word ALL in caps. They seem to actually mean all, and any other interpretation is just wishful thinking.
Of DX12 Ultimate: "they can do so with the confidence that their hardware is guaranteed to support ALL next generation graphics hardware features".
Is there a practical difference between a traversal shader and an any-hit shader using inline tracing? What can you not achieve using the latter?
If a card comes out with hardware support for a feature not in DX12U, that would prove your interpretation wrong.
Next generation features that are in DX12U (which it also highlights) are guaranteed to be supported.
DX12U isn't the max feature set; in fact, that wouldn't make any sense.
You've still not answered: going by your interpretation, if a graphics card is labeled DX12U, does that mean it can't have additional features? That quote is specifically about the XSX and graphics cards.
With programmable traversal you can alter the ray without having a hit.
The only important application I can imagine is stochastic LOD: at any time you can randomly decide to teleport a ray to a lower-detailed version of the scene, or remove objects like trees and rocks.
This way it's possible to have continuous LOD using only discrete meshes, and RT becomes scalable to large, complex scenes within a constant runtime budget.
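A rough sketch of the stochastic selection described above (the function, its linear distance-to-LOD curve, and all parameter names are my own illustration, not any shipping API):

```python
import random

def stochastic_lod(distance, lod0_distance=10.0, num_lods=4, rng=random.random):
    """Stochastically pick a discrete LOD so the *expected* detail falls off
    continuously with distance, although each ray sees only one discrete mesh."""
    # Fractional target LOD; the linear distance curve is an arbitrary choice.
    target = max(0.0, min(num_lods - 1.0, distance / lod0_distance))
    lo = int(target)
    frac = target - lo
    # "Teleport" the ray to the next coarser mesh with probability frac.
    return min(lo + 1, num_lods - 1) if rng() < frac else lo
```

Averaged over many rays, the expected level matches the fractional target, which is what makes the discrete per-ray choice look continuous on screen.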
There is no good solution to get LOD with current DXR.
One option is to replace geometry globally, which causes popping artifacts. But maybe this can be hidden well enough by the temporal smearing that denoising causes anyway.
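The "hide the pop in temporal smearing" idea can be sketched as an exponential history blend, the basic accumulation step temporal denoisers rely on (the function names and the 10% blend factor here are my own assumptions):

```python
def temporal_accumulate(history, current, alpha=0.1):
    """One step of exponential history blending, the 'temporal smearing'
    a denoiser's accumulation pass performs: lerp(history, current, alpha)."""
    return history + alpha * (current - history)

def frames_to_fade(alpha=0.1, threshold=0.1):
    """Frames until a unit-sized pop (old value 0.0, new value 1.0) has
    converged to within `threshold` of the new value."""
    value, frames = 0.0, 0
    while abs(1.0 - value) > threshold:
        value = temporal_accumulate(value, 1.0, alpha)
        frames += 1
    return frames
```

With a typical 10% blend, a sudden geometry swap takes roughly 22 frames to settle within 10% of the new value, which gives a feel for how much of a pop such smearing can actually mask.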
Another would be to store multiple LODs in the BVH and filter them using any-hit shaders, but I guess that's too slow to make sense in practice.
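A toy illustration of that second option: every LOD level lives in one acceleration structure, and hits on the wrong level are discarded the way an any-hit shader calling DXR's IgnoreHit() would. The brute-force loop stands in for real BVH traversal, and all names are made up:

```python
def closest_hit(ray_lod, primitives):
    """Find the closest hit for a ray that has committed to one LOD level.
    Each primitive is a tuple (t, lod, name); primitives of other LOD
    levels are ignored, emulating an any-hit filter."""
    best = None
    for t, lod, name in primitives:
        if lod != ray_lod:        # the any-hit "IgnoreHit()" path
            continue
        if best is None or t < best[0]:
            best = (t, lod, name)
    return best
```

The cost concern is visible even in this toy: traversal still reaches every wrong-LOD primitive and pays for the intersection test only to throw the result away.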
Like you, I'm worried programmable traversal might be too costly, and stochastic LOD also increases data divergence across otherwise coherent rays.
On the other hand, stochastic LOD is a very simple solution to a long-standing open problem. Implementing it in hardware could be a better option than having traversal shaders, I don't know.
(MS has listed traversal shaders as a potential upcoming feature for DXR: https://microsoft.github.io/DirectX-Specs/d3d/Raytracing.html)
Yeah, all my talk about LOD here is pretty theoretical. Having a LOD solution adds a lot of constant cost to get going, and I don't know if it can be practical on current / next-gen HW already.

Yeah, reading the Intel paper again, it seems the leading BVH LOD techniques have considerable overhead and only start paying dividends for very complex scenes. I suspect we will have to make do with larger on-chip caches and moderate scene complexity for a while.
I think it is not so complicated.
DX12U == XboxSX
TURING/AMPERE >> DX12U
RDNA2 is when I have a doubt. It could be RDNA2 == DX12U or RDNA2 >> DX12U
I think it all comes down to what Microsoft means by "next generation graphics hardware features" which is basically a term they've made up and thus could mean anything.
The two likely possibilities are:
1. A common baseline of features that are fully supported by both Turing and RDNA2 which Microsoft has interpreted into DX12U. In this case it's possible one or both architectures have additional features which aren't supported by the other and which Microsoft has chosen not to expose via DX12U. Whether XSX will then be limited to what's in DX12U or not is another interesting question but Microsoft's wording suggests that it would.
2. Microsoft is setting the XSX up as the baseline for the "next generation graphics hardware featureset" and Turing just happens to meet or exceed all of those features (although not necessarily with the same speed / efficiency in all cases).
I'd say either possibility could be true, but in any case it's great news for anyone with a Turing or newer class GPU (and soon it'll be upgrade time for peasants like me rocking a Pascal, RDNA or older architecture).