DirectX Developer Day 2020

It supports all of the DX12U features.
That doesn't mean DX12U exposes all of the XSX's features.

Although personally I don't expect to hear that it has anything we've not already heard about; possibly tweaks to the implementation, if that.
 
I posted the relevant quote a few posts up from this link. Second paragraph.

https://devblogs.microsoft.com/directx/announcing-directx-12-ultimate/

PC hardware features will of course move past Xbox and DX12 Ultimate. No surprise there. I am taking Microsoft's guarantee literally: that the upcoming Xbox and DX12 Ultimate will be feature-equivalent for the entire generation. I don't see how else you could interpret it.

Thanks, though I interpret this as a common baseline of Turing and XSX, not a confirmation that the XSX doesn't have more features. (And who knows - maybe AMD partially emulates a certain Turing hardware feature.)

What still gives me hope for more is this:
[attached screenshot: upload_2020-4-12_23-23-43.png]
'Inside' traversal can only mean traversal shaders. It can't be confused with inline tracing, and it fits AMD's TMU patent.
Unfortunately that's all; they did not comment further.
 
Thanks, though I interpret this as a common baseline of Turing and XSX, not a confirmation that the XSX doesn't have more features. (And who knows - maybe AMD partially emulates a certain Turing hardware feature.)

I don’t know how anyone can interpret “all” to mean “baseline”. Microsoft even put the word ALL in caps. They seem to actually mean all, and any other interpretation is just wishful thinking.

What still gives me hope for more is this:
[attached screenshot]
'Inside' traversal can only mean traversal shaders. It can't be confused with inline tracing, and it fits AMD's TMU patent.
Unfortunately that's all; they did not comment further.

Is there a practical difference between a traversal shader and an any-hit shader using inline tracing? What can you not achieve using the latter?

Edit: I suppose a traversal shader would also exercise control over TLAS node traversal. Performance would likely be disastrous, though.
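
For anyone following along, the distinction is easiest to see in a software model of traversal. The sketch below is purely illustrative plain C++, not the DXR API, and all the names (Ray, Node, onEnter, anyHit) are hypothetical:

```cpp
// Illustrative software model of BVH traversal in plain C++ (NOT the
// DXR API; Ray, Node, onEnter etc. are hypothetical names). It shows
// where the two hooks differ: an any-hit callback can only accept or
// reject an intersection traversal has already found, while a
// traversal-shader-style hook runs when a subtree is entered and can
// redirect or cull the ray with no intersection having occurred.
#include <cstdint>
#include <functional>
#include <stack>

struct Ray  { float origin[3]; float dir[3]; float tMax; };
struct Hit  { float t; std::uint32_t primitive; };
struct Node { bool leaf; std::uint32_t child[2]; std::uint32_t primitive; };

using AnyHitFn    = std::function<bool(const Hit&)>;                          // accept/reject only
using TraversalFn = std::function<std::uint32_t(const Ray&, std::uint32_t)>;  // may redirect

static const std::uint32_t kCull = 0xFFFFFFFFu; // hook returns this to skip a subtree

bool traverse(const Node* nodes, Ray& ray, Hit& best,
              const AnyHitFn& anyHit, const TraversalFn& onEnter)
{
    bool found = false;
    std::stack<std::uint32_t> todo;
    todo.push(0); // root node
    while (!todo.empty()) {
        std::uint32_t idx = todo.top(); todo.pop();
        idx = onEnter(ray, idx); // "traversal shader": may swap in another
        if (idx == kCull)        // subtree (e.g. a lower LOD) or cull, even
            continue;            // though no hit exists yet
        const Node& n = nodes[idx];
        if (!n.leaf) {
            todo.push(n.child[0]);
            todo.push(n.child[1]);
            continue;
        }
        Hit h{ 0.0f, n.primitive };        // real code would intersect here
        if (h.t < ray.tMax && anyHit(h)) { // "any-hit": filter found hits only
            best = h;
            ray.tMax = h.t; // shrink the interval so nearer hits win
            found = true;
        }
    }
    return found;
}
```

In DXR 1.1 terms, inline tracing with RayQuery gives the shader the loop and the candidate accept/reject, i.e. the anyHit role above; nothing in current DXR corresponds to onEnter, which is what a traversal shader would add.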
 
I don’t know how anyone can interpret “all” to mean “baseline”. Microsoft even put the word ALL in caps. They seem to actually mean all, and any other interpretation is just wishful thinking.
I'm personally referring to this

When gamers purchase PC graphics hardware with the DX12 Ultimate logo or an Xbox Series X, they can do so with the confidence that their hardware is guaranteed to support ALL next generation graphics hardware features, including DirectX Raytracing, Variable Rate Shading, Mesh Shaders and Sampler Feedback. This mark of quality ensures stellar “future-proof” feature support for next generation games!
GPU"s and xsx will support all of the DX12U feature set.
That does not mean a gpu or xsx doesn't have features that aren't in DX12U.
What part are you referring to?

Or do you interpret it that a gpu can't have other features also if you're referring to same section?
 
I'm personally referring to this

Yep, same quote.

GPUs and the XSX will support all of the DX12U feature set

That's not what Microsoft said. What they said was...

"When gamers purchase PC graphics hardware with the DX12 Ultimate logo or an Xbox Series X, they can do so with the confidence that their hardware is guaranteed to support ALL next generation graphics hardware features".

You can believe Xbox Series X hardware supports more features than DX12U exposes, but that belief is incompatible with the quote above from Microsoft. At this point we're debating how to interpret English words. I interpret "all next generation graphics hardware features" to mean exactly that.

The other view offered in this thread is that the next generation Xbox somehow supports more features than "all next generation graphics hardware features", which is clearly a paradox.
 
No, it's not incompatible. In fact it's pretty clear.
Next generation features that are in DX12U (which the quote also highlights) are guaranteed to be supported.
DX12U isn't the maximum feature set; in fact that wouldn't make any sense.
You've still not answered: if a graphics card is labeled DX12U, does that mean, going by your interpretation, that it can't have additional features? That quote is specifically about the XSX and graphics cards.

And as I said, I personally don't expect it to have more features, but that quote in no way implies it doesn't or can't.
I honestly don't see how you come to your conclusion, no matter how earnest you're being. Probably the same for you.
 
"...they can do so with the confidence that their hardware is guaranteed to support ALL next generation graphics hardware features".
Of DX12 Ultimate.
If a card comes out with hardware support for a feature not in DX12U, that would prove your interpretation wrong.
 
Couldn't anything be done in shaders anyway? It would be super slow but still supportable. Console games have used methods not supported by PC APIs this gen, and devs just used slow workarounds to get "close enough" to the consoles' output.
 
Is there a practical difference between a traversal shader and an any-hit shader using inline tracing? What can you not achieve using the latter?
With programmable traversal you can alter the ray without having a hit.
The only important application I can imagine is stochastic LOD: at any time you can randomly decide to teleport a ray to a lower-detailed version of the scene, or to remove objects like trees / rocks.
This way it's possible to have continuous LOD using only discrete meshes, and RT becomes scalable to large, complex scenes within a constant runtime budget.

There is no good solution for LOD with current DXR.
One option is to replace geometry globally, which will cause popping artifacts. But maybe this can be hidden well enough by the temporal smearing that denoising causes anyway.
Another would be to store multiple LODs in the BVH and select between them using any-hit shaders, but I guess that's too slow to make sense in practice.

Like you, I'm worried programmable traversal might be too costly, and stochastic LOD also increases data divergence across otherwise coherent rays.
On the other hand, stochastic LOD is a very simple solution to a long-standing open problem. Implementing it in hardware could be a better option than having traversal shaders, I don't know.
(MS has listed traversal shaders as a potential upcoming feature for DXR: https://microsoft.github.io/DirectX-Specs/d3d/Raytracing.html)
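
To make the stochastic LOD idea concrete, here is a minimal sketch of the per-ray level selection, assuming a log-distance LOD metric; the names (continuousLod, pickLod) and the metric itself are assumptions for illustration, not anything from the DXR spec:

```cpp
// Minimal sketch of stochastic LOD selection (an assumed scheme, not an
// API): a continuous LOD value is rounded to one of the discrete meshes
// per ray, with the fractional part used as a probability, so the
// average over many rays transitions smoothly between discrete levels.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>

// Continuous LOD from distance along the ray; lodScale is a tuning knob.
float continuousLod(float distance, float lodScale)
{
    return std::max(0.0f, std::log2(distance * lodScale));
}

// Stochastic rounding to a discrete level: lod = 2.3 picks level 2 with
// probability 0.7 and level 3 with probability 0.3 - no hard switch,
// so no popping, just noise the denoiser already has to deal with.
std::uint32_t pickLod(float lod, std::uint32_t numLevels, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    float base;
    const float frac = std::modf(lod, &base);
    const std::uint32_t level =
        static_cast<std::uint32_t>(base) + (u(rng) < frac ? 1u : 0u);
    return std::min(level, numLevels - 1u);
}
```

A traversal shader could run something like pickLod once per instance and redirect the ray to the acceleration structure of the chosen level, even though no hit has occurred.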
 
Of DX12 Ultimate.
If a card comes out with hardware support for a feature not in DX12U, that would prove your interpretation wrong.

You are adding words that Microsoft never used. They did not say all next generation hardware features “of DX12 Ultimate”.

You are also misrepresenting my position. I did not say that a PC GPU cannot have more features than DX12 Ultimate. The Microsoft quote also said nothing about PC GPUs being limited to only DX12U features. Please read the quote again.
 
Next generation features that are in DX12U (which the quote also highlights) are guaranteed to be supported.

No, the quote doesn’t say all features in DX12U. It says all next generation hardware features. You’re literally changing what Microsoft said to fit your view. I’m taking their words at their actual meaning.

DX12U isn't the maximum feature set; in fact that wouldn't make any sense.

What I said is that the max feature set for the Xbox Series X (i.e. next generation hardware) is equivalent to DX12U according to Microsoft.

You've still not answered: if a graphics card is labeled DX12U, does that mean, going by your interpretation, that it can't have additional features? That quote is specifically about the XSX and graphics cards.

I never said PC GPUs are limited to DX12U, and I addressed this earlier in the thread. Of course PC GPUs will quickly move beyond DX12U and Xbox, which are both snapshots in time.
 
With programmable traversal you can alter the ray without having a hit.
The only important application I can imagine is stochastic LOD: at any time you can randomly decide to teleport a ray to a lower-detailed version of the scene, or to remove objects like trees / rocks.
This way it's possible to have continuous LOD using only discrete meshes, and RT becomes scalable to large, complex scenes within a constant runtime budget.

There is no good solution for LOD with current DXR.
One option is to replace geometry globally, which will cause popping artifacts. But maybe this can be hidden well enough by the temporal smearing that denoising causes anyway.
Another would be to store multiple LODs in the BVH and select between them using any-hit shaders, but I guess that's too slow to make sense in practice.

Like you, I'm worried programmable traversal might be too costly, and stochastic LOD also increases data divergence across otherwise coherent rays.
On the other hand, stochastic LOD is a very simple solution to a long-standing open problem. Implementing it in hardware could be a better option than having traversal shaders, I don't know.
(MS has listed traversal shaders as a potential upcoming feature for DXR: https://microsoft.github.io/DirectX-Specs/d3d/Raytracing.html)

Yeah, reading the Intel paper again, it seems the leading BVH LOD techniques have considerable overhead and only start paying dividends for very complex scenes. I suspect we will have to make do with larger on-chip caches and moderate scene complexity for a while.

It’s strange that Nvidia isn’t talking more about geometry LOD being a problem. Maybe it’s because they don’t have a solution. The only thing I could find from them is a recommendation to not use the BVH for geometry LOD as “BVH handles complexity well by architecture”.

https://devblogs.nvidia.com/rtx-best-practices/
 
Yeah, reading the Intel paper again, it seems the leading BVH LOD techniques have considerable overhead and only start paying dividends for very complex scenes. I suspect we will have to make do with larger on-chip caches and moderate scene complexity for a while.
Yeah, all my talk about LOD here is pretty theoretical. Having a LOD solution adds a lot of constant cost to get going, and I don't know if this can be practical on current or next gen HW already.
I'm also not totally convinced by Intel's proposal. They solve the hard LOD switch by ensuring rays leaving a hit point keep using the same LOD per instance of geometry.
But this is not guaranteed to work if instances overlap, and it's not trivial to divide stuff into such instances at all. E.g. a large mesh that should cover multiple LODs, like terrain, can't be broken into instances without risking self-intersections at the transitions.

I'm more convinced by the idea of morphing the geometry and ensuring static BVH bounds cover the changing shape. That's more robust and hardware friendly, but it breaks a lot of established tech and workflows.
That's why I expected stochastic LOD and traversal shaders to be more welcome to the industry.
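
A minimal sketch of that "static bounds over morphing geometry" idea, as I read it (plain C++; conservativeBounds and the vertex layout are assumptions for illustration, not a published implementation): each triangle's AABB is taken as the union over all morph targets, so a BVH built once from these conservative boxes stays valid for any blend of the targets, at the cost of looser bounds and some wasted traversal.

```cpp
// Sketch: conservative per-triangle bounds covering every morph target,
// so the BVH never needs a refit while the geometry morphs underneath.
#include <algorithm>
#include <cstddef>

struct Aabb { float lo[3], hi[3]; };

// verts[t] points at the vertex positions of morph target t, laid out
// as x,y,z per vertex. v0..v2 index the three corners of one triangle.
Aabb conservativeBounds(const float* const* verts, std::size_t numTargets,
                        std::size_t v0, std::size_t v1, std::size_t v2)
{
    Aabb b = {{ 1e30f,  1e30f,  1e30f},
              {-1e30f, -1e30f, -1e30f}};
    const std::size_t tri[3] = { v0, v1, v2 };
    for (std::size_t t = 0; t < numTargets; ++t)   // every morph target
        for (std::size_t v : tri)                  // every triangle corner
            for (int a = 0; a < 3; ++a) {          // every axis
                const float p = verts[t][v * 3 + a];
                b.lo[a] = std::min(b.lo[a], p);
                b.hi[a] = std::max(b.hi[a], p);
            }
    return b;
}
```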
 
I think it all comes down to what Microsoft means by "next generation graphics hardware features", which is basically a term they've made up and thus could mean anything.

The two likely possibilities are:

1. A common baseline of features that are fully supported by both Turing and RDNA2, which Microsoft has codified as DX12U. In this case it's possible one or both architectures have additional features which aren't supported by the other and which Microsoft has chosen not to expose via DX12U. Whether the XSX will then be limited to what's in DX12U or not is another interesting question, but Microsoft's wording suggests that it would be.

2. Microsoft is setting the XSX up as the baseline for the "next generation graphics hardware featureset" and Turing just happens to meet or exceed all of those features (although not necessarily with the same speed / efficiency in all cases).

I'd say either possibility could be true, but in any case it's great news for anyone with a Turing or newer class GPU (and soon to be upgrade time for peasants like me rocking a Pascal, RDNA or older architecture).
 
I'd say that if XSX h/w supported something not exposed in DX12U, then MS would very likely use their tiers system to show this - i.e. if XSX h/w supported something in RT which Turing doesn't, it would get some RT Tier 1.2 or 2.0 as an option in DX12U, with Turing remaining on 1.0/1.1. The fact that there are no such options means that it's rather unlikely that XSX h/w supports something above the current DX12U spec.
 
I think it all comes down to what Microsoft means by "next generation graphics hardware features", which is basically a term they've made up and thus could mean anything.

The two likely possibilities are:

1. A common baseline of features that are fully supported by both Turing and RDNA2, which Microsoft has codified as DX12U. In this case it's possible one or both architectures have additional features which aren't supported by the other and which Microsoft has chosen not to expose via DX12U. Whether the XSX will then be limited to what's in DX12U or not is another interesting question, but Microsoft's wording suggests that it would be.

2. Microsoft is setting the XSX up as the baseline for the "next generation graphics hardware featureset" and Turing just happens to meet or exceed all of those features (although not necessarily with the same speed / efficiency in all cases).

I'd say either possibility could be true, but in any case it's great news for anyone with a Turing or newer class GPU (and soon to be upgrade time for peasants like me rocking a Pascal, RDNA or older architecture).

I think it’s option #2 simply because Microsoft can only speak confidently about their own Xbox hardware’s capabilities and not upcoming hardware from Sony/Nintendo/Intel/AMD/Nvidia.

They don’t have the authority to define “next generation” for the entire industry.
 
I think it is not so complicated.

Counterpoint:

[attached image: Venn diagram of "those who can" vs. "those who can't"]
 