The only thing needed for DXR support is basically some hardware instruction to make programming raytracing easier.
This is a meaningless statement.
Do you have something in mind as to what this instruction might do?
Texturing is also "just an instruction", but one that happens to kick off a shitload of memory fetches to dedicated caches, plus interpolation and filtering. One could totally implement texturing in a shader, but the overall performance would probably tank by a factor of 10.
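Just to put the amount of work in perspective, here's a rough sketch (plain C++ rather than real shader code, with the struct and names made up for illustration) of what a single bilinearly filtered fetch involves when done by hand: four memory reads plus the address math and lerps that the texture units otherwise handle for free, and that's before mipmapping, anisotropy or format decompression.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Toy RGBA8 texture; on a GPU this data sits in VRAM behind dedicated caches.
struct Texture {
    int width = 0, height = 0;
    std::vector<uint32_t> texels;                 // packed RGBA8
    uint32_t fetch(int x, int y) const {          // one memory read per call
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[size_t(y) * width + x];
    }
};

// One bilinear sample done "in software": four fetches plus three lerps per
// channel (only the red channel is shown). Hardware texture units also do
// mip selection, anisotropic filtering, format decompression and swizzling,
// none of which appears here.
float bilinearSampleRed(const Texture& tex, float u, float v) {
    float x = u * tex.width - 0.5f, y = v * tex.height - 0.5f;
    int x0 = int(std::floor(x)), y0 = int(std::floor(y));
    float fx = x - x0, fy = y - y0;

    auto red = [](uint32_t t) { return float(t & 0xFF) / 255.0f; };
    float r00 = red(tex.fetch(x0,     y0));
    float r10 = red(tex.fetch(x0 + 1, y0));
    float r01 = red(tex.fetch(x0,     y0 + 1));
    float r11 = red(tex.fetch(x0 + 1, y0 + 1));

    float top = r00 + (r10 - r00) * fx;
    float bot = r01 + (r11 - r01) * fx;
    return top + (bot - top) * fy;
}
```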
Ray tracing is not hard. Some people have it printed on a business card. Making it fast is something else entirely.
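To be clear about the "not hard" part: the geometric core is little more than a quadratic per ray-sphere test, something like this sketch (illustrative C++, not anyone's actual renderer):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Distance along the ray to the nearest hit, or -1 on a miss (origin outside
// the sphere, direction normalized). This is the easy part; doing billions of
// these per second against millions of triangles behind a BVH is the hard part.
float intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = origin - center;
    float b = oc.dot(dir);
    float c = oc.dot(oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;               // no real roots: ray misses
    float t = -b - std::sqrt(disc);              // nearer of the two roots
    return t >= 0.0f ? t : -1.0f;
}
```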
I doubt that we'll see a detailed exposé of how it's done any time soon, but that "some instruction" may very well kick off a whole bunch of machinery with a complexity similar to that of texturing.
NVIDIA has vastly overstated its hardware "capabilities", ...
What have they overstated? Please enlighten us!
... you can do raytraced reflections today perfectly well; the only thing RTX really has is dedicated raytracing hardware instead of using compute units to do it.
Exactly. There’s nothing new about ray tracing. And if those units finally make it possible to do things that are worth doing(!) in real time, then that seems like a big deal to me.
On a 970 at 1080p, his implementation takes 2.8ms to cast 1 direct ray per pixel and probably one secondary ray. He achieves something like 0.8G rays/s.
And with that, you get that ugly picture with hard shadows. The kind of picture that, during the launch presentation, was used to demo how things should not look.
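For scale, that quoted throughput is easy to sanity-check (assuming a 1920x1080 frame and counting only primary rays):

```cpp
#include <cstdio>

int main() {
    // Quoted numbers: 1080p frame, one primary ray per pixel, 2.8 ms.
    const double rays    = 1920.0 * 1080.0;               // ~2.07 million rays
    const double seconds = 2.8e-3;
    std::printf("%.2f Grays/s\n", rays / seconds / 1e9);  // prints ~0.74
    // That lands right around the quoted ~0.8 G rays/s; an extra secondary
    // ray per pixel would roughly double it, nothing more.
    return 0;
}
```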
the need for such hardware is questionable. Especially when RTX costs so damned much for such relatively little performance when using raytracing ...
A factor of 5x to 10x is not relatively little performance in my book.
The cost of the GPU itself is a marketing discussion.
In fact, with Vega's fp16 support AMD could deploy the exact same sort of thing efficiently to Vega cards, as DNNs can run faster on lower-precision hardware.
That makes it the perfect match for tensor cores, where both Volta and Turing have a huge advantage over Vega. Especially Turing, because that one also has 8-bit tensor core modes.
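To illustrate why precision matters for the denoising pass: the core of DNN inference is dot products, and those can run on 8-bit values with a wide accumulator, which is the kind of operation the lower-precision tensor core paths execute natively. A toy CPU sketch of that idea (not tensor-core code; function names are mine):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Toy int8 quantized dot product, the core operation of DNN inference.
// Values are stored at 8 bits and accumulated at 32 bits, so four times as
// many of them fit per register and cache line as with fp32. (CPU sketch of
// the idea only; assumes a.size() == b.size().)
int32_t dotInt8(const std::vector<int8_t>& a, const std::vector<int8_t>& b) {
    int32_t acc = 0;
    for (size_t i = 0; i < a.size(); ++i)
        acc += int32_t(a[i]) * int32_t(b[i]);
    return acc;
}

// Quantize a float to int8 with a per-tensor scale, as done when a trained
// network is prepared for low-precision inference.
int8_t quantize(float value, float scale) {
    float q = std::round(value / scale);
    if (q >  127.0f) q =  127.0f;
    if (q < -128.0f) q = -128.0f;
    return int8_t(q);
}
```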
Vega's previously proposed wavefront splitting could provide a similar speedup without the need to add hardware (and cost).
What?
Do you honestly think that being able to recover the occasional inefficiencies of the SIMD pipeline can compensate for two fully dedicated hardware accelerators?
The end result could be Navi showing up with all the same shiny-effects support as RTX, while coming out significantly ahead on price for performance.
I think that Nvidia will be absolutely thrilled if AMD does nothing more than the trivial improvements that you’ve sketched.