Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Microsoft's SDK is "experimental" and the API isn't locked down, so it may (and in all likelihood will) change. I remember Nvidia getting aggressive when designing the Geforce 5 series and focussing on great 16-bit and 24-bit shader performance. 32-bit performance? Not so much. Guess what devs really wanted? 32-bit shader performance and that put Nvidia behind AMD for that graphics generation.
Actually, they went with fast 16-bit and slow 32-bit and didn't support 24-bit at all; everything 24-bit had to run at 32-bit. AMD, on the other hand, supported 24-bit but not 16- or 32-bit, and DX specified 24-bit as the default precision for all pixel shaders (vertex shaders were always 32-bit on everything).
 
Apparently Shadow of the Tomb Raider doesn't have any raytracing related options yet. Video settings only mention screen-space reflections. Maybe it'll come later with a patch.

I remember Nvidia getting aggressive when designing the Geforce 5 series and focussing on great 16-bit and 24-bit shader performance. 32-bit performance? Not so much. Guess what devs really wanted? 32-bit shader performance and that put Nvidia behind AMD for that graphics generation.
IIRC nvidia's FX went mostly for maximum performance using 16-bit pixel shader precision (which was widely used in DX8.1), but they could also use 32-bit precision, though with significantly lower performance.
DX9 ended up using 24-bit pixel shaders, so on the Geforce FX all of those had to be promoted to 32-bit (which again was very slow). R300 was made with 24-bit ALUs, so it ran pixel shaders at native speed.

After nvidia sorted out the initial driver issues, the FX series actually performed very well in (the very few) DX8 titles (Comanche 4, Morrowind) and in OpenGL titles that apparently made heavy use of FP16 fragment shaders, like Doom 3.
nVidia's biggest problem was probably Half-Life 2's massive success, which brought the spotlight onto DX9 and NV3x's terrible performance with all those 24-bit Pixel Shader 2.0 workloads.
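As a rough aside on what those precision differences mean in practice: FP16 has a 10-bit mantissa, R300's FP24 a 16-bit mantissa, and FP32 a 23-bit mantissa, so long chains of dependent shader math drift at different rates. The sketch below is only an illustration under those assumptions (it simply truncates mantissa bits and ignores exponent range entirely); it is not how any of these GPUs actually evaluated shaders.

```cpp
// precision_demo.cpp - rough illustration of shader precision differences.
// Truncates a float's mantissa to emulate FP16 (10 bits), FP24 (16 bits)
// and FP32 (23 bits). Exponent range differences are ignored for simplicity,
// so this only shows the loss of mantissa precision, not overflow/underflow.
#include <cstdint>
#include <cstdio>
#include <cstring>

float truncate_mantissa(float value, int mantissa_bits) {
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);               // reinterpret as integer
    uint32_t mask = ~((1u << (23 - mantissa_bits)) - 1u);  // keep top N mantissa bits
    bits &= mask;
    std::memcpy(&value, &bits, sizeof value);
    return value;
}

int main() {
    // A long chain of dependent operations makes precision loss visible,
    // similar to the multi-instruction pixel shaders of the DX9 era.
    float fp16 = 1.0f, fp24 = 1.0f, fp32 = 1.0f;
    for (int i = 0; i < 50; ++i) {
        fp16 = truncate_mantissa(fp16 * 1.02f + 0.001f, 10);
        fp24 = truncate_mantissa(fp24 * 1.02f + 0.001f, 16);
        fp32 = fp32 * 1.02f + 0.001f;                       // full single precision
    }
    std::printf("FP16-ish: %.6f\nFP24-ish: %.6f\nFP32:     %.6f\n", fp16, fp24, fp32);
    return 0;
}
```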

Nothing released so far leads me to believe that developers are coding to some proprietary API.

I keep reading that these games are "implementing a combination of DXR and RTX"...
 
Apparently Shadow of the Tomb Raider doesn't have any raytracing related options yet. Video settings only mention screen-space reflections. Maybe it'll come later with a patch.

The way Microsoft manages Windows 10 updates has made everything less clear, but if the DirectX Raytracing SDK is "experimental", what is the status of the API in deployed Windows 10 code? Developers using experimental SDKs is one thing, but if the APIs are already shipping in code in users' hands, the "experimental" label is a null signal.
 
Again, it bears reiterating what exactly 'ray tracing hardware' is and whether there are suitable substitutes. Could a CPU be enhanced to provide that functionality? Or can the shaders be tweaked? Will there be a development of a Memory Structure Processor that can be used in all sorts of structured memory access functions? Or some level of super-smart cache?

AFAIK we haven't got a single piece of information about how RT hardware acceleration is implemented, either from nVidia or ImgTech, so we don't really know what the best way forwards is. The very notion of a 'ray tracing accelerator' may already be conceptually obsolete if the future, as I suddenly suspect having just thought of it, lies in an intelligent memory access module somewhere.
From what I understand the point is for the hardware to stay flexible. But little is known outside of that BVH customization. And I suppose some sort of tensor core logic for noise reduction, which may not necessarily be required.
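For anyone wondering why "smarter memory access" keeps coming up here: the core workload DXR accelerates is BVH traversal, which is a chain of small, data-dependent node fetches. Below is a toy CPU-side sketch of that access pattern; it is a generic textbook traversal with made-up data, not how nVidia's RT cores or anyone else's hardware actually works.

```cpp
// bvh_sketch.cpp - minimal illustration of why BVH traversal is a memory
// problem: each step loads a node, does a cheap AABB test, then follows a
// data-dependent pointer to the next node. Toy CPU version only.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct Node {
    Vec3 bmin, bmax;      // axis-aligned bounding box
    int  left, right;     // child indices, or -1 for a leaf
    int  primitive;       // primitive index if leaf, else -1
};

struct Ray { Vec3 origin, inv_dir; };   // inverse direction precomputed

// Standard slab test against an AABB.
static bool hit_aabb(const Ray& r, const Vec3& bmin, const Vec3& bmax) {
    float t0 = 0.0f, t1 = 1e30f;
    const float o[3]  = { r.origin.x,  r.origin.y,  r.origin.z  };
    const float id[3] = { r.inv_dir.x, r.inv_dir.y, r.inv_dir.z };
    const float lo[3] = { bmin.x, bmin.y, bmin.z };
    const float hi[3] = { bmax.x, bmax.y, bmax.z };
    for (int a = 0; a < 3; ++a) {
        float ta = (lo[a] - o[a]) * id[a];
        float tb = (hi[a] - o[a]) * id[a];
        if (ta > tb) { float tmp = ta; ta = tb; tb = tmp; }
        if (ta > t0) t0 = ta;
        if (tb < t1) t1 = tb;
        if (t0 > t1) return false;
    }
    return true;
}

// Iterative traversal: the next memory access depends on the previous test,
// which is exactly the irregular, latency-bound pattern wide SIMD shaders
// (and caches tuned for streaming) struggle with.
static void traverse(const std::vector<Node>& nodes, const Ray& ray) {
    int stack[64]; int top = 0;
    stack[top++] = 0;                                    // start at the root
    while (top > 0) {
        const Node& n = nodes[stack[--top]];
        if (!hit_aabb(ray, n.bmin, n.bmax)) continue;
        if (n.primitive >= 0) { std::printf("hit primitive %d\n", n.primitive); continue; }
        stack[top++] = n.left;
        stack[top++] = n.right;
    }
}

int main() {
    // Tiny hard-coded tree: a root box with two leaf boxes inside it.
    std::vector<Node> nodes = {
        { {-2,-1,-1}, {2,1,1},  1,  2, -1 },   // root
        { {-2,-1,-1}, {0,1,1}, -1, -1,  0 },   // leaf 0
        { { 0,-1,-1}, {2,1,1}, -1, -1,  1 },   // leaf 1
    };
    Ray ray = { {-5, 0, 0}, {1.0f, 1e30f, 1e30f} };  // ray along +x, inv_dir precomputed
    traverse(nodes, ray);
    return 0;
}
```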
 
The Tomb Raider devs said right after the Turing unveil that RT will be added in a later patch. Also, you can't use RT at this point anyway unless you're a Windows Insider, since DXR ships with the October 2018 Update.
 
From what I understand the point is for the hardware to stay flexible. But little is known outside of that BVH customization. And I suppose some sort of tensor core logic for noise reduction, which may not necessarily be required.

I'd say noise reduction is required: nothing I've seen so far even on Turing suggests effective ray budgets of more than low-single-digits per pixel.

DXR provides an API for the BVH and allows implementations to sort/regroup the shading tasks, but DXR does not provide anything for denoising. Developers are going to need to either do it themselves (there are some publications out there, but the space hasn't been deeply explored, so there's not much to just copy & paste) or use GameWorks to get Nvidia's denoisers.
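To put numbers on the "low single digits per pixel" point: a Monte Carlo estimate's error only shrinks as 1/sqrt(N), so going from 1 to 4 rays per pixel merely halves the noise. The toy sketch below uses a made-up "scene" (a pixel whose true value is 0.3) purely to show that scaling; only the statistics are the point.

```cpp
// spp_noise.cpp - toy Monte Carlo estimate of a pixel's incoming light,
// showing why 1-4 rays per pixel is noisy: error shrinks only as 1/sqrt(N).
#include <cstdio>
#include <cmath>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // Pretend ground truth: 30% of random directions reach a light of
    // brightness 1.0, so the true pixel value is 0.3.
    const double true_value = 0.3;

    const int spps[] = { 1, 4, 16, 64, 256 };
    for (int spp : spps) {
        // Average the error over many pixels to estimate the typical noise.
        const int pixels = 100000;
        double mse = 0.0;
        for (int p = 0; p < pixels; ++p) {
            double sum = 0.0;
            for (int s = 0; s < spp; ++s)
                sum += (uni(rng) < true_value) ? 1.0 : 0.0;   // one random ray
            double estimate = sum / spp;
            mse += (estimate - true_value) * (estimate - true_value);
        }
        std::printf("%3d rays/pixel -> RMS error %.4f\n", spp, std::sqrt(mse / pixels));
    }
    return 0;
}
```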
 
I'd say noise reduction is required: nothing I've seen so far even on Turing suggests effective ray budgets of more than low-single-digits per pixel.

DXR provides an API for the BVH and allows implementations to sort/regroup the shading tasks, but DXR does not provide anything for denoising. Developers are going to need to either do it themselves (there are some publications out there, but the space hasn't been deeply explored, so there's not much to just copy & paste) or use GameWorks to get Nvidia's denoisers.
...or use Radeon Rays to get AMD's denoisers, which also work on Nvidia and Intel GPUs (admittedly only on Vulkan & OpenCL right now, though). Everybody and their mama will have their own denoiser in the near future.
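For readers wondering what a "denoiser" does at the simplest level, here is a deliberately basic edge-aware blur over a noisy buffer. It is a generic illustration only, not NVIDIA's, AMD's, or any shipping denoiser; real ones also use normals, depth, motion vectors and temporal accumulation across frames.

```cpp
// denoise_sketch.cpp - a deliberately simple, generic edge-aware blur, just
// to illustrate what "denoising" means in this context.
#include <cstdio>
#include <cmath>
#include <vector>

static std::vector<float> denoise(const std::vector<float>& img, int w, int h,
                                  int radius, float sigma_color) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float center = img[y * w + x];
            float sum = 0.0f, weight_sum = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    float sample = img[ny * w + nx];
                    // Down-weight samples that differ a lot from the centre,
                    // so edges in the signal are (roughly) preserved.
                    float d = (sample - center) / sigma_color;
                    float weight = std::exp(-0.5f * d * d);
                    sum += weight * sample;
                    weight_sum += weight;
                }
            }
            out[y * w + x] = sum / weight_sum;
        }
    }
    return out;
}

int main() {
    // 8x8 "image": left half dark, right half bright, with fake 1-spp noise.
    const int w = 8, h = 8;
    std::vector<float> noisy(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            noisy[y * w + x] = (x < w / 2 ? 0.1f : 0.9f) + ((x * 7 + y * 13) % 5 - 2) * 0.05f;
    std::vector<float> clean = denoise(noisy, w, h, 2, 0.3f);
    std::printf("noisy[3]=%.2f cleaned[3]=%.2f  noisy[4]=%.2f cleaned[4]=%.2f\n",
                noisy[3], clean[3], noisy[4], clean[4]);
    return 0;
}
```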
 
...or use Radeon Rays to get AMD's denoisers, which also work on Nvidia and Intel GPUs
NVIDIA's denoisers use hardware acceleration on the tensor cores on Volta and Turing, though, so they should be much faster. Unless AMD comes up with a similarly effective solution.

See this benchmark of Radeon Pro Renderer with Ray Tracing: A Titan V is 200% to 250% faster than both Vega 64 and 1080Ti at the ray tracing workload.

https://www.computerbase.de/2018-05...-ray-tracing-radeon-pro-renderer-in-1920-1080
 
NVIDIA's denoisers use hardware acceleration on the tensor cores on Volta and Turing, though, so they should be much faster. Unless AMD comes up with a similarly effective solution.
But that limits you to only Turing/Volta, while AMD's solution works on current hardware too.
 
But that limits you to only Turing/Volta, while AMD's solution works on current hardware too.
The question of Ray Tracing is not about wide adoption as of right now. It's about who can run it with acceptable fps. Vega, Pascal, Maxwell and Intel offerings are irrelevant if they can't accelerate RT effects to a workable level.
 
The question of Ray Tracing is not about wide adoption as of right now. It's about who can run it with acceptable fps. Vega, Pascal, Maxwell and Intel offerings are irrelevant if they can't accelerate RT effects to a workable level.
I'm pretty sure it's only a matter of finding the perfect balance of how much shading performance you want to sacrifice for raytracing; after all, we already have an RT-based game out there (Claybook).
 
I'm pretty sure it's only a matter of finding the perfect balance of how much shading performance you want to sacrifice for raytracing,
For the stuff shown with RTX, hardware acceleration is needed to achieve similarly convincing fidelity. Otherwise the IQ difference would be minimal, as you'd have to scale ray tracing down to a visually unconvincing state for it to work on normal shader hardware.

after all, we already have an RT-based game out there (Claybook)
Claybook doesn't actually ray trace in the traditional sense; it's cone tracing. The creator of the game even admits that hardware BVH (RT cores) would make its implementation even faster.
 
For the stuff shown with RTX, hardware acceleration is needed to achieve similarly convincing fidelity. Otherwise the IQ difference would be minimal, as you'd have to scale ray tracing down to a visually unconvincing state for it to work on normal shader hardware.

Claybook doesn't actually ray trace in the traditional sense; it's cone tracing. The creator of the game even admits that hardware BVH (RT cores) would make its implementation even faster.
I'm not saying RT cores wouldn't make things faster, just questioning whether they're actually needed or not.
It still counts as "ray tracing" to my understanding, and if we don't limit ourselves to released stuff there's a plethora of RT demos out there - the question is just where the balance lies and whether finding it is worth it.
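For context on the Claybook point: its renderer, per the developer's own presentations, traces rays and cones through signed distance fields rather than a triangle BVH. The sketch below is the generic textbook form of sphere tracing against an analytic SDF, not Claybook's actual code.

```cpp
// sphere_trace.cpp - minimal sphere tracing against a signed distance field
// (SDF), the general family of techniques Claybook's renderer belongs to.
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float length(Vec3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Scene: a single sphere of radius 1 at the origin. A distance field gives,
// for any point, the distance to the nearest surface, so the ray can safely
// step that far without missing anything -- no BVH needed.
static float scene_sdf(Vec3 p) { return length(p) - 1.0f; }

// March along the ray, stepping by the distance-field value each iteration.
static bool sphere_trace(Vec3 origin, Vec3 dir, float* t_hit) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = scene_sdf(add(origin, scale(dir, t)));
        if (d < 1e-3f) { *t_hit = t; return true; }   // close enough: hit
        t += d;                                       // safe step size
        if (t > 100.0f) break;                        // ray escaped the scene
    }
    return false;
}

int main() {
    float t;
    Vec3 origin = { 0.0f, 0.0f, -5.0f };
    Vec3 dir    = { 0.0f, 0.0f,  1.0f };              // unit-length direction
    if (sphere_trace(origin, dir, &t))
        std::printf("hit at t = %.3f (expected ~4.0)\n", t);
    else
        std::printf("miss\n");
    return 0;
}
```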
 
I'm not saying RT cores wouldn't make things faster, just questioning whether they're actually needed or not.
They obviously are needed if you want to achieve palpable Ray Tracing effects like scene-wide reflections or Global Illumination.
Just running Ray Tracing on the shaders will get you what, exactly? A single reflection on a single surface? And at what fps? 10?
 
They obviously are needed if you want to achieve palpable Ray Tracing effects like scene-wide reflections or Global Illumination.
Just running Ray Tracing on the shaders will get you what, exactly? A single reflection on a single surface? And at what fps? 10?
The biggest problem is that those resources cannot be used for anything else.
Even if the hardware-accelerated version were just as slow, it would still free up resources for the rest of the graphics.
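A back-of-envelope way to see that argument: if ray tracing runs on the same shader ALUs, its cost adds straight onto the frame time, whereas dedicated units can, in the idealised best case, overlap with the rest of the frame. All the numbers below are made up for illustration, and real overlap is never this perfect.

```cpp
// frame_budget.cpp - back-of-envelope model of the point above. Assumes
// (idealised) that dedicated RT hardware overlaps fully with shading work,
// while shader-based RT competes for the same ALUs and simply adds time.
#include <algorithm>
#include <cstdio>

int main() {
    const float raster_ms = 12.0f;   // hypothetical cost of the rest of the frame
    const float rt_ms     = 6.0f;    // hypothetical cost of the ray tracing pass

    float shader_rt_frame    = raster_ms + rt_ms;             // same ALUs: costs add
    float dedicated_rt_frame = std::max(raster_ms, rt_ms);    // idealised full overlap

    std::printf("RT on shaders:   %.1f ms/frame (%.0f fps)\n",
                shader_rt_frame, 1000.0f / shader_rt_frame);
    std::printf("RT on dedicated: %.1f ms/frame (%.0f fps)\n",
                dedicated_rt_frame, 1000.0f / dedicated_rt_frame);
    return 0;
}
```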
 
Has AMD published their roadmap with regard to next-generation ray tracing and AI hardware? They're at least a generation behind Nvidia, which isn't a good sign for the PS5.

And is there any mention of new Tegra hardware based on the new tech and feature sets developed for Turing?
 
Has AMD published their roadmap with regard to next-generation ray tracing and AI hardware? They're at least a generation behind Nvidia, which isn't a good sign for the PS5.
Before they've put out something, you can't really say they're "at least a generation behind". One could argue NVIDIA was several generations behind on tessellation hardware when they first introduced it, but that didn't stop them from being a lot faster than AMD's solutions.

And is there any mention of new Tegra hardware based on the new tech and feature sets developed for Turing?
No, they just released Xavier, which is Volta-based, in June.
 