DirectX Ray-Tracing [DXR]

No idea if this is accurate or not, but basically it is claiming RTX doesn't need Tensor Cores for its operation:

"Denoising for real time applications is using an algorithm (cross-bilateral filtered denoising) and not AI (like NVIDIA's OptiX Ray Tracing Engine does). This sounds like tensor cores are not required for RTX and therefore tensor cores are definitely not required in the upcoming GeForce line?

Full presentation: https://www.gdcvault.com/play/1024813/

If you got excited about Nvidia's Star Wars demo of real-time ray-traced elements (Reflections, Shadows, AO) during GTC 18, here is the full technical story. Watching the presentation is absolutely worth it. The dedicated RTX hardware provides optimized implementations of:

  • Acceleration structure build/update (a minimal API sketch follows this quote)
  • Traversal and scheduling of ray tracing and shading

Summary of the whole presentation:

  • Split-sum approximation used for cross-bilateral filtered denoising (for visibility term only). No Monte Carlo or holistic solution for the rendering equation
  • Only incoming radiance is denoised. BRDF integrals are pre-integrated, so there's no need to denoise them (as they don't produce any noise)
  • Minimum of 2 spp necessary for contact shadows/close-region AO in order to get reasonably close to ground truth (for now)
  • 1 area light source per pass for best results (you could do more in one pass, but you'd lose the benefit of efficiency for real-time use)
  • Not-so-good results for overlapping penumbrae of two or more occluders at very different distances from the shadow receiver
  • The greater the roughness of the material, the further away the results are from the ground truth
  • NVIDIA is still experimenting with deep learning solutions for denoising in the future
  • In-engine Path Tracer for reference and fine tuning materials
  • Instant Lightmap Baking feature only focuses on lightmap texels contributing to current image instead of full scene in order to update in real-time
  • API support for the Data Acceleration Structures necessary for ray tracing does not appear to have any special optimizations. Developers will need to figure this out on their own (not easy for dynamic scenes)
  • Tessellation does not work well with NVIDIA's solution. More work to be done resolving this issue in the future
  • Demo was presented in real-time with the computer on stage, not pre-recorded before the presentation"
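
For readers wondering what the "Acceleration structure build/update" bullet (and the later point about the API for acceleration structures) looks like from the developer side, here is a minimal sketch of mine, not from the presentation: building one bottom-level acceleration structure with the DXR API as it later shipped in the Windows 10 October 2018 SDK (the experimental API shown at GDC used different interface names). The AllocateUAVBuffer helper and the incoming vertex/index buffers are assumptions for illustration.

```cpp
// Minimal sketch (not from the presentation): recording a bottom-level acceleration
// structure build with the DXR API as it later shipped. AllocateUAVBuffer is a
// hypothetical helper standing in for creating a default-heap buffer with
// D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS in the given initial state.
#include <d3d12.h>

D3D12_GPU_VIRTUAL_ADDRESS AllocateUAVBuffer(ID3D12Device* device, UINT64 sizeInBytes,
                                            D3D12_RESOURCE_STATES initialState);

void BuildBlas(ID3D12Device5* device, ID3D12GraphicsCommandList4* cmdList,
               D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount,
               D3D12_GPU_VIRTUAL_ADDRESS indexBuffer, UINT indexCount)
{
    // Describe one opaque triangle mesh (positions as 3 floats, 32-bit indices).
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexBuffer.StartAddress  = vertexBuffer;
    geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float);
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = vertexCount;
    geom.Triangles.IndexBuffer  = indexBuffer;
    geom.Triangles.IndexFormat  = DXGI_FORMAT_R32_UINT;
    geom.Triangles.IndexCount   = indexCount;

    // ALLOW_UPDATE opts into the cheaper refit path ("build/update" in the bullet above).
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE;
    inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs       = 1;
    inputs.pGeometryDescs = &geom;

    // Ask the driver how much memory this build needs; the BVH layout itself is opaque.
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);

    D3D12_GPU_VIRTUAL_ADDRESS scratch = AllocateUAVBuffer(
        device, prebuild.ScratchDataSizeInBytes, D3D12_RESOURCE_STATE_UNORDERED_ACCESS);
    D3D12_GPU_VIRTUAL_ADDRESS result = AllocateUAVBuffer(
        device, prebuild.ResultDataMaxSizeInBytes,
        D3D12_RESOURCE_STATE_RAYTRACING_ACCELERATION_STRUCTURE);

    // Record the build; the driver (and, on RTX hardware, dedicated units) does the rest.
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs = inputs;
    build.ScratchAccelerationStructureData = scratch;
    build.DestAccelerationStructureData    = result;
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```

The relevant point for this thread: the application only describes geometry and supplies memory; how the acceleration structure is actually built, stored and traversed is left to the driver and whatever hardware sits behind it.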
 
Even if denoising did use AI, wouldn't the tensor cores only be necessary on the developer side? I thought tensor cores were just meant to speed up training. Am I wrong?
 
No idea if this is accurate or not, but basically it is claiming RTX doesn't need Tensor Cores for its operation:
Nvidia's Tomasi interview, 19 March 2018:
“DXR, Microsoft’s raytracing API,” says Tomasi, “is basically built on top of DirectX… and I expect GPUs that are capable of DX12 class compute should be capable of running DXR.”
...
For the RTX goodies though you’re going to have to be a card-carrying GeForce gamer. And, as it’s only accessible via the Volta architecture, that means you’ve got to spend at least $3,000 on a Titan V card. Which makes us think gaming versions of the Volta architecture can’t be too far off now.

Quite what the Volta GPU has inside it that can leverage the demanding RTX feature set Nvidia won’t say. “There’s definitely functionality in Volta that accelerates raytracing,” Tomasi told us, “but I can’t comment on what it is.”
...

But the AI-happy Tensor cores present inside the Volta chips certainly have something to do with it, as Tomasi explains:

“The way raytracing works is you shoot rays into a scene and - I’m going to geek out on you a little bit for a second - if you look at a typical HD resolution frame there’s about two million pixels in it, and a game typically wants to run at about 60 frames per second, and you typically need multiple, many, many rays per pixel. So you’re talking about needing hundreds of millions, to billions, of rays per second so you can get many dozens, maybe even hundreds, of rays per pixel.

“Film, for example, uses many hundreds, sometimes thousands, of rays per pixel. The challenge with that, particularly for games but even for offline rendering, is the more rays you shoot, the more time it takes, and that becomes computationally very expensive.”

Nvidia’s new Tensor cores are able to bring their AI power to bear on this problem using a technique called de-noising.
https://www.pcgamesn.com/nvidia-rtx-microsoft-dxr-raytracing
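
To put rough numbers on Tomasi's figures (assuming 1080p at 60 fps; the per-pixel ray counts below are illustrative choices of mine, not from the article):

1920 × 1080 ≈ 2.07 million pixels
2.07 million pixels × 60 fps ≈ 1.24 × 10^8 pixel samples per second
× 4 rays per pixel ≈ 0.5 billion rays per second
× 100 rays per pixel ≈ 12.4 billion rays per second

Which is why denoising a handful of samples per pixel, rather than brute-forcing dozens, is the whole game here.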

Edit: URL correction.
 
And, as it’s only accessible via the Volta architecture, that means you’ve got to spend at least $3,000 on a Titan V card. Which makes us think gaming versions of the Volta architecture can’t be too far off now.
If your game needs players to own a >$1000 GPU to run a feature, you are going to have a very small market. More likely such a feature won't be used in games as it's too niche. RTX will be used for professional industries. Unless RTX can run well enough without the Volta specifics, in which case it's marketing speak. ;)
 
It would probably be rash at this point to push the graphics schedule back in order to leverage technology that isn't even out yet, but if developers want to adopt a ray-tracing approach, it would very likely be a drop-in replacement for some feature they already have, done better, rather than a new feature.
 
If your game needs players to own a >$1000 GPU to run a feature, you are going to have a very small market. More likely such a feature won't be used in games as it's too niche. RTX will be used for professional industries. Unless RTX can run well enough without the Volta specifics, in which case it's marketing speak. ;)
I guess that's why APIs are both good and bad. APIs are separated from the implementation, so today it might be only Volta, but later on it could be implemented another way.
 
Even if denoising did use AI, wouldn't the tensor cores only be necessary on the developer side? I thought tensor cores were just meant to speed up training. Am I wrong?
I thought it would be post-processing with regard to the ray-tracing pipeline; by default, I think Nvidia provides one trained model.

I am pretty sure RTX is a bit like OptiX, which these days is more than just a rendering engine but an "API"/framework as well, one that can use the Tensor Cores and that also benefits on Volta from the changed pipeline and optimised HW code compiler beyond those cores.
So I am pretty sure one does not need Tensor cores, just as you do not for OptiX, but there are performance reasons for them to exist, potentially even for gaming; the questions are how soon, and whether we need another GPU architecture/API evolution. Nvidia is also still developing additional libraries and integration.
That said, even the presentation that the reddit post links to clearly distinguishes between:
- Real-time ray tracing with RTX
- GameWorks Ray Tracing Denoising Modules.

That last point looks to reinforce my point that the denoising, and the focus so far on Tensor cores, is just a post-processing function. But RTX-GameWorks also includes libraries for Area Shadows, Glossy Reflections and Ambient Occlusion under the "Volta" architecture, and it is worth noting this is a development-in-progress project that will continue to expand, just as Nvidia does with the CUDA libraries/functions that use the Tensor Cores from a Tesla perspective; although that development of ray-tracing-related libraries does not necessarily mean just Tensor core integration.
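
For context on what "denoising as a post-processing function" can mean without any AI (my own sketch, not code from the GDC talk or GameWorks): a cross-bilateral filter blurs the noisy ray-traced term while steering the blur with G-buffer data such as depth and normals, so surface edges survive. All the parameter names and constants below are assumptions.

```cpp
// Minimal sketch of one cross-bilateral filter weight, guided by depth and normals.
// Illustrative only; a real denoiser evaluates this over a pixel neighbourhood
// (often separably and at multiple scales) on the GPU.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Weight of neighbour pixel q when filtering centre pixel p.
float CrossBilateralWeight(float pixelDistance,          // screen-space distance |p - q|
                           float depthP, float depthQ,   // linear depths from the G-buffer
                           const Vec3& normalP, const Vec3& normalQ,
                           float sigmaSpatial = 4.0f,    // assumed tuning constants
                           float sigmaDepth   = 0.1f,
                           float normalPower  = 32.0f)
{
    // Standard Gaussian falloff with distance in screen space.
    float wSpatial = std::exp(-(pixelDistance * pixelDistance) /
                              (2.0f * sigmaSpatial * sigmaSpatial));
    // Suppress contributions across depth discontinuities (different surfaces).
    float dz = depthP - depthQ;
    float wDepth = std::exp(-(dz * dz) / (2.0f * sigmaDepth * sigmaDepth));
    // Suppress contributions across normal discontinuities (creases, silhouettes).
    float wNormal = std::pow(std::max(0.0f, Dot(normalP, normalQ)), normalPower);
    return wSpatial * wDepth * wNormal;
}
```

The filtered result is then the weight-normalised sum of neighbour values, and because only the noisy visibility/radiance term is filtered (the BRDF integrals are pre-integrated, as the summary above notes), material detail is not smeared. Tensor cores would only come into play if a neural network were doing this job instead.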

The question is how efficient these libraries are, even with the new pipeline and the Volta-specific optimised compiler, without the Tensor-core denoising. This will be further exacerbated by the GPC/TPC limitations as the architecture is scaled down from the full 6 GPCs to 3, or more likely 4, GPCs (the lowest I could imagine it going myself).
And is it cost-, R&D- and logistically effective for Nvidia to create multiple Gx104/Gx102 dies with and without the Tensor cores across GeForce and Quadro, and, more distinct from them, Tesla?

As Shifty Geezer suggests, this may end up being as niche for now as the FP16 support AMD introduced with Vega, which was actively pushed in only a few games.
 
More likely such a feature won't be used in games as it's too niche.
Several developers (and NVIDIA) have stated that RTX will be used in several games this year and the next, chief among them Metro Exodus, quite possibly Shadow of the Tomb Raider and/or Battlefield V, and possibly also the next game from Remedy.
 
I'm talking about nVidia's acceleration hardware. Whatever feature that accelerates won't be used (unless it's automatic in the API) because the number of gamers who'll benefit is infinitesimally small. As a game developer, you don't integrate a feature that, at best, <0.1% of owners will benefit from. Developers can't design their games around this feature.

What you might see, perhaps, is raytracing at one level of quality on normal hardware, and automagically enhanced to higher quality on Volta hardware. But that would make these remarks PR bluster.

"For the RTX goodies though you’re going to have to be a card-carrying GeForce gamer. And, as it’s only accessible via the Volta architecture, that means you’ve got to spend at least $3,000 on a Titan V card."
If the RTX goodies work on non-$3000 cards, this statement is bogus. If this statement is true, no game is going to implement those 'RTX Goodies', whatever they are.
 
They'll probably be those kinds of Nvidia-developed "the way it's meant to be played" sort of features.
 
The statement is from the editor, though; NVIDIA alluded that the next GeForce product will accelerate RTX.
RTX with Volta has a changed ray-tracing pipeline along with a more optimised compiler for ray-tracing functions specific to the architecture; it is one of the benefits for OptiX on Volta beyond the Tensor cores and the post-processing denoising. The point is that acceleration is happening even without the Tensor cores on Volta, and we can assume the same for whatever follows.
That said, I still think it is quite possible for Tensor cores to appear on the Gx104 GeForce models, but that needs to be weighed against quite a lot of factors, and I am curious how this develops further (Nvidia is still working on more libraries) and what else Nvidia might be able to do to utilise the Tensor cores efficiently in this context.
 
The way I see RTX, it's somewhat revolutionary mostly because it signals that the industry now sees ray tracing as a viable path, and it also opens the way to near-future improvements, not only by the big players (Microsoft, NVidia and AMD) but also by other people who will start using the API to experiment and find new optimizations.

In terms of visual quality, it's not there yet, but it's a nice start. We can clearly see that for now the underlying implementation uses a mix of rasterization hacks with true ray-tracing techniques, so it is still behind pure path-tracing, but hey, that is still very nice.

I foresee huge improvements in the coming years, in part because of RTX, and that is quite a nice thing.
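
To make "a mix of rasterization hacks with true ray-tracing techniques" concrete, here is a rough outline of the hybrid frame the demos describe; every function below is a hypothetical, empty placeholder of mine, not any engine's real code.

```cpp
// Rough outline of a hybrid rasterization + ray-tracing frame (illustrative sketch).
struct Frame {};                                  // stand-in for per-frame buffers

void RasterizeGBuffer(Frame&)          {}         // depth/normals/materials via raster
void TraceShadowRays(Frame&)           {}         // noisy ray-traced area-light visibility
void TraceReflectionRays(Frame&)       {}         // noisy ray-traced glossy reflections
void TraceAmbientOcclusionRays(Frame&) {}         // noisy ray-traced AO
void DenoiseRayTracedTerms(Frame&)     {}         // filter- or AI-based denoise pass
void ShadeAndComposite(Frame&)         {}         // combine with pre-integrated BRDF terms

void RenderHybridFrame(Frame& frame)
{
    // Rasterization still produces primary visibility (the "hacks" part)...
    RasterizeGBuffer(frame);
    // ...while rays are spent only on the effects rasterization struggles with.
    TraceShadowRays(frame);
    TraceReflectionRays(frame);
    TraceAmbientOcclusionRays(frame);
    DenoiseRayTracedTerms(frame);
    ShadeAndComposite(frame);
}
```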
 
The way I see RTX, it's somewhat revolutionary mostly because it signals that the industry now sees ray tracing as a viable path, and it also opens the way to near-future improvements, not only by the big players (Microsoft, NVidia and AMD) but also by other people who will start using the API to experiment and find new optimizations.

In terms of visual quality, it's not there yet, but it's a nice start. We can clearly see that for now the underlying implementation uses a mix of rasterization hacks with true ray-tracing techniques, so it is still behind pure path-tracing, but hey, that is still very nice.

I foresee huge improvements in the coming years, in part because of RTX, and that is quite a nice thing.

RTX≠DXR

RTX is simply Nvidia's proprietary CUDA extensions to DirectX Raytracing (and Vulkan), which are currently HW-accelerated only on Volta GPUs and currently only used to denoise in real time. It's no different from AMD's Radeon Rays (which also, AFAIK, supports exactly the same features). The only "thing" is that they were able to get the denoiser running in real time on Volta thanks to its tensor cores... but every other RT feature should work on any DX12-compatible GPU...
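
For what it's worth, whether a GPU "supports DXR" is a plain D3D12 capability question, independent of RTX branding. A minimal sketch using the API as it later shipped (ID3D12Device5-era headers, D3D12_FEATURE_D3D12_OPTIONS5):

```cpp
// Minimal sketch: asking D3D12 whether a device exposes DXR at all.
// Uses the feature check as it shipped in the Windows 10 October 2018 SDK.
#include <d3d12.h>

bool SupportsRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options, sizeof(options))))
        return false;
    // TIER_1_0 means DXR is available; whether the driver implements it on
    // dedicated hardware or ordinary compute is not visible through the API.
    return options.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```

The API deliberately does not say whether that tier is backed by dedicated hardware, tensor cores, or plain compute.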
 
Omg you're absolutely right, and I wanted to write DXR but for some reason I wrote RTX, sorry. I stand corrected.
 
RTX≠DXR

RTX is simply Nvidia's proprietary CUDA extensions to DirectX Raytracing (and Vulkan), which are currently HW-accelerated only on Volta GPUs and currently only used to denoise in real time. It's no different from AMD's Radeon Rays (which also, AFAIK, supports exactly the same features). The only "thing" is that they were able to get the denoiser running in real time on Volta thanks to its tensor cores... but every other RT feature should work on any DX12-compatible GPU...
It seems it also involves changes to the pipeline and to the compiler, and there could be other aspects not presented yet; it is more than the real-time denoising, which is the post-processing function.
Also worth noting that they have only presented three primary/core libraries so far, and others are in development.
 
It seems it also involves changes to the pipeline and to the compiler, and there could be other aspects not presented yet; it is more than the real-time denoising, which is the post-processing function.
Also worth noting that they have only presented three primary/core libraries so far, and others are in development.
Yup, but as of today (and after having personally talked face to face with UE4 engineers from Epic who worked on it) it has only been used for denoising in all of the publicly displayed demos (ILMxLAB/Epic, SEED and Remedy's demos). Nvidia has only made DXR drivers available for Volta, which is also why all the demos use it; it's a "marketing" thing (same for the DGX-1 workstations, which were overkill). BTW, UE4 won't support DXR before Q2/Q3 2019 at the earliest (meaning out of beta etc.).
 
The Reflections demo is running at 1080p Resolution at 24 frames per second in Unreal Engine. The calculations for reflections, ambient occlusion and the shadows from the textured area lights are ray traced in real time. With ray tracing, challenging effects such as soft shadows and glossy reflections come naturally, greatly improving the realism of the scenes.

The team at Epic collaborated with NVIDIA’s engineers to utilize a number of techniques to achieve realism with ray tracing including denoising and reduction of detail in reflections. A deep-dive talk on these techniques, challenges, and trade-offs was given at GDC 2018.
https://www.unrealengine.com/en-US/...peek-real-time-ray-tracing-with-unreal-engine

Edit: Removed video as already posted.
 