Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Wonder how many tensor cores or teraflops are needed to ray trace everything, including GI, shadows, AO, reflection, refraction, SSS, water caustics, and hair shadows? I'm guessing 5-6 more gens at a decent resolution?

Generations of consoles or GPUs? I'm assuming decent resolution means 4K? I think you'll probably see games doing a good number of those things on Turing once developers have importance sampling and denoising figured out. Besides that, there will be the usual investigations into optimizations: async, re-using data, combining ray shaders, etc.
 
Tensor cores are not for ray tracing; they are for denoising.

So BF5 is not using any of the custom hardware in the new Nvidia GPU, since they are using their own denoising solution rather than Nvidia's?

Interesting performance options at the end of the video, with decoupling rasterization resolution from ray-casting resolution. That seems to make a lot of sense, and given that most of what they worked on was done prior to the new GPU, I wonder how far they got on, say, a 1080, or even a top Vega if they used compute a lot.
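To make that decoupling concrete, here is a minimal CPU-side sketch of the idea (mine, not from the video): rasterize at the full output resolution, trace the ray-traced term at half resolution, and upsample it before compositing. The trace function and the naive upsample are placeholders.

```cpp
// Hypothetical sketch of decoupling ray-casting resolution from output
// resolution: trace at half resolution, then upsample before compositing.
#include <cstdio>
#include <vector>

// Stand-in for a real ray-traced term (e.g. a reflection); purely a placeholder.
static float traceReflection(float u, float v) {
    return 0.5f * (u + v);  // a real tracer would shoot a ray here
}

int main() {
    const int fullW = 3840, fullH = 2160;          // rasterization / output resolution
    const int rayW = fullW / 2, rayH = fullH / 2;  // decoupled ray-casting resolution

    // Trace at the lower resolution.
    std::vector<float> rayBuf(rayW * rayH);
    for (int y = 0; y < rayH; ++y)
        for (int x = 0; x < rayW; ++x)
            rayBuf[y * rayW + x] = traceReflection((x + 0.5f) / rayW, (y + 0.5f) / rayH);

    // Naive upsample to full resolution; a real renderer would use a
    // bilateral/temporal filter so the result doesn't smear across edges.
    std::vector<float> fullBuf(fullW * fullH);
    for (int y = 0; y < fullH; ++y)
        for (int x = 0; x < fullW; ++x)
            fullBuf[y * fullW + x] = rayBuf[(y / 2) * rayW + (x / 2)];

    printf("traced %d rays for %d output pixels\n", rayW * rayH, fullW * fullH);
    return 0;
}
```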
 
Generations of consoles or GPUs? I'm assuming decent resolution means 4K? I think you'll probably see games doing a good number of those things on Turing once developers have importance sampling and denoising figured out. Besides that, there will be the usual investigations into optimizations: async, re-using data, combining ray shaders, etc.
GPU gens, I hope, and yes, 4K. It's a mammoth task to do all of those, and I certainly don't believe software optimization alone could do the trick as a short-term gain. Yes, there will be improvements, but nothing nearly as drastic without raw hardware grunt.
 
So BF5 is not using any of the custom hardware in the new Nvidia GPU, since they are using their own denoising solution rather than Nvidia's?
They switched to using the RT cores in Turing when they got the hardware, but their denoising is still done in shaders.
 
So BF5 is not using any of the custom hardware in the new Nvidia GPU, since they are using their own denoising solution rather than Nvidia's?

Interesting performance options at the end of the video, with decoupling rasterization resolution from ray-casting resolution. That seems to make a lot of sense, and given that most of what they worked on was done prior to the new GPU, I wonder how far they got on, say, a 1080, or even a top Vega if they used compute a lot.

They're not using the Tensor cores, but they are most definitely using the RT blocks. While details of the RT blocks are scarce to non-existent, I think that if you are using the DXR API, the Nvidia driver basically makes use of the RT hardware for you. You don't have to do anything explicit to use it.
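For what that looks like from the application side, a minimal sketch (assuming a recent Windows 10 SDK with the DXR headers; error handling omitted): the app only queries whether ray tracing is supported and then uses the standard DXR entry points, and it's the driver's job to route that work onto whatever hardware exists, RT cores included.

```cpp
// Minimal sketch: an app using DXR only checks for ray-tracing support and
// then calls the standard API; the driver decides how rays are executed
// (e.g. on Turing's RT cores). Error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device5> device;   // ID3D12Device5 exposes the DXR methods
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        printf("no suitable D3D12 device\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));

    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        printf("DXR supported; DispatchRays runs on whatever the driver maps it to\n");
    else
        printf("DXR not supported on this device/driver\n");
    return 0;
}
```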
 
They switched to using the RT cores in Turing when they got the hardware, but their denoising is still done in shaders.

And I wouldn't be surprised if they, and most devs for that matter, never bother using Nvidia's proprietary tensor cores for denoising. I don't know whether they are efficient enough to offset the trouble of moving away from the industry-standard approach to that kind of task. Devs have been creating and perfecting shader/GPU-compute-based screen-space denoisers for their games for more than a decade, and they've been doing fine.
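For illustration, this is the kind of building block those shader/compute denoisers are made of: a single edge-aware blur pass in the spirit of bilateral/à-trous filters. It's a generic sketch, not DICE's or anyone's shipping filter; real denoisers add normals, variance estimates, and temporal accumulation.

```cpp
// Generic sketch of one edge-aware (bilateral / a-trous style) denoise pass
// over a noisy buffer. Real screen-space denoisers also use normals, variance
// and temporal accumulation; this is illustrative only.
#include <cmath>
#include <vector>

void denoisePass(const std::vector<float>& noisy, const std::vector<float>& depth,
                 std::vector<float>& out, int w, int h, int step) {
    const float depthSigma = 0.1f;  // assumed edge-stopping threshold
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f, wsum = 0.0f;
            const float dCenter = depth[y * w + x];
            for (int dy = -2; dy <= 2; ++dy) {
                for (int dx = -2; dx <= 2; ++dx) {
                    const int sx = x + dx * step, sy = y + dy * step;
                    if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
                    // Down-weight samples across depth discontinuities so the
                    // blur doesn't bleed over geometric edges.
                    const float dd = depth[sy * w + sx] - dCenter;
                    const float wgt = std::exp(-(dd * dd) / (depthSigma * depthSigma));
                    sum  += wgt * noisy[sy * w + sx];
                    wsum += wgt;
                }
            }
            out[y * w + x] = (wsum > 0.0f) ? sum / wsum : noisy[y * w + x];
        }
    }
}

int main() {
    const int w = 64, h = 64;
    std::vector<float> noisy(w * h, 1.0f), depth(w * h, 1.0f), out(w * h);
    denoisePass(noisy, depth, out, w, h, /*step=*/1);
    return 0;
}
```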
 
GPU gens, I hope, and yes, 4K. It's a mammoth task to do all of those, and I certainly don't believe software optimization alone could do the trick as a short-term gain. Yes, there will be improvements, but nothing nearly as drastic without raw hardware grunt.

Right now Battlefield 5 casts rays per pixel, which is exactly what you do not want to do with ray tracing. Ray casting should be decoupled from output resolution, and it should be driven by importance. The general concept is to minimize the number of rays cast and then reconstruct the final display output. It was part of Morgan McGuire's talk, where he references Matt Pharr's book on physically based rendering.

Edit
http://on-demand.gputechconf.com/si...man-morgan-mcguire-real-time-ray-tracing.html

Of course, I'm not talking about games being full path-tracers. They'll be hybrid raster + ray renderers.
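To make the "importance" part concrete, here's a hypothetical sketch of importance-driven ray allocation: a fixed ray budget is spread over pixels in proportion to a per-pixel importance value (roughness, specularity, temporal variance, whatever the renderer cares about), and reconstruction fills in the pixels that received few or no rays. This is my illustration of the general concept, not what McGuire or DICE actually do.

```cpp
// Hypothetical sketch of importance-driven ray allocation: a shared ray budget
// is distributed across pixels in proportion to per-pixel importance instead of
// tracing a fixed number of rays per pixel. Reconstruction/denoising then
// rebuilds the final image from the sparse samples.
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    const int w = 8, h = 8;       // tiny frame for illustration
    const int rayBudget = 64;     // shared budget, spent unevenly

    // Fake importance map: pretend the right half of the frame is shiny.
    std::vector<float> importance(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            importance[y * w + x] = (x >= w / 2) ? 1.0f : 0.1f;

    const float total = std::accumulate(importance.begin(), importance.end(), 0.0f);

    // Allocate rays proportionally to importance.
    std::vector<int> raysPerPixel(w * h);
    int allocated = 0;
    for (int i = 0; i < w * h; ++i) {
        raysPerPixel[i] = static_cast<int>(rayBudget * importance[i] / total);
        allocated += raysPerPixel[i];
    }
    printf("allocated %d of %d rays; low-importance pixels get none and rely on reconstruction\n",
           allocated, rayBudget);
    return 0;
}
```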
 
And I wouldn't be surprised if they, and most devs for that matter, never bother using Nvidia's proprietary tensor cores for denoising. I don't know whether they are efficient enough to offset the trouble of moving away from the industry-standard approach to that kind of task. Devs have been creating and perfecting shader/GPU-compute-based screen-space denoisers for their games for more than a decade, and they've been doing fine.

Even Nvidia has different non-AI-based denoising techniques for shadows, AO, and reflections. Supposedly, the AI-based one requires more samples per pixel than their handcrafted ones.

Nvidia says the AI denoising in OptiX 5.0 is derived from this paper.

https://research.nvidia.com/sites/default/files/publications/dnn_denoise_author.pdf

It seems every game is going to need to use an alternative denoiser to train the AI.

They use 1000 reference frames per scene, along with 10 noisy frames per reference frame, captured on a smooth camera pass.

I wonder if this method allows for some tricks to improve image quality. Can you use more samples per pixel and/or an offline denoiser that provides better quality but isn't performant enough for real-time to produce the reference frames, and then feed the AI the noisy frames from your real-time solution?
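For reference, the training set described above would boil down to a list of pairs like this; paths, names, and file layout here are purely illustrative, not from the paper.

```cpp
// Illustrative layout of the training data described above: each high-quality
// reference frame is paired with several noisy low-sample renders of the same
// view along a smooth camera pass. Paths and counts are hypothetical.
#include <cstdio>
#include <string>
#include <vector>

struct TrainingExample {
    std::string referenceFrame;            // converged / offline-denoised render
    std::vector<std::string> noisyFrames;  // low-spp renders of the same view
};

int main() {
    const int referenceCount = 1000;       // per scene, as quoted above
    const int noisyPerReference = 10;

    std::vector<TrainingExample> dataset;
    for (int r = 0; r < referenceCount; ++r) {
        TrainingExample ex;
        ex.referenceFrame = "scene/ref_" + std::to_string(r) + ".exr";
        for (int n = 0; n < noisyPerReference; ++n)
            ex.noisyFrames.push_back("scene/noisy_" + std::to_string(r) + "_" +
                                     std::to_string(n) + ".exr");
        dataset.push_back(ex);
    }
    printf("%zu examples, %zu noisy frames total\n",
           dataset.size(), dataset.size() * noisyPerReference);
    return 0;
}
```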
 
Which is why people need to self-moderate.

If you're posting about ray tracing but nothing that would impact consoles, then it shouldn't be in this thread.
 
So to me it looks like we will have some RT in the next-gen consoles at launch, maybe for stuff like the reflections in BF5. But the mid-gen refresh, PS4 Pro style, is when the heavy hitters will show up with a more mature and efficient solution.
 
Can we please stop calling it AI? It is artificial, yes, but it has nothing to do with intelligence. This "AI" is more or less just a best-match solution based on parameters provided by the driver and so on. Everything today must be called AI, and yet none of it has anything to do with intelligence.

Back to topic: if we use 4K with half-resolution RT, would that still work? E.g., SSR is half-res most of the time, so there should be no difference in clarity.
 
Can we please stop calling it AI? It is artificial, yes, but it has nothing to do with intelligence. This "AI" is more or less just a best-match solution based on parameters provided by the driver and so on. Everything today must be called AI, and yet none of it has anything to do with intelligence.
Though 'AI' is a term bandied about willy-nilly, similar to 'nano' and other silly tech buzz-words, machine learning to produce datasets is still an application of AI.
 
Can we please stop calling it AI? It is artificial, yes, but it has nothing to do with intelligence. This "AI" is more or less just a best-match solution based on parameters provided by the driver and so on. Everything today must be called AI, and yet none of it has anything to do with intelligence.
AI is perfectly fine to use in these contexts, IMO; these are task-specific, complex solvers with vast arrays of parameters that adapt through learning. What was originally conceived of as "AI", machine learning that solves general problems and evolves on its own, is now usually termed AGI (Artificial General Intelligence). At least that's my understanding.
 