Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Ray tracing is a very scalable technology. Just run it at a lower quality level for those with old/weak hardware. Worst case scenario, just use the Crysis approach: disable it and have the game look awful but run great.
Then I don't understand what you mean by games designed with raytracing in mind. If you're not talking about new engines built from the ground up for raytracing as their primary rendering method, and are talking about adding RT on top of existing rasterising engines, what's the difference between games designed for RT that'll showcase RT and the current demos designed to showcase RT?
 
You would work out the settings for the lighting and GI by using RT, then use those settings to bake the level. I would argue that rasterized games would look just as good and be cheaper to build in this regard.

Whether a game uses real-time GI or baked lighting, if nothing in the scene changes the results should be the same, in which case baked lighting is desirable. It's the dynamic stuff that requires RT hardware to run in real-time.
That seems to indicate that games either have baked lighting and shadowing or realtime GI, which of course is not true. I assume you mean all the games that have dynamic lighting that isn't GI?
 
That's one aspect I'm wary about: the transition from traditional lighting/shadows to RT over the next 5-10 years. Since implementing these techniques is going to be so much easier going forward, how much dev effort is going to be put into lighting, shadowing and reflections 2 years from now? 5 years? Are we going to see an actual degradation of traditional rasterization quality in favor of RT simply due to time and cost? How much are the EAs and Ubisofts going to spend on their annual iterations, like Assassin's Creed in Space come 2022, on getting those effects looking just as great with traditional methods, which takes considerable development effort, when they could spend half as much time getting them merely good enough, with the gap in quality between traditional methods and RT growing much wider?


Then I don't understand what you mean by games designed with raytracing in mind. If you're not talking about new engines built from the ground up for raytracing as their primary rendering method, and are talking about adding RT on top of existing rasterising engines, what's the difference between games designed for RT that'll showcase RT and the current demos designed to showcase RT?
It's not black and white. "Designed with RT in mind" can mean just reflections or shadows. Artistic decisions would be made with the possibilities those technologies allow instead of the limitations of rasterization.
 
That seems to indicate that games either have baked lighting and shadowing or realtime GI, which of course is not true. I assume you mean all the games that have dynamic lighting that isn't GI?
We may be speaking about different things. I'm referring to the fact that most world lighting in games is baked, with lights and shadows layered on top of that. I think the only game that does realtime world GI is Driveclub and one other Sony 1P title whose name isn't coming to mind. Forza and such have TOD, but I'm fairly positive it's baked; it just switches as the TOD changes.

I don't know what's in store for standard dynamic lighting and shadows; I can't see it being any different from what it is today, and I don't expect there to be a quality drop-off due to RT.
 
It's not black and white. "Designed with RT in mind" can mean just reflections or shadows. Artistic decisions would be made with the possibilities those technologies allow instead of the limitations of rasterization.
So what is it about these demos that were made without RT in mind where the artistic decisions weren't made with the possibilities those technologies allow?

I have no idea what's being talked about.

¯\_(ツ)_/¯
 
Besides, RT isn't anywhere near as complex as the algorithms it replaces. That's one of its selling points.

Not as fast either, and won't be for a while. "It will be fast at some point in the future" maybe won't cut it for consoles. Every bit of silicon needs to make sense, now and in the future, inside a console SoC, and adding something means removing something else to make room for it. It's not at all like what Nvidia does, selling leftover HPC parts at a premium to people who have money to burn on new tech, not even a little bit.
 
Not as fast either, and won't be for a while. "It will be fast at some point in the future" maybe won't cut it for consoles. Every bit of silicon needs to make sense, now and in the future, inside a console SoC, and adding something means removing something else to make room for it. It's not at all like what Nvidia does, selling leftover HPC parts at a premium to people who have money to burn on new tech, not even a little bit.
I guess with this idea then AMD should be able to come in and swoop up the graphics market with 15-20 TF GPUs at the same price point as the 20xx series?

Hmm. Interesting. I’d figure AMD would be salivating at this free gift instead of playing the “we can do ray tracing too” card.

Has anyone here considered the possibility that the power and heat generated by a 15-20TF GPU, or the bandwidth it would require, would be too expensive to be feasible in a consumer product, and that these are possible reasons why they needed to switch off the well-known path?
 
Has anyone here considered the possibility that the power and heat generated by a 15-20TF GPU, or the bandwidth it would require, would be too expensive to be feasible in a consumer product, and that these are possible reasons why they needed to switch off the well-known path?
Why would silicon processing rays generate less heat than silicon processing shaders? Given RT is described as very processor intensive, it's a non sequitur to think it'll do less processing and so generate less heat than rasterising. I'm not sure that the BW requirements would be any different - that'd be an interesting one to hear. Shaders processing surfaces have the same work to do. You still have to read the same model data and write the same buffers. Textures are the same whether shaded or traced. The optimised data model may make a difference for geometry.

As for AMD being handed the graphics market, getting a shot at the top end is being offered a small piece of the pie. nVidia make way more from the server, professional, and AI markets that Turing was developed for than they do from the leet gamer market.
 
Why would silicon processing rays generate less heat than silicon processing shaders?
Higher frame rates and higher resolution should point to more usage, i.e. 4K@60 or higher versus 1080p@30.
No? Am I wrong in considering that more pixels per second is more power?
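Just as a back-of-envelope illustration of what that jump means in raw pixel throughput (pixel counts only; per-pixel cost is a separate question):

```python
# Raw pixel throughput comparison; ignores per-pixel shading cost entirely.
modes = {
    "1080p@30": (1920 * 1080, 30),
    "4K@60":    (3840 * 2160, 60),
}

throughput = {name: pixels * fps for name, (pixels, fps) in modes.items()}
for name, px_per_s in throughput.items():
    print(f"{name}: {px_per_s / 1e6:.0f} Mpixels/s")   # ~62 vs ~498

print(throughput["4K@60"] / throughput["1080p@30"])    # 8.0x the pixels per second
```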
 
So going back to console adoption, tensor + RT cores should cost at least some ~30% larger die area / transistor count, if we assume the same proportions as Nvidia uses in the RTX GPUs.
One could assume they could e.g. forgo the tensor cores, but it seems those are used alongside RT to denoise, so... they can't really leave them out.
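To put a number on that, here's a minimal sketch of the back-of-envelope, taking the ~30% figure above at face value; the SoC area and GPU share below are made-up placeholder numbers, not anything known about an actual console design:

```python
# Hypothetical console SoC budget; every number here is a placeholder assumption.
soc_area_mm2 = 360.0          # assumed total SoC area
gpu_fraction = 0.55           # assumed share of the SoC spent on the GPU block
rt_tensor_overhead = 0.30     # the ~30% RT + tensor overhead figure from above

gpu_area = soc_area_mm2 * gpu_fraction
extra_area = gpu_area * rt_tensor_overhead

print(f"GPU block without RT/tensor: {gpu_area:.0f} mm2")     # ~198 mm2
print(f"Extra area for RT + tensor:  {extra_area:.0f} mm2")   # ~59 mm2
# Roughly a sixth of this hypothetical SoC, area that would otherwise
# go on more CUs, cache or CPU cores.
```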

The DICE RTX demo was not using them, so compute will obviously do the job. Tensor may be faster, but faster than all the compute capability it displaces?

While DICE is getting some great quality of the rays it shoots, the unfiltered results of ray tracing are still rather noisy and imperfect. To clean that noisiness up, a custom temporal filter is used, along with a separate spatial filter after that to make sure reflections never break down and turn into their grainy unfiltered results. Interestingly, this means that DICE is not currently using the Nvidia tensor cores or AI-trained de-noising filters to clean up its ray traced reflections.

https://www.eurogamer.net/amp/digit...-battlefield-5s-stunning-rtx-ray-tracing-tech
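For a rough feel of what a "custom temporal filter ... along with a separate spatial filter" boils down to, here's a deliberately simplified sketch in NumPy. It is not DICE's implementation: real temporal filters also reproject the history buffer with motion vectors and clamp it to fight ghosting, and the spatial pass would be edge-aware rather than a plain box blur.

```python
import numpy as np

def temporal_filter(history, current, alpha=0.1):
    # Exponential blend of the new noisy frame into the accumulated history.
    return (1.0 - alpha) * history + alpha * current

def spatial_filter(image, radius=1):
    # Tiny box blur standing in for a proper edge-aware spatial filter.
    padded = np.pad(image, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(image)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

# Toy usage: a constant 0.5 "reflection" signal buried in heavy per-frame noise.
h, w = 64, 64
history = np.zeros((h, w))
for _ in range(30):
    noisy_frame = 0.5 + np.random.normal(0.0, 0.3, size=(h, w))
    history = temporal_filter(history, noisy_frame)
result = spatial_filter(history)
print("mean:", result.mean(), "std:", result.std())  # std ends up far below 0.3
```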
 
The DICE RTX demo was not using them, so compute will obviously do the job. Tensor may be faster, but faster than all the compute capability it displaces?

While DICE is getting some great quality of the rays it shoots, the unfiltered results of ray tracing are still rather noisy and imperfect. To clean that noisiness up, a custom temporal filter is used, along with a separate spatial filter after that to make sure reflections never break down and turn into their grainy unfiltered results. Interestingly, this means that DICE is not currently using the Nvidia tensor cores or AI-trained de-noising filters to clean up its ray traced reflections.

https://www.eurogamer.net/amp/digit...-battlefield-5s-stunning-rtx-ray-tracing-tech

At the end of the day all this talk about the "nVidia Turing RayTracing enhanced GPUs on next-gen consoles" is somewhat meaningless given that we know for a fact that next-gen consoles (from Sony and MS) will be powered by AMD. The fact that Nvidia chose to implement DXR this way (RT "Cores" & Tensor Cores) shouldn't have an impact on how AMD decides to accelerate DXR. Microsoft designed the API this way so that manufacturers are free to accelerate it however they want:

...fundamentally, DXR is a compute-like workload. It does not require complex state such as output merger blend modes or input assembler vertex layouts. A secondary reason, however, is that representing DXR as a compute-like workload is aligned to what we see as the future of graphics, namely that hardware will be increasingly general-purpose, and eventually most fixed-function units will be replaced by HLSL code.

As I've already stated since the Turing reveal at Siggraph, it is my belief that Turing is a Pro grade chip first and foremost. You don't "need" RT Cores to support DXR/RTRT and you don't need Tensor at all to support DirectML (for resolution reconstruction or denoising etc). As quoted above, DICE is using its own denoiser, and Chaos Group showcased its own real-time renderer which also uses its own denoiser:

"What is Lavina built on?
Project Lavina is written entirely within DXR, which allows it to run on GPUs from multiple vendors while taking advantage of the RT Core within the upcoming NVIDIA “RTX” class of Turing GPUs. You will notice that there’s no noise or “convergence” happening on the frames, which is thanks to a new, real-time Chaos denoiser written in HLSL that also allows it to run on almost any GPU. With this, we aim to eventually deliver noise-free ray tracing at speeds and resolution suitable for a VR headset with Lavina."

https://www.chaosgroup.com/blog/ray-traced-tendering-accelerates-to-real-time-with-project-lavina

What's also fascinating is that it's AFAIK the first time that a GPU has been released with "brand new features" that the end user simply can't use at all, one month after its official release. There isn't a single DXR or RTX DLSS demo/game/benchmark publicly available yet.
 
No? Am I wrong in considering that more pixels per second is more power?
Yes. It's all about utilisation of the silicon, regardless of what it's doing, whether it's rendering a simple game at 16K@240 or rendering a photorealistic movie frame at one frame an hour. If PC games ever ran cooler at lower framerates with higher quality, it's because there was a lot of idle time as the hardware wasn't being optimally used.
 
Yes. It's all about utilisation of the silicon, regardless of what it's doing, whether it's rendering a simple game at 16K@240 or rendering a photorealistic movie frame at one frame an hour. If PC games ever ran cooler at lower framerates with higher quality, it's because there was a lot of idle time as the hardware wasn't being optimally used.
But that doesn't seem to line up most of the time, i.e. when we look at uncapped frame rates (game menus), the console spools up its fans because of heat, and that can't be due to utilising the hardware more. Several AAA console titles exhibit this type of behaviour where frame rates go uncapped.
 
So what is it about these demos that were made without RT in mind where the artistic decisions weren't made with the possibilities those technologies allow?

I have no idea what's being talked about.

¯\_(ツ)_/¯
Basically, they only look slightly better because they're not going to change the whole look of the game just for a tacked-on feature.

Not as fast either, and won't be for a while. "It will be fast at some point in the future" maybe won't cut it for consoles. Every bit of silicon needs to make sense, now and in the future, inside a console SoC, and adding something means removing something else to make room for it. It's not at all like what Nvidia does, selling leftover HPC parts at a premium to people who have money to burn on new tech, not even a little bit.
What makes you think it'll take long for it to become fast?

EDIT: Voxel ray tracing (not cone tracing):

 
At the end of the day all this talk about the "nVidia Turing RayTracing enhanced GPUs on next-gen consoles"

As I've already stated since the Turing reveal at Siggraph, it is my belief that Turing is a Pro grade chip first and foremost.

If console chips are custom and Microsoft went with Nvidia, they may opt not to have the tensor cores; as you say, they seem aligned more with industry. With that assumption you could try to calculate the tensor cores' size and remove that from the die to get a rough area for the Turing blocks on the PC cards. That would then give an indication of what the GPU portion of an Nvidia console APU would look like, and possibly let you infer some performance indicators.
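A rough sketch of that subtraction; the TU102 die size is the publicly reported figure, but the tensor-core share of the die is a pure guess used only as a placeholder:

```python
# Rough "Turing minus tensor cores" area estimate. The fraction is a guess.
tu102_die_mm2 = 754.0     # publicly reported TU102 (RTX 2080 Ti) die size
tensor_fraction = 0.10    # placeholder guess for the tensor cores' share of the die

turing_sans_tensor = tu102_die_mm2 * (1.0 - tensor_fraction)
print(f"TU102 without tensor cores: ~{turing_sans_tensor:.0f} mm2")  # ~679 mm2

# A console APU would still need CPU cores, IO, media blocks etc. on the same die,
# so only a fraction of a realistic console SoC could be handed to such a GPU block.
```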

That was what was in my mind anyway, and clearly not articulated in my reply :oops:
 
But that doesn't seem to line up most of the time, i.e. when we look at uncapped frame rates (game menus), the console spools up its fans because of heat, and that can't be due to utilising the hardware more. Several AAA console titles exhibit this type of behaviour where frame rates go uncapped.

Think about what was just said for a second.

With a capped versus uncapped menu, you're potentially rendering the menu hundreds or thousands of times per second uncapped versus just 30 or 60 times per second if capped.

In StarCraft 2, and many other games that had uncapped framerates in the menu, the GPU would hit 100% utilization due to the menu being rendered thousands of times per second. In other words, the menu was pushing the GPU harder than any part of actual gameplay (SC2 tending to be more CPU limited while gaming). This happened regardless of the resolution or graphical settings used.

This was eventually addressed in one of the updates for SC2 by capping menu refresh. :p
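A minimal sketch of what such a menu frame cap amounts to (purely illustrative, not Blizzard's actual fix):

```python
import time

TARGET_FPS = 60
FRAME_TIME = 1.0 / TARGET_FPS

def render_menu():
    pass  # stand-in for drawing the menu

def capped_menu_loop(frames=300):
    # Uncapped, render_menu() would run back-to-back and peg the hardware at ~100%.
    # Capped, we sleep away the leftover frame time so the silicon idles between frames.
    for _ in range(frames):
        start = time.perf_counter()
        render_menu()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_TIME:
            time.sleep(FRAME_TIME - elapsed)

capped_menu_loop()
```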

Regards,
SB
 
Think about what was just said for a second.

With a capped versus uncapped menu, you're potentially rendering the menu hundreds or thousands of times per second uncapped versus just 30 or 60 times per second if capped.

In StarCraft 2, and many other games that had uncapped framerates in the menu, the GPU would hit 100% utilization due to the menu being rendered thousands of times per second. In other words, the menu was pushing the GPU harder than any part of actual gameplay (SC2 tending to be more CPU limited while gaming). This happened regardless of the resolution or graphical settings used.

This was eventually addressed in one of the updates for SC2 by capping menu refresh. :p

Regards,
SB


It's certainly true that with many sensors over many parts of the chip, it's possible that a relatively undemanding part of a game might stress one particular part of the chip highly, and force exaggerated thermal management in what is overall a low-draw part of the game.

Of course, if such a scenario forced a console to see localised overheating and spool up its fans to "twat" level noise, then someone fucked up their design.
 
Think about what was just said for a second.

With a capped versus uncapped menu, you're potentially rendering the menu hundreds or thousands of times per second uncapped versus just 30 or 60 times per second if capped.

In StarCraft 2, and many other games that had uncapped framerates in the menu, the GPU would hit 100% utilization due to the menu being rendered thousands of times per second. In other words, the menu was pushing the GPU harder than any part of actual gameplay (SC2 tending to be more CPU limited while gaming). This happened regardless of the resolution or graphical settings used.

This was eventually addressed in one of the updates for SC2 by capping menu refresh. :p

Regards,
SB

I never thought about that! Is that why my PS4 is mostly silent in some games, but sometimes sounds like it’s taking off when I access the bloody menu?
 
I never thought about that! Is that why my PS4 is mostly silent in some games, but sometimes sounds like it’s taking off when I access the bloody menu?
I think function is probably correct here. Many sensors tripping the system to add additional cooling even if the whole chip isn’t under load. That would explain the behaviour pretty well.
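A toy illustration of that idea, with a fan controller driven by the hottest of several on-die sensors; the temperatures and fan curve are made-up numbers:

```python
# Toy fan curve keyed to the hottest sensor: one local hotspot spools the fan
# up even though the average die temperature stays modest.
def fan_speed_percent(sensor_temps_c):
    hottest = max(sensor_temps_c)
    if hottest < 60:
        return 30
    if hottest < 75:
        return 55
    return 100

gameplay = [62, 64, 66, 65]       # evenly loaded chip, average ~64C
uncapped_menu = [55, 54, 83, 56]  # one block pegged, average only ~62C

print(fan_speed_percent(gameplay))       # 55
print(fan_speed_percent(uncapped_menu))  # 100, despite the lower average temp
```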
 