Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

The other thing to consider is that even in the PC space, developers still need to support non-RT hardware for at least the next 3 to 4 years, since not every PC gamer will have spent $500 or more on RT hardware. Look at how long DX10 and DX11 games were still being made despite the next, greatest DX being released.

Not having RT hardware in next-gen consoles won't be totally horrible either.
 
Still surprised not to see much detail on what the RT cores are actually doing, either. Looking into the Nvidia web resources, they put a lot of emphasis on their ability to denoise low-sample-count data. Is it possible that the RT acceleration is a clever denoising algorithm that allows for ray counts low enough that they wouldn't be useful otherwise?
They put out the word that their new GPUs were twice as fast as the old ones thanks to fancy-shmancy acronym power. But that acronym power is reconstruction upscaling, from the sounds of it. So nVidia stick hardware-level upscaling into their GPUs and then claim it's powering 4K twice as fast. Let's claim 2xMSAA makes a GPU twice as fast, then...

All claims, no matter who they're from, need transparent data to prove/disprove and understand them.
I think NV does a nice job of at least starting with new tech. Maybe it's not the greatest, but it's a beginning. And who knows, it might have advantages in engines optimized for those RT cores.
Starting with a tech that isn't ready doesn't net you anything. Except marketing opportunities. GPU vendors have often included bespoke features that largely went unused.

There's a distinct possibility here that the RT features are purely for the pro market - professional imaging and machine intelligence - and nVidia chucked them into their consumer space because they've nothing else to add there but wanted to sell upgrades with mega margins and needed some sales pitch. Cynical, I know, but a possibility; they are trying to make money first and foremost, and the greatest profit margins come from selling people dreams.
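Back on the denoising question quoted above: for anyone wondering what "denoising low sample count data" means in practice, here's a toy single-channel sketch of the general idea - one edge-aware smoothing pass over a noisy 1-sample-per-pixel image. This only illustrates the published technique family (à-trous / cross-bilateral filtering); it is not anything Nvidia has confirmed about the RT cores, and the image layout and parameter names are made up for the example.

```cpp
// Toy edge-aware denoise pass over a noisy grayscale image (e.g. a
// 1-sample-per-pixel ray-traced frame). Neighbours whose value differs
// a lot from the centre pixel get a small weight, so noise is averaged
// away while hard edges are mostly preserved.
#include <algorithm>
#include <cmath>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> px;          // w*h grayscale values

    float at(int x, int y) const {  // clamp-to-edge addressing
        x = std::min(std::max(x, 0), w - 1);
        y = std::min(std::max(y, 0), h - 1);
        return px[y * w + x];
    }
};

Image denoisePass(const Image& in, float sigma) {
    Image out{in.w, in.h, std::vector<float>(in.px.size())};
    for (int y = 0; y < in.h; ++y) {
        for (int x = 0; x < in.w; ++x) {
            const float centre = in.at(x, y);
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -2; dy <= 2; ++dy) {
                for (int dx = -2; dx <= 2; ++dx) {
                    const float v = in.at(x + dx, y + dy);
                    const float d = v - centre;
                    // edge-stopping weight: similar pixels count more
                    const float wt =
                        std::exp(-(d * d) / (2.0f * sigma * sigma));
                    sum += wt * v;
                    wsum += wt;
                }
            }
            out.px[y * in.w + x] = sum / wsum;  // centre weight is 1, so wsum > 0
        }
    }
    return out;
}
```

Real versions guide the weights with depth, normals and motion vectors rather than just colour, but the principle - trading a slight blur for tolerable noise at very low ray counts - is the same.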
 
Agreed, it wouldn't be horrible. Hardware tessellation was introduced in cards and not used for an entire generation. I suppose we really want RT hardware in consoles though, as then we'd know it'd be used to its full extent in at least a few examples - it's far more valuable in a console than as a tiny part of the PC space.
 
True, and this is why I think RT is likely to become more fully realised in the impending mid-generation consoles.

NVidia's RT hardware will lead the way for development of RT algorithms, which will be implemented to some extent on the base PS5 and an affordable model in the Captain Scarlet range, whether or not they contain RT hardware. A few years down the line, once the tech and algorithms have both been further developed (e.g. Battlefield V running at 1080p on a 14 TF GPU), we'll see some iteration of the Sony and Microsoft consoles with dedicated hardware. That said, I wouldn't be surprised if Microsoft use Nvidia this go around, for a more expensive console in the same vein as the X1X and PS4 Pro.

It'll be interesting to see where things go with Nintendo and Nvidia. I'm certainly day one for Switch RT.
 
I'd be fascinated to see if Nintendo and Nvidia did anything here. It goes against Nintendo's strategy since the GameCube days to lead with tech, but if a future Tegra-style chip from Nvidia offered some RT, I could see Nintendo building a striking game. Their strengths in visual design could offset a slower, mobile-focused, energy-efficient RT implementation that might be inadequate for a BFV but could support a Tomorrow's Children.

Also I'd be fascinated to see Nvidia maintain a supplier relationship for more than one product cycle (MS, Sony, Tesla.....)
 
In fact, it is pretty safe to assume that AAA titles will use the next-generation consoles as their base target for development until the end of their cycle, which is likely to last some 7-8 years from now.
 
Indeed. Didn’t we finally start to see the consistency benefit of four or more CPU cores once this gen kicked off?
 
Let's not put NV on a pedestal here; they love them some proprietary tech (CUDA, PhysX, etc.) and stuck with a heavily raster-biased design when AMD was leaning heavily on compute power in the early DX12 era, because that resulted in better perf for more games (or held back the adoption of compute-heavy design, if you want to phrase it another way). Still surprised not to see much detail on what the RT cores are actually doing, either. Looking into the Nvidia web resources, they put a lot of emphasis on their ability to denoise low-sample-count data. Is it possible that the RT acceleration is a clever denoising algorithm that allows for ray counts low enough that they wouldn't be useful otherwise? They come right out and explain that Tensor cores are MMA units, so why so shy on the RT stuff?

The RT hardware is in the domain of accelerating BVH traversal and ray-intersection calculations.
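For the folks asking what that actually involves, below is a minimal CPU-side sketch of the two operations in question: a ray/box slab test and a stack-based walk down a bounding volume hierarchy. The structures and names are invented for illustration; real GPU implementations use far more compact node layouts and hardware-managed traversal state.

```cpp
// Minimal sketch of BVH traversal + ray intersection, the work an RT
// unit would offload from the shader cores. Toy data layout.
#include <cfloat>
#include <utility>
#include <vector>

struct Vec3 { float v[3]; };
struct Ray  { Vec3 o, d; };                 // origin, direction
struct AABB { Vec3 lo, hi; };

struct Node {                               // one BVH node
    AABB box;
    int left = -1, right = -1;              // child indices; -1 => leaf
    int firstTri = 0, triCount = 0;         // triangle range if leaf
};

// Slab test: does the ray enter the box before tMax?
bool hitAABB(const Ray& r, const AABB& b, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    for (int k = 0; k < 3; ++k) {
        float inv = 1.0f / r.d.v[k];        // relies on IEEE inf for d == 0
        float tn = (b.lo.v[k] - r.o.v[k]) * inv;
        float tf = (b.hi.v[k] - r.o.v[k]) * inv;
        if (inv < 0.0f) std::swap(tn, tf);
        t0 = tn > t0 ? tn : t0;
        t1 = tf < t1 ? tf : t1;
        if (t0 > t1) return false;
    }
    return true;
}

// Depth-first traversal with an explicit stack. 'triTest' returns the
// hit distance for a triangle index, or a negative value for a miss.
template <typename TriTest>
int closestHit(const std::vector<Node>& nodes, const Ray& r, TriTest triTest) {
    float best = FLT_MAX;
    int bestTri = -1;
    int stack[64];                          // fixed-size stack; fine for a toy
    int sp = 0;
    stack[sp++] = 0;                        // start at the root
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!hitAABB(r, n.box, best)) continue;  // prune the whole subtree
        if (n.left < 0) {                        // leaf: test its triangles
            for (int i = 0; i < n.triCount; ++i) {
                float t = triTest(n.firstTri + i, r);
                if (t >= 0.0f && t < best) { best = t; bestTri = n.firstTri + i; }
            }
        } else {                                 // inner node: push children
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return bestTri;                              // -1 if nothing was hit
}
```

The point of dedicated hardware is that this inner loop is branchy, pointer-chasing work that SIMD shader cores handle badly; a fixed-function unit can chew through box and triangle tests while the SM gets on with shading.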
 
My biggest concern is that the die area would force a compromise between the numbers of compute, tensor, and RT cores.

Is there a possibility for a next gen architecture with all three combined into a unified compute core, sharing a lot of the same circuitry?
 

I would think all of the RT acceleration functionality is built into the CUDA cores, or maybe whatever block of hardware contains a number of CUDA cores, because it'd need access to some form of compute power.
 
It's described as a distinct core in Nvidia's own block diagrams.
 
Yes - the RT functionality sits within the same blocks (the SMs) as the CUDA cores.


Right now Nvidia has the benefit of being really the only ARM SoC provider targeting the TDP a Switch-like handheld console needs, by virtue of their automotive efforts. That may change as Qualcomm comes out with their Snapdragon 1xxx series.

I'm also interested to see if ARM's vector extensions appear in any kind of battery-powered form factors.
 
Thanks for answering part of what I've been wondering the past few days. But I still want to know how exactly the RT cores are designed and configured (pure vector SIMD?).
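No public details from Nvidia on that, so the following is purely a guess at what "vector SIMD" ray testing would even look like: a packet of four rays laid out structure-of-arrays and run through one slab test in lockstep. All names and the layout are invented for the example; this has nothing to do with Nvidia's actual design.

```cpp
// Hypothetical "vector SIMD" ray test: four rays in structure-of-arrays
// layout run through one AABB slab test in lockstep, the way a wide SIMD
// unit (or an auto-vectorising compiler) would process them.
#include <algorithm>
#include <utility>

constexpr int kLanes = 4;

struct RayPacket {                 // one array per component (SoA)
    float ox[kLanes], oy[kLanes], oz[kLanes];
    float dx[kLanes], dy[kLanes], dz[kLanes];
};

struct AABB { float lo[3], hi[3]; };

void hitAABB4(const RayPacket& r, const AABB& b, bool hit[kLanes]) {
    for (int i = 0; i < kLanes; ++i) {      // each i maps onto a SIMD lane
        const float o[3] = { r.ox[i], r.oy[i], r.oz[i] };
        const float d[3] = { r.dx[i], r.dy[i], r.dz[i] };
        float t0 = 0.0f, t1 = 1e30f;
        for (int k = 0; k < 3; ++k) {
            float inv = 1.0f / d[k];
            float tn = (b.lo[k] - o[k]) * inv;
            float tf = (b.hi[k] - o[k]) * inv;
            if (inv < 0.0f) std::swap(tn, tf);
            t0 = std::max(t0, tn);
            t1 = std::min(t1, tf);
        }
        hit[i] = t0 <= t1;                  // per-lane hit mask
    }
}
```

The open question is whether Turing works like that (wide SIMD over rays, which diverges badly once rays scatter) or more like independent walkers per ray - exactly the detail Nvidia hasn't published.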
 
Supposing AMD have been working internally on equivalent tensor and RT execution units, I wonder if it absolutely requires a new arch for them, or if they could have designed them as an addition within the limitations of GCN, as an interim solution.
 
For comparison, on what the extra units cost in transistors:
12B transistors for Titan X - 11 TF compute
18.9B transistors for 2080TI - 14 TF compute + 14 TIPS + RT Core + Tensor Core
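A bit of back-of-envelope arithmetic on those figures, taking them at face value and assuming FP32 throughput scales linearly with transistor count (which is only roughly true):

```cpp
// Back-of-envelope using the figures above: how much extra transistor
// budget bought how much extra shader compute, with the remainder going
// to RT cores, Tensor cores, and the rest of the Turing changes.
#include <cstdio>

int main() {
    double titanXTransistors = 12.0e9,  titanXTflops = 11.0;
    double ti2080Transistors = 18.9e9,  ti2080Tflops = 14.0;

    double transistorRatio = ti2080Transistors / titanXTransistors; // ~1.58x
    double computeRatio    = ti2080Tflops / titanXTflops;           // ~1.27x

    // If FP32 throughput scaled with transistor count, a 14 TF part
    // would "only" need this many transistors:
    double impliedBudget = titanXTransistors * computeRatio;        // ~15.3B

    std::printf("transistors: %.2fx, FP32 compute: %.2fx\n",
                transistorRatio, computeRatio);
    std::printf("~%.1fB transistors left over for RT/Tensor/etc.\n",
                (ti2080Transistors - impliedBudget) / 1e9);         // ~3.6B
    return 0;
}
```

So on these numbers, very roughly 3-4B transistors - around a fifth of the die - went to everything that isn't plain FP32 scaling.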
 
The 2020 gen already has its selling points: a better CPU and more consistent 4K from 10+ TF and GDDR6 bandwidth. Hardware RT is unlikely, but DXR and Vulkan RT will at least make RT easier to implement on regular SP cores.

The 2026 gen won't be able to sell a res boost like all previous gens, since we're likely staying at 4K. Instead, its hook could be a machine that produces real CGI for the first time (in consoles).
 
It's safe to assume AMD have been working on it. Radeon Rays is a few years old at this point.
 
Seeing as Vulkan is RT-ready... DirectX 12 is RT-ready... and AMD's current hardware works with Radeon Rays... one would assume AMD's next architecture (Navi) will be RT-ready. I can't picture AMD using specialised RT cores, but rather refining GCN CUs/ACEs for RT readiness. I'm also inclined to believe that AMD will have a better implementation of Infinity Fabric within the Navi architecture. Truth be told, I believe AMD can have the edge in RT performance over Nvidia if all the Navi CUs are RT-ready, without the need for specialised cores (a lower-latency design).
 