AMD's FSR 3 upscaling and frame interpolation *spawn

Diagrams are *NOT* to scale, so I wouldn't infer much from them in terms of HW unit design complexity ...

Do you think RT/tensor cores would be worth the hit of the main compute die being 30% larger?
Yes, even at 30% it would be worth it.

As for how much space they physically take, I can't find much. There is this:


It suggests it's more like 10% for Turing. If anyone has more info, I would like to know.

As for the market that will decide if it's worth it? It's already decided. PC users have an overwhelming preference for Nvidia features, and a hypothetical PS6 isn't just going to add more raster power. It's going to be focused on AI and ray tracing; it's not like there is any other way to deliver advancements in visuals.
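To put those area percentages in perspective, here's a rough back-of-the-envelope sketch in Python (all numbers are hypothetical, using the standard gross-dies-per-wafer approximation rather than any actual GPU die size):

import math

def gross_dies(die_area_mm2, wafer_diameter_mm=300):
    # Standard gross-dies-per-wafer approximation (ignores yield and scribe lines).
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

base_area = 300.0  # hypothetical 300 mm^2 die without RT/tensor blocks
base = gross_dies(base_area)
for extra in (0.10, 0.30):  # the ~10% Turing estimate vs the 30% figure above
    dies = gross_dies(base_area * (1 + extra))
    print(f"+{extra:.0%} area: {dies} vs {base} dies/wafer, "
          f"~{base / dies - 1:.0%} more silicon cost per die")

Under those assumptions, even the 30% case works out to roughly 30-35% more silicon cost per die before yield, which is the kind of hit being weighed against the feature benefit.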
 
Even when we're solely discussing desktop graphics, the argument has yet to be settled, since high-end graphics could go in a direction that's not RT friendly, and AI HW integration still may not be worthwhile on lower-end parts ...

Really? Two out of three of the PC GPU vendors already include hardware AI acceleration in their PC GPUs, and on the console front Sony has committed to the same for their next console. I'd be pretty amazed if Microsoft doesn't do the same in their next console, so that just leaves AMD. And one of the biggest areas where AMD is lagging behind Nvidia (and Intel) is the quality of their non-AI-based upscaling. RT is the other, but RDNA 4 may well resolve that.

As to whether it's useful on lower-end parts, surely its inclusion on the likes of the RTX 3050 and RTX 2060 provides a clear answer in that respect. If anything, those GPUs benefit more from high-quality upscaling, which works well at low source resolutions, than high-end GPUs do.
 
Yes, even at 30% it would be worth it.

As for how much space they physically take, I can't find much. There is this:


It suggests it's more like 10% for Turing. If anyone has more info, I would like to know.

As for the market that will decide if it's worth it? It's already decided. PC users have an overwhelming preference for Nvidia features, and a hypothetical PS6 isn't just going to add more raster power. It's going to be focused on AI and ray tracing; it's not like there is any other way to deliver advancements in visuals.
That 30% figure I threw around only describes the deficit between AMD and Nvidia. Even on Nvidia, the performance profile (resolution/refresh rate) between raster and RT isn't identical, and much less so for path tracing. The figure would be much higher (anywhere between 2x-10x) depending on how heavily the content in question leans on RT, even granting the benefit of AI upscaling ...
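As a purely illustrative sketch of that multiplier argument (made-up frame times, not measurements from any game or GPU):

raster_ms = 10.0                  # hypothetical raster-only frame time at native res
for rt_multiplier in (2, 5, 10):  # the 2x-10x range mentioned above
    rt_ms = raster_ms * rt_multiplier
    # Assume rendering at a lower internal resolution roughly halves the shading cost
    # and the upscale pass itself adds ~1.5 ms (both assumed numbers, not measurements).
    upscaled_ms = rt_ms * 0.5 + 1.5
    print(f"{rt_multiplier}x RT content: {rt_ms:.0f} ms native -> "
          f"~{upscaled_ms:.1f} ms upscaled, vs {raster_ms:.0f} ms raster")

Even with the upscaling win factored in, the heavier RT cases never get back to the raster baseline, which is the point being made.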

What happens in the PC space stays *EXACTLY* there, and what the users over there prefer is irrelevant and has no bearing on console hardware designs either ...
Really? Two out of three of the PC GPU vendors already include hardware AI acceleration in their PC GPUs, and on the console front Sony has committed to the same for their next console. I'd be pretty amazed if Microsoft doesn't do the same in their next console, so that just leaves AMD. And one of the biggest areas where AMD is lagging behind Nvidia (and Intel) is the quality of their non-AI-based upscaling. RT is the other, but RDNA 4 may well resolve that.
Yes, but out of the two, the other one (Intel) is clearly a design disaster (abysmal perf/area) ...

Where's the official statement of Sony's public commitment to these features that everyone here seems to be preemptively asserting?
As to whether it's useful on lower-end parts, surely its inclusion on the likes of the RTX 3050 and RTX 2060 provides a clear answer in that respect. If anything, those GPUs benefit more from high-quality upscaling, which works well at low source resolutions, than high-end GPUs do.
They may benefit more from higher-quality upscaling, but they also pay a relatively higher frame-time cost for that technique, so it's not as straightforward as you believe ...
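A minimal sketch of that frame-time argument, with made-up pass costs: the upscale pass is roughly fixed in milliseconds, so it eats a much larger slice of the budget on a slower GPU.

frame_budget_ms = 16.7  # 60 fps frame budget
# Hypothetical upscale-pass costs; real numbers vary with GPU, output resolution and technique.
for gpu, upscale_ms in (("high-end", 0.6), ("low-end", 3.0)):
    share = upscale_ms / frame_budget_ms
    print(f"{gpu}: {upscale_ms} ms upscale pass = {share:.0%} of a 60 fps frame")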
 
That 30% figure I threw around only describes the deficit between AMD and Nvidia. Even on Nvidia, the performance profile (resolution/refresh rate) between raster and RT isn't identical, and much less so for path tracing. The figure would be much higher (anywhere between 2x-10x) depending on how heavily the content in question leans on RT, even granting the benefit of AI upscaling ...

What happens in the PC space stays *EXACTLY* there, and what the users over there prefer is irrelevant and has no bearing on console hardware designs either ...

Yes, but out of the two, the other one (Intel) is clearly a design disaster (abysmal perf/area) ...

Where's the official statement of Sony's public commitment to these features that everyone here seems to be preemptively asserting?

They may benefit more from higher-quality upscaling, but they also pay a relatively higher frame-time cost for that technique, so it's not as straightforward as you believe ...
You don't need a statement from Sony for that. Mark Cerny visits studios around the world, developers tell him they want better ray tracing and AI (upscaling, frame generation and more), and he goes and makes sure that they get what they want. Unless you think that developers will tell him they want more raster power, as if they wouldn't want something that makes their work both easier and better.

Also, I'm pretty sure that Qualcomm, MediaTek and Apple are dedicating silicon to AI hardware, so it's happening pretty much everywhere.
 
You don't need a statement from Sony for that. Mark Cerny visits studios around the world, developers tell him they want better ray tracing and AI (upscaling, frame generation and more), and he goes and makes sure that they get what they want. Unless you think that developers will tell him they want more raster power, as if they wouldn't want something that makes their work both easier and better.
@Bold You kind of do if you want to validate your arguments, especially when everyone here keeps repeating these talking points ad infinitum like clockwork ...

Developers might want these features, but what if console manufacturers ultimately don't want to take the additional hardware implementation complexity hit?
 
@Bold You kind of do if you want to validate your arguments, especially when everyone here keeps repeating these talking points ad infinitum like clockwork ...

Developers might want these features, but what if console manufacturers ultimately don't want to take the additional hardware implementation complexity hit?
We have the leaked documents for the PS5 Pro where the main improvements are 300 TOPS of AI inference (with the intention of going up to 8K 60 with PSSR on future hardware) and 2-4x faster ray tracing. Unless the PS6 somehow decides not to invest in those features anymore, this is the direction of the majority of the industry. Are there even hints to the contrary?
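Taking those leaked figures at face value (they remain rumors, not official specs), a quick sanity check of what 300 TOPS buys per pixel at 8K 60:

tops = 300e12         # 300 tera-ops/s of AI inference (the leaked figure)
pixels = 7680 * 4320  # 8K output resolution
fps = 60
ops_per_pixel = tops / (pixels * fps)
print(f"~{ops_per_pixel:,.0f} ops per output pixel per frame")  # roughly 150,000

That works out to roughly 150,000 ops per output pixel per frame, before anything else uses those units.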
 
@Bold You kind of do if you want to validate your arguments, especially when everyone here keeps repeating these talking points ad infinitum like clockwork ...

Developers might want these features, but what if console manufacturers ultimately don't want to take the additional hardware implementation complexity hit?
You asked where the statement was that Sony's committing to these features...

Where's the official statement of Sony's public commitment to these features that everyone here seems to be preemptively asserting?
There's no official statement in words, but we know they have committed to them in hardware for the PS5 Pro. There's no statement of longer-term interest, but at the same time the console companies look largely set on using PC GPUs with whatever feature set they are adding. There's no evidence they'll include RTRT or ML hardware in the PS6, but there's no evidence they won't either, and the straightforward trajectory from PS5 to PS5 Pro to PS6 suggests a higher probability of 'more of the same' than not. Notably, asking an IHV to create a truly bespoke part that uses a different core to the PC architecture of the period is quite a big ask with a cost. Isn't it going to be cheaper to use AMD RDNA 5/6/whatever than to have AMD design a new core without RTRT and ML hardware? And what would the added cost actually get them?

Let's imagine games will run better without RTRT and ML hardware - will that actually result in more sales justifying the faff of more bespoke parts?
 
We have the leaked documents for the PS5 Pro where the main improvements are 300 TOPS of AI inference (with the intention of going up to 8K 60 with PSSR on future hardware) and 2-4x faster ray tracing. Unless the PS6 somehow decides not to invest in those features anymore, this is the direction of the majority of the industry. Are there even hints to the contrary?
Are these "leaked documents" from unknown sources equivalent to official statements from Sony ? If not then I'd prefer you to stop spinning the rumor mills for the sake of our exchange ...

The 'direction' of this industry, lest you forget, is also increasingly one of no more price cuts, unless consoles get priced out of existence ...
 
Are these "leaked documents" from unknown sources equivalent to official statements from Sony ? If not then I'd prefer you to stop spinning the rumor mills for the sake of our exchange ...
Everyone else on this board accepts them as 'working knowledge' given faith in sources like Digital Foundry (and the fact Sony removed the copyrighted leaked materials...). You don't have to agree, but the conversation then is akin to trying to discuss the nature of gravity with someone who doesn't accept the world is round, or trying to discuss the cure for an illness with someone who believes illness is the product of bad smells and imbalances of the four humours. The only correct response is to agree to disagree and talk around that issue rather than keep repeating the 'do you trust these rumours' debate.
 
There's no official statement in words, but we know they have committed to them in hardware for the PS5 Pro. There's no statement of longer-term interest, but at the same time the console companies look largely set on using PC GPUs with whatever feature set they are adding. There's no evidence they'll include RTRT or ML hardware in the PS6, but there's no evidence they won't either, and the straightforward trajectory from PS5 to PS5 Pro to PS6 suggests a higher probability of 'more of the same' than not. Notably, asking an IHV to create a truly bespoke part that uses a different core to the PC architecture of the period is quite a big ask with a cost. Isn't it going to be cheaper to use AMD RDNA 5/6/whatever than to have AMD design a new core without RTRT and ML hardware? And what would the added cost actually get them?
Why do you assume that AMD are going to eventually follow Nvidia's lead in the first place? The feature set asymmetry has existed between consoles and PCs for as long as it has because console manufacturers couldn't find a common solution that suited their own needs ...

Besides, AMD are looking to counter RT with more advanced GPU-driven rendering and to leave the AI HW bag hanging in the hope that no major rendering applications materialize for it ...
 
Are these "leaked documents" from unknown sources equivalent to official statements from Sony ? If not then I'd prefer you to stop spinning the rumor mills for the sake of our exchange ...

The 'direction' of this industry, lest you forget, is also increasingly one of no more price cuts, unless consoles get priced out of existence ...
If I were you I would prepare myself for the future of graphics technology in video games, because I can assure you that the future isn't 8K shadow maps with SSR.

Now I'm curious: what does something like a 2028 console offer compared to the current gen that is marketable? Three times the teraflops? 8 more GB of VRAM?
 
Why do you assume that AMD are going to eventually follow Nvidia's lead in the first place? The feature set asymmetry has existed between consoles and PCs for as long as it has because console manufacturers couldn't find a common solution that suited their own needs ...

Besides, AMD are looking to counter RT with more advanced GPU-driven rendering and to leave the AI HW bag hanging in the hope that no major rendering applications materialize for it ...
Consoles probably only have that simple RT implementation because Nvidia caught AMD and Sony/MS off guard in 2018, and two years from launch they didn't have the time to develop something better. The first PS5 devkits didn't even have RT :)

"Besides AMD are looking to counter RT with more advanced GPU-driven rendering"

Wat?? Source?
 
Consoles probably only have that simple RT implementation because Nvidia caught AMD and Sony/MS off guard in 2018, and two years from launch they didn't have the time to develop something better. The first PS5 devkits didn't even have RT :)
Consoles have RT implementations solely to tick a checkbox created entirely by Microsoft themselves (DXR). AMD doesn't care about engaging in some symmetric competition with Nvidia like you seem to believe, since the time for that is long past ...
Wat?? Source?
AMD invented Work Graphs for the sole purpose of optimizing virtual geometry, so that more games/engines would shift to technology like Nanite and speed up the adoption of mesh shaders (they're kinda boring without the PSO-swapping functionality), making it a living hell to integrate RT with geometrically dense scenes ...
 
Consoles have RT implementations solely to tick a checkbox created entirely by Microsoft themselves (DXR). AMD doesn't care about engaging in some symmetric competition with Nvidia like you seem to believe, since the time for that is long past ...

AMD invented Work Graphs for the sole purpose of optimizing virtual geometry, so that more games/engines would shift to technology like Nanite and speed up the adoption of mesh shaders (they're kinda boring without the PSO-swapping functionality), making it a living hell to integrate RT with geometrically dense scenes ...
I don't know what Work Graphs have to do with hardware acceleration of AI and RT, but maybe I'm ignorant.

Can you answer me this, to get to the point?

Are we going to abandon dedicated hardware for RT and AI acceleration and instead use general-purpose compute?
 
Are we going to abandon dedicated hardware for RT and AI acceleration and instead use general-purpose compute?
We've had that discussion elsewhere. Check Lurkmass's post history. It's not for this thread. Everyone refocus on FSR3 upscaling...
 

Cyberpunk 2077 got FSR3 frame gen in a new update (also XeSS 1.3)
 

Cyberpunk 2077 got FSR3 frame gen in a new update (also XeSS 1.3)

Looks near indistinguishable from native 4K, aside from some ghosting which DLSS3 has as well.
Really excited for the AMD AI solution, as it should improve it even further.
 