Digital Foundry Article Technical Discussion [2022]

Let me explain my comment further, as you seem to be struggling.... I said this:

"Which is a software problem, not a hardware one."

What that means is this:

If AMD doesn't currently have an ML-based upscaler, that is not a hardware problem, as the hardware can do ML upscaling; it's merely a software problem, as in the software just isn't available for it (yet).

Sure, you could in theory run AI/ML-based upscaling on a first-generation Atom CPU, or an Intel 80286; after all, it's just a "software problem". Is it going to run fast enough to be desirable?

AI/ML-based upscaling without hardware to assist it can be prohibitively expensive, especially when compared to more traditional compute-based upscaling like what developers have been using on consoles. However, with hardware assists (Tensor cores, for example), AI/ML-based upscaling can offer higher quality with faster performance. That's why it becomes a "hardware problem".
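To put rough, purely illustrative numbers on that cost argument (the network size and throughput figures below are assumptions, not measurements of any real upscaler):

```python
# Back-of-envelope sketch of why hardware assist matters for ML upscaling.
# Every figure below is an illustrative assumption, not a measured value.

def upscale_cost_ms(network_gflop_per_frame: float, throughput_tflops: float) -> float:
    """Time to run the upscaling network once. GFLOP / (TFLOP/s) works out to milliseconds."""
    return network_gflop_per_frame / throughput_tflops

NETWORK_COST_GFLOP  = 100.0   # hypothetical per-frame cost of an upscaling network
SHADER_FP16_TFLOPS  = 20.0    # rough general-purpose FP16 rate of a high-end GPU
TENSOR_FP16_TFLOPS  = 80.0    # rough dedicated matrix-unit rate on the same GPU

print(f"on shader cores : {upscale_cost_ms(NETWORK_COST_GFLOP, SHADER_FP16_TFLOPS):.2f} ms/frame")
print(f"on matrix units : {upscale_cost_ms(NETWORK_COST_GFLOP, TENSOR_FP16_TFLOPS):.2f} ms/frame")

# Both assume ideal utilisation, which real kernels never reach. At a 16.7 ms (60 fps)
# budget, a few milliseconds decide whether the upscaler costs less than simply
# rendering more pixels the traditional way.
```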

These implementations don't exist in a vacuum. Everything has to be weighed in terms of cost (performance) versus benefit (quality). If the cost is too high on X hardware, then there is no benefit to using it. Thus, for that particular implementation, it's a hardware problem because the hardware isn't fast enough to use it in a way that is beneficial for the product (in this case, games) that potentially wants to use it.

AMD's FSR 2.0 isn't using AI/ML-based upscaling because the cost (performance) versus quality without dedicated hardware assist wasn't as good as going with a more traditional compute-based approach. However, they claim that their implementation can approach the quality of AI/ML-based solutions with acceptable performance. We'll see how those claims hold up once we can compare it to other solutions in games.

BTW - I'm one of the ones that doesn't particularly like the Tensor-based approach of DLSS, simply because it isn't universally applicable across multiple vendors' hardware. Even if other IHVs also had Tensor cores, DLSS would still be limited to NV hardware, which still makes it the less desirable option to me. In that respect I like the promise of Intel's approach, although again, like FSR 2.0, we'll have to wait and see not only how good it is, but how well it performs.

Regards,
SB
 
Did Alex not just say that the PS5 is performing as well as it possibly can because its GPU is running at the full 10.2 TFLOPS, since the game is not CPU demanding?

I have no words for that conclusion, which I see as doing nothing but trying to justify why the PS5 is outperforming an RTX 2080 :rolleyes:

Well, I assume no game currently on PS5 comes even close to maxing the Ryzen CPU out, so with every game thus far the PS5 should have been running at max GPU clocks.

But it will be a more interesting discussion in the future.
 
Did Alex not just say that the PS5 is performing as well as it possibly can because its GPU is running at the full 10.2 TFLOPS, since the game is not CPU demanding?

I have no words for that conclusion, which I see as doing nothing but trying to justify why the PS5 is outperforming an RTX 2080 :rolleyes:
The thing is, as usual, Alex is doubly wrong here.

- First, we know from dev insiders that the PS5 rarely ever downclocks, and that it has enough power headroom to run both the CPU and GPU at full clocks. Cerny has also said this from the beginning.
- But more importantly, what we gathered from consumption tests is that it actually consumes less when it's CPU-demanding. All tests showed this. And we already knew this based on PS4 and Pro fan noise and the DF tests where they use cutscenes in various games to look for max consumption. The Matrix demo showed this, as the max was actually found in one cutscene (~225 W). During the "gameplay" the PS5 often draws up to ~200-205 W, but even then we know from UE5 developers that this demo is actually light on the CPU. Plenty of other consumption tests with other games showed this.

I like Alex's thorough comparisons, but those kinds of remarks are totally disconnected from the facts. Ironically, based on what we know, the scenes he used in his comparisons are scenes that are actually very light on the CPU (cutscenes) in a game already heavily limited by the GPU. It is in those kinds of scenes that the PS5 could actually downclock. But the thing is, it should be very easy to check, as they just need to measure the power consumption during the scene.

We know the PS5's dynamic clock is deterministic and that it will downclock when a certain level of instruction activity is exceeded. That activity correlates directly with actual power consumption (because that's the whole point). And we know the current limit: it's about ~225 W. Any game that consumes less than ~215 W is very probably not triggering a downclock, otherwise the whole system would not be fully deterministic.

And that's assuming the power limit is actually 225 W. Maybe it's higher!
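A minimal sketch of the inference being made here, treating the ~225 W figure as an assumed budget rather than a confirmed spec (and remembering that wall-socket measurements include the whole console, not just the SoC):

```python
# Minimal sketch of the inference above: if measured draw stays comfortably below
# the assumed package budget, a deterministic power-model governor has no reason
# to reduce clocks. The 225 W budget is this post's assumption, not a spec, and
# wall measurements also include SSD, RAM, fans and PSU losses.

ASSUMED_POWER_BUDGET_W = 225.0
SAFETY_MARGIN_W = 10.0   # arbitrary margin for measurement noise and overheads

def possibly_downclocking(measured_draw_w: float) -> bool:
    return measured_draw_w >= ASSUMED_POWER_BUDGET_W - SAFETY_MARGIN_W

for sample_w in (200.0, 205.0, 222.0):
    verdict = "possible downclock" if possibly_downclocking(sample_w) else "headroom left"
    print(f"{sample_w:.0f} W -> {verdict}")
```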
 
The thing is, as usual, Alex is doubly wrong here.

- First, we know from dev insiders that the PS5 rarely ever downclocks, and that it has enough power headroom to run both the CPU and GPU at full clocks. Cerny has also said this from the beginning.
- But more importantly, what we gathered from consumption tests is that it actually consumes less when it's CPU-demanding. All tests showed this. And we already knew this based on PS4 and Pro fan noise and the DF tests where they use cutscenes in various games to look for max consumption. The Matrix demo showed this, as the max was actually found in one cutscene (~225 W). During the "gameplay" the PS5 often draws up to ~200-205 W, but even then we know from UE5 developers that this demo is actually light on the CPU. Plenty of other consumption tests with other games showed this.

I like Alex's thorough comparisons, but those kinds of remarks are totally disconnected from the facts. Ironically, based on what we know, the scenes he used in his comparisons are scenes that are actually very light on the CPU (cutscenes) in a game already heavily limited by the GPU. It is in those kinds of scenes that the PS5 could actually downclock. But the thing is, it should be very easy to check, as they just need to measure the power consumption during the scene.

We know the PS5's dynamic clock is deterministic and that it will downclock when a certain level of instruction activity is exceeded. That activity correlates directly with actual power consumption (because that's the whole point). And we know the current limit: it's about ~225 W. Any game that consumes less than ~215 W is very probably not triggering a downclock, otherwise the whole system would not be fully deterministic.

And that's assuming the power limit is actually 225 W. Maybe it's higher!

It's even simpler to demonstrate than that: if the PS5 were having clock issues, it wouldn't be competing with or beating the XSX in comparisons like it does.

The only way PS5 matches or beats XSX is if it's running at full clocks.
 
They make it faster as they are extra processing power on board. But you can do the same by using the shader engines on the GPU and accelerate it with half-/quarter-precision math.
Even DLSS ran purely on the shaders when Nvidia experimented with improved algorithms between versions 1 and 2.x. So it is possible; the question is just how big the performance impact would be.

The Tensor cores probably are doing something to aid performance. Of course it can be done without them, but as you say, we don't know how much performance would be lost.

If AMD doesn't currently have an ML-based upscaler, that is not a hardware problem, as the hardware can do ML upscaling; it's merely a software problem, as in the software just isn't available for it (yet).

Now can you explain why you started talking about Nvidia and stupidly talking about lying? Like what are you even talking about here? Where did that come from? Who's accusing who of lying?

Of course AMD GPUs can do ML reconstruction, but the question is how much of a performance impact that would have, as they lack the separate cores that NV GPUs contain. Nvidia states these help performance, but you (a random forum poster) claiming 'muh, it's just software' is what it is; in your view, they are lying.

AMD lacks dedicated cores, but these formats are accelerated nonetheless. INT4, INT8 and FP16 do not run at the same rate as FP32 on RDNA2; they run at multiples of the FP32 rate.

Absolutely, but with (probably) a higher performance impact. And that is where I am going with this. Some don't understand this.
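For a rough sense of the numbers behind those multiples (FP16 via packed math at 2x, INT8/INT4 via dot-product instructions at 4x/8x; the 10.3 TFLOPS base is just an example figure, and these are peak rates, not sustained ones):

```python
# Worked example of those rate multiples. RDNA2 exposes packed FP16 (2x the FP32
# rate) and INT8/INT4 dot-product instructions (4x/8x), so low precision is
# accelerated even without dedicated matrix units. The 10.3 TFLOPS base is only
# an example figure, and these are peak rates, not what a real kernel sustains.

BASE_FP32_TFLOPS = 10.3

RATE_MULTIPLE = {
    "FP32": 1,   # baseline
    "FP16": 2,   # packed math
    "INT8": 4,   # dot4 instructions
    "INT4": 8,   # dot8 instructions
}

for fmt, multiple in RATE_MULTIPLE.items():
    print(f"{fmt}: ~{BASE_FP32_TFLOPS * multiple:.1f} peak T(FL)OPS")

# Dedicated matrix units push well past these figures, which is where the extra
# performance impact of running the same network on shaders comes from.
```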

Did Alex not just say that the PS5 is performing as well as it possibly can because its GPU is running at the full 10.2 TFLOPS, since the game is not CPU demanding?

I have no words for that conclusion, which I see as doing nothing but a PC fanboy reaching to justify why the PS5 is outperforming an RTX 2080 :rolleyes:

Nvidia lying, DF 'reaching and being fanboys'. What the hell is your problem? The PS5 isn't 'outperforming' a 2080 either; it's generally a notch below it. You can't just go by one title, as it doesn't outperform it in other titles.

Sure, you could in theory run AI/ML-based upscaling on a first-generation Atom CPU, or an Intel 80286; after all, it's just a "software problem". Is it going to run fast enough to be desirable?

AI/ML-based upscaling without hardware to assist it can be prohibitively expensive, especially when compared to more traditional compute-based upscaling like what developers have been using on consoles. However, with hardware assists (Tensor cores, for example), AI/ML-based upscaling can offer higher quality with faster performance. That's why it becomes a "hardware problem".

These implementations don't exist in a vacuum. Everything has to be weighed in terms of cost (performance) versus benefit (quality). If the cost is too high on X hardware, then there is no benefit to using it. Thus, for that particular implementation, it's a hardware problem because the hardware isn't fast enough to use it in a way that is beneficial for the product (in this case, games) that potentially wants to use it.

AMD's FSR 2.0 isn't using AI/ML-based upscaling because the cost (performance) versus quality without dedicated hardware assist wasn't as good as going with a more traditional compute-based approach. However, they claim that their implementation can approach the quality of AI/ML-based solutions with acceptable performance. We'll see how those claims hold up once we can compare it to other solutions in games.

BTW - I'm one of the ones that doesn't particularly like the Tensor-based approach of DLSS, simply because it isn't universally applicable across multiple vendors' hardware. Even if other IHVs also had Tensor cores, DLSS would still be limited to NV hardware, which still makes it the less desirable option to me. In that respect I like the promise of Intel's approach, although again, like FSR 2.0, we'll have to wait and see not only how good it is, but how well it performs.

Someone that understands.
 
It's even simpler to demonstrate than that: if the PS5 were having clock issues, it wouldn't be competing with or beating the XSX in comparisons like it does.

The only way PS5 matches or beats XSX is if it's running at full clocks.

What clock a CPU is running at, or even how capable a CPU is, has almost zero bearing on whether a game on one hardware platform is competitive with the same game on another platform, when both have powerful modern CPUs.

As evidence I'll point back to when AMD's Phenom was competitive in games with Intel's Core lineup despite being a significantly worse CPU performer. Or how a lower-clocked Intel Core CPU would perform exactly the same in many games as a higher-clocked Intel Core CPU of the same generation. Why? Because games are more often than not GPU-bound and not CPU-bound.

It's why, on PC, you have to drastically lower the resolution you render a game at (720p or 1080p), combined with using the most powerful GPU you can install, in order to see any significant differences between CPUs. Benchmarks at 1440p or 4K in most games will show virtually no difference between CPUs with drastically different clocks or even drastically different architectures.

And what resolution are most games targeting on PS5 and XBS-X? 1440p to 2160p.

Basically, current gen games up to this point on PS5 are going to be GPU bound the vast majority of the time. Thus, you'll never know if the CPU is downclocked or not.

A game would be more likely to hit CPU performance boundaries on the previous generation with the Jaguar cores.

Regards,
SB
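A toy model of SB's GPU-bound argument, with entirely hypothetical frame costs, just to show why CPU clock differences vanish from high-resolution results:

```python
# Toy model of the argument above: a frame ships when BOTH the CPU and GPU work
# for it are done, so frame time is roughly max(cpu_ms, gpu_ms). All numbers are
# hypothetical, purely to illustrate why CPU clocks vanish from GPU-bound results.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

GPU_MS_AT_4K = 16.0   # assumed GPU cost of a frame at high resolution
CPU_MS_FAST  = 6.0    # assumed CPU cost at full clocks
CPU_MS_SLOW  = 9.0    # same workload with the CPU clocked markedly lower

print(fps(CPU_MS_FAST, GPU_MS_AT_4K))    # ~62.5 fps
print(fps(CPU_MS_SLOW, GPU_MS_AT_4K))    # ~62.5 fps -- identical while GPU-bound

GPU_MS_AT_720P = 5.0  # drop the resolution and the CPU difference finally shows
print(fps(CPU_MS_FAST, GPU_MS_AT_720P))  # ~166.7 fps
print(fps(CPU_MS_SLOW, GPU_MS_AT_720P))  # ~111.1 fps
```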
 
Did Alex not just say that the PS5 is performing as well as it possibly can because its GPU is running at the full 10.2 TFLOPS, since the game is not CPU demanding?

I have no words for that conclusion, which I see as doing nothing but a PC fanboy reaching to justify why the PS5 is outperforming an RTX 2080 :rolleyes:

From the written version:
On the Nvidia side, native resolution performance comparisons are intriguing but ultimately somewhat academic, as I would be recommending DLSS in order to improve image quality over the native TAA solution. Still, the results are interesting and serve to highlight that the new generation of consoles are delivering a remarkable degree of horsepower - and remember, it's still early days in the current console generation.

If his goal was to show the PC platform in the best possible light as well, then why no DLSS comparisons? It seems the focus of this video was simply to compare the two versions of the same game on the same engine and see what PC hardware is needed to match the PS5's performance, and it certainly paints the PS5 in a good light. The reasoning may be flawed, but by and large, a game actually requiring a 2080 Super/Ti on the PC to match the PS5 version at the same quality settings is not the norm. Hell, the majority of Alex's videos lately have been focusing on how awful most recent PC ports have been with their stuttering issues!

Alex's platform is obviously the PC first and foremost, and his reasoning for his theory on why the PS5 is performing exceptionally well in this game may indeed be suspect, but there's no reason for this kind of platform-warring nonsense, though that seems to be your thing here.
 
Cerny said PS5 would spend "most of its time at, or close to, that performance" (meaning highest clocks). I think drops will become more likely as the generation goes on, but even then they won't usually be significant.

While we're seeing cross-gen games, I'd guess this won't even be an issue. Even when Zen 2 is the baseline, PS5's half-width SIMD units should shield it from the kind of huge power loads that AVX-256 can demand (it's the biggest power stressor for the cores). And if PS5 isn't relying on full-rate AVX-256, it's hard to see any game being made to really rely on that kind of SIMD performance.

Cerny obviously thinks the payoff of limited CPU SIMD vs higher GPU clocks is worth it, and I'm not going to say he's wrong.

Most interesting thing in the DF video for me was seeing that the extra width of the 5700XT is almost entirely without benefit in this game. This lines up perfectly with the old school wisdom about width and overheads and extra work and all that.

I'm of the opinion that performance matters most earlier in the generation. I think Cerny/Sony made some good choices about how to get the most out of the die area and power budget, especially during the all important transitional period.
 
Nvidia lying, DF 'reaching and being fanboys'. What the hell is your problem? The PS5 isn't 'outperforming' a 2080 either; it's generally a notch below it. You can't just go by one title, as it doesn't outperform it in other titles.

What is your problem?? Others have validated that there is something wrong/fishy with those Nvidia slides; it's only you who seems to be in denial over it and can't see it, even when other members see it and have also pointed it out.

There's nothing wrong with highlighting things that don't look 'right' - it's a good thing to challenge those kinds of things.

And what do you even mean, I can't 'go after one title'? I can assure you that for the context of the discussion I can go after that one title, as it's pretty clear I'm talking about the latest video and not about games in general.
 
What clock a CPU is running at, or even how capable a CPU is, has almost zero bearing on whether a game on one hardware platform is competitive with the same game on another platform, when both have powerful modern CPUs.

As evidence I'll point back to when AMD's Phenom was competitive in games with Intel's Core lineup despite being a significantly worse CPU performer. Or how a lower-clocked Intel Core CPU would perform exactly the same in many games as a higher-clocked Intel Core CPU of the same generation. Why? Because games are more often than not GPU-bound and not CPU-bound.

It's why, on PC, you have to drastically lower the resolution you render a game at (720p or 1080p), combined with using the most powerful GPU you can install, in order to see any significant differences between CPUs. Benchmarks at 1440p or 4K in most games will show virtually no difference between CPUs with drastically different clocks or even drastically different architectures.

And what resolution are most games targeting on PS5 and XBS-X? 1440p to 2160p.

Basically, current gen games up to this point on PS5 are going to be GPU bound the vast majority of the time. Thus, you'll never know if the CPU is downclocked or not.

A game would be more likely to hit CPU performance boundaries on the previous generation with the Jaguar cores.

Regards,
SB

I'm struggling to see what that has to do with my comment regarding the PS5's clock speed vs its performance in relation to the XSX.
 
I'm struggling to see what that has to do with my comment regarding the PS5's clock speed vs its performance in relation to the XSX.

Re-read your post that I replied to. I was basically dispelling the myth that how a game runs is evidence of the clock speed the PS5 CPU is running at.

The PS5's CPU clocks have almost zero bearing on how well it competes with the XBS-X, because current-gen games are very rarely CPU-limited at the resolutions that PS5 and XBS-X games target with the GPUs they have in them.

The PS5's CPU could be clocked at 2.0 GHz and games would more than likely still perform about the same as they do at 3.0 GHz. The same could be said for the XBS-X. It's why, for most games, it would likely have been beneficial if the XBS-X also had dynamic clocks, as it wouldn't need to run the CPU at full power all of the time.

Regards,
SB
 
Re-read your post that I replied to. I was basically dispelling the myth that how a game runs is evidence of the clock speed the PS5 CPU is running at.

The PS5's CPU clocks have almost zero bearing on how well it competes with the XBS-X, because current-gen games are very rarely CPU-limited at the resolutions that PS5 and XBS-X games target with the GPUs they have in them.

Regards,
SB

But in known CPU-heavy scenes (the corridor of death in Control with RT enabled) the PS5 is basically neck and neck with the XSX.

Same with 120 fps modes, which should be more CPU-heavy and equally GPU-heavy; the PS5 does very well here too.
 
It's even simpler to demonstrate than that: if the PS5 were having clock issues, it wouldn't be competing with or beating the XSX in comparisons like it does.

The only way PS5 matches or beats XSX is if it's running at full clocks.
No. This is not the right way to make the comparison.
You're comparing two different chips, with different memory configurations, running two different variants/graphical settings of the same software, with the whole thing obscured by DRS.
According to your statement here:
A boost-clocked 2.23 GHz, 200 W-max PS5 would be able to perform identically to a fixed-clock 2.23 GHz PS5 with no power limit.

Pretty sure that's not likely to be true at all. Honestly, it's a claim that has probably already been tested.
Some GPU utilities let us cap the maximum power draw of a GPU, and also let us lock clock speeds (which increases power draw). In such tests, the results should not be the same.
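A toy simulation of that point, with invented numbers and a crude power/frequency model, showing why a power-capped part can't always match an uncapped one at the same nominal clock:

```python
# Toy simulation: cap the power and the "2.23 GHz" part has to shed frequency
# whenever per-frame demand exceeds the cap, so it cannot behave identically to
# an uncapped part at the same nominal clock. All numbers are invented, and the
# cubic power/frequency relationship is only a rough approximation.

from typing import Optional

NOMINAL_GHZ = 2.23

def effective_clock(power_demand_w: float, power_cap_w: Optional[float]) -> float:
    """Clock sustainable for a frame, given its power demand and an optional cap."""
    if power_cap_w is None or power_demand_w <= power_cap_w:
        return NOMINAL_GHZ
    # crude assumption: dynamic power scales roughly with f^3 (f * V^2, V tracking f)
    return NOMINAL_GHZ * (power_cap_w / power_demand_w) ** (1.0 / 3.0)

frames = [160, 180, 205, 230, 250]   # hypothetical per-frame GPU power demand, watts

for cap in (None, 200):
    clocks = [effective_clock(w, cap) for w in frames]
    print(f"cap={cap}: average clock {sum(clocks) / len(clocks):.2f} GHz")
```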
 
I feel there are a number of misconceptions and unsound certainties in the feedback loop, and I'm going to use my limited ability (and tiny mind) to try and introduce a bit of "woah there, let's not be so definite about this" into the thread.

- First, we know from dev insiders that the PS5 rarely ever downclocks, and that it has enough power headroom to run both the CPU and GPU at full clocks. Cerny has also said this from the beginning.

Downclocking will depend on the workload. Just because early games rarely do, doesn't mean later games won't. Cerny was intentionally noncommittal about specifics because, as he said, he can't be certain about how software will use the hardware in the future (and he's not a bullshitter). He specifically said:

"We expect the GPU to spend most or all of its time at, or close to, that frequency".

He made sure not to say "always at", and didn't even say that it "always had to be close to".

I think it's important that if you're going to name-drop Cerny, you actually say what he said. If he knows enough to leave room for uncertainty and less common circumstances, I think we should pay attention to that.

- But more importantly, what we gathered from consumption tests is that it actually consumes less when it's CPU-demanding.

But that could be because CPU bottlenecks are causing GPU underutilisation; it doesn't necessarily mean that high CPU and GPU workloads can't result in GPU clocks dropping. If your CPU is limiting GPU activity, then your total power consumption can drop dramatically even while CPU power consumption is high.

And we already knew this based on PS4 and Pro fan noise and the DF tests where they use cutscenes in various games to look for max consumption.

Fan noise is not always directly linked to power consumption. Chips have a number of thermal sensors on them, normally at potential hotspots, and a particular sensor could cause fan noise to spike even though average temperature across all sensors and power consumption are not at a level that would generally cause lots of fan noise.

"Fan noise = power consumption" is not guaranteed. That's not a certainty.

The Matrix demo showed this, as the max was actually found in one cutscene (~225 W). During the "gameplay" the PS5 often draws up to ~200-205 W, but even then we know from UE5 developers that this demo is actually light on the CPU. Plenty of other consumption tests with other games showed this.

Yeah, cutscenes can be a great place to optimise quality and increase power consumption even if CPU use is limited. Turn up the number of lights, turn up the DoF quality, use your highest-quality back-scattering skin shader, etc.

We know the PS5's dynamic clock is deterministic and that it will downclock when a certain level of instruction activity is exceeded. That activity correlates directly with actual power consumption (because that's the whole point).

It's a bit more complicated than that. Certain instructions can require more power to execute, so it's not just the number of instructions. Sony / AMD will have taken this into account, of course.
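A rough sketch of what an instruction-activity-weighted, deterministic power model might look like; the weights and budget below are made up for illustration, since Sony/AMD's actual model isn't public:

```python
# Rough sketch of an activity-weighted, deterministic power model: the governor
# estimates power from WHAT the chip is doing, not just how many instructions it
# retires, and only trims clocks when the modelled power exceeds the budget.
# The weights and the 225 W budget are invented; the real model is not public.

POWER_BUDGET_W = 225.0
BASE_W = 60.0                      # assumed fixed platform cost
COST_PER_UNIT_W = {                # assumed incremental cost per unit of activity
    "scalar_alu": 0.8,
    "vector_fp32": 2.0,
    "vector_fp16_packed": 2.4,     # denser work draws more power per instruction
    "memory_traffic": 1.5,
}

def modelled_power(activity: dict) -> float:
    return BASE_W + sum(COST_PER_UNIT_W[k] * v for k, v in activity.items())

def clock_scale(activity: dict) -> float:
    # simplified linear back-off; the real frequency/power relationship is not linear
    return min(1.0, POWER_BUDGET_W / modelled_power(activity))

light = {"scalar_alu": 30, "vector_fp32": 40, "vector_fp16_packed": 10, "memory_traffic": 20}
heavy = {"scalar_alu": 30, "vector_fp32": 42, "vector_fp16_packed": 12, "memory_traffic": 21}

print(modelled_power(light), clock_scale(light))   # under budget -> full clocks
print(modelled_power(heavy), clock_scale(heavy))   # slightly over -> small, repeatable trim
```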
 
by and large, a game actually requiring a 2080 Super/Ti on the PC to match the PS5 version at the same quality settings is not the norm.

For a more complete analysis, one could compare to AMD GPUs in the dGPU space, as this game could be better optimized for that architecture. Generally the PS5 hovers around a 2070/2070 Super, sometimes better, sometimes worse.

What is your problem?? Others have validated that there is something wrong/fishy with those Nvidia slides; it's only you who seems to be in denial over it and can't see it, even when other members see it and have also pointed it out.

There's nothing wrong with highlighting things that don't look 'right' - it's a good thing to challenge those kinds of things.

I am not only talking about some slides, I am talking about the whole presentation and the technical explanations that surround the RTX-IO/DS implementation on modern hardware. As other members have pointed out to you, yes, some details are missing, but that doesn't mean everything was just meant to fool us.

And what do you even mean, I can't 'go after one title'? I can assure you that for the context of the discussion I can go after that one title, as it's pretty clear I'm talking about the latest video and not about games in general.

'Muh, PS5 performs at 2080 levels' is more of a general statement than 'PS5 performs like an RTX 2080 in this title, but so does a 6600XT/RX6700'; there's context in the latter. In general, it's closer to a 2070/2070 Super than a 2080 Super.

I'm struggling

Yeah.
 
But in known CPU-heavy scenes (the corridor of death in Control with RT enabled) the PS5 is basically neck and neck with the XSX.

Same with 120 fps modes, which should be more CPU-heavy and equally GPU-heavy; the PS5 does very well here too.

That could easily be because the PS5's fixed-function GPU hardware advantage is eating away at the XSX's shader (and bandwidth) advantage.

The PS5 / XSX / XSS CPU advantage is about 4x over the last-gen Jaguar cores (about 8x for XSX / XSS if you include SIMD). It's unlikely any cross-gen game is really pushing the limits of these Zen 2 CPUs.

Even 120 Hz modes are unlikely to radically increase the power-hungry SIMD loads, as you'd normally see these used to handle some aspect of simulation, which should be frame-rate independent.
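A minimal sketch of why simulation cost (where the heavy SIMD work usually lives) doesn't have to double at 120 Hz, assuming the common fixed-timestep pattern:

```python
# Minimal sketch of why simulation load doesn't automatically double at 120 Hz:
# with a fixed-timestep loop the simulation ticks at its own rate (say 60 Hz)
# no matter how often frames are rendered. Names here are placeholders.

SIM_DT = 1.0 / 60.0          # simulation runs at a fixed 60 Hz

def run(frame_times: list) -> int:
    """Returns how many simulation ticks were executed for the given frames."""
    accumulator = 0.0
    ticks = 0
    for dt in frame_times:       # dt = time taken by each rendered frame
        accumulator += dt
        while accumulator >= SIM_DT:
            ticks += 1           # simulate_one_step() would go here
            accumulator -= SIM_DT
    return ticks

one_second_at_60fps  = [1.0 / 60.0] * 60
one_second_at_120fps = [1.0 / 120.0] * 120
print(run(one_second_at_60fps))    # 60 simulation ticks
print(run(one_second_at_120fps))   # still 60 simulation ticks
```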
 
And in that scene, the PS5 performs 12% slower than a 2080, putting it very close to an RTX 2070 Super (a 1% differential). Taking into account that this game favours AMD hardware, as DF notes in this video, we are back at roughly RTX 2070 levels again.

I think the RX 6600 XT (a narrow, high-clocked RDNA2 GPU at 10.6 TF) is the closest you'd get to the PS5, and then some.

 
Good to see the vsync performance penalty was taken into account. The PS5 GPU shows really good performance here, a nice advantage over the 5700 XT.
 
To answer the critique put forward here: we have heard from devs making games that target heavy CPU usage that the GPU in the PS5 does in fact downclock. But whether an end user notices this in a game with TAA, post-processing and DRS is a whole other question. That is the point of the PS5 design.

For example - think of a game with an unlocked framerate up to 120 fps and DRS targeting a high output resolution. How exactly does that fit into a fixed, shared power budget? The obvious answer is that it stresses both CPU and GPU to their max, and the power adjusts.
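A toy sketch of that feedback loop, not any particular engine's implementation: the DRS controller measures GPU frame time and nudges the internal resolution toward the frame budget, so swings in available GPU performance (clocks included) show up as small resolution changes rather than dropped frames:

```python
# Toy sketch of a dynamic-resolution feedback loop: measure GPU frame time and
# scale the internal resolution toward the frame budget, so modest swings in
# available GPU performance become small resolution changes instead of dropped
# frames. Not any real engine's code; all numbers are hypothetical.

TARGET_MS = 1000.0 / 120.0          # 120 fps budget
MIN_SCALE, MAX_SCALE = 0.5, 1.0     # allowed internal resolution scale range

def update_scale(scale: float, measured_gpu_ms: float) -> float:
    # proportional controller: shrink when over budget, grow back when under
    error = TARGET_MS / measured_gpu_ms
    new_scale = scale * error ** 0.5        # damped response to avoid oscillation
    return max(MIN_SCALE, min(MAX_SCALE, new_scale))

scale = 1.0
for gpu_ms in (9.5, 10.2, 9.0, 8.1, 7.8):   # hypothetical per-frame GPU times
    scale = update_scale(scale, gpu_ms)
    print(f"gpu {gpu_ms:.1f} ms -> internal res scale {scale:.2f}")
```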
 