Digital Foundry Article Technical Discussion [2022]

we have heard from devs making games which target heavy CPU usage that the GPU in PS5 does in fact downclock

Very interesting to hear that; it puts a lot of discussions to rest, thanks for the information.
Things will be even more interesting going forward, as games start utilizing the CPU more and more and GPU saturation becomes more common.

Good to see the vsync perf penalty was taken into account. The PS5 GPU shows really good perf here, a nice advantage over the 5700 XT.

And in other scenes it's much closer to a 5700 XT (latter part of the video). The game does prefer narrow/fast; a 6600 XT would be a good fit for PS5 comparisons. I see that many put that forward in the YT comments.
 
To answer the critique put forward here: we have heard from devs making games which target heavy CPU usage that the GPU in PS5 does in fact downclock. But whether an end user notices this in a game with TAA, post-processing and DRS is a whole other question. That is the point of the PS5 design.

For example, think of a game with a framerate unlocked to 120 and DRS targeting a high output res. How exactly does that fit into a fixed and shared power budget? The obvious answer is that it stresses both CPU and GPU to their max, and the power adjusts.
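To make the shared-budget idea concrete, here is a toy sketch of how a fixed CPU+GPU power budget can translate into a variable GPU clock. This is not Sony's actual algorithm; every number and the linear scaling rule below are invented purely for illustration.

```python
# Toy model of a fixed, shared CPU/GPU power budget (NOT Sony's actual
# algorithm; the wattages and the linear clock/power scaling below are
# invented purely for illustration).

TOTAL_POWER_W = 200.0      # hypothetical shared budget for CPU + GPU
GPU_MAX_CLOCK_GHZ = 2.23   # PS5's advertised GPU frequency cap
GPU_MAX_POWER_W = 140.0    # hypothetical GPU draw at the capped clock

def gpu_clock_ghz(cpu_power_w: float) -> float:
    """Return a GPU clock that fits in whatever power the CPU leaves over.

    Assumes (simplistically) that GPU power scales linearly with clock;
    in real silicon power rises faster than linearly with frequency and
    voltage, which is why small clock drops buy back comparatively large
    power savings.
    """
    gpu_budget_w = min(GPU_MAX_POWER_W, TOTAL_POWER_W - cpu_power_w)
    return GPU_MAX_CLOCK_GHZ * max(0.0, gpu_budget_w) / GPU_MAX_POWER_W

# A light CPU load leaves the GPU at its cap...
print(f"{gpu_clock_ghz(55.0):.2f} GHz")   # 2.23 GHz
# ...while a CPU-heavy 120fps scene eats into the shared budget.
print(f"{gpu_clock_ghz(75.0):.2f} GHz")   # ~1.99 GHz
```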

That's better; we normal folk have heard nothing about this in the 12+ months since the PS5's release, so getting some form of actual confirmation that it does happen is nice!

Being cheeky.......did they happen to say what it drops to? Is there a min clock it simply will not drop below? :mrgreen:
 
I'm confused by this comment. AMD GPUs have supported ML for a few generations now and are more than capable of doing ML-based upscaling.

RDNA2 fully supports INT4 and INT8.

PS5 has already used ML in Spiderman:MM with inference run on the GPU.

So there's no waiting for RDNA3 or 4, it's here right now.

Sure, like accelerated Raytracing on RDNA2 hardware is "here right now" and the performance is just a software problem...

Doing inference with DP4a is neither efficient nor fast. That is one reason why AMD hand-coded the same method with compute shaders instead of using a DL network.
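For readers wondering what DP4a actually is: it is a single instruction that multiplies the four signed 8-bit lanes packed into each of two 32-bit registers and accumulates the results into a 32-bit integer. The sketch below only emulates that arithmetic on the CPU to show the semantics; it is not GPU code and not AMD's implementation.

```python
import struct

def dp4a(a: int, b: int, c: int) -> int:
    """Emulate DP4a semantics: dot product of the four signed 8-bit lanes
    packed into 32-bit words a and b, accumulated into c. Hardware with
    DP4a does this in one instruction per word pair; without it the same
    work costs several unpack/multiply/add operations."""
    a_lanes = struct.unpack("4b", a.to_bytes(4, "little", signed=True))
    b_lanes = struct.unpack("4b", b.to_bytes(4, "little", signed=True))
    return c + sum(x * y for x, y in zip(a_lanes, b_lanes))

# Pack the int8 lanes [1, 2, 3, 4] and [10, 20, 30, 40] into 32-bit words.
a = int.from_bytes(struct.pack("4b", 1, 2, 3, 4), "little", signed=True)
b = int.from_bytes(struct.pack("4b", 10, 20, 30, 40), "little", signed=True)
print(dp4a(a, b, 0))  # 1*10 + 2*20 + 3*30 + 4*40 = 300
```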
 
Sure, like accelerated Raytracing on RDNA2 hardware is "here right now" and the performance is just a software problem...

Doing inference with DP4a is neither efficient nor fast. That is one reason why AMD hand-coded the same method with compute shaders instead of using a DL network.

Did you read the conversation?

He claimed that AI/ML upscaling doesn't exist on AMD because it's a hardware problem and AMD needs to wait for RDNA3/4, which I highlighted as incorrect, as RDNA2 is capable.

The speed at which it can do it is irrelevant, as speed was not the issue discussed; the claim was that the hardware support wasn't there and wouldn't be there until RDNA3/4 arrived.
 
And in other scenes it's much closer to a 5700 XT (latter part of the video). The game does prefer narrow/fast; a 6600 XT would be a good fit for PS5 comparisons. I see that many put that forward in the YT comments.
Even in the other scene the PS5 has a clear advantage. Take into account also that the 5700 XT was paired with a vastly superior CPU and was still almost 20% slower (though in a GPU-heavy scene; it would be interesting to see results with a CPU closer to the PS5's).
Things will be even more interesting going forward, as games start utilizing the CPU more and more and GPU saturation becomes more common.
We have already seen in a few games that, in scenes heavier on the CPU, the PS5 can have an advantage over the XSX for some reason.
 
Did you read the conversation?
He claimed that AI/ML upscaling doesn't exist on AMD because it's a hardware problem and AMD needs to wait for RDNA3/4, which I highlighted as incorrect, as RDNA2 is capable.
The speed at which it can do it is irrelevant, as speed was not the issue discussed; the claim was that the hardware support wasn't there and wouldn't be there until RDNA3/4 arrived.

No, speed is not irrelevant. In fact it is the only reason why accelerators like GPUs exist. Every chip can do "ML upscaling". My Shield TV is doing it with 1/20 of the compute performance of the PS5.
But reconstructing a frame in real time without adding input latency in a game is different from upscaling a finished frame in the buffer.
 
Did you read the conversation?

He claimed that AI/ML upscaling doesn't exist on AMD because it's a hardware problem and AMD needs to wait for RDNA3/4, which I highlighted as incorrect, as RDNA2 is capable.

The speed at which it can do it is irrelevant, as speed was not the issue discussed; the claim was that the hardware support wasn't there and wouldn't be there until RDNA3/4 arrived.

In that case, the PS4 supports it as well. Speed IS important. Besides that, the PS5 doesn't support DP4a.


Even in the other scene the PS5 has a clear advantage. Take into account also that the 5700 XT was paired with a vastly superior CPU and was still almost 20% slower (though in a GPU-heavy scene; it would be interesting to see results with a CPU closer to the PS5's).

For some reason we have already seen that, in scenes heavier on the CPU, the PS5 can have an advantage over the XSX.

The CPUs, as pointed out in the video, bear no relevance in these scenes as the GPUs are pushed to the max. It was also pointed out that the PS5 GPU (narrow/fast) fits this game better, as is evident in the video.
For reference, the 5700 XT is an RDNA 1.0, much lower-clocked GPU, and should be slower to boot (9.x TF). As said, a 6600 XT would be a better comparison (a narrow/fast RDNA 1.5+ design like the PS5).

No idea about the PS5 having an advantage over the XSX, probably in older-gen games yes. The CPU will allegedly downclock enough for devs to notice it at least (which was claimed not to be the case). Like I said, things will be more interesting when CPU+GPU get hammered in games actually using the new hardware (when we leave cross-gen behind).
 
No, speed is not irrelevant. In fact it is the only reason why accelerators like GPUs exist. Every chip can do "ML upscaling". My Shield TV is doing it with 1/20 of the compute performance of the PS5.
But reconstructing a frame in real time without adding input latency in a game is different from upscaling a finished frame in the buffer.

For the context of the discussion that was going on it wasn't relevant, as it wasn't speed we were talking about but the actual ability of the hardware to do ML/AI-based upscaling.

And now you're talking about something completely irrelevant to a discussion that had concluded over 12 hours ago.
 
No idea about the PS5 having an advantage over the XSX, probably in older-gen games yes. The CPU will allegedly downclock enough for devs to notice it at least (which was claimed not to be the case). Like I said, things will be more interesting when CPU+GPU get hammered in games actually using the new hardware (when we leave cross-gen behind).
I don't have that good a memory, but if I'm not wrong the PS5 had some perf advantage in Cyberpunk during driving, which for sure has a CPU impact, and the game is not that old. The same goes for the Matrix Awakens demo, which is for sure not old, and another example is the 120fps mode in Cold War. So as I said, there are quite modern examples with CPU usage and some perf advantage for the PS5.
 
For the context of the discussion that was going on it wasn't relevant, as it wasn't speed we were talking about but the actual ability of the hardware to do ML/AI-based upscaling.

And now you're talking about something completely irrelevant to a discussion that had concluded over 12 hours ago.

ML tests were done on Kepler GPUs 10 years ago. So you don't need matrix engines, DP4a, double-packed FP16, FP16 support, etc. A simple FP32 vector unit and an API are enough to do ML upscaling. So yes, a PS5 is capable of "actually doing ML/AI-based upscaling". Like every other GPU released in the last 10 years, too.
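To illustrate that point with a toy example (not any actual upscaler): the workhorse of most upscaling networks is just a convolution, i.e. plain FP32 multiply-adds, which any GPU (or CPU) of the last decade can execute. Dedicated INT8/matrix hardware only changes how fast it runs.

```python
import numpy as np

def conv3x3_fp32(feature_map: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """A single 3x3 convolution using nothing but FP32 multiply-adds,
    the basic building block of most upscaling networks. No matrix
    engines or INT8 support required; they only make this faster."""
    h, w = feature_map.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(feature_map[y:y + 3, x:x + 3] * kernel)
    return out

frame = np.random.rand(64, 64).astype(np.float32)   # stand-in feature map
kernel = np.random.rand(3, 3).astype(np.float32)    # stand-in trained weights
print(conv3x3_fp32(frame, kernel).shape)            # (62, 62)
```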
 
I don't have that good a memory, but if I'm not wrong the PS5 had some perf advantage in Cyberpunk during driving, which for sure has a CPU impact, and the game is not that old. The same goes for the Matrix Awakens demo, which is for sure not old.

Maybe you're right, I have no idea as I said before about that. It would strike me as weird seeing they have the same CPUs at the same speed (somewhat higher clock on the XSX). 2077 wasn't all that well-coded a game, and the Matrix wasn't a game but a quickly-done tech demo without all the heavy lifting a true full game would have. I think it's still too early to conclude any true differences between the XSX and PS5, especially since there's truth to narrow/fast utilization, which is more common in cross-gen games, as evident in the DF video.
RT will also be an interesting metric, though I doubt it will be used all that much on either machine.
 
Maybe you're right, I have no idea as I said before about that. It would strike me as weird seeing they have the same CPUs at the same speed (somewhat higher clock on the XSX). 2077 wasn't all that well-coded a game, and the Matrix wasn't a game but a quickly-done tech demo without all the heavy lifting a true full game would have. I think it's still too early to conclude any true differences between the XSX and PS5, especially since there's truth to narrow/fast utilization, which is more common in cross-gen games, as evident in the DF video.
RT will also be an interesting metric, though I doubt it will be used all that much on either machine.
In terms of CPU impact, the API also has a strong influence, and I/O can be more or less heavy on the CPU, so things are more complicated than the simple statement that because the CPU is used more, the GPU will have slower clocks on the PS5 and perf will be lower.
 
ML tests were done on Kepler GPUs 10 years ago. So you don't need matrix engines, DP4a, double-packed FP16, FP16 support, etc. A simple FP32 vector unit and an API are enough to do ML upscaling. So yes, a PS5 is capable of "actually doing ML/AI-based upscaling". Like every other GPU released in the last 10 years, too.

Which was exactly my point, so saying this

"I still think or hope that AMD will come with their own AI/ML reconstruction tech, one way or another. RDNA3 may or may not have it yet, but RDNA4 perhaps, who knows"

That stood out to me and made no sense, which is why I challenged it, as AMD wouldn't need to wait for RDNA3/4 to have the ability to process AI/ML; their current hardware can already do it.

Sure, if you dedicated silicon to units that handle those instructions it would be faster, but to say AMD would need to wait for RDNA3/4 for an AI/ML upscaler.....whaaaa :nope:
 
To answer the critique put forward here: we have heard from devs making games which target heavy CPU usage that the GPU in PS5 does in fact downclock. But whether an end user notices this in a game with TAA, post-processing and DRS is a whole other question. That is the point of the PS5 design.

Mark Cerny said as much in The Road to PS5 video:


Mark Cerny said:
36 CUs at 2.23GHz is 10.3TF, and we expect the GPU to spend most of its time at or close to that frequency
In the following few sentences Mr Cerny talks about the power drop and what that means in percentage terms for the actual clock. It's amazing there are people who will die on this hill.
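For anyone who wants to check the quoted figure, it falls straight out of the standard peak-FP32 formula (CUs x ALUs per CU x 2 FLOPs per FMA x clock); the 3% drop at the end is only a hypothetical to show how little a small downclock moves the number.

```python
# Peak FP32 throughput = CUs x 64 ALUs per CU x 2 FLOPs per FMA x clock
cus, alus_per_cu, flops_per_fma = 36, 64, 2
clock_ghz = 2.23
tflops = cus * alus_per_cu * flops_per_fma * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")         # ~10.28, i.e. the quoted 10.3

# A hypothetical 3% clock drop barely moves the headline figure.
print(f"{tflops * 0.97:.2f} TFLOPS")  # ~9.97
```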
 
Sure, if you dedicated silicon to units that handle those instructions it would be faster, but to say AMD would need to wait for RDNA3/4 for an AI/ML upscaler.....whaaaa :nope:
Don't be obtuse on purpose. AMD wouldn't implement a powerful AI upscaler right now because it would eat into their processing power when it runs on regular shaders. They will wait until they implement an actual proper ML engine in their consumer hardware, and then they will release the AI upscaler.

Intel released their AI upscaler from the get go because they have matrix engines on their consumer GPUs.
 
In terms of CPU impact, the API also has a strong influence, and I/O can be more or less heavy on the CPU, so things are more complicated than the simple statement that because the CPU is used more, the GPU will have slower clocks on the PS5 and perf will be lower.

Yeah well, the PS5 could indeed enjoy a lower-overhead, closer-to-metal API for the CPU. Technically they are exactly the same; a 100MHz difference wouldn't be all that much of a problem either.
Though I do believe DF's notes, because they actually have contact with developers. This isn't meant for console warring either, as again, the XSX and PS5 aren't far from each other in the performance/capability department anyway. The PS5 might have the architectural advantage in the first half of the gen (narrow/fast), while the XSX might in the second half, when engines saturate the GPUs better, RT gets involved and the CPU/GPU get hammered more (wide/slower).
It is probably the first generation where the hardware is almost identical.

Which was exactly my point, so saying this

"I still think or hope that AMD will come with their own AI/ML reconstruction tech, one way or another. RDNA3 may or may not have it yet, but RDNA4 perhaps, who knows"

That stood out to me and made no sense, which is why I challenged it, as AMD wouldn't need to wait for RDNA3/4 to have the ability to process AI/ML; their current hardware can already do it.

Sure, if you dedicated silicon to units that handle those instructions it would be faster, but to say AMD would need to wait for RDNA3/4 for an AI/ML upscaler.....whaaaa :nope:

You're trying to twist things here. I am clearly referring to FSR 2.0 being akin to TAAU, and not to XeSS and DLSS; that is almost certainly due to no dedicated acceleration being present on current AMD GPUs. Yes, RDNA can accelerate it (in HW), but it isn't fast enough, obviously, otherwise we would have seen it. What I further say is that RDNA3+ might see solutions similar to what Intel and NV are doing. AMD only recently caught up in rasterization with RDNA2; the next step could be RT and/or AI/ML acceleration like Intel and NV are doing. AMD is the only one of the big GPU designers omitting it.

In the following few sentences Mr Cerny talks about the power drop and what that means in percentage terms for the actual clock. It's amazing there are people who will die on this hill.

Remember that Dictator is quoting what actual developers have experienced and shared. Also, go and read the DF topic forum rules; you are breaking multiple of them with that last part, which is totally not needed imo. I do believe in self-moderation, and you are generally a poster with good writing, but I think going personal towards the only DF member we have on the forum isn't good for the discussion. This forum is already quite thin on developers/studios, or people who have contact with them.
 
No idea about the PS5 having an advantage over the XSX, probably in older-gen games yes. The CPU will allegedly downclock enough for devs to notice it at least (which was claimed not to be the case). Like I said, things will be more interesting when CPU+GPU get hammered in games actually using the new hardware (when we leave cross-gen behind).

Just a minor point, but I think the CPU frequency is constant. When the CPU is stressed it effectively means less power available for the GPU so the likelihood of GPU frequency drops will increase.

Keeping available CPU constant is probably more important in terms of dependable game logic / simulation performance.


Edit: Nope, just checked and I'm wrong, CPU can fluctuate too. Old man memory. :(
 
Just a minor point, but I think the CPU frequency is constant. When the CPU is stressed it effectively means less power available for the GPU so the likelihood of GPU frequency drops will increase.

Keeping available CPU constant is probably more important in terms of dependable game logic / simulation performance.

Isn't that dependent on what kind of game/situation one is in? More GPU grunt is mostly more important when, say, a high-fidelity 'next-gen' game is on display at 30fps. The developers have allegedly witnessed the CPU clocking down, so in that scenario the CPU had to give in to keep the GPU at max capability.
 
Alex's speculation seems reasonable to me. The only error he might have made is in calculating the TFLOPS of the RTX GPUs. I suspect his cards run above the officially stated boost clocks from which the TFLOP numbers are calculated. It wouldn't change things to a very meaningful degree though.
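As a quick sanity check on that point (the card and the observed clock here are assumptions, chosen only to illustrate the effect): the spec-sheet TFLOP figure comes from the official boost clock, while real cards often sustain higher clocks in games.

```python
def tflops(cuda_cores: int, clock_mhz: float) -> float:
    # Peak FP32 = cores x 2 FLOPs per FMA x clock
    return cuda_cores * 2 * clock_mhz / 1e6

# Example: RTX 2070 Super, 2560 cores, official 1770 MHz boost.
print(f"{tflops(2560, 1770):.2f} TF")  # 9.06 -- the spec-sheet number
# The same card often sustains around 1900 MHz in games (an assumed,
# illustrative figure), which inflates the effective number a bit.
print(f"{tflops(2560, 1900):.2f} TF")  # ~9.73
```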
 