Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

This in no way contradicts what I claimed. I just said that, per Cerny, max clock speed doesn't imply max power draw. Very well optimized games would find the power-draw limits, and we really don't know what that means. Cerny was very specific in mentioning AVX2 on the CPU side. And Cerny was very specific in saying that GPU clock speed is not limited by thermals/power draw but by some internal implementation detail of the GPU: it just doesn't clock any higher, no matter what.

Taking what Cerny said, it means that to hit those power-draw limits one would have to have sufficiently well optimized code. Again, what that really means we will not know until some developer spills the beans. Just hitting max clocks is not enough to cause throttling, though.
Yea that's fine, I'm not saying hitting max frequency results in throttling.
Cerny provided:
a) The optimal clock rates for both CPU and GPU (we anticipate these apply to workloads that are not heavy saturation)
b) The use cases in which CPU load could have a downclocking effect on the GPU. Here he brought up AVX instructions as being able to drop GPU frequency, but barely. This makes sense because the GPU draws roughly 4x more power than the CPU in a modern system, so when the CPU needs extra power, what the GPU gives up is small relative to its much larger budget (see the rough sketch after this list).
c) Cerny _did not_ provide the GPU loads that will cause downclocking on the CPU. Power pulled in the opposite direction hits the CPU's much smaller budget roughly 4x as hard.
d) Cerny _did not_ provide what the frequencies would be when both the CPU and GPU are heavily loaded at the same time.
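To make the b)/c) asymmetry concrete, here's a minimal back-of-the-envelope sketch in Python. The 200 W budget, the 40/160 W split and the cube-law power/frequency relation are assumptions I picked purely for illustration, not numbers from Cerny's talk:

```python
# Toy model of a shared SoC power budget; all numbers are illustrative
# assumptions, not figures from Cerny's talk.

TOTAL_BUDGET_W = 200.0                  # hypothetical SoC power budget
CPU_SHARE_W, GPU_SHARE_W = 40.0, 160.0  # assumed ~1:4 CPU:GPU split

def relative_clock(base_watts: float, new_watts: float) -> float:
    """Relative frequency if power scales roughly as f^3 (P ~ C*V^2*f with V ~ f)."""
    return (new_watts / base_watts) ** (1.0 / 3.0)

# Case b): a power-hungry CPU burst (say heavy AVX) needs 10 extra watts
# at the same CPU clock, and those watts come out of the GPU's share.
extra_w = 10.0
gpu_clock = relative_clock(GPU_SHARE_W, GPU_SHARE_W - extra_w)
print(f"GPU clock after a {extra_w:.0f} W CPU burst: {gpu_clock:.3f}x")       # ~0.98x

# Case c): the same 10 W pulled the other way is a quarter of the CPU's
# whole budget, so the hit to the CPU clock is far larger.
cpu_clock = relative_clock(CPU_SHARE_W, CPU_SHARE_W - extra_w)
print(f"CPU clock after giving up {extra_w:.0f} W to the GPU: {cpu_clock:.3f}x")  # ~0.91x
```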
 
Yea that's fine, I'm not saying hitting max frequency results in throttling.
Cerny provided:
a) The optimal clock rates for both CPU and GPU (we anticipate these apply to workloads that are not heavy saturation)
b) The use cases in which CPU load could have a downclocking effect on the GPU. Here he brought up AVX instructions as being able to drop GPU frequency
c) Cerny _did not_ provide the GPU loads that will cause downclocking on the CPU.
d) Cerny _did not_ provide what the frequencies would be when both the CPU and GPU are heavily loaded at the same time.

Agreed. And to me that is why we cannot really draw any good conclusions based on the data we have now. The best we can do is parrot Cerny and then either claim he has no credibility or is lying, or say "I believe Cerny", though he left some considerable wiggle room in his talk.
 
I find it strange that so many theories appear based on power problems with a 2.23 GHz, 36 CU APU and lower CPU clocks, yet no one finds power problems with an APU with higher CPU clocks (+3%) and 52 CUs (+44%) at 1825 MHz (-18%), with extra logic to route more data and wider memory controllers.

Why is that? What data do you have for that line of thought? Based only on these things, I would guess Microsoft's APU will consume more power than the PS5's.

My gut feeling is that a lot of this has to do with PC-side experience. Boost on PC is just... not that good. It starts with chips not being binned to a console standard and hardware being built more cheaply (bad cooler on one board, great cooler on another), and ends with thermal limits because some users have poorly ventilated cases. Bad power supplies are also seen on the PC side.

This is different for Sony, as they explicitly said they only throttle clocks around power draw and designed the system accordingly. Every console chip should have the same behavior, i.e. limited by the worst chip. As with Sony, I don't doubt Microsoft, as they have done solid engineering, as we can see from the teardown.
 
You're right, it took them a while to get to this point, so it wasn't immediate in that sense. Their jobs had all sorts of gaps everywhere, on all cores.
Even in the shipping version, it is fairly clear that usage peaks are a rarity across all six cores, even after they have overlapped rendering and game logic. A wild eyeball guess says <50% utilisation on all cores.

Many here have a romanticised view of hardware utilisation and software development in general. The reality is that it is tough to keep general-purpose hardware usage high all the time, let alone at a naive 100% utilisation. This applies even to things that have been heavily optimised for consoles, and yet many focus on theorising about practically unattainable scenarios like continuous and simultaneous 100% CPU use and 100% GPU use.

(For sure, you can attain that if you write a "power virus" like Furmark that does gibberish specifically to achieve this, but it would not be a reflection of reality.)

Modern processor designs exploit this exact reality by introducing race-to-idle with dynamic power allocation in the name of "turbo" or "boost", so that the power left unused due to underutilisation can be redirected to get the hot paths done quicker (raising the running frequency temporarily). In the context of the CPU, that's the basics behind the different tiers of boost clocks (single core, two cores, half the cores, all cores, etc.).
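As a toy sketch of that boost-tier idea (the budget, clocks and cube-law scaling below are made-up numbers, not any vendor's actual boost table):

```python
# Toy model of dynamic power allocation across cores: power that idle cores
# are not using lets the busy cores clock higher, up to a hard frequency cap.
# All numbers are made up for illustration; real boost algorithms also weigh
# temperature, current and silicon quality.

PACKAGE_BUDGET_W = 90.0
BASE_CLOCK_MHZ = 3500
FMAX_MHZ = 4700                          # hard cap from the silicon/firmware
BASE_PER_CORE_W = PACKAGE_BUDGET_W / 8   # per-core power with all 8 cores at base clock

def boost_clock(active_cores: int) -> float:
    """Split the whole budget over the active cores; assume power ~ f^3."""
    watts_per_core = PACKAGE_BUDGET_W / active_cores
    f = BASE_CLOCK_MHZ * (watts_per_core / BASE_PER_CORE_W) ** (1.0 / 3.0)
    return min(FMAX_MHZ, f)

for n in (8, 4, 2, 1):
    print(f"{n} active core(s): ~{boost_clock(n):.0f} MHz")
# All-core load sits at the base clock; lighter loads race to idle and the
# remaining cores spend the spare budget on higher frequency, capped at Fmax.
```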
 
My gut feeling is that a lot of this has to do with PC-side experience. Boost on PC is just... not that good. It starts with chips not being binned to a console standard and hardware being built more cheaply (bad cooler on one board, great cooler on another), and ends with thermal limits because some users have poorly ventilated cases. Bad power supplies are also seen on the PC side.

lol what!?
 
Even in the shipping version, it is fairly clear that usage peaks are a rarity across all six cores, even after they have overlapped rendering and game logic. A wild eyeball guess says <50% utilisation on all cores.

Many here have a romanticised view of hardware utilisation and software development in general. The reality is that it is tough to keep general-purpose hardware usage high all the time, let alone at a naive 100% utilisation. This applies even to things that have been heavily optimised for consoles, and yet many focus on theorising about practically unattainable scenarios like continuous and simultaneous 100% CPU use and 100% GPU use.

(For sure, you can attain that if you write a "power virus" like Furmark that does gibberish specifically to achieve this, but it would not be a reflection of reality.)

Modern processor designs exploit this exact reality by introducing race-to-idle with dynamic power allocation in the name of "boosting", so that the power left unused due to underutilisation can be redirected to get the hot paths done quicker (raising the running frequency temporarily). In the context of the CPU, that's the basics behind the different tiers of boost clocks (single core, two cores, half the cores, all cores, etc.).
Agreed. I assumed as much even as I wrote it; it looked like there was more to give. I also assumed that this was why Cerny was confident that CPU workloads would be unlikely to dip GPU clock speeds in the PS5, with the exception of AVX2 use cases.

There are all sorts of bottlenecks just limiting things, and higher clock speeds may have a positive effect in frame-rate-limited scenarios. In some situations, if I'm frame-rate limited, having that higher clock speed is going to result in more FPS than spreading the work over more cores (which requires programmer intervention, while having a higher clock speed does not).

In this case, having a super high boost clock is what you want for 120-240 fps on your CPU.
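Here's a rough Amdahl-style sketch of that point. The 4 ms serial chunk and 8 ms of parallelisable work are assumed numbers, purely for illustration:

```python
# Amdahl-style sketch of why clock speed matters at very high frame rates:
# the single-threaded part of a frame sets a floor that extra cores cannot
# remove, while a clock bump shrinks that floor directly. The 4 ms serial
# chunk and 8 ms of parallel work are assumed numbers, purely illustrative.

SERIAL_MS = 4.0     # critical-path work stuck on one core (at base clock)
PARALLEL_MS = 8.0   # work that spreads across cores (total, at base clock)

def fps(clock_scale: float, cores: int) -> float:
    frame_ms = SERIAL_MS / clock_scale + PARALLEL_MS / (clock_scale * cores)
    return 1000.0 / frame_ms

for cores in (4, 8, 16, 64):
    print(f"{cores:>2} cores @ base clock: {fps(1.0, cores):5.1f} fps")
# Even 64 cores only approach the ~250 fps ceiling set by the serial 4 ms.

print(f" 8 cores @ +20% clock: {fps(1.2, 8):5.1f} fps")
# The clock bump moves the ceiling itself and needs no re-threading work.
```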

edit:
The graphs we were looking at are jobs, not CPU utilization, btw. So I'm not sure what the actual utilization/saturation is. Not sure if what I posted was a good example anymore.
 
lol what!?

On the PC side, boost is something that is not guaranteed. People project PC boost behavior onto the PS5, and that leads to all kinds of wrong conclusions, starting from assuming something about thermals and ending with thinking the clocks are only achievable with excellent (rare) chips.
 
The claim that SFS contains a hardware block only appears in the DF article, not in MS's own spec sheet.
As per the sampler feedback video above from the DirectX team, Claire Andrews says "it is a GPU hardware feature," and it is neatly printed on the slide.

About 20 seconds into that video.

WRT the whole phraseology from the Sony PS5 presentation: I also have trouble following the logic of how they went from having trouble maintaining 2.0 GHz (GPU) / 3.0 GHz (CPU) under their old fixed-clock paradigm to 2.23 and 3.5 with power transferring between the parts. What loads were they testing that made 2.0/3.0 hard to reach, and what loads are they testing where 2.23 and 3.5 happen "most of the time"? Wouldn't it make sense that loads that made 2.0 and 3.0 hard to maintain under a fixed power budget would have the same effect in a situation where power transfers?
 
Is the PS5's GPU generating so much more heat than the XBSX's that the XBSX can run its GPU and CPU at full speed without overheating but the PS5 can't? If so, the PS5's design really is poor for going with only 36 CUs.
I suspect that the XSX is lower on the power/frequency curve, just thinking about the exponential-looking shape of such graphs. It's 400 MHz down. But the two will have different power/frequency curves, so I don't know for sure.
 
On the PC side, boost is something that is not guaranteed. People project PC boost behavior onto the PS5, and that leads to all kinds of wrong conclusions, starting from assuming something about thermals and ending with thinking the clocks are only achievable with excellent (rare) chips.

There's plenty of data on how boost works on the PC, gathered by people using high-end cooling who know how the different parameters impact boost and have tested it across a wide range of cooling.

Boost behavior noted using an oscilloscope
Boost behavior whiteboard session

I'm linking Buildzoid as the two videos line up. There's a lot more content out there from others (including AMD) that corroborates this.
 
Is the PS5's GPU generating so much more heat than the XBSX's that the XBSX can run its GPU and CPU at full speed without overheating but the PS5 can't? If so, the PS5's design really is poor for going with only 36 CUs.
Well, the PS5's APU is the smallest Sony has made, and they had to do something to catch up. So I would say yes, poor design. By the time the PC cards come out, the PS5's GPU will be at the lower end. At least the PS4 was above a Radeon 5850.
 
Is the PS5's GPU generating so much more heat than the XBSX's that the XBSX can run its GPU and CPU at full speed without overheating but the PS5 can't? If so, the PS5's design really is poor for going with only 36 CUs.

You need to know the efficiency scaling of the node. Once you tip over the knee, the power draw/heat/frequency relationship goes through the roof.

You also need to cross-reference that against the size of the die to see its surface area and heat-dissipation capability.
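As a purely illustrative sketch of that argument (the voltage curve, the per-CU constant and the die areas below are made-up/ballpark figures, not measurements of either console):

```python
# Purely illustrative sketch of the knee argument: below the knee, voltage
# barely rises with frequency; past it, voltage (and so power ~ C*V^2*f)
# climbs steeply. The voltage curve, the per-CU constant and the die areas
# are made-up/ballpark figures, not measurements of either console.

def voltage(freq_ghz: float, knee_ghz: float = 1.9) -> float:
    v = 0.90 + 0.05 * freq_ghz                 # shallow slope below the knee
    if freq_ghz > knee_ghz:
        v += 0.35 * (freq_ghz - knee_ghz)      # much steeper slope past it
    return v

def gpu_power_w(freq_ghz: float, cu_count: int) -> float:
    C_PER_CU = 1.1                             # arbitrary switching-capacitance constant
    return C_PER_CU * cu_count * voltage(freq_ghz) ** 2 * freq_ghz

for label, freq, cus, die_mm2 in (("narrow + fast", 2.23, 36, 300),
                                  ("wide + slow", 1.825, 52, 360)):
    p = gpu_power_w(freq, cus)
    print(f"{label:13s}: ~{p:5.1f} W, ~{p / die_mm2:.2f} W/mm^2 to dissipate")
# With these assumed numbers the narrower, faster chip draws more and has to
# push that heat through a smaller die, which is the whole point of the knee.
```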
 
Is the PS5's GPU generating so much more heat than the XBSX's that the XBSX can run its GPU and CPU at full speed without overheating but the PS5 can't? If so, the PS5's design really is poor for going with only 36 CUs.

Hard to tell unless you run the PS5 GPU at 1.85 GHz instead of 2.23 GHz and take measurements. It's even more difficult because we don't have PC GPU parts to test with, to know where on the knee of the curve RDNA2 GPUs normally sit, and then cross-reference that with next-gen console clocks and voltages.
 
No. Again, for about the millionth time: the consoles have Zen 2-based CPU cores, but they do not adhere to any AMD CPU/APU specifications nor reuse them in any way. They are built using the same elements, like the Zen 2 core, but that's it; each different chip, console or not, is built from scratch using the blocks AMD has.
The CPU cores themselves should be identical, but the amount of cache hasn't been confirmed by either Sony or MS. It could be the same as Renoir (that 4900HS), it could be more, heck, it could even be less in theory. But regardless of its exact configuration, it's not the same chip even if it has the same amount of cache; it's built to fit the console APU.
 
- Devs are limited by TDP.
- Increasing the workload per cycle increases TDP and reduces MHz = 60 FPS unreachable.
- Reducing the workload per cycle decreases TDP and increases MHz = 60 FPS reachable.
- So they are forced to optimize code without increasing workloads, or make concessions to picture quality.

Again, it's so easy to understand if you know why current-gen games were struggling with 60 fps.

I think on the hardware side the only real change is probably feedback from the texture samplers. But the APIs should allow for real texture space shading.

Nope. AFAIK no API has that yet. Although theoretically you can do it.
 
Hard to tell unless you run the PS5 GPU at 1.85 GHz instead of 2.23 GHz and take measurements. It's even more difficult because we don't have PC GPU parts to test with, to know where on the knee of the curve RDNA2 GPUs normally sit, and then cross-reference that with next-gen console clocks and voltages.
If you take the "couple % drop in clocks can save 10% in power" quote and map it onto any power-scaling graph, you should be able to estimate where on the curve the PS5 is sitting with regard to that quote.
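You can do that arithmetic directly. If you model power locally as P ~ f^n, the quote pins down the local exponent; the only assumption here is what "a couple %" actually means:

```python
# Quick estimate of how steep the local power/frequency curve must be for
# "a couple % drop in clocks saves 10% in power" to hold. Modelling power
# locally as P ~ f^n gives n = ln(0.90) / ln(1 - drop). This is pure
# arithmetic on the quote; interpreting "a couple %" is the only assumption.

import math

for drop_pct in (2.0, 3.0):
    n = math.log(0.90) / math.log(1.0 - drop_pct / 100.0)
    print(f"{drop_pct:.0f}% clock drop -> local exponent n ~ {n:.1f}")
# ~5.2 for a 2% drop, ~3.5 for 3%: both steeper than the ~f^3 you'd expect
# from P ~ C*V^2*f with V ~ f, which suggests the GPU is sitting past the knee.
```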
 
My gut feeling is that a lot of this has to do with PC-side experience. Boost on PC is just... not that good. It starts with chips not being binned to a console standard and hardware being built more cheaply (bad cooler on one board, great cooler on another), and ends with thermal limits because some users have poorly ventilated cases. Bad power supplies are also seen on the PC side.

This is different for Sony, as they explicitly said they only throttle clocks around power draw and designed the system accordingly. Every console chip should have the same behavior, i.e. limited by the worst chip. As with Sony, I don't doubt Microsoft, as they have done solid engineering, as we can see from the teardown.

This has nothing to do with boost on PC! Completely different technology.

Quoting from Eurogamer:

"It's really important to clarify the PlayStation 5's use of variable frequencies. It's called 'boost' but it should not be compared with similarly named technologies found in smartphones, or even PC components like CPUs and GPUs."

As such, if it is only gut feeling, just spit it out ;) (kidding you)
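A minimal sketch of the distinction that quote is drawing, as I read the public description (the counters and weights below are invented for illustration, not Sony's actual algorithm):

```python
# Minimal sketch of the distinction in that quote. PC-style boost reacts to
# what this particular chip's sensors report (temperature, current, silicon
# quality), so behavior varies unit to unit. The PS5 approach, as described
# publicly, budgets against a fixed activity model, so every unit makes the
# same decision for the same workload. Counters and weights are invented.

THERMAL_LIMIT_C = 95.0

def pc_style_boost(sensor_temp_c: float, base_mhz: int, max_mhz: int) -> int:
    # Depends on this unit's cooler, case airflow and silicon lottery.
    if sensor_temp_c >= THERMAL_LIMIT_C:
        return base_mhz
    headroom = (THERMAL_LIMIT_C - sensor_temp_c) / THERMAL_LIMIT_C
    return int(base_mhz + headroom * (max_mhz - base_mhz))

POWER_BUDGET_W = 180.0
ACTIVITY_WEIGHTS_W = {"alu_op": 3e-8, "mem_access": 8e-8, "avx_op": 1.2e-7}

def model_based_clock(counters: dict, max_mhz: int) -> int:
    # Estimate power from workload activity alone; identical on every unit.
    estimated_w = sum(ACTIVITY_WEIGHTS_W[k] * v for k, v in counters.items())
    if estimated_w <= POWER_BUDGET_W:
        return max_mhz
    scale = (POWER_BUDGET_W / estimated_w) ** (1.0 / 3.0)   # assume P ~ f^3
    return int(max_mhz * scale)

print(pc_style_boost(70.0, 1825, 2230))                       # varies with cooling
print(model_based_clock({"alu_op": 4e9, "mem_access": 1e9, "avx_op": 0.0}, 2230))
```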
 