Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Different chips will have different thermal properties. Some chips are bound to run hotter than others running the same code; this is parametric yield.
If you want to keep costs down, you're going to have to allow for larger thermal allowances. If game code keeps frequencies high into the 95-99% range at all times, you're going to have to start dropping off lower yield chips.
The alternative is to have your frequencies throttle down earlier, to allow for lower quality chips.

Yes, and surely their clocks are set to the performance of the worst-performing chips that they are willing to use.
 
Yes, and surely their clocks are set to the performance of the worst-performing chips that they are willing to use.
Yeah, unfortunately we don't know what those are. We're making sensible predictions about what Sony would be willing to target, based upon how much they want to release the console for. They could easily pick chips that run at 2230MHz all the time and never fall below that.
But as Mark indicated, they want each console to behave the same regardless of temperature, which means accepting lower-yield silicon will taper off the upper end of the frequency spectrum. With variable frequencies you can get more out of your lower-yield chips and make them run higher, but somewhere in there the variable clock rates need to have allowances for the lower-yield chips to continue operating without thermal shutdown.

Somehow it's controversial to make a sensible prediction that such a tolerance would be between 90-100% of the marketed frequencies. Mark was the one who handed out the 2.0GHz number as unattainable with fixed clocks, and he also handed out the figure that a 10% drop in frequency gives back 27% of the power.

It's not that people are trying to create a narrative around it being a 9.2TF chip. If we use Mark's own numbers, this is where it could operate. It just so happens that a 10% drop works out to roughly 2000MHz, and 2000MHz works out to 9.2TF.

If PS5 is operating between 9.2TF and 10.2TF, that's fine. But some people are unwilling to accept any possibility of it hitting anything less than 10.2TF. That isn't variable, that's just fixed. That's been the ongoing discussion here for some time. There's not a lot of room to move either: any drop in frequency below 2170MHz would put PS5 below 10TF, and that's only a 60MHz drop.

I don't think any of this will have any bearing on anything, but if people are upset that these conclusions were reached at B3D, that's unfair.

Even with the discussion around the 2GHz GitHub leak, was it wrong for people to speculate that it would run at less than 2GHz? There was no rumour of variable clocks, and this is the first time a console has ever run variable clocks. Cerny himself admitted that even with fixed clocks they could not achieve 2GHz. Doesn't that actually validate that people were right that 2GHz was too high (under the assumption of fixed clocks)?

Would the discussion have been the same if variable clocks had been leaked as a possibility? Probably not.
 
I always understood it as the following:

If a game is developed to use the GPU 100% of the time, the CPU will run slightly below 3.5GHz, e.g. it could be 3.4GHz or less, as power is moved off the CPU to compensate for the GPU power draw.

If the game uses the CPU at 100%, the GPU will run slightly below 2.23GHz, i.e. it could be 2.20GHz or 2.0GHz etc., with power moved off the GPU towards the CPU to compensate for the power draw.

How many games are expected to use the CPU 100%? For me, the GPU is likely to stay very close to 2.23GHz most of the time; I'm expecting the CPU to run at slightly lower clock speeds, and I'm not expecting games to use the CPU that much unless they are designed to. Cerny said the same in The Road to PS5, as I understood it.
 
Thanks but maybe I should've phrased my question better.

Does the 256-bit AVX load example also apply to the GPU?

It's said here that certain loads on the CPU consume a lot of power, and that frequency can be irrelevant to the power drawn: you can draw 100 watts at 3GHz, or you can draw 70 watts at 3GHz?

Is that the same for the GPU as well? Which instruction sets or workloads on the GPU behave like that?

Thank you.

Yeah, as others have said, it can be. I can't give a detailed answer about instruction sets as I just do everything through a game engine and don't even write my own shaders.

But in principle, if you have very wide vector units and can't make full use of that width, you won't be using as many logic gates or registers. And different tasks make different demands on buses and caches and local data stores, and that affects power used too. Even at the same frequency.

I hope I didn't mess up that explanation too much!
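
As a rough illustration of that point about vector width (a sketch assuming an x86 CPU with AVX, nothing console-specific): both loops below keep a core at 100% utilisation at the same frequency, but the wide one toggles far more gates and register bits per cycle, so it draws noticeably more power.

```c
/* Hedged sketch: two loops that both run a core at "100%" at the same
 * frequency, but light up very different amounts of the vector hardware.
 * Compile with e.g. gcc -O2 -mavx power_sketch.c (AVX assumed available). */
#include <immintrin.h>
#include <stdio.h>

#define ITERS 100000000ULL

/* Narrow: one scalar add per iteration -- most of the 256-bit datapath idles. */
static float scalar_loop(void) {
    volatile float acc = 0.0f;          /* volatile so the loop isn't optimised away */
    for (unsigned long long i = 0; i < ITERS; ++i)
        acc += 1.0f;
    return acc;
}

/* Wide: eight packed multiplies and adds per iteration -- far more gates and
 * register bits toggle each cycle, so power draw is much higher even though
 * utilisation and frequency look identical from the outside. */
static float avx_loop(void) {
    __m256 acc   = _mm256_set1_ps(0.0f);
    __m256 one   = _mm256_set1_ps(1.0f);
    __m256 scale = _mm256_set1_ps(1.000001f);
    for (unsigned long long i = 0; i < ITERS; ++i)
        acc = _mm256_add_ps(_mm256_mul_ps(acc, scale), one);
    float out[8];
    _mm256_storeu_ps(out, acc);
    return out[0];
}

int main(void) {
    printf("scalar: %f\n", scalar_loop());
    printf("avx:    %f\n", avx_loop());
    return 0;
}
```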

Where did the 2.2 speed come from? GitHub was 2GHz @ 9.2TF and people were saying it would never be that speed; they are likely testing the ceiling of the chip and will dial back, so expect 8TF vs the 12TF of XSX.

To be fair, Cerny came out and actually said that using traditional fixed clocks, PS5 would indeed never have been 2GHz. So people saying it would never be 2GHz were ... half right. What no one was expecting was dynamic clocks. I certainly wasn't!
 
I always understood it as the following: [...] For me, the GPU is likely to stay very close to 2.23GHz most of the time; I'm expecting the CPU to run at slightly lower clock speeds.

Thing is, you can use the CPU or the GPU "100%" but still have different power requirements. It depends on what you're doing with them.
 
Agreed, and that's why Cerny said they designed a maximum power draw level for the system. Power draw will never go over that maximum, so the heat output stays consistent and the system stable, with less noise to boot.

Also, how many games nowadays use 100% GPU or 100% CPU all the time? Not many. IMO the GPU will hit 100% more often than the CPU will, if the games are designed that way.

As many have said, it will be fascinating to get people's breakdown of the system to see how the GPU and CPU are affected in games designed for the system.
 
Yeah, unfortunately we don't know what those are. We're making sensible predictions about what Sony would be willing to target, based upon how much they want to release the console for. They could easily pick chips that run at 2230MHz all the time and never fall below that.

That is what they have done, though. Max 2230MHz at whatever power is needed; there won't be any fluctuations no matter how good or bad the chip in your PS5 is.
 
I always understood it as the following: [...] For me, the GPU is likely to stay very close to 2.23GHz most of the time; I'm expecting the CPU to run at slightly lower clock speeds.
As @function writes, both can run at 100%, but that isn't indicative of power draw. You can run a CPU at 100% on 12 cores while all you ask it to do is add 0+0 in an infinite loop, with the resulting answer being 0. Barely any power draw, because no bits are being flipped. You could ask a GPU to do the same thing infinitely, and it will also hold 100% utilization with next to no power draw.
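
A minimal sketch of that 0+0 example (illustrative only): the core reports 100% utilisation, but since the operands never change, hardly any bits toggle and dynamic power stays low.

```c
/* The "0 + 0 forever" busy loop described above: 100% CPU utilisation,
 * almost no switching activity, so very little dynamic power draw. */
#include <stdint.h>

int main(void) {
    volatile uint64_t a = 0, b = 0, r = 0;
    for (;;) {          /* infinite loop: task manager shows the core pegged */
        r = a + b;      /* 0 + 0 = 0: no operand or result bits ever flip */
        (void)r;
    }
    return 0;           /* never reached */
}
```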
 
You can run a CPU at 100% on 12 cores while all you ask it to do is add 0+0 in an infinite loop. [...]
Ah OK. But would games be designed this way? It would seem very inefficient.
 
That would just be a fixed frequency, and he noted they could not achieve that at 2GHz.

I don't mean that it will maintain the max clock, but that no matter what code is running, every PS5 will be running at the same clocks, because they have set the limits to the worst possible APU that they will be using.
 
Ah OK. But would games be designed this way?
No never, which is why Cerny is correct that there is enough power there to run both at 100%.
The flip side is if we ask a CPU to take 111111111111 and add it to 000000000000, then tell the CPU to invert the values, add again, invert again, and write the result back to memory to do it all over. That is a very simple calculation that would just blow your shit up ;)

But even if we aren't deliberately designing code to do that, finding prime numbers will heat your processor up pretty badly, and that's just a lot of division happening at once.

Cerny's not wrong that both can operate at 100%. But 100% isn't what causes power draw. It really comes down to what the developer is doing and how much of it.
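
For contrast with the 0+0 loop, a sketch of that "invert and add" flip side (illustrative only): identical utilisation and frequency, but every operand bit flips every iteration, so switching activity, and with it power draw, is about as high as a simple integer loop can make it.

```c
/* The "invert, add, invert again" pattern described above: same 100%
 * utilisation as the 0+0 loop, but all 64 bits of both operands toggle
 * on every iteration, maximising switching activity and power draw. */
#include <stdint.h>

int main(void) {
    volatile uint64_t x = 0xFFFFFFFFFFFFFFFFULL;
    volatile uint64_t y = 0x0000000000000000ULL;
    volatile uint64_t r = 0;
    for (;;) {
        r = x + y;      /* add the two patterns and write the result back */
        x = ~x;         /* invert both values so every bit flips ... */
        y = ~y;         /* ... on every single pass through the loop */
        (void)r;
    }
    return 0;           /* never reached */
}
```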

Here's a reasonable example of the kind of workload that causes heavy power draw.
Here is FurMark running on a 5700 XT:

Basically, what FurMark does is render fur onto a donut shape that's constantly rotating. Each individual fur strand is rendered separately, so it's trying to draw and calculate thousands of strands of hair. This isn't textured; it's just individually coloured as I understand it, though I could be wrong.

But yeah, this will heat your GPU up pretty well. Keep your eyes on the core clock speed as well as the temperature. This particular demo showcases an issue with the fan not working, so it's getting dangerously high, but once the temperatures are under control the core clock can stay up. Somewhere in here, Sony has to account for this, so we're not exactly sure when they would start downclocking the frequency, and by how much, based on code.


Improper cooling can happen; AMD really screwed the pooch on the 5700 XT release.

So I guess it might be simpler to say that running a lot of high-density work draws more power than low-density work. High-density geometry will start to slow down as you approach small triangles, however, and that's just an issue with how we generate triangles; in compute shaders that issue doesn't exist. So yeah, I suspect the UE5 demo is pretty much the epitome of pushing the limits of graphics hardware power in real-world use cases. It's capable of really pushing the hardware on both geometry and lighting.
 
Another perspective here, but no one said 51%.
As indicated earlier, we've seen ample evidence that all boost and game clocks are within 10% of their max.
That means we should expect games to operate between the game clock and the boost clock, as per AMD's specifications.
Just because PS5 varies frequency based on game code and not on thermals doesn't mean that PS5 isn't bound by the same physics.
Different chips will have different thermal properties. Some chips are bound to run hotter than others running the same code; this is parametric yield.
If you want to keep costs down, you're going to have to allow for larger thermal allowances. If game code keeps frequencies high into the 95-99% range at all times, you're going to have to start dropping off lower yield chips.
The alternative is to have your frequencies throttle down earlier, to allow for lower quality chips.

I'm okay with either method. Obviously, with cost being an important factor for consoles, it's natural to lean towards lower frequencies. Sony can easily continue to find ways to keep the value at that number; it's just going to cost more to keep poorer-yield silicon performing at that level. If they want more power, drop the redundant CUs and run the full chip. At this point in time, if you ignore everything else, it's only about yield and cost.

Yeah, both Sony and MS are constrained by the lower bound of what makes yield.

I find the following interesting:

The difference between the 5700 XT base and max boost clocks, going by the official baseline numbers, is 300MHz. But with something like FurMark it can drop 40 or 50MHz below even that base number (the base number is not guaranteed, it's "up to"), for a range between observed minimum and official advertised boost of ~350MHz. It could even be more on some samples.

Now imagine the whole frequency range scaling up by about 15% for next gen, and the difference between a conservative base (FurMark style, like MS used for the 360) and the expected max boost doing the same. Hmm. 350 x 1.15 = 402.5MHz.

Hmm. I wonder what the difference between XSX constant clocks and PS5 max boost is? 2.23 - 1.825 = 0.405. Or 405MHz.

Oh man! I know it's just extrapolating based on RDNA 1, but it's almost like PS5 and XSX are the same base architecture on the same manufacturing process with the same universal laws of physics. It's a conspiracy!! :runaway:
 
Here is FurMark running on a 5700 XT. [...] But yeah, this will heat your GPU up pretty well.

Dammit you beat me to Furmark!! :mad:
 
Hmm. I wonder what the difference between XSX constant clocks and PS5 max boost is? 2.23 - 1.825 = 0.405. Or 405MHz. [...]
Also remember that XBSX will have 52 CUs, so it has a different set of requirements for its power draw and heat.
 
Also remember that XBSX will have 52 CUs, so it has a different set of requirements for its power draw and heat.
Indeed. More cores at the same frequency will result in more power draw.
But frequency increases wattage in a roughly cubic fashion, so there are limits to what Sony wants to put in a box.
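
For what it's worth, that cubic rule of thumb comes from dynamic power scaling roughly as C·V²·f, with voltage having to rise roughly with frequency; a quick back-of-the-envelope check (assumptions, not measured figures) lines up with the 10% frequency / 27% power figure quoted earlier in the thread.

```c
/* Back-of-the-envelope check of the ~cubic frequency/power relationship:
 * dynamic power ~ C * V^2 * f, and if V has to scale roughly with f,
 * then power scales roughly with f^3. A 10% clock drop then saves ~27%. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double f_ratio = 0.90;                  /* 10% frequency reduction */
    double p_ratio = pow(f_ratio, 3.0);     /* assume V scales ~linearly with f */
    printf("relative power at 90%% clock: %.2f (~%.0f%% saving)\n",
           p_ratio, (1.0 - p_ratio) * 100.0);   /* prints 0.73, ~27% */
    return 0;
}
```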

At the end of the day, we're debating numbers here. It won't make a difference to the graphical fidelity of games;
some resolution differences at most.
Even if you fixed PS5 at 2000MHz and it became 9.2TF vs 12TF, most people wouldn't be bothered by it. Most people weren't super bothered by XBO vs PS4 either, and that gap was more dramatic because resolution really matters at the low end.

As I think someone alluded to earlier, if you are designing a game around 30fps and your competitor is designing theirs around 60fps, there's just no way a 60fps title can compete visually with a 30fps title.
Take off the 10ms or so for standard rendering, and you have roughly 6ms left for everything else versus roughly 23ms left for everything else. It's night and day. For a 60fps title to look like a 30fps one, you'd need 18-20TF. It's just not going to happen. You can hit some weird edge cases where you could double the resolution or double the frame rate, and that's just a result of a weird cut-off. But if you're designing a title for 30fps, it's going to look really good, and you're going to need a tremendous amount of power to get it to look the same at 60fps.
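
For reference, the frame-budget arithmetic behind those figures (the ~10ms base rendering cost is the post's assumption):

```c
/* Frame-budget arithmetic behind the "~6ms vs ~23ms" comparison above.
 * The 10ms "standard rendering" cost is the assumption from the post. */
#include <stdio.h>

int main(void) {
    double base_render_ms = 10.0;        /* assumed fixed rendering cost */
    double frame_60 = 1000.0 / 60.0;     /* ~16.7ms per frame at 60fps */
    double frame_30 = 1000.0 / 30.0;     /* ~33.3ms per frame at 30fps */
    printf("60fps: %.1f ms left for everything else\n", frame_60 - base_render_ms);
    printf("30fps: %.1f ms left for everything else\n", frame_30 - base_render_ms);
    return 0;
}
```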

And it seems to me that Sony is pretty comfortable with 30fps. So, yeah, my expectation for first-party titles is that PS5 games will likely look better than Xbox ones. And that's all people care about anyway.

We'd need some sort of different rendering method for this to change. Something like DLSS might be the only real game changer here, but there are no signs of life in the console space yet.
 
I understood that the PS5 could run at max clock on the GPU for as long as wanted, but if so the CPU would not run at full clock at the same time, and vice versa, because they know the max power it can draw at any given time?
 
I understood that the PS5 could run at max clock on the GPU for as long as wanted, but if so the CPU would not run at full clock at the same time, and vice versa, because they know the max power it can draw at any given time?
The SoC has a fixed power limit which it must divide between the CPU and GPU.
There is enough power to run both at 100% frequency.
When the activity level starts surpassing the allotment provided to either the CPU or the GPU, that unit's frequency will need to drop, because it cannot sustain that activity level at that frequency. If you don't want it to drop, then you need to borrow from the other side. If you keep increasing the activity level, eventually you won't be able to borrow any more and will still be forced to downclock.
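
To make the budget-sharing idea concrete, here's a toy sketch (all names, numbers, and the cube-root scaling are invented for illustration; this is not Sony's actual algorithm): a fixed SoC budget is split between CPU and GPU, the lightly loaded side's unused headroom is lent to the other, and whatever still doesn't fit is resolved by downclocking.

```c
/* Toy model of a fixed SoC power budget shared between CPU and GPU.
 * Every value here is made up for illustration; it is not Sony's algorithm. */
#include <stdio.h>
#include <math.h>

#define SOC_POWER_CAP_W 200.0          /* hypothetical total SoC budget */

typedef struct {
    double max_freq_ghz;   /* marketed ceiling, e.g. 3.5 CPU / 2.23 GPU */
    double freq_ghz;       /* clock actually granted this frame */
    double demand_w;       /* power the block would draw at max clock for this workload */
} block_t;

/* Downclock just enough to fit the watts granted, using the rough P ~ f^3
 * relationship, so frequency scales with the cube root of the power ratio. */
static void fit_to_budget(block_t *b, double granted_w) {
    if (b->demand_w <= granted_w)
        b->freq_ghz = b->max_freq_ghz;                    /* fits: run at max */
    else
        b->freq_ghz = b->max_freq_ghz * cbrt(granted_w / b->demand_w);
}

int main(void) {
    block_t cpu = { 3.50, 3.50,  60.0 };   /* lightly loaded CPU this frame */
    block_t gpu = { 2.23, 2.23, 170.0 };   /* heavily loaded GPU this frame */

    /* CPU takes only what it needs (up to half); GPU borrows the rest. */
    double half = SOC_POWER_CAP_W / 2.0;
    double cpu_grant = cpu.demand_w < half ? cpu.demand_w : half;
    double gpu_grant = SOC_POWER_CAP_W - cpu_grant;

    fit_to_budget(&cpu, cpu_grant);
    fit_to_budget(&gpu, gpu_grant);

    printf("CPU %.2f GHz, GPU %.2f GHz\n", cpu.freq_ghz, gpu.freq_ghz);
    return 0;
}
```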
 