Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

It's irrelevant, you're using it like a tool to say 'well why would Cerny say it unless that's the expectation?' - we're discussing taking something out of context to spin a narrative.
There's no narrative spin here.
I'm trying to figure out how they accomplish things, I'm trying to read between the lines here.

Throughout the presentation, Cerny hints at all of this:
32:41
About the only downside is that system memory is 33% further away in terms of cycles, but the large number of benefits more than counterbalanced that. As a friend of mine says, a rising tide lifts all boats. Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs; when triangles are small, it's much harder to fill all those CUs with useful work. So there's a lot to be said for faster, assuming you can handle the resulting power and heat issues, which frankly we haven't always done the best job of.

...worst case game and prepare a cooling solution that we think will be quiet at that power level. If we get it right, fan noise is minimal; if we get it wrong, the console will be quite loud for the higher-power games, and there's even a chance that it might overheat and shut down if we misestimate power too badly.
He absolutely talks about being bound by the same physics issues that everyone is bound by. The move to fixed power and dynamic clocks is there to solve this: to gain some extra performance where they can, to make sure they don't overshoot on heat, and to keep the PS5 quiet, since they know power cannot exceed a certain threshold. All of this hinges on how well the PS5 is able to vary its frequency and thus control its power output.
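As a toy sketch of that scheme (my own illustration with made-up numbers and a made-up power model, not Sony's actual algorithm): with a fixed power budget, a deterministic governor can simply pick the highest frequency whose modelled power fits under the cap, so clocks only drop when activity is unusually high.

Code:
# Toy fixed-power-budget, variable-frequency governor.
# Hypothetical numbers and power model -- an illustration of
# "fixed power, dynamic clocks", NOT Sony's implementation.

POWER_CAP_W = 180.0   # hypothetical SoC power budget
F_MAX_GHZ = 2.23      # the stated PS5 GPU frequency cap
F_MIN_GHZ = 1.50      # hypothetical floor for this sketch

def modelled_power(freq_ghz, activity):
    """First-order model: P ~ activity * V^2 * f, with V scaling with f,
    so P grows roughly as f^3 at a given activity level."""
    return activity * 140.0 * freq_ghz ** 3  # 140.0 is an arbitrary scale

def pick_frequency(activity):
    """Highest 10 MHz step whose modelled power fits under the cap."""
    f = F_MAX_GHZ
    while f > F_MIN_GHZ and modelled_power(f, activity) > POWER_CAP_W:
        f -= 0.01
    return f

for activity in (0.05, 0.10, 0.13, 0.20):  # fraction of bits flipping per cycle
    f = pick_frequency(activity)
    print(f"activity {activity:.2f}: {f:.2f} GHz, {modelled_power(f, activity):.0f} W")

In this sketch the low-activity cases stay pinned at 2.23 GHz; only the heaviest workloads pull the clock down, and only by as much as the power model requires.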
 
I'm perplexed by this, it's probably just me being thick - but if there's enough power to run both at 100% then how can activity levels surpass that (100%)?
Because you're assuming 100% means activity level.
You can have 100% saturation and still have a very low activity level. Activity level is about how often transistors are flipping between 1 and 0.
If you add 0 + 0 the result is 0. You haven't flipped any bits, and you can ask a CPU and GPU to do this over all their cores and all their threads indefinitely. Therefore: 100% in use, no real power draw.

You could also spawn threads on every core and make them sleep for 1 second. That would be 100% 'use' with no activity as well.

If you are doing workloads in which a lot of bits are flipping every single cycle, the power draw will keep going up with the rate of bit flipping.
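To put a standard textbook equation on that (first-order CMOS switching power, not anything Sony has published): utilisation says nothing about the activity factor, and the activity factor is what draws the watts.

Code:
% First-order CMOS dynamic power (textbook model, not a Sony figure):
%   alpha = activity factor: fraction of transistors switching per cycle
%   C = switched capacitance, V = voltage, f = clock frequency
P_{dyn} = \alpha \, C \, V^{2} \, f
% 100% occupancy with alpha ~ 0 (adding zeros, sleeping threads)
% gives P_{dyn} ~ 0 even at maximum f.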
 
There's no narrative spin here.
I'm trying to figure out how they accomplish things, I'm trying to read between the lines here.

Throughout the presentation, Cerny hints at all of this:



He absolutely talks about being bound by the same physics issues that everyone is bound by. The move to fixed power and dynamic clocks is there to solve this: to gain some extra performance where they can, to make sure they don't overshoot on heat, and to keep the PS5 quiet, since they know power cannot exceed a certain threshold. All of this hinges on how well the PS5 is able to vary its frequency and thus control its power output.
I'm aware, I understand why they are doing it...to get better bang for buck (as it were) - it's a far more efficient use of silicon.

Because you're assuming 100% means activity level.
You can have 100% saturation and still have a very low activity level. Activity level is about how often transistors are flipping between 1 and 0.
If you add 0 + 0 the result is 0. You haven't flipped any bits, and you can ask a CPU and GPU to do this over all their cores and all their threads indefinitely. Therefore: 100% in use, no real power draw.

You could also spawn threads on every core and make them sleep for 1 second. That would be 100% 'use' with no activity as well.

If you are doing workloads in which a lot of bits are flipping every single cycle, the power draw will keep going up with the rate of bit flipping.
I think I'm just going to have to leave the thread and come back after launch or when we know more about PS5 hardware and how it's performing.
 
I don't care about it in the PS5's case, with this boosting and downclocking. As long as AMD and NV don't get any ideas about implementing an RX 5900 or RTX 3080 and advertising 18 TF 'most of the time', or whatever in that range, 'between 16 and 18 TF'.
For PS5 it doesn't matter if it's between 9 and 10 because, you know, Sony AAA studio magic; the games will look next gen anyway.
 
I don't care about it in the PS5's case, with this boosting and downclocking. As long as AMD and NV don't get any ideas about implementing an RX 5900 or RTX 3080 and advertising 18 TF 'most of the time', or whatever in that range, 'between 16 and 18 TF'.
For PS5 it doesn't matter if it's between 9 and 10 because, you know, Sony AAA studio magic; the games will look next gen anyway.
That is what base and boost clocks are? Or in AMD's case: base, boost and game clocks. Nvidia severely undersells their GPU performance with their advertised clocks.
 
And why are RDNA1 numbers on a GPU power virus like furmark even remotely relevant to the amount of time the PS5's RDNA2 GPU will be at 2.23GHz?

Okay, so this is out of context. I never claimed Furmark was relevant to the amount of time the PS5 could boost for.

I was using it to try and estimate a lower bound for XSX locked clocks, based on Furmark results and supposing similar scaling between 5700 XT boost clocks and PS5 boost clocks (sketched below).
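For what it's worth, the shape of that estimate looks like this; the Furmark clock below is a placeholder I've invented to show the method, not a measured figure.

Code:
# Sketch of the lower-bound estimate described above. XT_FURMARK_MHZ is
# a PLACEHOLDER, not a measurement; the method is the point.
XT_BOOST_MHZ = 1905     # advertised 5700 XT boost clock
XT_FURMARK_MHZ = 1600   # hypothetical sustained clock under a power virus
PS5_BOOST_MHZ = 2230    # PS5 GPU frequency cap

worst_case_ratio = XT_FURMARK_MHZ / XT_BOOST_MHZ   # ~0.84
lower_bound = PS5_BOOST_MHZ * worst_case_ratio     # ~1873 MHz
print(f"Implied worst-case sustained clock: {lower_bound:.0f} MHz")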

Are Microsoft or Sony going to have power viruses available on their online stores? I just don't see why the PS5's boost needs to ever be similar to a desktop graphics card's boost based on a previous architecture. First, because Cerny has repeatedly said the GPU spends most of its time at 2.23 GHz, whereas the 5700 XT does not spend most of its time at 1905 MHz. Second, because the console's boost needs to work differently, given that the PS5's boost is solely dependent on power consumption whereas a desktop card will just churn out higher clocks if the ambient temperature is lower. And third, because it's a new architecture.

A lot to unpack here, but briefly, IMO:

- You need to be able to handle a Furmark-like situation, whether it's by having the power and cooling capacity or by throttling. Even if it's just for a fraction of a frame's rendering time, you need to be able to handle these situations.

- Boost is based on physics, even if you use a model based on activity rather than real-time measuring of sensors. There will necessarily be similarities in behaviour, especially between very similar architectures (rough numbers at the end of this post).

- Cerny did not say the GPU spends most of its time at 2.23 GHz. He said "most" and "at or close to", and it's a big difference. A huge one really, in the context of this forum. If we're going to use Cerny's words we need to actually use them, and not a partisan "fake news" edit of them.

- I think the 5700 XT probably does spend "most" of its time "at or close to" 1905 MHz - depending on what those words mean to you. In the AnandTech review, every game's average clock was at or above 90% of the advertised maximum boost frequency (i.e. above roughly 1715 MHz). Some are above 95%. GTA V is above 100%! :D

- PC GPUs don't just keep churning out higher clocks based on temperature! AMD cards have power limits whose default values can limit frequency even when well within temperature limits.

And finally, RDNA2 isn't a completely new architecture, and it's only moving to a minor node improvement rather than a completely unknown one. Characteristics should be quite similar. There really isn't a better architecture to base speculation on at this time.
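And to put rough numbers on the "boost is based on physics" point: under the common first-order assumption that voltage scales roughly linearly with frequency (an approximation, not vendor data), power grows roughly with the cube of frequency, which is why a small downclock buys a disproportionately large power saving.

Code:
% Assuming V \propto f to first order:
P \propto \alpha \, C \, V^{2} f \;\Rightarrow\; P \propto f^{3}
% e.g. a 3% frequency drop:
1 - 0.97^{3} \approx 0.087
% i.e. roughly a 9% power reduction for 3% of clock speed.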

When have console GPUs ever dictated the maximum clock rates of PC GPUs? When the PS4 came out with an 800 MHz GPU, we had Pitcairn cards working at 1000 MHz, with boost to 1050 MHz.
That's a 25% higher base clock, with boost giving it a 32% higher clock - but boost has come a long way since GCN1 chips.

PS4 didn't have a boost mode that pushed the limits of the chip and the process in the way Cerny claims the PS5 is. You can't read anything into the PS5's peak boost clock from PS4 clocks IMO.

And I didn't actually say PS5 or XSX were dictating anything, but I do think they might be indicating something interesting! Those being approximate RDNA2 boost and base (or slightly lower) frequencies respectively.

2.23 GHz as a (conservative) upper limit for the logic comes from Cerny.

"Infact we have to cap the GPU frequency at 2.23 gHz so we can guarantee that the on chip logic operates properly".


So this isn't about power or heat; it's about logic integrity at that frequency. Some PC RDNA cards will fare a little better in the silicon lottery, no doubt, and binning may help for a top-end model, but PC RDNA 2 isn't going to see the boost headroom over PS5 that PCs have seen over the last-gen systems.

PS5 is already well up there with what the architecture can do on the silicon IME.

What are AMD's recommended power and voltage values for the PS5 SoC, and how do they compare to the discrete PC graphics cards?

I'm not expecting the console silicon to have completely different power and voltage curves just because it's in a console. I don't think AMD's recommendations will be much, if any, different.

The claims are for the architecture, i.e. for a lineup of GPUs that will go from (at least) mid-range to high-end, and not just specifically "Big Navi". I'm pretty sure the Macbook Pro's Navi 12 has a lot more than a 50% perf/watt increase over a Radeon 5700 XT, so making that 50% claim for "wider and slower" doesn't make much sense if they could have done it within the same architecture.

Claims about perf/watt come with conditions and won't be universally the same. I take claims like these as being "up to" figures based on planned chips, at predicted frequencies using selected workloads. I guess we'll find out in November ...

Edit: and just to re-iterate, "relatively slower" is relative to where you can reach with the new chip, and internal, architectural tweaks can reduce power consumption and improve perf/watt independent of clock speeds and IPC (though they often go together).

Nvidia have been particularly good at minimising power usage for a given amount of work at a given frequency.
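As simple arithmetic on what an "up to +50% perf/watt" claim can mean (illustrative readings, not measured figures), the same claim covers two quite different products depending on where on the frequency/voltage curve they sit:

Code:
% Two iso-readings of an "up to +50% perf/watt" claim (illustrative only):
\left(\frac{perf}{watt}\right)_{new} = 1.5 \left(\frac{perf}{watt}\right)_{old}
% iso-power:        perf_{new} = 1.5 \, perf_{old}
% iso-performance:  P_{new} = P_{old} / 1.5 \approx 0.67 \, P_{old}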
 
That is what base and boost clocks are? Or in AMD's case: base, boost and game clocks. Nvidia severely undersells their GPU performance with their advertised clocks.

Exactly. Underselling is a totally different thing from potentially overselling an expensive product like a 3080 or RX 5900 (or whatever name it will get).
With my 2080 Ti, I get at least what's advertised; with that new idea of Sony's, AMD could advertise max clocks and it could go anywhere down from there.
 
I feel that in all these discussions about variable clocks I am missing the reference to running real game code. Games do not need constant top frequency on CPU and GPU. A game shouldn't be programmed to need full GPU clocks all the time; only for short moments will the game need that, and the GPU will provide. In between those moments the GPU will cool down. Running, for instance, Furmark would be fun on a console, but what would it tell you about performance running game code? (By the way, I had to stop staring at that Furmark video when I suddenly felt I was staring into the asshole of a tiger, lol.) Variable clocks are normal for PC GPUs and maybe they will be the future standard for consoles. All this also applies to CPUs, of course. Here is a chart showing a game running on a 5700 XT.

[Chart: 5700 XT GPU clock speed over time while running a game]
 
I would be careful with using PC resource usage as a limitation on what console usage looks like. Consoles are an entirely different beast from PCs. Developers tune specifically for one particular set of hardware. You can't do that on PC.
 
I feel that in all these discussions about variable clocks I am missing the reference to running real game code. Games do not need constant top frequency on CPU and GPU. A game shouldn't be programmed to need full GPU clocks all the time; only for short moments will the game need that, and the GPU will provide. In between those moments the GPU will cool down. Running, for instance, Furmark would be fun on a console, but what would it tell you about performance running game code? (By the way, I had to stop staring at that Furmark video when I suddenly felt I was staring into the asshole of a tiger, lol.) Variable clocks are normal for PC GPUs and maybe they will be the future standard for consoles. All this also applies to CPUs, of course. Here is a chart showing a game running on a 5700 XT.

[Chart: 5700 XT GPU clock speed over time while running a game]

Yes, this is what's confusing me. This is my understanding (and probably completely wrong):

I can only run my tap at 80% maximum if I run it constantly, as that's all the water my plug hole can take before the water level rises and my sink overflows (XSX).

I've invented a tap that fluctuates and gives me water when I actually need it - so now if I need water at 100% speed I can get it, and I don't need to worry about overflowing because I never need to run my tap at 100% all the time (PS5).

No game ever runs at 100% all the time, so there are natural gaps, but the implied discussion (in my eyes - and I'm probably wrong) is that PS5 needs developers to build in gaps so it doesn't break (or start to overflow, in my tap example).
 
I feel that in all these discussions about variable clocks I am missing the reference to running real game code. Games do not need constant top frequency on CPU and GPU. A game shouldn't be programmed to need full GPU clocks all the time; only for short moments will the game need that, and the GPU will provide. In between those moments the GPU will cool down. Running, for instance, Furmark would be fun on a console, but what would it tell you about performance running game code? (By the way, I had to stop staring at that Furmark video when I suddenly felt I was staring into the asshole of a tiger, lol.) Variable clocks are normal for PC GPUs and maybe they will be the future standard for consoles. All this also applies to CPUs, of course. Here is a chart showing a game running on a 5700 XT.

[Chart: 5700 XT GPU clock speed over time while running a game]
LOL haha very thicc tiger hole indeed.

Yea, indeed which is why we're waiting for benchmarks on games to really know what the variable behaviour is for PS5.
 
Yes, this is what's confusing me. This is my understanding (and probably completely wrong):

I can only run my tap at 80% maximum if I run it constantly, as that's all the water my plug hole can take before the water level rises and my sink overflows (XSX).

I've invented a tap that fluctuates and gives me water when I actually need it - so now if I need water at 100% speed I can get it, and I don't need to worry about overflowing because I never need to run my tap at 100% all the time (PS5).

No game ever runs at 100% all the time, so there are natural gaps, but the implied discussion (in my eyes - and I'm probably wrong) is that PS5 needs developers to build in gaps so it doesn't break (or start to overflow, in my tap example).
I might try to visualize it like this.

You have two 18-wheeler trucks with cargo that will go down the same road; the road goes uphill, downhill, or flat.

The first truck has its speed controlled by cruise control, which keeps the truck at 60 mph whether it's going uphill, downhill, or on the flat. That means the cruise control adds more gas going uphill to maintain that speed and reduces gas going downhill to maintain it. It can never go all-out in terms of speed, but it never falters under load. This is Xbox Series X.

The second truck has its gas throttle locked. That means when it goes uphill you can't add more gas to keep your speed the same, so you go slower; when you go downhill your truck goes quicker; and when you're on the flat you're going as hard as your throttle allows. So the PS5 can run at, say, 75 mph, but it will run slower than 75 mph on the uphill stretches. This is PS5.

The steepness or grade of the hill is the activity level.
The length of the uphill is the load: how long it needs to work for.
The RPM is how hot your engine is running, and you can blow it if you keep forcing the RPM into the redline (i.e., a very heavy load at 60 mph on a steep incline requires full gas at all times).
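Mapping the trucks back onto hardware (my own mapping, reusing the same toy power model as earlier in the thread; all numbers are illustrative placeholders, not measured console behaviour):

Code:
# The two trucks in hardware terms. Power model and numbers are
# illustrative placeholders, not measured console behaviour.

def power_at(freq_ghz, activity):
    return activity * 140.0 * freq_ghz ** 3  # toy model: P ~ activity * f^3

# Cruise control (XSX-style): clock fixed, power follows the hill.
def fixed_clock(activity, freq_ghz=1.825):
    return freq_ghz, power_at(freq_ghz, activity)

# Locked throttle (PS5-style): power capped, clock follows the hill.
def fixed_power(activity, cap_w=180.0, f_max=2.23):
    f = min(f_max, (cap_w / (activity * 140.0)) ** (1 / 3))
    return f, power_at(f, activity)

for hill in (0.08, 0.12, 0.18):  # steepness of the hill = activity level
    fc, fp = fixed_clock(hill), fixed_power(hill)
    print(f"activity {hill:.2f}: fixed clock {fc[0]:.3f} GHz @ {fc[1]:.0f} W | "
          f"fixed power {fp[0]:.3f} GHz @ {fp[1]:.0f} W")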
 
LOL haha very thicc tiger hole indeed.

Yea, indeed which is why we're waiting for benchmarks on games to really know what the variable behaviour is for PS5.


You've said this a few times, but how are we going to get benchmarks for console anything? The boxes are locked down.

Unless you just mean DF running a framerate analyzer prog and inferring whatever from that. Yes, I agree we'll get that...

But in terms of actually knowing the PS5's internal clocks at a given point, average clocks running game X etc, we will never know except for possibly very very rare and nebulous dev leaks (which would likely be immediately contested, somewhat unverifiable and controversial anyway).
 
You've said this a few times, but how are we going to get benchmarks for console anything? The boxes are locked down.

Unless you just mean DF running a framerate analyzer prog and inferring whatever from that. Yes, I agree we'll get that...

But in terms of actually knowing the PS5's internal clocks at a given point, average clocks running game X etc, we will never know except for possibly very very rare and nebulous dev leaks (which would likely be immediately contested, somewhat unverifiable and controversial anyway).
Yea, so the easiest way would be to:
a) get the same settings, which they can almost do;
b) get the same hardware settings (which is tougher);
c) see the result on PC, compare it to PlayStation, and see where the clock speeds are landing (roughly sketched below).
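Very roughly, step c) could look like the back-of-envelope below. All inputs are placeholders, and it assumes frame rate scales linearly with clock at a matched CU count, which is only approximately true.

Code:
# Back-of-envelope clock inference from a matched-settings benchmark.
# All inputs are hypothetical placeholders.
pc_fps = 60.0           # hypothetical PC RDNA2 result on a 36-40 CU card
pc_clock_mhz = 2100.0   # that card's observed average clock
ps5_fps = 57.0          # hypothetical PS5 result, same scene and settings

implied_ps5_clock = pc_clock_mhz * ps5_fps / pc_fps
print(f"Implied average PS5 clock: {implied_ps5_clock:.0f} MHz")  # ~1995 MHz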
 
But in terms of actually knowing the PS5's internal clocks at a given point, average clocks running game X etc, we will never know except for possibly very very rare and nebulous dev leaks (which would likely be immediately contested, somewhat unverifiable and controversial anyway).

I'm hoping that there's some kind of external bus that can be tapped, or some way to determine frequency from RF interference once you get into the box and past the shielding. And then, of course, that someone smart figures out how to do this and shares the results!

Failing that, a firmware crack that allows access to these figures directly would be cool.

Other than any of that, or devs leaking results, I guess we're down to just trying to infer things based on PC RDNA cards, with whatever questionable margin of error...
 
Can anyone decipher what all the audio hardware stuff they talked about at Hotchips means? I don't know anything about audio so all I can gather is that, yes, there's dedicated audio hardware.
 
The RPM is how hot your engine is running, and you can blow it if you keep forcing the RPM into the redline (i.e., a very heavy load at 60 mph on a steep incline requires full gas at all times).

This is more of a problem for hardware with a fixed frequency, because the way I understand it, on the PS5 performance will suffer rather than the hardware overheating.

And that's the whole point of going with variable clocks: it's a much easier way to get maximum performance without the potential worst-case scenarios holding back your hardware during more typical usage.

Whether everything works out the way Sony have predicted is up in the air, and only time will tell.
 
Another perspective here, but no one said 51%.
As indicated earlier, we've seen ample evidence that all boost and game clocks are within 10% of their max.

Rewind back to page 81 (~late March) to find examples of people claiming downclocks would occur a lot, despite Cerny saying otherwise.
 
I might try to visualize it like this.

You have two 18-wheeler trucks with cargo that will go down the same road; the road goes uphill, downhill, or flat.

The first truck has its speed controlled by cruise control, which keeps the truck at 60 mph whether it's going uphill, downhill, or on the flat. That means the cruise control adds more gas going uphill to maintain that speed and reduces gas going downhill to maintain it. It can never go all-out in terms of speed, but it never falters under load. This is Xbox Series X.

The second truck has its gas throttle locked. That means when it goes uphill you can't add more gas to keep your speed the same, so you go slower; when you go downhill your truck goes quicker; and when you're on the flat you're going as hard as your throttle allows. So the PS5 can run at, say, 75 mph, but it will run slower than 75 mph on the uphill stretches. This is PS5.

The steepness or grade of the hill is the activity level.
The length of the uphill is the load: how long it needs to work for.
The RPM is how hot your engine is running, and you can blow it if you keep forcing the RPM into the redline (i.e., a very heavy load at 60 mph on a steep incline requires full gas at all times).

Is a car with no gears, locked at full RPM, better than a car with gears that can reduce RPM when rolling or going downhill and get the power back on steep sections?
Can we say that the car with variable RPM is worse?
Because I see PS5's variable clock as nothing more than this. You will get the MHz when you need them and save them when you don't, reducing power consumption and heat.
So why are you saying PS5 will go slower on steep sections? The power is there, and Cerny claims there is enough power budget for both the CPU and GPU to run at full speed, so all I see is a very smart way of saving power and reducing heat, with the GPU making real-time adjustments to speed according to the real needs of the system.
 