PlayStation 5 [PS5] [Release November 12, 2020]

I don't understand this quote ... The APU should have a max rated power draw, and the cooler should be designed to handle that max power at a particular ambient temp and assuming adequate space has been given for airflow. So as long as your environment is within those ranges, if the APU drew its max power, which should be an unrealistic number, the console should not overheat. If my console ever overheated in an ideal environment I'd be pissed. "It's the software's fault" would not be an acceptable answer.
Although this is true to an extent, there is no way to absolutely make sure it doesn't happen without leaving a massive amount of performance on the table. There is a reason why TDP is calculated on average load these days and not on absolute worst-case load for pretty much every chip. You can't squeeze out performance in the average use case if you are worried about the worst-case scenario. The difference between a typical game and FurMark is very significant, to the point where it is recommended to not even RUN FurMark because it might damage your GPU.

It is not unrealistic to have some parts of code that will run outside the TDP of any hardware, and devs should absolutely make sure their code isn't doing this for long periods.
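
To put rough numbers on why "design for the absolute max" gets expensive: die temperature is roughly ambient plus power times the cooler's thermal resistance. A quick sketch with made-up figures (none of these are real PS5 numbers):

```cpp
// Die temperature ~= ambient + power * thermal resistance of the cooler.
// All figures below are illustrative assumptions, not real PS5 numbers.
#include <cstdio>
#include <initializer_list>

int main() {
    const double ambient_c       = 35.0;  // warm living room (assumed)
    const double max_die_c       = 95.0;  // assumed throttle point
    const double thermal_res_c_w = 0.4;   // assumed cooler resistance, deg C per watt

    for (double power_w : {120.0, 150.0, 200.0}) {  // typical game / design target / power virus
        const double die_c = ambient_c + power_w * thermal_res_c_w;
        std::printf("%5.0f W -> %5.1f C (%s)\n", power_w, die_c,
                    die_c <= max_die_c ? "within limit" : "over the limit");
    }
}
```

With those assumptions a 120 W typical load is comfortable and even a 150 W design target holds, but a 200 W power-virus load blows straight past the limit unless the cooler (and its cost) is sized for it.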
 
It's too bad the technique involved here will cause main menus and map screens to redraw the same image only 900 times per second instead of the 1000 times per second the UX programmer seemingly intended.
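
The usual fix is trivial, too: cap the menu to the display refresh and skip redraws when nothing has changed. A minimal sketch of such a loop (menu_changed()/render_menu() are invented stand-ins for whatever the real UX code does):

```cpp
// Cap menu redraws to ~60 Hz and only re-render when something changed,
// instead of redrawing the same image as fast as the GPU will allow.
#include <chrono>
#include <cstdio>
#include <thread>

static int frames_drawn = 0;

static bool menu_changed() { return frames_drawn < 3; }           // pretend the cursor moved 3 times
static void render_menu()  { ++frames_drawn; std::puts("redraw"); }

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::microseconds(16667);   // ~60 Hz cap

    for (int frame = 0; frame < 10; ++frame) {                    // stand-in for "while in menu"
        const auto start = clock::now();
        if (menu_changed())
            render_menu();                                        // redraw only when needed
        std::this_thread::sleep_until(start + frame_budget);      // yield instead of spinning
    }
    std::printf("drew %d frames out of 10\n", frames_drawn);
}
```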

This will be the end of an era.

Is this directed at me? Obviously that's not an ideal case for software, but the hardware shouldn't fail when somebody does something dumb. That's my point. The situation he argues could cause a console to overheat is one that really should never happen. It should be designed to be adequately cooled in the worst case.
 
Although this is true to an extent, there is no way to absolutely make sure it doesn't happen without leaving a massive amount of performance on the table. There is a reason why TDP is calculated on average load these days and not on absolute worst-case load for pretty much every chip. You can't squeeze out performance in the average use case if you are worried about the worst-case scenario. The difference between a typical game and FurMark is very significant, to the point where it is recommended to not even RUN FurMark because it might damage your GPU.

It is not unrealistic to have some parts of code that will run outside the TDP of any hardware, and devs should absolutely make sure their code isn't doing this for long periods.

But your TDP can't be higher than your power limit in watts, can it? That's just a case of GPUs with poor cooling, isn't it? I imagine many PC GPUs have cheap and inadequate cooling.

Edit: Also, in the PC world the GPU designer can't know if your case has adequate airflow etc. On a console, the whole thing is designed from the ground up. They should know the worst case and whether the case and the cooling system can handle it, and up to what environmental conditions. If either the PS5 or the Series X has overheating problems with particular games, I'd say the whole console was a design failure. That's not a software problem in the console world.
 
Is this directed at me? Obviously that's not an ideal case for software, but the hardware shouldn't fail when somebody does something dumb. That's my point. The situation he argues could cause a console to overheat is one that really should never happen. It should be designed to be adequately cooled in the worst case.

I wouldn't expect any software that did that to pass cert.
 
Is this directed at me? Obviously that's not an ideal case for software, but the hardware shouldn't fail when somebody does something dumb. That's my point. The situation he argues could cause a console to overheat is one that really should never happen. It should be designed to be adequately cooled in the worst case.
No, it's a joke about some bad UX code I ran across in the past; it seems main menus are not well coded. It's not normal for a menu or map screen to take more power than the game. No idea about HZD, but there are horror stories about UX programming in general.
 
But your TDP can't be higher than your power limit in watts, can it? That's just a case of GPUs with poor cooling, isn't it? I imagine many PC GPUs have cheap and inadequate cooling.
I mean, even with the best cooler in the world, you can have power viruses that completely blow your TDP budget. At some point, it is up to the dev to not put that kind of code in their game.
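
To make the "power virus vs. normal code" point concrete, here's a toy CPU-side analogue (FurMark itself is GPU shader work; these loops are purely illustrative). Both loops peg a core at "100% utilisation", but the first keeps the arithmetic units busy every cycle while the second mostly stalls on memory, so their power draw is nothing alike:

```cpp
// Two loops that both read as "100% CPU" yet stress the silicon very differently.
// Purely illustrative - not real power-virus code.
#include <cstdio>
#include <vector>

int main() {
    // "Power virus"-flavoured: independent multiply-add chains, everything in
    // registers, the FP units are busy essentially every cycle.
    float a = 0.1f, b = 0.2f, c = 0.3f, d = 0.4f;
    for (long i = 0; i < 200000000L; ++i) {
        a = a * 0.999999f + 0.000001f;
        b = b * 0.999999f + 0.000001f;
        c = c * 0.999999f + 0.000001f;
        d = d * 0.999999f + 0.000001f;
    }

    // Typical-game-flavoured: pointer chasing through a big array, the core
    // spends most of its time waiting on cache misses.
    const int n = 1 << 24;                                           // ~16M entries, ~64 MB
    std::vector<int> next(n);
    for (int i = 0; i < n; ++i)
        next[i] = static_cast<int>((1ULL * i * 2654435761ULL) % n);  // pseudo-random walk
    int idx = 0;
    for (long i = 0; i < 20000000L; ++i)
        idx = next[idx];

    std::printf("%f %d\n", a + b + c + d, idx);                      // keep both results alive
}
```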
 
I mean, even with the best cooler in the world, you can have power viruses that completely blow your TDP budget. At some point, it is up to the dev to not put that kind of code in their game.

So GPU manufacturers like Nvidia and AMD don't actually know the max power draw for their parts? What they're giving out is some kind of estimate of power draw?
 
They draw up to Vsync on console :)
HZD map screen is 60 fps; it looks really smooth after you switch from in-game to the map.
Hm... so is it actually locked to 30 fps in the game, or is it double-buffered V-synced and *always* between 31 and 59?
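
For reference, strict double-buffered v-sync on a 60 Hz display quantises frame times to whole refresh intervals, so the only steady rates available are 60, 30, 20, 15... fps; an average "in the 40s" would point at triple buffering or an adaptive cap rather than a hard double-buffered lock. Quick arithmetic:

```cpp
// Frame rates available under strict double-buffered v-sync on a 60 Hz panel:
// a frame that misses a vblank waits for the next one, so frame time is always
// a whole number of refresh intervals.
#include <cstdio>

int main() {
    const double refresh_hz = 60.0;
    for (int intervals = 1; intervals <= 4; ++intervals) {
        const double frame_ms = intervals * 1000.0 / refresh_hz;
        std::printf("frame spans %d vblank(s): %.1f ms -> %.1f fps\n",
                    intervals, frame_ms, 1000.0 / frame_ms);
    }
}
```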
 
I mean, even with the best cooler in the world, you can have power viruses that completely blow your TDP budget. At some point, it is up to the dev to not put that kind of code in their game.
Also, there are hot spot issues that can be impossible to solve no matter how big or expensive the heatsink is: the heat cannot conduct through the heat spreader fast enough.

If AVX256 is so power hungry, I wouldn't be surprised if it's a nasty hot spot.
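
Back-of-the-envelope on why hot spots are so hard: what the heat spreader has to cope with locally is watts per square millimetre, not total watts. Made-up round numbers (not real die or AVX figures):

```cpp
// Compare average power density across the die with the density inside a
// small, heavily used block. All areas and wattages are invented round numbers.
#include <cstdio>

int main() {
    const double die_area_mm2   = 300.0;  // assumed SoC die size
    const double die_power_w    = 150.0;  // assumed package power
    const double block_area_mm2 = 4.0;    // assumed vector-unit hot block
    const double block_power_w  = 20.0;   // assumed power in that block under heavy SIMD

    std::printf("die average: %.2f W/mm^2\n", die_power_w / die_area_mm2);
    std::printf("hot block  : %.2f W/mm^2\n", block_power_w / block_area_mm2);
}
```

Even though the block accounts for a small fraction of total power, its local flux is roughly 10x the die average, and that is what sits directly under one spot of the heat spreader.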
 
Also, there are hot spot issues that can be impossible to solve no matter how big or expensive the heatsink is: the heat cannot conduct through the heat spreader fast enough.

If AVX256 is so power hungry, I wouldn't be surprised if it's a nasty hot spot.

But wouldn't this just go back to hardware design? At least do synthetic tests that blitz the worst-case functions of the GPU, and either have dynamic clocking that can handle it, or pick a fixed clock that can sustain it?
 
But that's what they wanted to solve.

My objection is that he's saying their design will handle it more gracefully than fixed clocks would, I'm assuming. I just don't accept that any console should overheat, ever, whether it's fixed or dynamic clocks. I don't think it's a software problem for software engineers to manage. I think it's a problem for the hardware team that designed the box and the cooling solution and selected the clock speeds. I've never had a console that overheated and I hope I never do.
 
So GPU manufacturers like Nvidia and AMD don't actually know the max power draw for their parts? What they're giving out is some kind of estimate of power draw?
They definitely know the likely max power draw, since they do all kinds of power testing with unrealistic code covering all possible cases to make sure the chip at least doesn't blow up when hit with that kind of code. They just don't publicly advertise it or state it in the specs. Modern chips basically just handle this with heavy throttling.
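
Roughly speaking, that throttling is a control loop like the cartoon below: measure power (or temperature), step the clock down when over budget, creep back up when there's headroom. The figures and the read_power_w() stub are invented; real DVFS firmware is far more elaborate than this.

```cpp
// Cartoon DVFS loop: throttle the clock when measured power exceeds the budget,
// recover it when there's headroom. Numbers and the power model are made up.
#include <algorithm>
#include <cstdio>

static double clock_mhz = 2230.0;                  // start at an assumed max boost clock

static double read_power_w() {                     // stub: pretend power scales with clock^2
    const double ratio = clock_mhz / 2230.0;
    return 120.0 * ratio * ratio;
}

int main() {
    const double power_budget_w = 100.0;           // assumed sustained budget
    for (int tick = 0; tick < 8; ++tick) {         // a few control-loop iterations
        const double p = read_power_w();
        if (p > power_budget_w)
            clock_mhz = std::max(1500.0, clock_mhz - 50.0);   // throttle down
        else
            clock_mhz = std::min(2230.0, clock_mhz + 25.0);   // claw clocks back
        std::printf("tick %d: %.0f W at %.0f MHz\n", tick, p, clock_mhz);
    }
}
```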
 
I don't understand this quote ... The APU should have a max rated power draw, and the cooler should be designed to handle that max power at a particular ambient temp and assuming adequate space has been given for airflow.
The cooling solution will be designed to be as cheap as possible while doing the realistic work asked of it. If you have an APU capable of drawing 200 W with a power virus, but which only draws 120 W in general game code, I'd expect the cooling assembly to be designed to cool maybe 150 W, because that's the sensible target for a cooling solution that isn't over-engineered for work it'll never have to do. If you want a cooling solution that caters to the maximum possible heat from an APU in the hottest environmental situations, you are adding considerable redundant cost to every unit sold - either you charge more or you eat losses.

The only people who could answer what happens to a console when the silicon is saturated with workloads are devs writing non-game code to test that, and I imagine NDAs prevent anyone with an SDK from trying it. ;) I can well imagine the console overheats and powers down, though. That must be what would happen with PS, because otherwise Cerny wouldn't have referenced it. Something like XB1X may be engineered with greater tolerances because it's a premium product at a more premium price point.
I've never had a console that overheated and I hope I never do.
Those consoles never hit close to 100% utilisation or maximum possible heat output. It's unlikely any future gaming hardware will get close to 100% utilisation and maximum heat; Cerny is talking hypothetically with no knowledge of how future software will progress. We talked about Async Compute this generation allowing nearer 100% utilisation, but it hasn't happened. Might it happen next-gen? No one can know, but probably not.
 
Or because a fatter API prevents you from ever getting that speed anyway...
XB1X's relative performance is very good - the API doesn't seem that cumbersome, and if it enables 56 CUs instead of forcing a strangely narrow 36 and really high clocks and hardware designs with explanations that detractors can keep harping on about, 'fat' APIs are far and away the better choice. :p
 
XB1X's relative performance is very good - the API doesn't seem that cumbersome, and if it enables 56 CUs instead of forcing a strangely narrow 36 and really high clocks and hardware designs with explanations that detractors can keep harping on about, 'fat' APIs are far and away the better choice. :p

But you know, if somebody talks about "full BC" you immediately know that the API is fat... :)
It's just common sense.
And PS4 Pro vs XB1X performance in multiplatform games hints at something too... (cough, cough, RE3) :)
 
It still has a 60 fps cap there, but it never runs at 60. It is usually in the 40s (on console).

Exactly, which means it's more or less running at an uncapped frame rate.
So even if the lobby screen is running uncapped, it shouldn't use more power?

My fans definitely make more noise while waiting for matchmaking.
 
XB1X's relative performance is very good - the API doesn't seem that cumbersome, and if it enables 56 CUs instead of forcing a strangely narrow 36 and really high clocks and hardware designs with explanations that detractors can keep harping on about, 'fat' APIs are far and away the better choice. :p
It doesn't. ;-)
 