PlayStation 5 [PS5] [Release November 12, 2020]

There are parallels in design goals with MS's Velocity Architecture: both companies have looked for better efficiency options in their designs. The current internet outrage at lower numbers on spec sheets is the product of a naive understanding by the lay gamer, who can't grasp that "efficient == better", i.e. achieving more with fewer resources. Eventually, once people get past a single-number understanding of system design, hopefully all these smart choices will be accepted and celebrated for the engineering savvy behind them. Next gen, each side will crib the other's good ideas: MS will have flexible power and clocking, and Sony will have hardware-supported partial asset loading.
 
I wouldn't be at all surprised if this approach becomes the norm going forward in this space; not doing it means not extracting all the performance from your device.

A certain Android phone has been doing that for a year or more. Some modders have also added the feature to their kernels, so that when gaming, the SoC's TDP gives priority to the GPU, for example.

I don't remember the name
 
There's a difference between claiming max performance and minimum. Nvidia actually advertises lower performance metrics than what their GPUs can achieve at max.

MS guarantees their performance metrics; it's the bare minimum performance you can expect. In Sony's case, they advertise what's possible at most. It's an advantage, hence they emphasise it. It's also different from what an RTX GPU does: that has a base clock but can boost if the situation allows it. In the PS5, it's the other way around.

They couldn't even achieve 2/3 GHz for the GPU and CPU respectively; now, with boost and variable clocks, they can maintain 2.23/3.5 GHz.
If downclocking practically never happens (1% of the time, and when it does, it barely downclocks), one can wonder why even bother implementing it or even talking about it.
My guess is the GPU varies between 9 and 10 TF when both the CPU and GPU are hammered.
1825 MHz / 12.15 TF is the peak, not the floor. It's no different from any other AMD GPU, only here the TF claim is based on sustained clock rather than theoretical peak, as with desktop cards. Of course, desktop cards can well exceed that sustained clock with boost. Even NVIDIA: yes, they advertise well below their regular TF number, but that's because they can then claim much lower power usage while making it seem they do more with fewer TF.
 
1825 MHz / 12.15 TF is the peak, not the floor. It's no different from any other AMD GPU, only here the TF claim is based on sustained clock rather than theoretical peak, as with desktop cards. Of course, desktop cards can well exceed that sustained clock with boost. Even NVIDIA: yes, they advertise well below their regular TF number, but that's because they can then claim much lower power usage while making it seem they do more with fewer TF.
Except that Microsoft decided against using any "boost clocks" or such; it's 1825 MHz locked (well, there's probably a separate "2D clock" for when games aren't running, but you know what I mean).
Just because you can, and most manufacturers have opted to squeeze more out of their chips via variable boost clocks, doesn't mean everyone has to. In the case of consoles, I can see plenty of reasons why locked is better.
 
Except that Microsoft decided against using any "boost clocks" or such; it's 1825 MHz locked (well, there's probably a separate "2D clock" for when games aren't running, but you know what I mean).
Just because you can, and most manufacturers have opted to squeeze more out of their chips via variable boost clocks, doesn't mean everyone has to. In the case of consoles, I can see plenty of reasons why locked is better.
Nowhere did I suggest otherwise, and I even said 1825 MHz is sustained (normal).
 
It's a locked clock

Exactly. That's a different thing than peak. I see you addressed that in your last post, though :)
And no, it doesn't clock up or down. Sustained, guaranteed performance, as MS advertised in March; same for the SSD, they talked about guaranteed sustained performance.
Seems a bit different marketing, that's all.
 
I think I just used a word incorrectly. ;) Although I feel like 'nonce' should mean the smarts inside the 'bonce', so if I keep using it incorrectly enough, perhaps my definition will become correct?

Edit: I'm using the term just fine*, but it's clearly a rarer English term that someone's hijacked to mean something else. Google results...

Code:
https://www.facebook.com/stcannas/posts/814105848974690
"- at 1.30pm: Big screen film showing (the one about the young smuggler and his furry friend - can't say the name but use your nonce)"

https://www.arrse.co.uk/community/threads/oh-bollox-ive-locked-myself-out.78600/
"Third floor would indicate a block of flats so.......use your nonce, ring every buzzer till someone lets you in the main door and then call a locksmith for the door to the flat."

https://forums.overclockers.co.uk/threads/body-fat.17893021/
"But frankly as said, knowing your body fat percentage does very very very little for you. Knowing its changing, good or bad, is what you need and they work fine for that as long as you use your nonce."

Seems to have developed in UK prisons to mean sex-offenders. But it's always been a jovial term for savvy where I've grown up. ¯\_(ツ)_/¯

* And using a term 'just fine' for you isn't 'just fine' for communication if the rest of the world has a different meaning!
 
And using a term 'just fine' for you isn't 'just fine' for communication if the rest of the world has a different meaning!
I've always known both meanings; you just know which is meant from context.
I think one meaning has become more prevalent than the other over time, though.
 
Edit: I'm using the term just fine*, but it's clearly a rarer English term that someone's hijacked to mean something else. Google results...
You've just linked to other people using the wrong word; that's why there are only 922 hits. The word you want is nous.

Seems to have developed in UK prisons to mean sex-offenders. But it's always been a jovial term for savvy where I've grown up. ¯\_(ツ)_/¯
If you grew up in prison or a borstal, sure. :yes:
 
There's a difference between claiming max performance and minimum. Nvidia actually advertises lower performance metrics than what their GPUs can achieve at max.

MS guarantees their performance metrics; it's the bare minimum performance you can expect. In Sony's case, they advertise what's possible at most. It's an advantage, hence they emphasise it. It's also different from what an RTX GPU does: that has a base clock but can boost if the situation allows it. In the PS5, it's the other way around.

They couldn't even achieve 2/3 GHz for the GPU and CPU respectively; now, with boost and variable clocks, they can maintain 2.23/3.5 GHz.
If downclocking practically never happens (1% of the time, and when it does, it barely downclocks), one can wonder why even bother implementing it or even talking about it.
My guess is the GPU varies between 9 and 10 TF when both the CPU and GPU are hammered.

I disagree with some of your wording:

TFLOPS is a theoretical number... It represents the bare maximum output a system could have. In other words, if you could squeeze every "nut and bolt" of the system, that would be the performance you'd get.

The formula to calculate that maximum is: clock speed x number of CUs x 64 x 2

In this formula, where does the workload enter? It just doesn't. It's assumed to be at maximum.
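That formula can be sketched in a few lines of Python; the CU counts and clocks below are the publicly quoted XSX and PS5 figures, and the helper name is mine:

```python
def theoretical_tflops(clock_ghz, cus):
    """Peak FP32 throughput: clock x CUs x 64 shader ALUs x 2 ops (FMA)."""
    flops_per_cu_per_clock = 64 * 2
    return clock_ghz * cus * flops_per_cu_per_clock / 1000.0  # GFLOPS -> TFLOPS

# Publicly quoted figures: XSX 52 CUs @ 1.825 GHz, PS5 36 CUs @ 2.23 GHz (max)
print(round(theoretical_tflops(1.825, 52), 2))  # 12.15
print(round(theoretical_tflops(2.23, 36), 2))   # 10.28
```

Note that the workload never appears anywhere: the number assumes every ALU does useful work on every clock.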

A fixed-clock system doing nothing is not outputting the same as a system with a medium workload, and even less than one with a full workload. So output in processed FLOPS is not a consequence of clock speeds alone.

So... a fixed-clock system is not better in performance than a variable one. Clocks by themselves do not represent performance; performance depends on workloads, and clock speeds only need to be enough to deal with that workload in the allocated processing time.

This means that, as far as I see it, with two equal GPUs, one with fixed clocks and the other with variable clocks, there is absolutely no difference between the two systems in performance terms.

And this is my first disagreement with your wording, where you claim that one side advertises the possible maximum performance and the other the bare minimum.

If the systems are equal, at maximum workload both will run at the same clock speed and will have the same output. So... where is the difference?

But in fact there is a difference... as you yourself say!

With fixed clock speeds, Sony was having problems reaching 2 GHz... But now, with variable clocks and workload control (directly related to watts consumed), they are at 2.23!

This means that if we take the same two equal GPUs from above, the one with variable clock speeds would achieve higher performance than the one with fixed clocks.

The second point I disagree with is your claim that the system varies between 9 and 10 TF... As explained above, the real TFLOPS processed (i.e. the output) depends on workloads, so even the Xbox will have variable output, even with fixed frequencies.

Otherwise, the PS5 will work like the Xbox Series X, the only difference being that it can achieve higher frequencies, and its clocks will not be fixed but variable according to demand.
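The workload point can be put in code: delivered throughput is the theoretical peak scaled by how busy the ALUs actually are, so even a fixed-clock GPU rarely outputs its headline number. The utilisation values here are invented purely for illustration:

```python
def delivered_tflops(peak_tflops, utilisation):
    """Actual FP32 work done: peak scaled by ALU occupancy (0.0 to 1.0)."""
    return peak_tflops * utilisation

# A fixed 12.15 TF peak at three hypothetical utilisation levels
for u in (0.3, 0.6, 0.9):
    print(delivered_tflops(12.15, u))
```

A fixed clock pins only the first factor; the second one moves with the game, frame by frame.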
 
Yes, it will make a reasonable difference in a laptop, but here it will be more about dealing with corner cases, since they have both clocks at or near maximum most of the time. Considering Cerny was not putting much emphasis on it, I think they already have individual TDP limits based on keeping the die wattage density equal across the entire surface (shifting too much creates a hotspot), so the advantage of SmartShift is a small one in practice compared to the rest of the scheme. Also, without knowing their internal TDP target versus their real-world tests, we have no idea how often corner cases happen. Devs would be able to answer if they were allowed to talk about Sony's "project squirrel".
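For what it's worth, the basic intuition of SmartShift-style budget sharing can be shown with a toy model; the wattage figures and the priority rule are assumptions for illustration, not Sony's actual numbers or algorithm:

```python
# Toy model of a shared SoC power budget (all figures are invented).
TOTAL_BUDGET_W = 200.0  # assumed combined CPU+GPU budget
CPU_CAP_W = 60.0        # assumed per-unit ceilings
GPU_CAP_W = 160.0

def split_budget(cpu_demand_w, gpu_demand_w):
    """Serve the CPU first (up to its cap), hand the leftover to the GPU."""
    cpu_w = min(cpu_demand_w, CPU_CAP_W)
    gpu_w = min(gpu_demand_w, GPU_CAP_W, TOTAL_BUDGET_W - cpu_w)
    return cpu_w, gpu_w

print(split_budget(30.0, 180.0))  # light CPU load: GPU gets its full 160 W cap
print(split_budget(60.0, 180.0))  # heavy CPU load: GPU trimmed to 140 W
```

Per Cerny's presentation the real scheme derives clocks from modelled workload activity rather than measured temperature, but the budget-shuffling intuition is the same.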
Don't think so.
The PS5 CPU will often be underutilized because of BC with the weaker PS4 Jaguars (around 110 million units out there)... also, the CPU steals bandwidth from the GPU (bandwidth is really the weak point of the PS5 vs the XSX)... so SmartShift is surely a useful feature in the PS5. I'm wondering how much better a PS5 at 5 nm would perform vs one at 7 nm... ?!? Probably not much, because the cooling would be different then too... Really curious about the PS5's cooling...
 
Don't think so.
The PS5 CPU will often be underutilized because of BC with the weaker PS4 Jaguars (around 110 million units out there)... also, the CPU steals bandwidth from the GPU (bandwidth is really the weak point of the PS5 vs the XSX)... so SmartShift is surely a useful feature in the PS5. I'm wondering how much better a PS5 at 5 nm would perform vs one at 7 nm... ?!? Probably not much, because the cooling would be different then too... Really curious about the PS5's cooling...

So backwards compatibility is suddenly a factor in CPU utilization in native PS5 games, thus the PS5 is weak.
Sure.

Are we continuing to allow this type of rhetoric?
It's been around two and a half months since the Road to PS5 video, and we're still dealing with misinformation and disingenuous ways of conveying the PS5 in a particular manner.
 
The 110 million PS4s plus the millions of Xbox Ones out there are of course a big point in favour of my argument. Maybe in a few years things will change, but I'm sure the CPU will be mostly underutilized on the PS5, and even more so on the XSX.
 
The 110 million PS4s plus the millions of Xbox Ones out there are of course a big point in favour of my argument. Maybe in a few years things will change, but I'm sure the CPU will be mostly underutilized on the PS5, and even more so on the XSX.
Why do you believe that, for games which are not frame-limited, CPU and GPU utilisation will not increase to produce more frames? What is driving the production of new frames if not the CPU?
 