PlayStation 5 Reveal Reactions *spawn*

Based on Microsoft's emphasis on stable clocks and no boost, someone obviously had talked outside NDA and Microsoft knew at least some part of what was coming from Sony. Who knows if the opposite is/was true.
 
Based on Microsoft's emphasis on stable clocks and no boost, someone obviously had talked outside NDA and Microsoft knew at least some part of what was coming from Sony. Who knows if the opposite is/was true.
They talked about fixed clocks all of a sudden just before Sony's GDC presentation, so things leaked very recently, but Sony still knows what hasn't leaked yet and can plan accordingly.

The last time, they kept the 8GB upgrade extremely close to their chest; apparently only a few studios knew. Nobody leaked the price either. With the Pro, we didn't even know for sure it existed until two months before launch. All of those were kept secret successfully.
 
Based on Microsoft's emphasis on stable clocks and no boost, someone obviously had talked outside NDA and Microsoft knew at least some part of what was coming from Sony. Who knows if the opposite is/was true.
Well, there is that. But MS can also do some math on things. They have exactly the same partner. They both have an idea of the restrictions each configuration will have.

Both would have figured out whether variable clock speeds would work. And when MS saw the 36 CUs, it's not hard to make the call that Sony might attempt to close the TF gap with variable clocks.

Likewise, Sony knew what MS was doing as soon as they said 2x X1X.
 
They had to design for power-managed clocks from the start. There is a lot of difficulty making this happen on consoles, and it's not like they can just completely change the plans a year from launch and magically change the whole silicon power delivery, add the deterministic power management module, and add the necessary counters everywhere. An upclock might have happened, but not this.

Cerny said the clock limits were set based on parametric yield (which is obvious). They don't have that information until they have the final silicon.
 
They had to design for power-managed clocks from the start. There is a lot of difficulty making this happen on consoles, and it's not like they can just completely change the plans a year from launch and magically change the whole silicon power delivery, add the deterministic power management module, and add the necessary counters everywhere. An upclock might have happened, but not this.

Cerny said the clock limits were set based on parametric yield (which is obvious). They don't have that information until they have the final silicon.
But he did say that they struggled to lock it at 2.0 GHz before, so perhaps this was added later, but obviously still in time for any design changes, no?
 
But he did say that they struggled to lock it at 2.0 GHz before, so perhaps this was added later, but obviously still in time for any design changes, no?

Even if they planned for dynamic clocks, they would definitely test what happens without dynamic clocking. It's good engineering to check whether the improvement you made really gave the result you expected/simulated/calculated theoretically.
 
But he did say that they struggled to lock it at 2.0 GHz before, so perhaps this was added later, but obviously still in time for any design changes, no?
No, he didn't say that. He said they had issues predicting the worst-case TDP for the PS4, and if they predict wrong they get problems years later (he said that was the reason for the fan noise: they predicted wrong). This method removes the need for those safety margins. And a 10% power margin only costs 2% in clock in the case of the PS5.
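As a rough back-of-the-envelope illustration (assumed exponents, not anything from the presentation): dynamic power scales roughly with frequency times voltage squared, and near the top of the frequency/voltage curve the voltage has to climb steeply, so treating power as proportional to f^n with n well above 3 in that region is a common approximation. A quick Python sketch of what that buys:

# Toy model with assumed exponents -- not Sony's actual curve.
def clock_cost_of_power_cut(power_cut, n):
    """Fractional clock reduction needed to shed `power_cut` of the power, if power ~ f^n."""
    return 1.0 - (1.0 - power_cut) ** (1.0 / n)

for n in (3, 4, 5):
    print(f"n={n}: shedding 10% power costs {clock_cost_of_power_cut(0.10, n):.1%} of clock")
# n=3 -> ~3.4%, n=4 -> ~2.6%, n=5 -> ~2.1%: "10% power for a couple percent of
# clock" is plausible if the curve is steep around 2.2 GHz.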
 
Sony is lying about their specs. 10.3 TFLOPS and 3.5 GHz on their GPU and CPU are boost clocks. What are the base clocks at which the CPU and GPU can operate without any oscillation between the two? Those are the real clocks, not these fake max clocks.

Well... I heard the same presentation as you guys, but I did not understand it the same way.
What I heard was an explanation of how the GPU speeds and the thermal solution are chosen, and that the choice is made for the worst-case scenario that can happen over the life of a console.
What Mark Cerny explained was that the worst-case scenario is more often than not badly estimated, and so, when it happens, the cooling solution ramps up the fan, the system overheats, and it may crash or even reboot.
Sony's solution for that problem, and this was the only context in which he talked about downclocking, is to downclock the system. But not by much: a couple of percentage points on clock speed was enough for a 10% decrease in power consumption, so he didn't even expect a big drop.
This was to avoid overheating, crashing and rebooting. That was the context, and the only time a downclock was mentioned.
The remaining explanations were about how they could reach 2.23 GHz, since with fixed clocks 2 GHz was already a problem.
The continuous boost mode on both processors allows the system to reduce clock speeds according to actual usage. During a game, CPU/GPU usage varies frame by frame, and locked speeds draw a high minimum level of power, heating the system as a consequence.
This way the system will reduce speeds according to need. Nothing different from what current GPUs do.
This makes the system a lot cooler and allows it to clock higher than with constant clocks. This is why Cerny states the CPU and GPU will be at 3.5 GHz and 2.23 GHz most of the time. Not all the time, since full power is not needed all the time, and this system exists to allow reductions and avoid overheating.
But since both the CPU and GPU can reach full clock, there is no doubt the power and thermal envelope is sized for that end result!
So why the need to shift power from the CPU to the GPU?
Well, Cerny never stated the CPU would suffer from this. The moved power is just unused power. Since CPU power usage will also fluctuate according to the needs of each moment, not all of its power will be used all the time.
This means that if the GPU approaches its worst-case scenario, instead of going over the power budget it will check whether the CPU has unused power, and use it if available to avoid exceeding the budget.
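To make the budget sharing concrete, here is a minimal Python sketch of a fixed total budget where the GPU can borrow whatever the CPU isn't using. The names and wattages are illustrative assumptions, not anything Sony has published.

# Illustrative only -- invented numbers, not Sony's implementation.
TOTAL_BUDGET_W = 200            # fixed SoC power budget (assumed figure)
CPU_CAP_W, GPU_CAP_W = 60, 140  # nominal split (assumed figures)

def allocate(cpu_demand_w, gpu_demand_w):
    """Give the GPU any headroom the CPU is not using, within the total budget."""
    cpu_grant = min(cpu_demand_w, CPU_CAP_W)
    spare = CPU_CAP_W - cpu_grant                     # unused CPU power
    gpu_grant = min(gpu_demand_w, GPU_CAP_W + spare)  # GPU may borrow the spare
    assert cpu_grant + gpu_grant <= TOTAL_BUDGET_W
    return cpu_grant, gpu_grant

print(allocate(40, 155))  # light CPU frame: GPU gets the full 155 W -> (40, 155)
print(allocate(60, 155))  # heavy CPU frame: GPU is held to its 140 W cap -> (60, 140)

In the real console the granted power would then map to a clock through the frequency/voltage curve; the point is only that the power being shifted is power the CPU was not going to use anyway.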

There are two points I need to make:

1 - If the CPU and GPU can both reach maximum speed whenever needed (and Cerny stated they will be there most of the time), then the console has a power budget large enough for that. As such, this seems to contradict the doomsday theories that the PS5 will have to downclock at every turn.
2 - This was a presentation showing the good stuff of the PS5, not its flaws! Why would Cerny go into so much detail explaining and showing a bad thing?

So it is my opinion that the combination of high speeds, boost clocks, and downclocking all being discussed in the same presentation (although in clearly different contexts) is what is causing all of this mess.

From this understanding, it is my firm belief the PS5 will be stable in delivering its 10.28 TFLOPS.
 
No, he didn't say that. He said they had issues predicting the worst-case TDP for the PS4, and if they predict wrong they get problems years later (he said that was the reason for the fan noise: they predicted wrong). This method removes the need for those safety margins. And a 10% power margin only costs 2% in clock in the case of the PS5.
No, he definitely said that. He said:

"Running a GPU at 2.0GHz was looking like unreachable target with fixed frequency strategy".

He also said "running CPU at 3.0GHz was causing headaches with the old strategy"

It's at 37:13.
 
"At, or close to, the majority of the time" definitely means it won't be stable. But with the reference to "2% clock for 10% less power", and "a very small drop" we can see the average can easily be 10TF from a 9.7 to 10.28 variability.

Interestingly, I know how to measure the clocks without hacking, but I doubt DF have that sort of equipment.
 
No, he definitely said that. He said:

"Running a GPU at 2.0GHz was looking like unreachable target with fixed frequency strategy".

He also said "running CPU at 3.0GHz was causing headaches with the old strategy"

It's at 37:13.
Thanks, I completely missed that one!

The context of "the old strategy" was the PS4 strategy, so they had to change strategy for the PS5, because otherwise it would have meant 2.0/3.0 GHz fixed clocks. They still needed to develop the power management system to be deterministic, which wasn't possible before now. Maybe that was the silicon change done in 2018? The rumored changes...
 
He also said "running CPU at 3.0GHz was causing headaches with the old strategy"

Where did he say that? Sounds eerily close to the 3.2 to 3.5 GHz some noted on Twitter.

"At, or close to, the majority of the time" definitely means it won't be stable. But with the reference to "2% clock for 10% less power", and "a very small drop" we can see the average can easily be 10TF from a 9.7 to 10.28 variability.

Where does that line come from? At 9.7 TF there won't be much of a difference from the 10 TF figure. A stable, non-variable clock at 9.7 would have made sense in that the system would run cool all the time at a sustained rate, with better yields, lower power draw, etc. I guess they wanted to reach the 10 TF figure somehow; it was attainable with variable clocks, so that's what they did.
 
Code with too much AVX was causing too much power draw, so either they downclock permanently, even though most of the time it wouldn't be necessary, or they downclock only in the corner cases related to AVX overuse. He's using examples, not exact figures. It's about the concept. Running at max clock almost all the time is better than running at the minimum clock all the time.
 
OK, so what is the AVX instruction set good for when it comes to running games? I think BFV uses it (and the new CoD?).
I was told AVX isn't used much on average; if you can vectorize like crazy, you're usually better off on the GPU. So it's corner cases that use it. Maybe that will change with new engines?
 
Thanks, I completely missed that one!

The context of "the old strategy" was the PS4 strategy, so they had to change strategy for the PS5, because otherwise it would have meant 2.0/3.0 GHz fixed clocks. They still needed to develop the power management system to be deterministic, which wasn't possible before now. Maybe that was the silicon change done in 2018? The rumored changes...
Wouldn't the specs we were shown suggest that whatever was meant to release in 2019 is what we're getting in 2020, just with higher clocks?
I would think that RDNA 2 was planned from the start (is anyone 100% convinced the PS5 is using base RDNA 2 with enhancements?), so what would these changes be other than higher clocks and the cooling needed to dissipate the additional heat?
 
Wouldn't the specs we were shown suggest that whatever was meant to release in 2019 is what we're getting in 2020, just with higher clocks?
I would think that RDNA 2 was planned from the start (is anyone 100% convinced the PS5 is using base RDNA 2 with enhancements?), so what would these changes be other than higher clocks and the cooling needed to dissipate the additional heat?
Sure, they would have planned Zen 2/RDNA 2 plus all their little secret spices and sauces, but I'm wondering if it's possible they added the elaborate power management module between the two iterations. Being able to limit TDP on a deterministic basis requires a lot more than just variable clock control, and we currently have nothing like this in the PC world. Until now I thought clocking was based on current, wattage, and temperature limiters. None of those would be consistent between consoles.
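A rough sketch of the difference (an illustrative toy, not the actual PS5 mechanism): a conventional limiter reacts to measured current and temperature, which vary chip to chip, while a deterministic scheme estimates power from activity counters with fixed per-event weights, so every console computes the same number for the same workload.

# Toy example of deterministic, counter-based power estimation.
# The event names and weights are invented for illustration.
WEIGHTS_J = {"alu_op": 0.5e-9, "avx256_op": 2.0e-9, "mem_access": 1.0e-9}  # joules per event (assumed)

def estimated_power_w(event_rates_per_s):
    """Same counter rates in -> same power estimate out, on every unit."""
    return sum(WEIGHTS_J[e] * rate for e, rate in event_rates_per_s.items())

workload = {"alu_op": 50e9, "avx256_op": 10e9, "mem_access": 20e9}
print(estimated_power_w(workload), "W (model estimate, identical on every console)")
# An analog current/temperature limiter would instead read sensors whose values
# differ between chips and with ambient conditions, so clocks would differ too.

This also lines up with the AVX discussion above: wide vector instructions would get a heavier weight in such a model, so AVX-heavy corner cases are what push the estimate toward the budget.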
 
Wouldn't the specs we were shown suggest that whatever was meant to release in 2019

I thought 2019 was a good year to release next-gen consoles, but if you look at the software situation there was no way, or even any plan, to release machines in 2019.

Because of this bloody COVID-19 crap that has thrown my business a vicious curveball, Sony might struggle to get their massive AAA games out by the time next gen was potentially supposed to launch.
 
Sony said you can drop the watts by 10% and only lose 1-2% in performance, and it got me thinking... does that mean if you increase watts by 10% you only get a 1-2% increase in performance? That doesn't sound all that efficient. I remember seeing a chart for RDNA (which I can't find anymore) which showed that past a certain point of overclocking you get really poor results per watt. Is this what's going on with the GPU?
 
Sony said you can drop the watts by 10% and only lose 1-2% in performance, and it got me thinking... does that mean if you increase watts by 10% you only get a 1-2% increase in performance? That doesn't sound all that efficient. I remember seeing a chart for RDNA (which I can't find anymore) which showed that past a certain point of overclocking you get really poor results per watt. Is this what's going on with the GPU?

In general, very high clocks and efficiency don't go well together, unless drastic things have changed. I learned that over a lifetime of overclocking.
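Running the same kind of toy f^n power model as earlier in the thread in the other direction gives a rough answer to the question above (assumed exponents, illustrative only, not Sony's actual curve):

for n in (3, 4, 5):
    print(f"n={n}: +10% power -> +{1.10 ** (1.0 / n) - 1:.1%} clock")
# n=3 -> +3.2%, n=4 -> +2.4%, n=5 -> +1.9%

So yes, near the top of the curve an extra 10% of power buys only a couple of percent of clock, which is exactly the perf-per-watt falloff those RDNA overclocking charts show.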
 