PlayStation 5 [PS5] [Release: November 12, 2020]

While there are likely many contributors to the power management system, a major component is that the physical and electrical behavior of the silicon is profiled at various points in the chip's operating range, to give its thermal response at various temperatures, voltages, and clocks. This profile data is looked up when a given activity counter registers an action, to get an estimate of the power consumption that resulted from the operation. Some later additions to the method include dummy ALUs or partial register file blocks that periodically perform some kind of representative activity to get a tighter approximation of what the hardware is consuming. This isn't the same as instruction count, since instructions can translate into different levels of activity or different internal operations. An instruction may or may not generate multiple cache misses, or might wake up blocks that were clock-gated or, worse, power-gated, and all of those have varying power costs.
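As a rough illustration of the counter-and-table idea (the event names, energy costs, and table layout below are all invented for the sketch, not AMD's actual format):

```python
# Toy sketch of activity-counter-based power estimation. Each counted event
# maps to a profiled energy cost at the current voltage/clock operating point.

# Hypothetical profile table: nanojoules per event, keyed by (voltage, clock).
# Real silicon characterization would populate this; these numbers are made up.
PROFILE = {
    (1.0, 2.0e9): {"alu_op": 0.5, "cache_miss": 4.0, "block_wakeup": 20.0},
    (1.1, 2.2e9): {"alu_op": 0.7, "cache_miss": 5.5, "block_wakeup": 27.0},
}

def estimate_power(counters, operating_point, interval_s):
    """Estimate average power (watts) over an interval from event counts."""
    costs = PROFILE[operating_point]
    energy_nj = sum(costs[event] * n for event, n in counters.items())
    return energy_nj * 1e-9 / interval_s

# One microsecond of activity: the same instruction count can mean very
# different power depending on cache misses and block wake-ups.
light = {"alu_op": 1000, "cache_miss": 10, "block_wakeup": 0}
heavy = {"alu_op": 1000, "cache_miss": 400, "block_wakeup": 5}
p_light = estimate_power(light, (1.0, 2.0e9), 1e-6)
p_heavy = estimate_power(heavy, (1.0, 2.0e9), 1e-6)
assert p_heavy > p_light
```

The point of the sketch is the last two lines: identical instruction counts, very different power estimates, which is why counting events rather than instructions matters.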

The original motivation for this stems from the question of measuring temperature or power consumption of the silicon at small time scales and at hot spots. The chip has thermal limits for safe operation, which traditionally required a wide safety margin for worst-case scenarios. A single thermal sensor doesn't cover the whole chip, and thermal sensors are too large to place next to every active block. Their response time, and the speed at which heat travels from a hot spot to a sensor, can also be a problem, since local power spikes can push local temperatures from nominal to dangerous in millisecond or shorter time frames.
AMD's approach was to profile the silicon for how it reacted to events at different places in the performance envelope, and then use the activity counts to generate an estimate. While conservative, it was based on a dynamic approximation that worked at microsecond ranges, rather than an estimate decided at product design time with very wide safety margins.
Since then, AMD may have added more local electrical monitoring, which can produce more accurate estimates of power consumption and can tighten the estimate of how much a given region can heat up based on how much additional power can be consumed in a given time step.

The latest boost functionality and high clocks for Zen come from pushing silicon near the safe voltage and temperature limits of the process for controlled periods of time, limits which under prior methods could very quickly be overrun. Sony's method appears to piggy-back on a lot of this work, and its operating point discards much of that boost range. The physical characterization tables seem to be tuned for consistency, which means the power management hardware thinks the silicon has a given baseline set of properties, regardless of whether the silicon itself can do better.
It's possible that AMD's method is over-engineered for what Sony is doing, but it would be more work to take it out at this point.
Thank god, some justice.
So the question is more physical, related to the silicon characteristics that are mapped somehow... those maps may also change over time as the console gets old. Hopefully in a predictable way....

So my concern is well founded.
And now ban me again, please.
 

Your concern is well placed.

I have two Pros in my living room.

Mine is relatively quiet - as quiet as a Pro can be, I guess. My partner's Pro, meanwhile, literally sounds like a Space Shuttle. No idea why, but it is what it is.

So of course your concerns are valid, and I also worry about noise levels!
 
And so you think the PS5 constantly profiles different instructions and the volume of those instructions in flight across the SOC’s cpu/gpu/io devices and continuously sets frequency to limit power consumption?

Why go through all that when you can simply set a power cap? Basically a limit on how much power the SOC can pull. What's the benefit of doing it based on instruction count?
Because it has to be 100% deterministic. Every PS5 sold will perform exactly the same way, whether the room temp is 5°C or 35°C, and whether you won or lost the silicon lottery.

@HBRU you can definitely stop worrying about performance consistency over time. The worst thing that could happen is the PS5 could become noisier with time. That's going to be determined by the quality of the cooling.
 

There could be elements that can track age-based degradation. There was an optional form of voltage adjustment introduced with some of the later GCN generations that tried to account for VRM aging, although I think most SKUs did not use it and I think the long-term effect was a slow decline in peak boost performance. I'm not aware of all the possible inputs to AMD's DVFS to know if there is age-correction for the silicon die.

For things like the VRMs, it would be out of the scope of the game developers to know anything about the state of the physical box, so it would fall on Sony to add appropriate margin to its power delivery so that it meets specifications for the planned lifespan of the device.
Silicon degradation is a long-term process, where the chip manufacturer will attempt to give behavior in-spec for a chosen life span. It at least used to be 10 years, though I'm not sure if it's been adjusted.
That would be out of the control of the developer, and either Sony or AMD could be providing safety measures or extra margin. The PS5's CPU clock speeds are far below where the likely first signs of lowering circuit performance would appear. The GPU is clocked high relative to most GPUs, but it may not be considered a significant burden given the silicon can support the higher CPU clock. Degradation that only shows impacts at circuit performance levels above what the PS5 is allowed to reach would be unimportant.

There's some time horizon decided at design time for the expected life expectancy of the silicon. No chip products promise limitless functionality, and there are many other components that are more likely to fail before this is a concern, and it would be a concern primarily for Sony or a retailer if it's somehow within a warranty period.
 
I worry about performance consistency over time! As a Sony fan, I hope this isn't going to happen.
Why is this worry exclusive to the PS5? And why is it so overwhelming? PS4 Pro was loud. Cerny acknowledged that, and said we’d be happy with PS5’s cooling solution. So they know PS4/Pro could get loud, and they’re doing something about it, which may explain why PS5’s so big. MS learned their lesson with a noisy 360, now Sony has indicated they’ve heard the criticism of the PS4 Pro.

Why worry about developers keeping an eye on cooling now? Current gen games spike current gen (PS4/Pro) console fans to high heaven, so obviously it’s not a big deal for Sony or devs. Try the daily races menu in GT Sport, or hit the PS button while playing Alienation.
 
Someone in this thread (and Cerny) spoke about a 100% deterministic count of GPU activity that leads to 100% fidelity of software reproduction on any PS5, hot or cold, new or old. That is fine... for whoever believes it. But you (and I) suspect it is not... it's more a physical approximation of this count through sensor measurements run through algorithmic elaboration... Actually, keeping track of all the GPU activity with 100% accuracy by counting one by one seems an unrealistic overhead that needs a lot of silicon and is not needed... and as you pointed out, AMD's solutions are different. Sony could have pushed this AMD solution further... but how far? To a 99.9999% track of GPU activity? Something almost deterministic that cannot be distinguished from a purely deterministic approach? Could be. I hope. Sony is not new to cutting-edge solutions that then give trouble. I wonder why MS did not use the same strategy, having AMD as a partner. OK, probably they don't need to give out this >10 TF figure that is the culprit of all this power-capped "boost"... Well, I don't even know what to call it.
 
I wonder why MS did not use the same strategy, having AMD as a partner.

In a console you'd want consistent performance, optimally for development targets. In the PC space things are a bit different, where things are already unbalanced anyway with different hardware configs. A GPU delivering between 9 and 10 TF wouldn't matter there, because users have anything from a toaster to a 3080 Ti.
 
Someone in this thread (and Cerny) spoke about a 100% deterministic count of GPU activity that leads to 100% fidelity of software reproduction on any PS5, hot or cold, new or old. That is fine... for whoever believes it. But you (and I) suspect it is not... it's more a physical approximation of this count through sensor measurements run through algorithmic elaboration...
Not having a physically accurate profile is what makes the behavior of the PS5's DVFS consistent. Using activity counters to look up values in a set of tables produces the same behavior in different chips if they all receive the same table values. AMD's other products have multiple SKUs and variability in how they react based on differences in product selection and the silicon quality between chips. To make the PS5 deterministic, a representative set of idealized SOC values can be assigned to all functional chips. As long as a chip can meet or beat the requirements of the idealized tables, it is safe to have an inaccurate representation of the actual silicon quality.
All chips will act like they have the same average properties, even if some of them can perform better. The true silicon quality doesn't matter, as long as sub-standard chips are removed.
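The "same tables in every unit" idea can be sketched in a few lines (all values below are invented; this illustrates the determinism argument, not AMD's actual algorithm):

```python
# Sketch of why a shared "model SoC" table makes behavior deterministic.
# Every console uses the same table values, so the DVFS decision depends
# only on the workload's activity level, never on the actual chip.

MODEL_SOC_WATTS_PER_ACTIVITY = 2.0   # assumed table value, same in every unit
POWER_BUDGET_W = 200.0

def chosen_clock_ghz(activity, max_clock=2.23):
    # Estimated power scales with activity and clock in this toy model.
    est = MODEL_SOC_WATTS_PER_ACTIVITY * activity * max_clock
    if est <= POWER_BUDGET_W:
        return max_clock
    return max_clock * POWER_BUDGET_W / est  # scale down to fit the budget

# A "lucky" chip that would really draw 10% less power still gets the same
# clock as an average one, because the decision uses the table, not
# measurements of the individual die.
assert chosen_clock_ghz(40.0) == 2.23
assert chosen_clock_ghz(50.0) < 2.23
```

The actual silicon quality never appears as an input, which is the whole point: two consoles running the same workload compute the same estimate and pick the same clock.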

Actually, keeping track of all the GPU activity with 100% accuracy by counting one by one seems an unrealistic overhead that needs a lot of silicon and is not needed... and as you pointed out, AMD's solutions are different. Sony could have pushed this AMD solution further... but how far?
Pushing the concept further has to do with the higher-order boost algorithms AMD uses, such as PBO or skin-temperature-aware boost. That's where the 4GHz-and-higher clock speeds for AMD's CPUs tend to come from, as does the upper clock range of the known GCN and RDNA chips. These boost ranges often purposely consume a non-sustainable amount of power, and this is where AMD's DVFS needs to react quickly even at operating points close to failure conditions.
The PS5's CPU clocks stay below where these modes would be used. The GPU is clocked higher than most, but those speeds aren't necessarily a challenge for a DVFS that can react to temperature spikes on 1.3V 4.7GHz cores.
 
Regarding the size
[image]
 
I trust you.
You seem to understand the situation and how Sony managed this boost in a smart way... a peak in performance, limited in time, that follows a predictable and almost exactly reproducible pattern across all consoles running the same workload.

But it is not a power-capped boost (in the sense of watts drawn from the socket), from what I understand... it's a boost that follows an assisted performance curve, with the aid of distributed sensors and algorithmic feedback that modulate the delivered performance almost exactly the way they wanted. The cap is in the abstract typical curve desired... not in the actual power drawn, which of course may change with silicon quality, temperature, and age of the silicon....

We can call this PS5 mechanism a HW-assisted HW abstraction...? Somehow, I think so.
 
Hi everyone - new here :D

I've been following this thread just by lurking and reading other stuff to fill in gaps in between, but I have a question that I can only ask being signed up. I kind of need that higher level of experience and knowledge to get this right in my head.

So the PS5 has a fast SSD and Cerny shows us a slide with the calculation for how fast it can move data into VRAM. It's the slide with the 0.27 rating and the equation at the top (2GB / 5GB/s / 1.5 = 0.27 seconds). Then it has a table below it. OK, so this makes sense, and if you swap out the 5GB/s with the 2.4GB/s of the XsX, you see a rough doubling of the time to load - which again seems right to me as a layman. It would take the 2.4GB/s SSD around 0.6 seconds to do the same.
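The slide's arithmetic, as quoted, does work out (the 1.5 multiplier and GB/s figures here are the post's numbers, not an official spec):

```python
# Reproducing the slide arithmetic from the post. The 1.5 divisor is the
# post's own multiplier; these are the poster's figures, not confirmed spec.
def load_time_s(data_gb, raw_gb_per_s, factor=1.5):
    return data_gb / raw_gb_per_s / factor

ps5 = load_time_s(2, 5)      # 2GB at 5GB/s with the 1.5 factor
xsx = load_time_s(2, 2.4)    # same 2GB at 2.4GB/s: roughly double the time
assert round(ps5, 2) == 0.27
assert round(xsx, 2) == 0.56
```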

[image: the 2GB load-times slide]


Where I'm getting confused is the section on loading from HDD and the Spider-Man PS4/PS5 comparison. So in the tech demo, we saw the 'Fast Travel' function demonstrated. On the PS4 it took roughly 8 seconds, and on the PS5 it took roughly 0.8 seconds.

0.8 seconds, based on the above, would suggest it's moving around 6GB of assets, but the table's HDD (PS4 equivalent) row says it can move 1GB in 20 seconds. I just don't get this. That would mean Spider-Man should take 120 seconds, right? Even at half that size, 3GB, that's still 60 seconds on the old HDD - yet it's on screen in 8 seconds. What is happening here - why the big discrepancy? The only things that possibly make sense are:

- Static textures are in the cache, making the 'reload' quicker
- The game starts with about 500MB worth of data and loads the rest while you're in game
- The majority of the 0.8 seconds are compute tasks that can't be eradicated by I/O, and the data is actually only 500MB-1GB due to the original game not being huge in terms of textures etc.

But it's wracking my brain trying to figure this out. Can anyone cast any light on this for us?
 
I trust you.
You seem to understand the situation and how Sony managed this boost in a smart way... a peak in performance, limited in time, that follows a predictable and almost exactly reproducible pattern across all consoles running the same workload.
Performance is a function constrained by a number of boundaries. There are maximum clock values for the CPU/GPU, which apply to any workload whose activity level is not high enough to hit the power ceiling.
If some level of activity at a given clock speed would exceed the power budget, the DVFS system will find a clock/voltage point that brings consumption in-bounds.
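A minimal sketch of that clamping behavior, with an invented voltage/frequency table and a toy power model (none of these numbers are real):

```python
# Hypothetical DVFS table: (clock GHz, voltage V), highest point first.
# Values are invented for illustration; the real table is not public.
VF_TABLE = [(2.23, 1.00), (2.10, 0.95), (2.00, 0.90), (1.80, 0.85)]
POWER_BUDGET_W = 200.0

def estimated_power(activity, clock, voltage):
    # Dynamic power roughly scales as activity * V^2 * f (toy constant folded in).
    return activity * voltage**2 * clock

def pick_operating_point(activity):
    # Run at the highest clock whose estimated power fits the budget;
    # otherwise fall back to the floor of the table.
    for clock, voltage in VF_TABLE:
        if estimated_power(activity, clock, voltage) <= POWER_BUDGET_W:
            return clock, voltage
    return VF_TABLE[-1]

assert pick_operating_point(50) == (2.23, 1.00)   # light load: max clock
assert pick_operating_point(95)[0] < 2.23         # heavy load: clamped down
```

Because the V^2 term falls faster than the frequency term, stepping down one table entry recovers a lot of power headroom for a modest clock loss, which is why Cerny could describe the worst-case frequency drop as small.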

But it is not a power-capped boost (in the sense of watts drawn from the socket), from what I understand... it's a boost that follows an assisted performance curve, with the aid of distributed sensors and algorithmic feedback that modulate the delivered performance almost exactly the way they wanted. The cap is in the abstract typical curve desired... not in the actual power drawn, which of course may change with silicon quality, temperature, and age of the silicon....
The cap represents the power budget Sony expects the PSU and VRMs to handle (plus some additional margin) and the amount the cooler needs to dissipate through the console's operating temperature range (plus some additional margin).
The SOC is a black box, where the rest of the system just needs to know that it won't exceed those limits. The power cap is a design point that the silicon needs to not exceed. If the silicon is somewhat better than average, then a given chip may draw slightly less power. However, the algorithm that chooses the clock and voltage level will make decisions based on the presets all PS5 chips will be loaded with.

Per the following:
While it's true that every piece of silicon has slightly different temperature and power characteristics, the monitor bases its determinations on the behaviour of what Cerny calls a 'model SoC' (system on chip) - a standard reference point for every PlayStation 5 that will be produced.
https://www.eurogamer.net/articles/...s-and-tech-that-deliver-sonys-next-gen-vision

Silicon that performs worse than the model expects would need to be rejected as falling outside the spec.

We can call this PS5 mechanism a HW-assisted HW abstraction...? Somehow, I think so.
All these systems use a model of some sort to govern their decision making, since it's not possible to capture the full complexity of real-world operation. The PS5 is using the same infrastructure as other AMD chips, but the DVFS algorithm is less aggressive and tuned for platform consistency.
 
Hi everyone - new here :D

....

But it's wracking my brain trying to figure this out. Can anyone cast any light on this for us?

Welcome. I think Cerny is talking about raw numbers there, not taking into account compression and other optimizations.
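As a quick sanity check of the numbers in the question (using the slide's HDD rate and the question's own 1.5 multiplier; neither figure is official spec):

```python
# Figures as quoted in the question above, not measured values.
effective_ps5_gbps = 5 * 1.5      # raw 5GB/s times the slide's 1.5 multiplier
hdd_seconds_per_gb = 20           # slide's HDD row: 1GB in 20 seconds

implied_data_gb = 0.8 * effective_ps5_gbps        # data 0.8s could move on PS5
hdd_time_for_same = implied_data_gb * hdd_seconds_per_gb

assert round(implied_data_gb, 9) == 6.0
assert round(hdd_time_for_same, 6) == 120.0       # vs. ~8s actually observed
```

The 120s-vs-8s gap suggests the PS4 fast travel is reading far less than 6GB (and/or reusing cached data), which lines up with the explanations the question itself lists.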

Check this talk (16:40 mark)

 
This is nice!


https://blog.playstation.com/2020/06/22/marvels-avengers-confirmed-as-free-upgrade-to-ps5/

When we received our first few kits, I rushed to put an engineering team on the task of creating a PS5 version of Marvel’s Avengers as quickly as possible. I wanted to push Foundation, our proprietary game engine, to its limits on PS5 and see what it could do. Take a look at what we’ve accomplished so far!

The new GPU allows us to increase our texture resolution, push a higher level of detail farther from the player, enhance our ambient occlusion, improve our anisotropic filtering and add a variety of new graphics features such as stochastic screen-space reflections with contact-aware sharpening.

If you’re a technophile like me, you enjoy having a bit of choice in how you leverage your cutting-edge console’s capability. As gamers, we sometimes want every ounce of power put into extra graphics features to achieve the highest image quality possible. For this, Marvel’s Avengers will offer an enhanced graphics mode on PS5. At other times, we want the most fluid gameplay experience possible. For that, Marvel’s Avengers will offer a high framerate mode on PS5, which targets 60 FPS with dynamic 4K resolution.

The GPU and CPU improvements on PS5 are exciting, but even more exciting is the introduction of an ultra-high speed SSD with lightning fast load speeds. This is a transformative improvement in consoles that will reduce load times down to one or two seconds and enable real-time streaming of massive worlds at ridiculously fast speeds. Without any optimization work, the loading and streaming of Marvel’s Avengers improved by an order of magnitude on PS5. When optimization is complete, loading content will be nearly instant, allowing players to seamlessly jump into missions anywhere in the game world. And as Iron Man flies through content-rich levels, higher resolution textures and mesh will stream in instantly, maintaining the highest possible quality all the way to the horizon.
 
1 - 2 seconds loading times on PS5. Get ready for next-gen SSD-gate !

Oh yeah... can't wait to see what the loading time is like on a good PC. Not that anything I've seen about this game has interested me in any way.
 