PlayStation 5 [PS5] [Release November 12, 2020]

Perhaps; you made some good points earlier and you could definitely be right about that.

But to move the GPU clock up 10% by taking power from the CPU, which uses significantly less, I'd expect to see a large swing.
Removing 40% of the CPU's power to move the GPU up by 10%, for instance (illustrative numbers).

If you remove 40% of the CPU's power, what frequency are we left with?
I don't believe this is a hypothetical scenario that will 'rarely' show up. Everyone loves Sony 1P for their graphics and how immersive their games are. That should by default push the GPU to full saturation, as their games always have done. So what's left for the CPU in those situations? I'm very curious, because this is the number that wasn't talked about. Cerny only talked about best-case scenarios. He did talk about the CPU maybe pulling power from the GPU (and said it wouldn't be much), but he gave us no hints on the other way around.

CPU matters!

Boost mode is going to show its greatest performance benefits early on when the hardware is less well utilised. That's also when it's most important to not look significantly weaker than the Xbox Series X.

When AVX use is the norm for SIMD work and developers have plumbed the depths of the GPU to maximise utilisation, the performance delta will increase.
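
For what it's worth, here's a back-of-the-envelope sketch of the worry in the quoted post. It assumes a fixed shared SoC budget, made-up wattages (40W CPU / 160W GPU, which are not Sony's figures), and the common rough rule that GPU power scales with roughly the cube of clock near the top of the voltage/frequency curve:

```python
# Illustrative only: invented wattages, not Sony's figures.
CPU_POWER_W = 40.0    # assumed CPU share of a shared SoC budget
GPU_POWER_W = 160.0   # assumed GPU share at its target clock

def gpu_power_at_clock_scale(scale: float) -> float:
    """Rough GPU power after scaling its clock by `scale` (cubic approximation)."""
    return GPU_POWER_W * scale ** 3

# What a +10% GPU clock would cost under this model
extra_w = gpu_power_at_clock_scale(1.10) - GPU_POWER_W
print(f"+10% GPU clock: ~{extra_w:.0f} W extra, "
      f"i.e. {extra_w / CPU_POWER_W:.0%} of the assumed CPU budget")

# What taking 40% of the CPU's power would actually buy the GPU
freed_w = 0.40 * CPU_POWER_W
scale = ((GPU_POWER_W + freed_w) / GPU_POWER_W) ** (1.0 / 3.0)
print(f"-40% CPU power frees ~{freed_w:.0f} W, "
      f"worth only ~{scale - 1.0:.1%} of GPU clock")
```

Under that admittedly crude model, a 10% GPU clock bump would cost more than the entire assumed CPU budget, which is exactly the kind of swing being questioned above.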
 
Agreed. My guess is the practical implementation would be situations where a modest shift allows games to hit expected targets like 30 or 60fps. The CPU matters a lot; I'm just not sure we know how much of the workload is going to be native to the GPU.

A different way to say it: the CPUs are a bigger generational leap than the GPUs (especially on PS5), so if there are going to be spare resources, it's more probable to find them there than on the GPU.

I just think it's likely we're going to see more situations where games are GPU bound than CPU bound on PS5.
 
Right, basically a page from this generation: purposefully design games to offload as much as possible to the GPU.

What is the CPU approximately equivalent to for both? A Ryzen 3700?
 
That's not what they do. The GPU can take "unused" CPU power based on the power management algorithm, and lowering of the CPU clock was never described beyond a mention that the clock stays at 3.5GHz most of the time. In fact there are far fewer reasons to apply any downclock to the CPU than to the GPU, since the CPU is clocked as conservatively as the XBSX's. One reason given was AVX, though it's unclear whether that was just an example of wattage allocation.

In other words, if we had a 4.5GHz CPU with dynamic clocking versus the 3.6/3.8GHz of the XBSX, there would be an advantage in downclocking it.
 
That doesn't make sense to me. Why not just fix the CPU at 3.5GHz, since the power allocation there is so conservative, and feed the GPU separately?

I don't see how power shifting in this way makes it easier for developers. It just adds complexity.
 
DF promised a follow-up article. Can’t come soon enough.

A couple of tech demos, a look at the box (cooling!?) and/or the controller, and perhaps some rough indicator of the timeline would be lovely from Sony.

I personally enjoyed Cerny's talk and am really excited for PS5 in itself.

But advertising their GDC-esque talk on consumer-facing channels and having it as their first live(ish) event for the PS5 wasn't a great idea. Whether or not anyone thinks Sony themselves are having a rough time with marketing, there's no arguing that, relatively speaking, MS are knocking it out of the park in terms of presentation.
 
Microsoft is running a bigger part than the 5700 XT, which has a 225W TBP at the same clocks. It’s almost like the 5700 XT may not be a great reference point anymore.

AMD told us RDNA2 is a 50% improvement in performance per watt over RDNA1. People need to adjust their expectations.
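
Just rearranging that claim against the 225W figure above (taking "performance per watt" at face value, which is a big simplification, and with no console figures involved):

```python
# Rearranging AMD's "+50% performance per watt" claim against the
# 225 W TBP of the 5700 XT mentioned above. Illustrative arithmetic only.
RDNA1_TBP_W = 225.0   # 5700 XT total board power
PPW_GAIN = 1.5        # claimed RDNA2 perf-per-watt multiplier

same_perf_w = RDNA1_TBP_W / PPW_GAIN
print(f"5700 XT-level performance at RDNA2 efficiency: ~{same_perf_w:.0f} W")
print(f"225 W of RDNA2: ~{PPW_GAIN:.1f}x a 5700 XT, all else being equal")
```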
 
Because then it would burn the unused power all the time, during memory stalls or the few milliseconds of idle at the end of a frame. It's an advantage to be able to downclock when possible and free up "unused" power for the GPU, which might be peaking in demand at that point in time. It also means they don't need to budget an additional safety margin: the CPU can downclock if something starts hammering AVX in an unpredicted way. If the margin is for rare cases, the downclock is a rare case, but the power-delivery advantage is there all the time.

I monitor a render farm doing CG rendering with all cores used efficiently. Enabling or disabling Intel's adaptive clocking has a negligible impact on render time, but when looking at the cores they are not always at max clock. Power consumption goes down without any real-world impact. In an SoC, that's more power that can go to the GPU.
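
To put that idea in toy form (this is not Sony's algorithm, just the shape of the argument, with invented numbers): as long as the CPU isn't drawing its full allocation, whether because cores are stalled on memory or idle at the end of a frame, there's enough left in a fixed budget for the GPU to hold its target clock.

```python
# Toy model of a fixed SoC power budget shared by CPU and GPU.
# All numbers are invented for illustration; this is not Sony's algorithm.
TOTAL_BUDGET_W = 200.0   # assumed total SoC budget
CPU_MAX_W = 40.0         # assumed CPU draw at 100% activity
GPU_TARGET_W = 170.0     # assumed GPU draw at its target clock

def gpu_headroom(cpu_activity: float) -> float:
    """Watts left for the GPU given CPU activity in [0, 1]."""
    return TOTAL_BUDGET_W - CPU_MAX_W * cpu_activity

for activity in (1.0, 0.7, 0.4):
    headroom = gpu_headroom(activity)
    verdict = ("GPU holds target clock" if headroom >= GPU_TARGET_W
               else "GPU sheds a little clock")
    print(f"CPU at {activity:.0%}: {headroom:.0f} W available -> {verdict}")
```

In this toy model the GPU only has to give up clock when the CPU is genuinely flat out at the same moment; any stall or idle time on the CPU side turns into headroom.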
 
I’m not following along; power is a fixed commodity in this case. There is no boost clock on the GPU, apparently, so when the GPU is peaking in demand and needs more, it has to draw power from the CPU. It’s trying to maintain its frequency.

This isn’t a typical boost scenario where the CPU has unused power and transfers it to the GPU to reach higher frequencies. It’s that the GPU is always set to that higher frequency and borrows power when it’s saturated.
 
It was largely true when they said the same about the transition from GCN to RDNA1.

Yeah, but that was a generational shift that big coming from a rather 'bad' GCN arch compared to the competition. Take that 50% from RDNA to RDNA2 with a grain of salt. See the XSX's performance in Gears 5.
 
The GPU doesn't control this. There is a central module managing the power budget and running the algorithm; it decides on all the clocks to make the most of that power. There is more gain in dropping the GPU clock by 2% than the CPU clock, unless the CPU is running an unpredicted pattern or its cores are idling. If the threshold is only reached in corner cases, overall performance stays close to peak. Also, design margins are no longer necessary.

Did you watch the presentation? There's a section about it.
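
A quick illustration of the "2% on the GPU buys more than 2% on the CPU" point, again with invented wattages and the rough cubic relation between clock and power near the top of the curve:

```python
# Invented wattages; rough cubic relation between clock and power.
CPU_W, GPU_W = 40.0, 160.0   # assumed split of the SoC budget

def watts_freed(base_w: float, clock_drop: float) -> float:
    """Watts recovered by lowering the clock by `clock_drop` (cubic model)."""
    return base_w * (1.0 - (1.0 - clock_drop) ** 3)

for drop in (0.02, 0.05):
    print(f"-{drop:.0%} clock: GPU frees ~{watts_freed(GPU_W, drop):.1f} W, "
          f"CPU frees ~{watts_freed(CPU_W, drop):.1f} W")
```

Because the GPU draws several times more power to begin with, shaving the same small percentage off its clock returns far more watts to the shared budget.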
 
No hard data from anyone (Sony or others) yet about instruction mixes and measured impacts on the balance.
 
I think Cerny said that developers will have full control of the power envelope, essentially deciding what the CPU/GPU power distribution will be.
But it's not clear if that can be done at runtime or only before the game starts.
 
No, I mean on PC: downclock the CPU and see the difference at 4K.
I suspect it will be 0.01%.
But maybe someone has some real data?
 
That's new to me; do you have the exact quote from Cerny?

A bias control would be a reasonable addition to provide, i.e. what to prioritize if the power threshold is reached. But that's not controlling power distribution; that's just a contention priority flag.
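
If such a bias control exists, it might look something like the sketch below. This is purely hypothetical, not a real PS5 SDK call; it only illustrates the "contention priority flag" idea, as opposed to direct control over clocks or watts:

```python
# Hypothetical illustration of a contention-priority flag. Not a real API.
from enum import Enum

class PowerBias(Enum):
    PREFER_GPU = "gpu"        # shed CPU clock first when the budget is hit
    PREFER_CPU = "cpu"        # shed GPU clock first when the budget is hit
    BALANCED = "balanced"     # split the shortfall between both

def resolve_contention(bias: PowerBias, over_budget_w: float) -> str:
    """Report which side sheds clock to recover `over_budget_w` watts."""
    if over_budget_w <= 0.0:
        return "No contention: both sides hold their clocks."
    victim = {PowerBias.PREFER_GPU: "CPU",
              PowerBias.PREFER_CPU: "GPU",
              PowerBias.BALANCED: "both (split)"}[bias]
    return f"Over budget by {over_budget_w:.0f} W: downclock {victim} first."

print(resolve_contention(PowerBias.PREFER_GPU, 12.0))
print(resolve_contention(PowerBias.PREFER_GPU, 0.0))
```

The point being that the developer never sets a frequency or a wattage; they only express who should lose out first when the shared threshold is hit.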
 
Why would you trust them on one but not another?

And why base anything off a port done in 2 weeks?
 
That's how I understood it too. Developers won't be able to control the frequencies (that wouldn't work given the total power limit), only apply some priority to how power is distributed between the CPU and GPU.
 