PlayStation 5 [PS5] [Release: November 12, 2020]

So what would you say AMD SmartShift technology is, secondary, tertiary, quaternary, quinary?

No one is saying SmartShift is primary, only that Sony uses it.
 
So what about using SmartShift? It's a good method for increasing power efficiency. The amount of concern trolling about PS5 is just baffling, and Sony game threads are more of the same.
 
So what would you say AMD SmartShift technology is, secondary, tertiary, quaternary, quinary?

No one is saying SmartShift is primary, only that Sony uses it.
The small part of SmartShift that Sony uses is not an important part of the architecture.
 
This is entirely about the power and thermal limits of the PS5 SoC. In order to get the GPU above 2GHz, they had to shift power away from other parts of the SoC, such as the CPU, lowering their speeds. The speed limits of the PS5 SoC have no impact on PC graphics cards or even other consoles, because those chips are different and thus have different properties.
SmartShift doesn't downclock either the CPU or the GPU. You guys need to carefully re-read Cerny's presentation and interview.
 
I am corrected; the downclocking of the CPU and GPU speeds is not AMD's doing, it's entirely Sony's.
The primary clocking-decision circuitry Sony uses doesn't appear in AMD CPUs, GPUs, or APUs. SmartShift was presented as a side note after ten minutes of explaining how they did it. He ended the segment with: "while we're at it, we also used AMD's SmartShift tech to send any unused power from the CPU to the GPU, so we can squeeze out a few more pixels".

Another choice quote, more broadly about the collaborative nature of the AMD-Sony relationship:

"I'd like to make clear two points that can be quite confusing: first, we have a custom AMD GPU based on RDNA 2 tech. What does that mean? AMD is continuously improving and revising their tech. For RDNA 2, their goals were, roughly speaking, to reduce power consumption by rearchitecting the GPU to put data closer to where it's needed, to optimize for performance, and to add more advanced features. But that feature set is malleable, which is to say we have our own needs for PS5, and that can factor into what the AMD roadmap becomes. So collaboration is born. If we bring concepts to AMD that are felt to be widely useful, then they can be adopted into RDNA 2 and used broadly, including in PC GPUs. If the ideas are sufficiently specific to what we're trying to accomplish, like the GPU cache scrubber, then they end up being just for us. If you see a similar discrete GPU available as a PC card at roughly the same time we release our console, that means our collaboration with AMD succeeded in producing tech useful in both worlds. It doesn't mean we at Sony simply incorporated the PC part into our console."
 
So they did it for the few pixels.
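The "send any unused power from the CPU to the GPU" behavior quoted above can be sketched as a toy budget allocator. All the numbers and the function name here are made up for illustration; the real SmartShift mechanism lives in firmware/hardware and is far more granular than this:

```python
# Toy sketch of SmartShift-style power budgeting (illustrative only;
# the wattage figures below are invented, not real PS5 numbers).

SOC_BUDGET_W = 200.0   # made-up total SoC power budget
CPU_MAX_W = 60.0       # made-up per-domain caps
GPU_MAX_W = 180.0

def allocate(cpu_demand_w: float) -> tuple[float, float]:
    """Give the CPU what it asks for (up to its cap); any power the CPU
    leaves on the table becomes available to the GPU, up to the GPU cap.
    Note the shift is one-directional: CPU leftovers flow to the GPU."""
    cpu_w = min(cpu_demand_w, CPU_MAX_W)
    gpu_w = min(SOC_BUDGET_W - cpu_w, GPU_MAX_W)
    return cpu_w, gpu_w

# Light CPU load: the leftover budget shifts to the GPU.
print(allocate(30.0))   # (30.0, 170.0)
# Heavy CPU load: the GPU gets only what remains.
print(allocate(60.0))   # (60.0, 140.0)
```

The key property this illustrates is that neither side is "downclocked" by the mechanism itself; the GPU simply gets to use budget the CPU isn't consuming.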

14% increase in performance according to AMD, using a Ryzen 7 4800H and Radeon RX 5600M, while also improving power efficiency. But that doesn't have anything to do with your claim that SmartShift downclocks the CPU; that's not true. Cerny explained it very clearly: CPU and GPU frequency and power consumption are determined by the tasks each of them is executing. SmartShift only transfers additional power to the GPU when the CPU doesn't need it.
 
14% increase in performance according to AMD, using a Ryzen 7 4800H and Radeon RX 5600M, while also improving power efficiency. But that doesn't have anything to do with your claim that SmartShift downclocks the CPU; that's not true. Cerny explained it very clearly: CPU and GPU frequency and power consumption are determined by the tasks each of them is executing. SmartShift only transfers additional power to the GPU when the CPU doesn't need it.
Yes, it will make a reasonable difference on a laptop, but here it will be more about dealing with corner cases, since both clocks sit at or near their maximum most of the time. Considering Cerny wasn't putting much emphasis on it, I think they already have individual TDP limits based on keeping the wattage density equal across the entire die surface (shifting too much power creates a hotspot), so the advantage of SmartShift is small in practice compared to the rest of the scheme. Also, without knowing their internal TDP target versus their real-world tests, we have no idea how often the corner cases happen. Devs could answer that if they were allowed to talk about Sony's "project squirrel".
 
Let's say Sony's cooling solution is capable of 2.1GHz under a very, very heavy workload; then why can't they choose a fixed frequency of 2.1GHz? I am sure I am not the only one who questions this.

Sony didn't choose fixed frequency because they get more performance out of fixed power consumption.

What else is there to question? Every PS5 will still show the exact same performance on each given load.
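That determinism follows from the clock being chosen from measured workload activity, not from die temperature or chip quality. A toy model of the idea (the threshold, formula, and function name are invented for illustration, not Sony's actual algorithm):

```python
# Toy model: clock as a pure function of workload activity.
# Because the mapping takes no temperature or per-chip input, two
# consoles running the same workload pick the same clock, which is
# why every PS5 shows the same performance on a given load.

def pick_clock_ghz(activity: float) -> float:
    """activity in [0, 1]: fraction of worst-case power draw.
    Below a (made-up) threshold, stay at the max clock; above it,
    shed a few percent of frequency to stay inside the power budget."""
    if activity <= 0.9:
        return 2.23
    return round(2.23 * (0.9 / activity), 3)

# Same input always yields the same clock, on any unit.
print(pick_clock_ghz(0.5))
print(pick_clock_ghz(1.0))
```

Contrast this with conventional thermal boost on PCs, where ambient temperature and silicon lottery make clocks vary between otherwise identical machines.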
 
14% increase in performance according to AMD, using a Ryzen 7 4800H and Radeon RX 5600M, while also improving power efficiency. But that doesn't have anything to do with your claim that SmartShift downclocks the CPU; that's not true. Cerny explained it very clearly: CPU and GPU frequency and power consumption are determined by the tasks each of them is executing. SmartShift only transfers additional power to the GPU when the CPU doesn't need it.
For the CPU, 3.5GHz is at the top end of the spectrum, and he also suggests that this is the typical speed - but under certain conditions, it can run slower.
https://www.eurogamer.net/articles/...s-and-tech-that-deliver-sonys-next-gen-vision
 
Saying Sony is using their variable-clocks strategy "for marketing, because higher numbers make better PR" is really silly and nigh on console-warrish.

They did it because they could extract more performance out of a setup like this versus one at fixed clocks, given their current priorities, and they have at least been transparent about that. They didn't have to do anything to the actual core components beyond giving those components more leeway in the setup of their box; that's just smart design.
 
Which is not from SmartShift; Cerny only gave one example of this possibility, and it was from overusing 256-bit AVX instructions. Which is happening on every processor right now.

I really think this meshes with their goal of having a thermal density limit across the entire SoC surface, which is apparently a problem with 256-bit AVX.

Each section has its own rules, and the GPU is allowed to use the unused portion of the CPU's budget, not the other way around.
 
Simplified, made-up numbers to convey a point (real-world numbers may be better or worse, and there will likely be many intermediate clock steps, but the general concept still stands):

PS5 with Fixed Clocks / Variable Power = 2GHz during 100% of your workloads.

PS5 with Variable Clocks / Fixed Power = ~2GHz during 10% of your workloads & ~2.23GHz during 90% of your workloads.

Which is better? As far as I'm concerned, it's a design win for a given piece of hardware. It's not a band-aid, it's taking nothing away from the machine, it's making the most of what it already has. When you provision fixed clocks based on the rarer scenario, you're throwing away power the rest of the time.
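Plugging those made-up numbers into a quick average shows why the variable scheme wins in aggregate (the figures are the post's illustrative ones, not measured PS5 behavior):

```python
# Compare average GPU clock under the two regimes, using the
# made-up workload split from the post above.

fixed_clock_ghz = 2.0
# (fraction of workloads, clock in GHz) under variable clocks / fixed power
variable = [(0.10, 2.0), (0.90, 2.23)]

avg_variable_ghz = sum(frac * clk for frac, clk in variable)

print(f"fixed clocks:    {fixed_clock_ghz:.3f} GHz average")
print(f"variable clocks: {avg_variable_ghz:.3f} GHz average")
# 0.10 * 2.0 + 0.90 * 2.23 = 2.207 GHz: ~10% higher on average,
# at the cost of occasionally dipping to the fixed-clock figure.
```

The fixed-clock design pays for its worst 10% of workloads all of the time; the variable design pays for them only when they actually occur.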

As for SmartShift: that's in addition to Sony's continuous boost. I expect it will help mitigate some throttling in the continuous-boost scheme while also offering general flexibility, which is also a good thing. If your GPU is running flat out, being rinsed by a heavy workload, while your CPU has juice to spare, why not redirect it?

I wouldn't be at all surprised if this approach becomes the norm going forward in this space; not doing it means leaving performance in your device on the table.
 
Sony didn't choose fixed frequency because they get more performance out of fixed power consumption.

What else is there to question? Every PS5 will still show the exact same performance on each given load.
The question is how it performs in the real world. From what I know, and in contrast with many convictions expressed here, a lot of people will be pleasantly surprised. We do have plenty of subjective statements from Cerny and developers, but it's impossible to put a number on game performance. They cannot give us a general number even if they wanted to.

And even if they chose a ceiling that successfully stays at max clock all the time in their gaming tests, using this scheme would still allow a higher clock with all else being equal on cooling/power, or otherwise allow them to save money on cooling/power at the same clock. The GDC presentation explains this.
 
The two posts above explain it better than I did. If this turns out particularly well for Sony, I expect every machine going forward to take this approach, because the positives would far outweigh any negatives.
 