Digital Foundry Article Technical Discussion [2024]

This is true, but as some of the clocks are dropping at above 1.1 V, or even at the chip's max of 1.2 V, I interpret that as meaning there's probably power-related throttling going on. At, say, 0.88 V or 0.95 V, I'd guess that was because the GPU was twiddling its thumbs.

Interestingly, they measured Furmark at 1.006 to 1.031 V, meaning with the right (wrong?) workload you can become power-limited well below maximum voltage.
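For reference, the textbook dynamic-power relation is enough to show why a "power virus" can hit the board limit at a relatively low voltage (generic symbols here, nothing specific to this card is assumed):

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^{2} f
```

where α is the switching-activity factor, C the switched capacitance, V the core voltage and f the clock. Furmark drives α far higher than a typical game, so a fixed board power limit is reached at a much lower point on the V/f curve, which would line up with throttling around 1.0 V rather than 1.2 V.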

It could be thermal, or even the driver recognising Furmark and dropping power, as I remember AMD and Nvidia both implemented ways of detecting Furmark and throttling the card to stop it from being killed.

Bit OT, but I've always thought that Nvidia were better at avoiding those kinds of transient micro power-draw spikes that seem to affect even mid-range AMD GPUs. Look at Furmark, Gaming and V-Sync on the aforementioned 6700 XT. Aggressive little fella, frequently boosting too hard... 😬 I'd guess console makers don't want to have to fit a 1000 W Gold five-rail PSU in their BOM-conscious units.

[Attached chart: 6700 XT power consumption under Furmark, Gaming and V-Sync]

That's odd, as I would expect the V-Sync result to look like the gaming result and vice versa.
 
Plenty of developers were coming out and saying all of this before, during, and after the consoles released. No better verification than the developers who are actually building games for these machines. Look no further than the post above. Loads more where that came from.

Aside from some mumblings at the very start of the generation from a few developers, who else?
 
It could be thermal, or even the driver recognising Furmark and dropping power, as I remember AMD and Nvidia both implemented ways of detecting Furmark and throttling the card to stop it from being killed.

Fair point, but I think that was a while ago, and as the (averaged) power consumption is up around the absolute limit of the card, I think it's more likely to be about how the GPU is managing boost.

That's odd, as I would expect the V-Sync result to look like the gaming result and vice versa.

Yeah, it looked odd to me too. The only thing I can think of is that vsync limits power draw so much over the period of a frame that the GPU's boost algorithm, being too aggressive, detects low power and slams the frequency up too high, then has to correct massively and drop it aggressively too. This seems to be something Nvidia is better at, IIRC.
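Here's a toy sketch of how a boost governor that only sees averaged power can end up behaving like that under a vsync duty cycle. Every constant in it (the power limit, clock range, step sizes, the cubic busy-power model) is invented for illustration and isn't taken from any real card or driver:

```python
from collections import deque

POWER_LIMIT = 230.0            # assumed board power target in watts
F_MIN, F_MAX = 1.0, 2.6        # assumed clock range in GHz
K = POWER_LIMIT / 2.4 ** 3     # scaled so full load above ~2.4 GHz exceeds the limit

def power_draw(freq_ghz, busy):
    """Crude model: power rises steeply with clock while busy, near-idle draw otherwise."""
    return K * freq_ghz ** 3 if busy else 15.0

def simulate(ticks=5000, window=50):
    freq = F_MAX
    recent = deque(maxlen=window)          # the governor only ever sees a moving average
    peak_instant = peak_avg = 0.0
    for t in range(ticks):
        busy = (t % 100) < 40              # vsync-style duty cycle: busy 40% of each "frame"
        p = power_draw(freq, busy)
        recent.append(p)
        avg = sum(recent) / len(recent)
        peak_instant = max(peak_instant, p)
        peak_avg = max(peak_avg, avg)
        if avg < 0.8 * POWER_LIMIT:        # averaged draw looks low -> keep boosting
            freq = min(F_MAX, freq + 0.05)
        elif p > POWER_LIMIT:              # instantaneous limit exceeded -> slam clocks down
            freq = max(F_MIN, freq - 0.20)
    return peak_instant, peak_avg

peak_i, peak_a = simulate()
print(f"peak instantaneous: {peak_i:.0f} W, "
      f"peak windowed average: {peak_a:.0f} W, limit: {POWER_LIMIT:.0f} W")
```

In this toy model the windowed average never crosses the limit, but the instantaneous draw spikes well above it at the start of each busy burst, because the idle tail of every frame convinces the governor the GPU is underutilised and it parks the clock back at maximum.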

There's a German website that tests for power transients at a much higher sampling rate than TPU, and their tests show spikes and troughs of even greater amplitude than TPU's, which are averaged over a greater number of ms. Can't remember which site it was right now though.
 
Yield is not only defined by max clocks but mainly by the max power the silicon is able to sustain (which is why overclocking is often done with undervolting). XSX usually consumes 30 W less than PS5, but in some cases it must be able to consume pretty much the same amount as PS5. We saw this in a Matrix demo analysis comparison. My point being, yields from the launch systems might not be that different.

Well, firstly, that's not quite an apples-to-apples comparison; you're looking at total system power being measured there.

Secondly, PS5 chips do have a maximum voltage/power allowance as well, at which they will downclock or pull power budget from the CPU if they hit those maximums. By comparison, XSX was still drawing anywhere from 10-30 W less than PS5 in quite a few portions of the Matrix demo, and that was at system power, with a much larger fan and more memory chips on a wider bus.

With both of these consoles running the same software, on the same node and architecture, clockspeed is going to be the leading factor for power usage.
 
They can make a game performant and optimized for PS5... and simply stop optimizing the Xbox side if it's already around what the PS5 is... because in the end there's not a whole lot of incentive to push beyond.
nobody wants to program for the Xbox Series X, it's called selling games.
 
nobody wants to program for the Xbox Series X, it's called selling games.

Don't buy it. There's still like 30 million Xbox consoles around, or something like that. With game budgets and the need to extend sales as much as possible, with Sony even selling games on PC now, I can't imagine any larger devs ignoring the Xbox platform. The technical discussions are interesting, but I highly doubt developers aren't making some effort to get the best out of all the platforms.
 
Don't buy it. There's still like 30 million Xbox consoles around, or something like that. With game budgets and the need to extend sales as much as possible, with Sony even selling games on PC now, I can't imagine any larger devs ignoring the Xbox platform. The technical discussions are interesting, but I highly doubt developers aren't making some effort to get the best out of all the platforms.
It's not about ignoring the Xbox platform, it's just about treating it with lower priority. The games still come out on it, they just aren't taking full advantage of everything the hardware could do.

I very much doubt this is costing anybody any real sales. If they achieve 'mere PS5' levels of performance/quality, I doubt anybody except technical diehards are going to scoff at that, and if all they have is an Xbox, it's not gonna stop them getting a game they want.
 
... I highly doubt developers aren't making some effort to get the best out of all the platforms.
The question is how much return they expect on their efforts. How much do you improve sales for a given level of execution quality, per platform? It does not seem obvious to me that technically poor games sell worse (save the very worst offenders), or that having a technically polished game will net you many more sales. The number of gamers who'll refuse to buy a game because it has stutters or whatnot seems low.

There'll be a level of developer pride in their work/art, wanting to create something good, but there'll also be economic pressure to just accept 'good enough' and get it out the door. The interest in getting a really polished XB execution has to be limited relative to other platforms due to the smaller market that'll benefit, unless you can be convinced that working a mostly-60fps game up to a locked 60fps will net you an additional 500,000 sales to justify the cost. Any amount of polishing work is worth roughly twice as much in returns on PS5 or PC, but then PC is much costlier to polish, meaning PS5 is the lowest of the rather high-hanging fruit.
 
The question is how much return they expect on their efforts. How much do you improve sales for a given level of execution quality, per platform? It does not seem obvious to me that technically poor games sell worse (save the very worst offenders), or that having a technically polished game will net you many more sales. The number of gamers who'll refuse to buy a game because it has stutters or whatnot seems low.

There'll be a level of developer pride in their work/art, wanting to create something good, but there'll also be economic pressure to just accept 'good enough' and get it out the door. The interest in getting a really polished XB execution has to be limited relative to other platforms due to the smaller market that'll benefit, unless you can be convinced that working a mostly-60fps game up to a locked 60fps will net you an additional 500,000 sales to justify the cost. Any amount of polishing work is worth roughly twice as much in returns on PS5 or PC, but then PC is much costlier to polish, meaning PS5 is the lowest of the rather high-hanging fruit.

That’s a really good point. I’ve always viewed feedback on technical issues as a way to shame/encourage developers to take more pride in their work. The wider gaming population is far less bothered by some of this stuff so it’s not really impacting sales. Not to say that people are immune to technical issues - login and server issues raise lots of hell. But stutter is definitely low on the list of priorities for the avg person.
 
Aside from some mumblings at the very start of the generation from a few developers, who else?

Whispers about these things have been popping up constantly since 2013 in the graphics programmer/developer Twitter sphere, from many esteemed people. I have seen it many times. Unfortunately it's all NDA'd to hell and any discussion quickly fizzles out.
 
The question is how much return they expect on their efforts. How much do you improve sales for a given level of execution quality, per platform? It does not seem obvious to me that technically poor games sell worse (save the very worst offenders), or that having a technically polished game will net you many more sales. The number of gamers who'll refuse to buy a game because it has stutters or whatnot seems low.

There'll be a level of developer pride in their work/art, wanting to create something good, but there'll also be economic pressure to just accept 'good enough' and get it out the door. The interest in getting a really polished XB execution has to be limited relative to other platforms due to the smaller market that'll benefit, unless you can be convinced that working a mostly-60fps game up to a locked 60fps will net you an additional 500,000 sales to justify the cost. Any amount of polishing work is worth roughly twice as much in returns on PS5 or PC, but then PC is much costlier to polish, meaning PS5 is the lowest of the rather high-hanging fruit.

If you sell 2 million copies on PlayStation, you sell maybe 750k or 1 million on Xbox. Those numbers are an estimate, but they feel roughly right based on install base. That's still like $45-60 million. Not sure how that wouldn't be an incentive. In general I don't think games really need to be hyper-optimized in every aspect. Games largely sell based on gameplay. I just don't know that Xbox would be any less optimized than PlayStation for a multiplatform title, even if it's the smaller platform. I think if they really go into low-level optimization on one, they're likely the kind of developer that will do low-level optimization on the other.
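Quick sanity check on that revenue range, assuming roughly a $60 price per copy (the post implies but doesn't state the price, so that's an assumption):

```latex
750{,}000 \times \$60 = \$45\text{M}, \qquad 1{,}000{,}000 \times \$60 = \$60\text{M}
```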
 
In general I don't think games really need to be hyper-optimized in every aspect. Games largely sell based on gameplay.
Which is kinda the point. It's not worth optimising the bejesus out of your game, as it won't generate more revenue. However, you want to do some optimisation. So grab a platform and optimise that as much as you care to, then see how it runs on the other platforms, and if that's okay, that seems like job done. Which platform will you optimise for? PS has both a notable sales lead and more useful tools, it seems, so a given amount of effort on that platform will yield more than the same effort on XB. Then port that to XB, and if it's running okay, why put in more effort to optimise the XB version further? What exactly does that get you? Or the XB version effectively shares the PC build, so optimise for PC and use that. Games that prioritise PC over PS probably show an advantage on XB, which might explain those games, and when we get games that run obviously inferior, that's because optimising for that one platform wasn't deemed worth it.
 
Well, firstly, that's not quite an apples-to-apples comparison; you're looking at total system power being measured there.

Secondly, PS5 chips do have a maximum voltage/power allowance as well, at which they will downclock or pull power budget from the CPU if they hit those maximums. By comparison, XSX was still drawing anywhere from 10-30 W less than PS5 in quite a few portions of the Matrix demo, and that was at system power, with a much larger fan and more memory chips on a wider bus.

With both of these consoles running the same software, on the same node and architecture, clockspeed is going to be the leading factor for power usage.
Well, I disagree. PS5's dynamic clock system shows us that clockspeed is not the only factor. What also matters is what kinds of instructions are used and how many are issued per cycle. Besides, the architectures are rather different. PS5 mainly uses an RDNA 1/2 architecture (L1/L2 caches, CUs per SE) while XSX has a custom architecture not seen on any RDNA 1 or 2 desktop GPU. Maybe that focus on compute prevented them from increasing the clocks the way RDNA 2 GPUs and PS5 did. As for the compute focus, maybe it's because of their cloud compute servers? That's what Spencer told us years ago: that XSX was designed for gaming and for cloud compute servers. But I digress.

And when they test clockspeed against power consumption (looking for some kind of sweet spot) for yields, they must test for the maximum power consumption the APU can possibly draw, for instance using Furmark, not an average of power consumption. Remember what Cerny told us here: without dynamic clocks they could not even reach 2 GHz, because, I assume, in some rare cases, even if very brief, the system can hit that maximum at those clocks. In the case of XSX, the maximum power consumption that can be reached is very similar to PS5's, hence their static clocks being relatively low, because for yields they must plan for that maximum power consumption. So maybe with that architecture and those clocks they have the same yields as PS5.
 
What's strange to me is that it's literally the same company engineering the chips. They know the best combination of CUs and frequency for yields and cost. How did Sony come to the conclusion of using a smaller chip with higher, dynamic frequencies while Microsoft went in the other direction?

Especially since RDNA 2 GPUs are clocked more like the PS5 or higher, while XSX is so much lower. Unless AMD and Sony have horrible yields at those frequencies, the Series X chip is the odd one out. The PS5 is also cheaper to manufacture (already in 2021 they were selling at cost, then costs rose again), so I don't understand how it all happened.

This is purely speculative, but something I would think makes sense is that there has to be something in place that prevents AMD's semi-custom customers from essentially speccing the same (or near-same) end design, as I would think it would be rather awkward if both Sony and MS ended up announcing roughly the same APU.

If we look at the PS4 and PS5 configurations for the APU/subsystems, they are fairly "conventional" (for lack of a better term), close to what you would expect AMD (or any other GPU vendor) to release independently (the GPU configurations are basically similar to what AMD would sell on the PC). The 256-bit GDDR unified memory subsystem is also fairly conventional as a mainstream high-performance configuration, in line with how PC GPUs are configured.

The Xbox designs, on the other hand, especially in how they approach the memory subsystem, have been rather "exotic" for the last two generations compared to the equivalents AMD would sell for the PC.

For all we know, in the contract Sony has some sort of exclusivity on a 256-bit GDDR configuration and/or some other design specs, and as the second customer Microsoft then has to work around them.
 
This is purely speculative, but something I would think makes sense is that there has to be something in place that prevents AMD's semi-custom customers from essentially speccing the same (or near-same) end design, as I would think it would be rather awkward if both Sony and MS ended up announcing roughly the same APU.

If we look at the PS4 and PS5 configurations for the APU/subsystems, they are fairly "conventional" (for lack of a better term), close to what you would expect AMD (or any other GPU vendor) to release independently (the GPU configurations are basically similar to what AMD would sell on the PC). The 256-bit GDDR unified memory subsystem is also fairly conventional as a mainstream high-performance configuration, in line with how PC GPUs are configured.

The Xbox designs, on the other hand, especially in how they approach the memory subsystem, have been rather "exotic" for the last two generations compared to the equivalents AMD would sell for the PC.

For all we know, in the contract Sony has some sort of exclusivity on a 256-bit GDDR configuration and/or some other design specs, and as the second customer Microsoft then has to work around them.
They are both conventionally designed but customized systems. Each customer has their own goals, and they look at AMD's roadmap to decide what they want to achieve with their systems. Cost is a major factor in how these companies decide what goes into their systems, and each customer tackles this in a different way. This is where I believe Sony's experience in manufacturing, and their penchant for cost cutting ever since the PS3 bit them in the ass, plays a major role.

Xbox One went with DDR3 backed by eSRAM because DDR3 was cheaper than GDDR5. Sony bet that the price of GDDR5 would drop, so they went with that, but they also cost-reduced in other ways. Both still used the same 256-bit bus width. Because of their design choice, Microsoft ended up with a larger SoC (Xbox One 363 mm² vs PS4 348 mm²), which means the PS4 was cheaper to manufacture than the Xbox One.

Xbox Series X and PS5 follow the same design philosophies as their previous-generation siblings.

Xbox Series X went with a 10 GB/6 GB GDDR6 split on a 320-bit/192-bit bus so they could mix different-capacity memory chips for cost reasons, while PS5 splits the difference with 16 GB of GDDR6 on a 256-bit bus. Sony bet that RAM prices would drop and focused on reducing die space as much as they possibly could. The end result is an Xbox Series X SoC of ~360 mm² vs the PS5's ~300 mm². There is really nothing exotic about their memory subsystems; they are not even using HBM like AMD has in some of its consumer GPUs. Current AMD has bet on a large L3 cache, which is not a luxury console manufacturers can rely on, because die space is precious and getting more expensive.
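As a back-of-the-envelope check on those bus widths, here's a minimal sketch assuming the publicly quoted 14 Gbps GDDR6 on both machines (peak theoretical numbers, not measured figures):

```python
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float = 14.0) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(f"PS5, 16 GB on 256-bit:  {gddr6_bandwidth_gbs(256):.0f} GB/s")  # ~448 GB/s
print(f"XSX, 10 GB on 320-bit:  {gddr6_bandwidth_gbs(320):.0f} GB/s")  # ~560 GB/s
print(f"XSX,  6 GB on 192-bit:  {gddr6_bandwidth_gbs(192):.0f} GB/s")  # ~336 GB/s
```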
 
In the power equation for silicon, frequency is effectively the dominant term: once you account for the voltage needed to sustain it, power scales roughly with the cube of frequency. So that clock speed differential is going to draw heavily on power. It's the main reason why we move to more cores or more processing units: we accept some scaling loss in exchange for dropping the power requirements dramatically.

Otherwise we would have stuck with a single core at maximum frequency. But with roughly cubic scaling, that little chip will have a higher power density than a nuclear reactor very quickly as you keep pushing it upwards.
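Spelling out the cubic shorthand with the standard dynamic-power relation (generic symbols, nothing chip-specific, and the voltage-frequency link is a rule of thumb rather than an exact law):

```latex
P_{\text{dyn}} \approx \alpha C V^{2} f, \quad V \propto f \ \text{(roughly, within the DVFS range)} \;\Rightarrow\; P_{\text{dyn}} \propto f^{3}
```

So a ~20% clock bump costs roughly 1.2³ ≈ 1.7× the dynamic power, which is why spending the transistor budget on more units at lower clocks is usually the better trade.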

It is understandable why PS5 uses so much power from that standpoint alone. That's the risk Sony takes by running higher clock speeds. Your yield has to support it.
I don't really disagree, and I wouldn't mention this if it were just a 20-30 W differential. But 70-80 W is a massive delta on a ~230 W console and represents a full third of its power envelope.

The ~160 W PS4 Pro also has fixed clock rates, but it doesn't show this kind of behavior. I ran a few tests on mine and most Pro-enabled games ran in the 140-160 W range. The only exceptions were non-Pro-enabled games and 20-year-old remasters like Kingdom Hearts, which barely tax the shader hardware.

I'm not saying there's anything nefarious going on, but it does suggest that the console is still somewhat underutilized. Power-capping my 2080 Ti at 230 W vs 150 W results in a 200-300 MHz differential.
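That 200-300 MHz delta is roughly what the cubic rule of thumb from the quoted post would predict, assuming the card is power-limited at both caps and boosting somewhere around 1,900 MHz (an assumed typical boost clock, not a measured figure):

```latex
\frac{f_{150}}{f_{230}} \approx \left(\frac{150}{230}\right)^{1/3} \approx 0.87 \;\Rightarrow\; \Delta f \approx 0.13 \times 1900\ \text{MHz} \approx 250\ \text{MHz}
```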
 
I don't really disagree, and I wouldn't mention this if it were just a 20-30 W differential. But 70-80 W is a massive delta on a ~230 W console and represents a full third of its power envelope.

The ~160 W PS4 Pro also has fixed clock rates, but it doesn't show this kind of behavior. I ran a few tests on mine and most Pro-enabled games ran in the 140-160 W range. The only exceptions were non-Pro-enabled games and 20-year-old remasters like Kingdom Hearts, which barely tax the shader hardware.

I'm not saying there's anything nefarious going on, but it does suggest that the console is still somewhat underutilized. Power-capping my 2080 Ti at 230 W vs 150 W results in a 200-300 MHz differential.
Has anyone tested the power consumption on Series X for something like Hellblade 2? I wonder if it's closer to the Matrix demo or if it's lower.
 