Digital Foundry Article Technical Discussion [2024]

This is purely speculative, but something I would think makes sense is that there has to be something in place that prevents AMD's semi-custom customers from essentially speccing the same (or nearly the same) end design. It would be rather awkward if both Sony and MS ended up announcing roughly the same APU.

If we look at both the PS4 and PS5 configurations for the APU/subsystems, they are fairly "conventional" (for lack of a better term), in line with what you would expect and with what AMD (or any other GPU vendor) would release independently (the GPU configurations are broadly similar to what AMD would sell on the PC). The 256-bit GDDR unified memory subsystem is also fairly conventional as a mainstream high-performance configuration, in line with how PC GPUs are set up.

The Xbox designs, on the other hand, especially in how they approach the memory subsystem, have been rather "exotic" for the last two generations compared to the equivalents AMD would sell for the PC.
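
For a rough sense of scale on that point, here is a minimal sketch of the peak-bandwidth arithmetic behind the two setups. The bus widths and data rates are the commonly reported figures for these consoles; the helper function itself is just for illustration.

```python
# Peak GDDR bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate (Gbps)
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

# PS5: a uniform 256-bit bus with 14 Gbps GDDR6 across all 16 GB
print(peak_bandwidth_gb_s(256, 14))   # 448.0 GB/s for the whole pool

# Xbox Series X: a 320-bit bus, also 14 Gbps GDDR6, but split into two pools:
# 10 GB reachable over all ten 32-bit channels, 6 GB over only six of them
print(peak_bandwidth_gb_s(320, 14))   # 560.0 GB/s ("GPU-optimal" 10 GB)
print(peak_bandwidth_gb_s(192, 14))   # 336.0 GB/s (the remaining 6 GB)
```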

For all we know, the contract gives Sony some sort of exclusivity on a 256-bit GDDR configuration and/or some other design specs, and as the second customer Microsoft then has to work around that.
That would be really strange, but there have been stranger clauses in the tech industry.
 
How would those design meetings go though?

"We want a blah blah chip with..."
"Hang on, we can't give you that."
"Why?"
"Can't say due to confidentiality agreements. Pick a different design."

I think it far more likely that you just have different compartmentalised projects with no cross-over, and then you just get what you get, as does the other guy.
 
I don't believe in it either, as that would mean the first company to get the meeting would get a competitive advantage. Still, it's strange that Microsoft didn't just build a normal memory system, and that they didn't use dynamic clocks. They climbed 8 of the 10 steps of the ladder and then, at the end, forgot how to climb. I'm pretty sure that adding those two things would have guaranteed superiority in all cases, but they decided to cheap out on, like, the last 10 dollars. The multibillion-dollar company.
 
I don't really disagree, and I wouldn't mention this if it were just a 20-30 W differential. But 70-80 W is a massive delta on a ~230 W console and represents a full third of its power envelope.

The ~160 W PS4 Pro also has fixed clock rates, but it doesn't show this kind of behavior. I ran a few tests on mine and most Pro-enabled games ran in the 140-160 W range. The only exceptions were non-Pro-enabled games and 20-year-old remasters like Kingdom Hearts that make little use of the shader hardware.

I'm not saying there's anything nefarious going on, but it does suggest that the console is still somewhat underutilized. Power-capping my 2080 Ti at 230 W vs 150 W results in a 200-300 MHz differential.
Yeah, there are definitely some games like that where I would agree the GPU is just sitting there idle.
 
I've yet to play the full game, but it was in the 190-210 W range during the first mission.
That's interesting. In essence, we could use power draw to measure system utilisation. I'd like to see that used in DF comparisons: compare on-screen performance with power draw and see whether there's a correlation between results and power across all titles. At first glance you'd think lower (average) power draw for the console indicates greater efficiency, which is what we'd expect with the different clocks, but if the Xbox peaks higher, those lower clocks across games could actually mean the hardware just isn't being pushed as hard as it could be.

In short, what are the games that draw the most power on both consoles and how do they compare?
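
A minimal sketch of what that comparison might look like, assuming you had per-title power and performance measurements; every number below is a placeholder, not real data.

```python
from statistics import correlation  # available in Python 3.10+

# Placeholder measurements, NOT real data: average wall power and average
# frame rate for a handful of hypothetical titles on one console.
titles = ["Game A", "Game B", "Game C", "Game D", "Game E"]
watts  = [205, 178, 212, 160, 195]
fps    = [57, 49, 60, 45, 55]

# A strong positive correlation would suggest power draw tracks how hard the
# hardware is actually being pushed.
print(f"power vs performance correlation: {correlation(watts, fps):+.2f}")

# Titles drawing well below the console's observed peak are candidates for
# "the GPU is partly idle" rather than "the console is more efficient".
peak = max(watts)
for title, w in zip(titles, watts):
    print(f"{title}: {w} W ({w / peak:.0%} of peak observed draw)")
```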
 
Not sure if this is at all related to Cerny's comment about going narrower and faster, which would help the CUs be better utilized compared to going wider but slower.
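
One toy way to think about that trade-off: the CU counts and clocks below are the public figures, but the split between work that scales with CU count and work that only scales with clock is completely invented, purely to show why higher clocks can lift the parts of a frame that don't spread across more CUs.

```python
# Toy model of "narrower and faster" vs "wider and slower" at roughly similar
# peak throughput. The workload split is invented for illustration only.
def frame_time_ms(cu_count: int, clock_ghz: float,
                  parallel_work: float = 1000.0, clock_bound_work: float = 50.0) -> float:
    # parallel work scales with CU count * clock; clock-bound work only with clock
    return parallel_work / (cu_count * clock_ghz) + clock_bound_work / clock_ghz

print(frame_time_ms(36, 2.23))   # narrower and faster (PS5-like shape), ~34.9 ms
print(frame_time_ms(52, 1.825))  # wider and slower (Series X-like shape), ~37.9 ms
```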
 
Well, I disagree. PS5's dynamic clock system shows us that clock speed is not the only factor. What also matters is what kind of instructions are used and how many of them are issued per cycle. Besides, the architecture is rather different: PS5 uses mainly an RDNA 1/2 architecture (L1/L2 caches, CUs per SE) while XSX has a custom architecture not seen on any RDNA 1 or 2 desktop GPU. Maybe that focus on compute prevented them from increasing the clocks the way RDNA 2 GPUs and PS5 did. As for the compute angle, maybe it's because of their cloud compute servers? That's what Spencer told us years ago, that XSX was designed for both gaming and cloud compute servers, but I digress.

And when they test the clock speed / power consumption (and look for some kind of sweet spot) for yields, they must test against the maximum power consumption the APU can possibly draw, for instance using Furmark, not an average power consumption. Remember what Cerny told us here: without dynamic clocks they could not even reach 2 GHz, because, I assume, in some rare cases, even if only briefly, the system can consume the maximum at those clocks. In the case of XSX, the maximum power consumption that can be reached is very similar to PS5's maximum, hence its static clocks being relatively low: for the sake of yields they must plan for that maximum power consumption. So maybe with that architecture and those clocks they get the same yields as PS5.
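
To make that worst-case argument concrete, here is a toy power model. The P ~ activity x C x V^2 x f relation is the standard dynamic-power approximation, but the capacitance constant, voltage curve, and 200 W budget are all made up (and tuned so the outputs land in a familiar range); only the shape of the argument matters.

```python
# Toy model: dynamic power ~ activity * C_eff * V^2 * f. Every constant here
# is invented for illustration.
def power_watts(freq_ghz: float, activity: float, c_eff: float = 71.0) -> float:
    voltage = 0.8 + 0.25 * freq_ghz   # made-up voltage/frequency curve
    return activity * c_eff * voltage ** 2 * freq_ghz

POWER_BUDGET_W = 200.0  # hypothetical APU budget

# Fixed clocks must stay safe even for a Furmark-like worst case (activity = 1.0),
# so the frequency is chosen against that rare extreme.
fixed = max(f / 100 for f in range(100, 300)
            if power_watts(f / 100, activity=1.0) <= POWER_BUDGET_W)

# With power-based dynamic clocks, a typical game at ~70% of worst-case
# activity can run noticeably faster inside the exact same budget.
dynamic = max(f / 100 for f in range(100, 300)
              if power_watts(f / 100, activity=0.7) <= POWER_BUDGET_W)

print(f"worst-case (fixed) clock: {fixed:.2f} GHz")      # 1.80 GHz
print(f"dynamic clock at 70% activity: {dynamic:.2f} GHz")  # 2.20 GHz
```
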
I know what you're trying to say, but following your line of thinking here, Xbox should just be able to enable variable clocks as well then. And they've had 4 years to flip it on but haven't, while Sony did it within a year.

There are probably larger impacts here than noted, but if it were a free performance bump with no cons, MS would have gone this route as well. I think the hardware cannot handle it; perhaps there are BC or FC implications. I'm not sure.

But there is a reason the PS5 looks the way it does and requires liquid metal. Even if both Xbox and PS5 were designed to run the same wattage at maximum, PS5 is doing it with roughly 20% less surface area, so the chip runs at about 25% more watts per mm^2 and as a result runs much hotter and is harder to cool, since there is less surface area to do it over. And that will factor into yield.
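
A quick sanity check on the watts-per-mm^2 arithmetic. The ~360 mm^2 figure is the commonly cited Series X die size, the 20% area delta comes from the post above, and the 230 W peak is just a stand-in, so treat the absolute numbers as illustrative.

```python
# Power density check under the assumptions stated above.
peak_power_w = 230.0
xsx_area_mm2 = 360.0                 # commonly cited Series X die size
ps5_area_mm2 = xsx_area_mm2 * 0.80   # "20% less surface area"

xsx_density = peak_power_w / xsx_area_mm2   # ~0.64 W/mm^2
ps5_density = peak_power_w / ps5_area_mm2   # ~0.80 W/mm^2

print(f"density ratio: {ps5_density / xsx_density:.2f}x")  # 1 / 0.8 = 1.25x, i.e. ~25% higher
```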
 
No, because the APU needs specific silicon units to count the instructions; that has to be designed into the APU from the start. I think Cerny talked about it at one point.
 
I don't believe there is anything novel about setting silicon frequency based on instructions. This is what we do today for all silicon: it detects an activity level, increases or decreases the frequency based on that activity level for a specific time frame, and keeps re-checking at a specific interval.

PS5 does the same thing. The only thing novel about it is that all chips have to run the same algorithm, meaning that even if there is headroom to run faster because the bin is better, it still won't. Whereas on PC, a better bin will be allowed to run a higher frequency at that activity level.

Effectively, they profiled the lowest bin they were willing to accept, and that profile is applied to all their bins. This is like putting a slower profile on a faster SKU in the PC space.
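
A sketch of the difference described above, assuming both schemes pick frequency from a measured activity level. The lookup table, headroom term, and all the numbers are illustrative, not actual firmware behaviour; 2.23 GHz is simply the PS5's advertised ceiling used as a familiar reference point.

```python
# Deterministic console-style table: same activity -> same clock on every unit,
# profiled against the worst acceptable bin. Values are invented.
CONSOLE_TABLE = [(0.5, 2.23), (0.8, 2.10), (1.0, 1.95)]  # (activity ceiling, GHz)

def console_clock(activity: float) -> float:
    for threshold, ghz in CONSOLE_TABLE:
        if activity <= threshold:
            return ghz
    return CONSOLE_TABLE[-1][1]

# PC-style boost: a better bin gets extra headroom at the same activity level.
def pc_boost_clock(activity: float, bin_headroom_ghz: float) -> float:
    return console_clock(activity) + bin_headroom_ghz * (1.0 - activity)

print(console_clock(0.8))          # every console unit: 2.10 GHz
print(pc_boost_clock(0.8, 0.30))   # a good PC bin: 2.16 GHz
print(pc_boost_clock(0.8, 0.00))   # a worst-case PC bin: 2.10 GHz
```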
 