Digital Foundry Article Technical Discussion [2024]

This is purely speculative, but something I would think makes sense is that there has to be something in place that prevents AMD's semi-custom customers from essentially speccing the same (or near-same) end design. It would be rather awkward if both Sony and MS ended up announcing roughly the same APU.

If we look at the PS4 and PS5 APU/subsystem configurations, they are fairly "conventional" (for lack of a better term): what you would expect AMD (or any other GPU vendor) to release independently, with GPU configurations basically similar to what AMD would sell on the PC. The 256-bit unified GDDR memory subsystem is also fairly conventional for a mainstream high-performance part, in line with how PC GPUs are configured.

The Xbox designs, on the other hand, especially in how they approach the memory subsystem, have been rather "exotic" for the last two generations compared to the nearest equivalents AMD would sell for the PC.

For all we know, in the contract Sony has some sort of exclusivity on a 256-bit GDDR configuration and/or some other design specs, and as the second customer Microsoft then has to work around that.
That would be really strange, but there have been stranger clauses in the tech industry.
 
How would those design meetings go though?

"We want a blah blah chip with..."
"Hang on, we can't give you that."
"Why?"
"Can't say due to confidentiality agreements. Pick a different design."

I think it far more likely that you just have different compartmentalised projects with no cross-over, and then you get what you get, as does the other guy.
 
I don't believe in it either, as that would mean the first customer to get a meeting would gain a competitive advantage. Still, it's strange that Microsoft didn't just build a conventional memory system and didn't use dynamic clocks. They climbed 8 of the 10 rungs of the ladder and then, at the end, forgot how to climb. I'm pretty sure adding those two things would have guaranteed superiority in all cases, but they decided to cheap out on, like, the last 10 dollars. The multibillion-dollar company.
 
I don't really disagree, and I wouldn't mention this if it were just a 20-30W differential. But 70-80W is a massive delta on a ~230W console and represents a full third of its power envelope.

The ~160W PS4 Pro also has fixed clock rates, but it doesn't show this kind of behavior. I ran a few tests on mine and most Pro-enabled games ran in the 140-160W range. The only exceptions were non-Pro-enabled games and 20-year-old remasters like Kingdom Hearts that barely touch the shader hardware.

I'm not saying there's anything nefarious going on, but it does suggest that the console is still somewhat underutilized. Power-capping my 2080 Ti at 230W vs 150W results in a 200-300MHz differential.
Yeah, there are definitely some games like that where I would agree the GPU is just sitting there idle.
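For what it's worth, this is roughly how the 2080 Ti power-cap comparison quoted above can be reproduced. It's a rough sketch only: it assumes a single NVIDIA GPU with nvidia-smi on the PATH, and changing the power limit needs admin rights.

```python
import statistics
import subprocess
import time

def sample_gpu(seconds=60, interval=1.0):
    """Poll nvidia-smi and return average power draw (W) and graphics clock (MHz)."""
    query = ["nvidia-smi",
             "--query-gpu=power.draw,clocks.gr",
             "--format=csv,noheader,nounits"]
    power, clocks = [], []
    end = time.time() + seconds
    while time.time() < end:
        # Single-GPU assumed: one CSV line like "215.37, 1905"
        watts, mhz = subprocess.check_output(query, text=True).strip().split(", ")
        power.append(float(watts))
        clocks.append(float(mhz))
        time.sleep(interval)
    return statistics.mean(power), statistics.mean(clocks)

# With a game or benchmark looping in the background:
#   set the cap with e.g. `nvidia-smi -pl 230`, run sample_gpu(),
#   then `nvidia-smi -pl 150`, run sample_gpu() again,
#   and compare the average clock delta between the two caps.
```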
 
I've yet to play the full game, but it was in the 190-210W range during the first mission.
That's interesting. In essence, we could use power draw to measure system utilisation. I'd like to see that used in DF comparisons: compare performance on screen with power draw and see whether there's a correlation between results and power across all titles. At first glance you'd think lower (average) power draw for the console is indicative of greater efficiency, which is what we'd expect with the different clocks, but if the Xbox peaks higher, those lower figures across games could actually mean the hardware just isn't being pushed as hard as it could be.

In short, what are the games that draw the most power on both consoles and how do they compare?
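A minimal sketch of what that comparison could look like, assuming you already had per-title measurements of average wall power and average frame rate for one console. The titles and numbers below are placeholders, not real data.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-title measurements: title -> (average watts, average fps)
measurements = {
    "Title A": (195, 57),
    "Title B": (170, 60),
    "Title C": (210, 54),
    "Title D": (150, 60),
}
watts = [w for w, _ in measurements.values()]
fps = [f for _, f in measurements.values()]
print(f"power vs on-screen result correlation: {pearson(watts, fps):.2f}")
```

A strong correlation would suggest power draw tracks how hard titles push the hardware; titles sitting well below the console's peak draw would be the interesting outliers.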
 
Not sure if this is at all related to Cerny's comment about going narrower and faster, which would help the CUs to be better utilized compared to going wider but slower.
 
Well, I disagree. The PS5's dynamic clock system shows us that clock speed is not the only factor. What also matters is what kind of instructions are used and how many of them are issued per cycle. Besides, the architectures are rather different: the PS5 uses a mainly RDNA 1/2 layout (L1/L2 caches, CUs per shader engine), while the XSX has a custom arrangement not seen on any RDNA 1 or 2 desktop GPU. Maybe that compute-focused architecture prevented them from increasing the clocks the way RDNA 2 GPUs and the PS5 did. As for the compute focus, maybe it's because of their cloud compute servers? That's what Spencer told us years ago, that the XSX had been designed for gaming and for cloud compute servers, but I digress.

And when they test clock speed against power consumption (looking for some kind of sweet spot) for yields, they must test against the maximum power the APU can possibly draw, for instance using something like Furmark, not an average power figure. Remember what Cerny told us here: without dynamic clocks they could not even reach 2GHz, because, I assume, in some rare cases, even very brief ones, the system can hit that maximum at those clocks. In the XSX's case the maximum power consumption that can be reached is very similar to the PS5's, hence its static clocks being relatively low: for yields they must plan for that worst-case power draw. So maybe with that architecture and those clocks they end up with the same yields as the PS5.
I know what you're trying to say, but following your line of thinking, Xbox should just be able to enable variable clocks as well. They've had 4 years to flip that switch and haven't, while Sony did it within a year.

There are probably larger factors here than noted, but if it were a free performance bump with no cons, MS would have gone this route as well. I think the hardware can't handle it, or perhaps there are BC or FC implications. I'm not sure.

But there is a reason the PS5 looks the way it does and requires liquid metal. Even if both the Xbox and the PS5 were designed to run the same maximum wattage, the PS5 is doing it with roughly 20% less die area, so the chip runs at about 25% more watts per mm^2 (1/0.8 = 1.25). As a result it runs much hotter and is harder to cool, with less surface area to do it over, and that will factor into yield.
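To put the worst-case-power argument in concrete terms, here is a toy model. The constants are made up purely for illustration (dynamic power treated as scaling with activity times f^3, roughly frequency times the voltage needed to sustain it); it only shows why a fixed clock has to be provisioned for a 100%-activity corner case while a variable clock can ride the activity the chip actually sees.

```python
# Toy model only: none of these constants describe real console silicon.
BUDGET_W = 200.0   # assumed total power budget for the GPU portion
STATIC_W = 40.0    # assumed leakage / fixed overhead
K = 2.0e-8         # made-up scaling constant

def power(freq_mhz, activity):
    """Very rough dynamic-power model: activity * f^3 plus a static floor."""
    return STATIC_W + K * activity * freq_mhz ** 3

def max_clock(activity, budget=BUDGET_W):
    """Highest clock (MHz, in 5MHz steps) that keeps modelled power in budget."""
    f = 500.0
    while power(f + 5, activity) <= budget:
        f += 5
    return f

fixed_clock = max_clock(activity=1.0)    # must survive the worst case
typical_clock = max_clock(activity=0.6)  # what a variable clock could do
print(f"fixed clock sized for worst case: ~{fixed_clock:.0f} MHz")
print(f"variable clock at 60% activity:   ~{typical_clock:.0f} MHz")
```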
 
No, because the APU needs specific silicon units to count the instructions. That has to be designed in from the start of the APU. I think Cerny talked about it at one point.
 
I also imagine that not every XSX SoC produced could hold those GPU clocks stable. Going with lower clocks surely helped with yields back in 2020.
 
I don't believe there is anything novel about setting silicon frequency based on instructions. This is what we do today for all silicon: it detects an activity level, increases or decreases the frequency based on that activity level for a specific time frame, and keeps checking at a specific interval.

The PS5 does the same thing. The only novel part is that all chips have to run the same algorithm, meaning that even if a particular chip has headroom to run faster because it's a better bin, it still won't. Whereas on PC, a better bin will be allowed to run a higher frequency at that activity level.

Effectively, they profiled the lowest bin they were willing to accept, and that profile is applied to all their bins. This is like putting a slower profile on a faster SKU in the PC space.
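In code terms, the kind of policy being described looks roughly like the sketch below. This is a generic illustration of an activity-driven governor with a single worst-accepted-bin profile, not any vendor's actual algorithm, and the table values are invented.

```python
# One frequency table, characterised for the worst bin the vendor will
# accept, applied to every die regardless of how good its silicon is.
WORST_BIN_PROFILE = [
    # (activity threshold, allowed clock in MHz) - invented numbers
    (0.90, 2000),
    (0.75, 2100),
    (0.00, 2200),
]

def pick_clock(activity, profile=WORST_BIN_PROFILE):
    """Return the clock allowed at this activity level.

    Every chip runs the same table, so a better-binned die with power or
    thermal headroom still gets the same answer. On PC, a better bin would
    instead be allowed to boost higher at the same activity level.
    """
    for threshold, clock in profile:
        if activity >= threshold:
            return clock
    return profile[-1][1]

# Re-evaluated at a fixed interval from the measured activity level:
for activity in (0.95, 0.80, 0.40):
    print(f"activity {activity:.2f} -> {pick_clock(activity)} MHz")
```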
 
0:01:04 News 01: Has VRR become a crutch for good performance?
0:20:00 News 02: Auto SR impressions!
0:37:26 News 03: New Crazy Taxi is open world, massively multiplayer
0:47:43 News 04: Mark Cerny dishes on PS5 architecture, games
1:03:22 News 05: Sony killing off recordable disc production
1:11:19 News 06: Elden Ring patch notes suggest odd performance fix
1:19:43 News 07: OutRun steeply discounted on Nintendo eShop
1:29:16 Supporter Q1: When do you think we’ll hear more about the PS5 Pro?
1:36:55 Supporter Q2: Could DF provide technical context to Eurogamer or IGN game reviews?
1:47:42 Supporter Q3: Why does the PS5 only support VRR over HDMI 2.1?
1:54:15 Supporter Q4: Why do many developers only offer software Lumen in their UE5 games?
1:56:40 Supporter Q5: Whatever happened to SLI and CrossFire?
2:02:23 Supporter Q6: How should console versions of games expose graphical options?
2:06:17 Supporter Q7: Is there a place for cinematic trailers?

 
For some devs, VRR has become a crutch for good performance. A game at release should not rely on VRR to hit its performance targets. Any game that fails to hit its performance targets has failed at one of its primary goals.
 
It's more like VRR has allowed some devs to offer 60fps modes even if performance is variable thanks to the commonality of VRR displays, where before they'd just lock the game to 30fps and that would be the only option.

Also, just to be very specific here, VRR does not help a developer to 'hit' a performance target. VRR doesn't improve performance.
 
I saw some breakdowns of the new Snapdragon X Elite laptops, and the GPU architecture of those things is closer to Maxwell/Pascal as far as features go, and compute operations are pretty slow. Given that mobile drivers are horrible too, that's the reason games that push graphics on mobile struggle to hit performance targets in line with the theoretical power of the GPU. Unless Qualcomm, Apple and MediaTek start to modernize those GPUs and start writing competent drivers, things will not change.
 
Sorry, but Apple silicon has 3-5 times the GPU performance of Snapdragon X in most titles; that's the M3 with active cooling, as in the cheapest MacBook Pro.
 