Digital Foundry Article Technical Discussion [2024]

That quote reminded me of something I've always wondered: if you remade the Pentium 166 MMX using today's best process, how big would it be?
It has 4.5 million transistors.


The A17 Pro is a "3nm" chip with 19 billion transistors, so if you take an A17 Pro and cut it into ~4,000 pieces, a single piece would potentially be about as big as a "3nm" Pentium 166 MMX :p

(it would be bigger because of the connections to the CPU, but the core could be that size if it weren't for that)
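A quick sanity check on that division, using the transistor counts quoted above and naively assuming die area scales linearly with transistor count:

```python
# Numbers from the posts above; linear area-per-transistor scaling assumed.
a17_pro = 19e9        # A17 Pro transistor count
pentium_mmx = 4.5e6   # Pentium 166 MMX transistor count

print(a17_pro / pentium_mmx)  # ~4222, i.e. roughly 4,000 Pentium-sized pieces
```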
 
That quarter of a century has made the quote more true, not less.

Moreover, if ARM is such a slam dunk over x86, why has it taken decades of companies trying, and finally the resources of a trillion-dollar company, to reach approximate performance parity?
Is it really the CPU that has reached parity, or is it the on-package RAM that is doing the heavy lifting? How would a Zen 5 with 8 GB of the same RAM setup as the M4 perform vs. a regular Zen 5?

Apple gets a large performance increase at the expense of RAM limitations.
 
They don't stand a chance either; only if they are allowed to use 300-500% of the energy might they get double-digit percentage advantages:
The M3 Max is not the right comparison point, as it is a 92B transistor chip, compared to 25B transistors for the 7840U. It would be fairer to compare to the M3 instead, which has the same transistor count. Even ignoring that, the Handbrake test in your link shows the 7940HS consumes 41% more power than the M3 Max to complete the same workload. That's a long way from 300-500% more energy use.
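For concreteness, here is the gap expressed as energy multipliers; the 1.41x figure is from the Handbrake test cited above, and the 3-5x range is how I read the "300-500% the energy" claim being replied to:

```python
# 'X% more energy' as a multiplier over the baseline.
measured = 1.41                        # 7940HS vs M3 Max, per the linked Handbrake test
claimed_low, claimed_high = 3.0, 5.0   # '300-500% the energy' from the earlier post

print(f"measured {measured}x vs claimed {claimed_low}-{claimed_high}x")
```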

The quote you have is almost a quarter of a century old, btw.
That's exactly the point. If the silicon cost of X86 backwards compatibility was already tiny in 2008, we would expect it to be negligible in 2024, when transistor counts have increased by a further 10X.
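A rough illustration of the scaling argument; the 5% starting share is purely hypothetical, and only the 10X transistor growth comes from the post above:

```python
# Hypothetical: treat x86 decode/compat hardware as a roughly fixed
# transistor budget that does not grow with the rest of the chip.
compat_share_2008 = 0.05   # assumed 5% of a 2008-era die (illustrative only)
growth = 10                # transistor counts up ~10X since then, per the post

compat_share_2024 = compat_share_2008 / growth
print(f"{compat_share_2024:.1%}")  # ~0.5% of a 2024-era die
```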
 
The M3 Max is not the right comparison point, as it is a 92B transistor chip, compared to 25B transistors for the 7840U. It would be fairer to compare to the M3 instead, which has the same transistor count. Even ignoring that, the Handbrake test in your link shows the 7940HS consumes 41% more power than the M3 Max to complete the same workload. That's a long way from 300-500% more energy use.


That's exactly the point. If the silicon cost of X86 backwards compatibility was already tiny in 2008, we would expect it to be negligible in 2024, when transistor counts have increased by a further 10X.
My point is that the entire x86 chip is obsolete. Also, this is a single benchmark, so it's probably the best-case scenario for your argument.
 
My point is that the entire x86 chip is obsolete. Also, this is a single benchmark, so it's probably the best-case scenario for your argument.
Your claim would need to be that x86 chips can't be as efficient as ARM chips (or at least in the same ballpark), which you have provided no real evidence for.

The best-case benchmarks were the ones I linked to earlier, showing the M3 losing to the 7840U in multithreaded applications.
 
I think that for many years consoles should have had only custom hardware, as was the case before the 8th gen. Yes, that hardware is harder to program, but the results would be better.
Maybe..., although nVidia and the rest improve their efficiency over time too, so a console with an AMD/Intel/ARM CPU and an nVidia GPU could be a sight to behold. Making equally powerful custom hardware for a console could be very costly.

Consoles used to set trends in the past. The adoption of DVD was mostly due to the PS2 having a DVD player. Then HDTVs appeared when consoles started to support 720p in the PS3/X360 era. When the XB1 came out I got a 1080p Full HD TV. After that, we are in the "4K and FreeSync era", but that's heavily influenced by current computer features.
 
My point is that the entire x86 chip is obsolete

Your point seems to be that x86 has been "obsoleted" (real world: approximately matched in performance) by a chip designed by a company that has zero interest in supplying either Sony or Microsoft with CPUs for future gaming consoles.

Does this prove that ARM can match x86 in performance? Yeah, but I don't think many people ever really doubted that it could, if enough billions were thrown at the problem.

Do the console suppliers have the resources and motivation to try to repeat this feat just for consoles? Well, one of them has the resources but historically has not done well on the custom development front. As for the others, I truly doubt it.

Apple ARM chips sell in the billions. Consoles do not. That matters.
 
Unfortunately, it gets even worse:

[embedded clip from 17 minutes into the M3 Max video]

All you are showing here are comparisons to massively cut-down and power-constrained mobile x86 parts. No one has claimed that ARM can't be more power efficient than x86 chips; it generally can be. But in terms of raw performance, nothing shown here comes close to the performance of full-blown desktop parts like the 14900K and 7900X3D.

And bear in mind that to achieve this "nowhere near" top-end x86 performance (at impressive power consumption, it must be said), Apple is using an enormously expensive 92B-transistor die on a 3nm process for the M3 Max, vs less than 18B transistors on a 5nm process for the 7900X3D. How do you expect something like this to be affordable for a console? And it would still perform far worse than a far, far cheaper high-end x86 CPU (while using far less power, as noted above).

Lol, you mean the expandable storage on PS5?

"lol" No. I mean all of the interconnects. From the PS5 internal SSD to the IO complex, and from the IO complex to the CPU, memory and GPU, The PS5 uses PCIe4 for comms to the SSD and the IO complex which handles communication between the main memory, CPU and GPU is basically just derived from an AMD APU.
 
They don't stand a chance either; only if they are allowed to use 300-500% of the energy might they get double-digit percentage advantages:

So when asked to support your claim that all x86 processors are by their nature "obsolete legacy garbage", you point to a processor made by one of the world's richest companies, that took years and billions to develop using some of the best people in the industry, made on a more advanced node than the x86 processors you're comparing it to, using a frikkin' huge chip, supported by an expensive, exotic memory setup, tested within a market segment that the chip is specifically designed for, made possible because of Apple's insane margins, and ...

.... you look at all that, and conclude that the only real significant factor is ... x86?

Apple wanted to make their own processors for their own somewhat limited line of products and chose Arm, which they were familiar with. But let's be honest: they couldn't have got an x86 license anyway.

The disparity in power consumption is nothing like what you're making out either, unless maybe you push the x86 chips right up the frequency curve, where they'll look bad whatever you do.
 
My point is that the entire x86 chip is obsolete. Also, this is a single benchmark, so it's probably the best-case scenario for your argument.
I'm about to ban you from this thread if you don't start contributing in good faith. You bang on and on with a perspective but offer no corroborating evidence nor any intelligent point/counterpoint to those trying to debate your perspective, vis-à-vis chips of radically different sizes and process nodes.

If the next post I see from you doesn't have relevant benchmarks with links and a well-expressed theory connecting them, beyond repeating again that "x86 is just obsolete legacy garbage", I'll move this thread forward by terminating this line of discussion and your involvement in it.
 
And in the midst of this conversation about design efficiency and architectures... it's almost always based on the idea that games are developed first and foremost with console architecture in mind.

How many of these engines that are truly tailored for PC actually have games built on them which efficiently make use of the PC's architectural strengths? From what I can recall, it's been easier to port from console to PC than to port a PC-specific game to console. That was certainly true in the past, when console architectures were much more different and bespoke... maybe it's less so today, but still.

Then there are other factors which contribute to this idea. Let's be honest... a game being made for a console has fewer excuses to perform sub-par compared to PC. PC by its nature can easily be blamed for all sorts of sub-par performance and handwaved away by devs and pubs. Oh, it could be driver issues, it could be OS issues, could be anti-virus, could be APIs, could be lack of testing due to the huge variety of hardware... and so on. It's not as easily identifiable what the specific issue is... and we already know most PC gamers will blame issues on the user before accepting that the game itself has issues. "Not on my PC", "works perfectly here, just gotta disable control flow guard", etc., etc.

And thus, when you have a huge player base accepting that the issues are of their own doing or decisions, there's less incentive to fix actual issues. Not to mention how many people suggest mods as a method of fixing issues with games... Yes, they can fix issues, but it just systemically perpetuates the idea that inefficient, unoptimized game releases are OK, because someone will fix them eventually... or eventually the hardware will brute-force past the problem.

My point is: if these games were developed with PC in mind first and foremost, received optimization priority... and were then ported to console... the issue would be WAY worse, despite consoles having a more efficient memory architecture. If games were REALLY designed for PC and its split memory architecture... games could be FAR ahead of where they are now. The issue is not architecture... the issue is prioritization. Consoles get priority... that is why games are more optimized.
 
How many of these engines that are truly tailored for PC actually have games built on them which efficiently make use of the PC's architectural strengths? From what I can recall, it's been easier to port from console to PC than to port a PC-specific game to console.
This is gonna be very dependent on many things and there's no good way to make any kind of generalization here, especially without specifying whether you mean on a technical/performance level or on a game design/gameplay/input level.

It's also naturally become easier on the technical side to port games from PC to console since the switch to x86 as standard and more traditional hardware paradigms.

But before that, I think you could make pretty good arguments that going console->PC or PC->console could be equally difficult depending on the specific challenges of the game in question.

There's been a whole shifting picture of what constitutes a 'console' or a 'PC' in terms of hardware and features and whatnot over the years, such that no definitive argument can be made here. I mean, there was a small window in time when many console games would have been straight-up impossible to do on PC, and of course the opposite situation existed as well at times.
 
It's also true that console games can and do stutter as well... signifying that the ACTUAL issue isn't CPU power, but code.
There are many causes of stutter; trying to argue it down to one single thing isn't reasonable. There are, undeniably, stuttering issues that can exist on PC that won't exist on console.

Also, as we've had this argument a thousand times, it's worth noting that while in certain situations there might have been solutions to such stuttering on PC, we don't always know what the cost of doing so would be. It's not super straightforward, and it's not all just a case of 'unoptimized code' or whatever simplified argument people like to bandy about.

EDIT: Sorry, I feel like my last few responses seem to be picking on you, but that was very unintentional. lol
 
My point is: if these games were developed with PC in mind first and foremost, received optimization priority... and were then ported to console... the issue would be WAY worse, despite consoles having a more efficient memory architecture. If games were REALLY designed for PC and its split memory architecture... games could be FAR ahead of where they are now. The issue is not architecture... the issue is prioritization. Consoles get priority... that is why games are more optimized.
We could certainly be shipping much better PC games if it was an industry-wide priority, but code optimized for a broad surface area, defensive against OS prioritization and work stealing, scalable to multiple hardware configurations, etc., is always going to be more complex, more brittle, and worse than code optimized for shared memory and a locked-down OS. In your hypothetical I'd much rather work at a console porting studio than a PC porting studio in today's world. "We have to re-implement this engine in a way that performs well with unified memory and a locked-down, close-to-metal rendering API on a single hardware target you can profile"? Sign me up.
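A toy model of the unified-vs-split memory cost being described; all figures are hypothetical round numbers (PCIe 4.0 x16 effective bandwidth taken as ~31.5 GB/s), not measurements of any real engine:

```python
# Toy model: cost of staging assets from system RAM to VRAM on a split-memory PC,
# a copy that simply doesn't exist on a console-style unified memory pool.
asset_gb = 2.0             # hypothetical streaming batch size
pcie4_x16_gb_s = 31.5      # ~PCIe 4.0 x16 effective bandwidth

unified_upload_ms = 0.0                             # CPU and GPU share one pool
split_upload_ms = asset_gb / pcie4_x16_gb_s * 1000  # explicit upload over PCIe
print(f"{split_upload_ms:.0f} ms of bus traffic per batch")  # ~63 ms
```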
 
We could certainly be shipping much better PC games if it was an industry-wide priority, but code optimized for a broad surface area, defensive against OS prioritization and work stealing, scalable to multiple hardware configurations, etc., is always going to be more complex, more brittle, and worse than code optimized for shared memory and a locked-down OS. In your hypothetical I'd much rather work at a console porting studio than a PC porting studio in today's world. "We have to re-implement this engine in a way that performs well with unified memory and a locked-down, close-to-metal rendering API on a single hardware target you can profile"? Sign me up.
Of course it will never be possible to optimize for PC as well as for console. Every developer would say the same thing, especially skilled low-level programmers... they literally live and breathe for being able to attack a fixed platform and make every optimization possible for it. It's completely understandable. Consoles have better dev tools and profiling tools, and developers prefer to program for them; I get it. But it still doesn't change the fact that there are things that could be possible on PC, and hell, could have been possible for decades now... which were never done... because consoles.

Ironically, Shawn Layden just did a recent interview in which he basically said it was about time to put the console wars to an end and have a single console platform, instead of multiple console platforms which basically serve the exact same market. He says the technical differences between the PS5 and XSX are negligible.


"Xbox, PlayStation, high-end PC, that's almost at a plateau where all things being equal, they're pretty much the same. We'd be in a better world if we could get down to one standard home console technology that we could come together, and get this platform war thing out of the way," the former PlayStation head added.

I agree with him completely. I said MS should just make the Xbox a PC so that devs could simply make one SKU and support all the form factors. Make the next console a partnership between the two: standardized hardware they could license out to 3rd parties to make their own versions. Ultimately a PC with targetable specs, where MS and Sony could both work on improving the OS, APIs, tools, etc., and devs could focus on a single codebase for the most part.

Yeah, I'm dreaming here; maybe certain developers would hate it... but I think it will eventually get there one way or another. It does not make sense to have vastly different architectures, limiting games which require massive budgets to just a single platform.
 