MS leak illustrates new console development cycle

X86 legacy bloat is a huge negative for a console.

I doubt this for a number of reasons.
1. They could easily just ask AMD to remove all the unused legacy cruft from the SOC that they build for future systems - perhaps they even do this already.
2. By using x86 they get to leverage all the CPU-based optimizations that devs are doing on the PC side. I think there has already been a fair amount of work done to improve Nanite's perf with x86 and AVX2 SIMD. Moving to Arm means having to redo a lot of those optimizations all over again. Sure, AVX --> Neon is possible, but it's high-level work, and if the x86 stuff is tuned to console-specific metrics, it's gonna be even harder to port to Arm and get good perf.

The upcoming AAA titles on Apple will be a very interesting data point, but Apple isn't going to be giving away their top-tier Arm core; MS would either have to design their own or license an inferior one from someone else.

Time to go searching the Gen 10 console predictions thread, who had Zen 6 and Navi 5?
 
I didn't say they were, I'm saying the entire "wall of ideas" they have here is insipidly uninspired. There are indie companies demoing a combined handheld gaming PC/VR headset hybrid with detachable controllers, and apparently it's pretty usable, and that's coming out next year and is already more interesting than anything they're even imagining for 2027-28.

Traditional consoles as a homebox are dead, but "hybrid cloud console!" is a meaningless phrase no hw designer worth anything would write down. Hybrid in what way?

Hybrid cloud in that you aren't forced to download and install to play titles. The feature exists in some form or fashion in a number of devices, but I can't think of a consumer electronic device where it's a first-class standard feature.

Instant access to a new game or app over the cloud, with it seamlessly downloading in the background if you so choose. It's nullified by preloading for not-yet-released titles, but most AAA games come with some type of download wait. And given the state of games at release, preloading is only an option if you are willing to take the gamble.

On smartphones the impact of downloading apps is immaterial because it's so fast. It's convenient enough that I would dread the idea of waiting 15 min to an hour to download a smartphone app, never mind a movie or a TV show.

Hybrid cloud devices can bridge the time to access gap between AAA games and smartphone apps and games as well as TV/film/music content. A hybrid console device that works well could help reduce the perception that consoles are nothing more than dressed up PCs.

Plus it alleviates some of the pressure that comes from the ever-growing size of games. You can have instant access to your total library without investing in multiple terabytes of SSD storage.

The difference between this and VR is that it's a QOL feature applicable across the user base, while VR is only applicable to the fraction of gamers that desire such functionality. Furthermore, to me at least, VR doesn't have an innovation issue, it has a content issue.

I have nothing against innovation but refinement is just as important.
 
Aren't new Euro regulations requiring portable devices to have easily replaceable batteries? I remember reading an article about it and how it affects Apple's iPhones and Macs.

I believe there's an out for that if the battery has enough longevity, or I guess enough of a buffer which is likely readily achievable if you were to hypothetically limit max charge capacity these days.

The regulation also concerns replaceable batteries (replacement can require disassembly with tools), not necessarily batteries that are swappable on the go.
 
I doubt this for a number of reasons.
1. They could easily just ask AMD to remove all the unused legacy cruft from the SOC that they build for future systems - perhaps they even do this already.
No. Not easily. It will require designing and validating a new *architecture* (not just microarchitecture and implementation). It'll be hilariously cost-prohibitive unless it's part of a larger effort to design a more lean ISA for diverse market segments.

2. By using x86 they get to leverage all the CPU based optimizations that devs are doing on the PC side.
Yes.

In general moving architectures is very very painful and you need a substantially strong motivation to do so. But I'll admit that that motivation exists -- vertical integration is everybody's game these days in the consumer space. ARM64 is a painful but necessary step towards more vertical integration and reducing MS's dependence on horizontal vendors. Whether that will happen in the next gen or not is anybody's guess.

On the sentiment that NVIDIA may be at play here -- it's very unlikely. ARM64 is a way for MS to move *away* from chip/IP vendor dependence, not towards it. They want to move everything in-house.
 
MS worked with Qualcomm on the SQ1 and SQ2 for Windows, and the next-gen version is rumored to be faster than Apple's M2. Maybe they'd use the Oryon cores from that chip?
 
By this point I think tools are sufficiently abstracted and mature that the CPU makes no difference to devs. Compile, build, and run.
I would like to point out that it's not that simple. ARM has a noticeably weaker memory model than x86. A lot of multithreaded code written for x86 will compile and run unmodified on ARM, for a while...

... Until it crashes with weird heisenbugs caused by the fact that the developer was depending on the stronger memory model to function. This cannot be automatically fixed by your compiler and toolchain, because it can require high-level changes that are not visible to them. (Well, it can be automatically fixed, by making every load an acquire and every store a release, because that's the equivalent of x86 semantics, but this would absolutely crater performance.)

The fundamental problem is that on ARM, you need to understand how multithreading works a bit better than you need to on x86.
 
ARM’s weaker consistency does not mean a particular implementation can’t provide TSO though. (As I understand it, the Apple M1 has a TSO mode, and it’s one of the reasons emulating x86 doesn’t crater performance as much as you’d expect on that CPU.)
 
ARM’s weaker consistency does not mean a particular implementation can’t provide TSO though. (As I understand it, the Apple M1 has a TSO mode, and it’s one of the reasons emulating x86 doesn’t crater performance as much as you’d expect on that CPU.)
Huh. I wasn't aware. If it does have a performant TSO mode, why isn't it enabled by default? I suppose there may be a small-ish perf hit, say 10% or so.
 
No. Not easily. It will require designing and validating a new *architecture* (not just microarchitecture and implementation). It'll be hilariously cost-prohibitive unless it's part of a larger effort to design a more lean ISA for diverse market segments.


Yes.

In general moving architectures is very very painful and you need a substantially strong motivation to do so. But I'll admit that that motivation exists -- vertical integration is everybody's game these days in the consumer space. ARM64 is a painful but necessary step towards more vertical integration and reducing MS's dependence on horizontal vendors. Whether that will happen in the next gen or not is anybody's guess.

On the sentiment that NVIDIA may be at play here -- it's very unlikely. ARM64 is a way for MS to move *away* from chip/IP vendor dependence, not towards it. They want to move everything in-house.
Personally, I don't see the benefit of moving to Nvidia if you cannot get hardware backwards compatibility. I was watching the latest DF and they discussed an idea proposed by someone else of doing backwards compatibility via the cloud. That's an automatic own goal. From a marketing standpoint, I do not think an explanation is needed as to why offering backwards compatibility via a paid cloud service is a bad idea, especially when your competitors are guaranteed to offer hardware backwards compatibility.

More importantly, there's a reason why companies avoid Nvidia if possible: you'll just get bad value for the money spent on the silicon. Sony ditched Nvidia due to the underperforming GPU and the horrible contract Nvidia offered over the lifetime of the chip. Microsoft also ditched Nvidia in the past. Nvidia is simply the type of company that'll try to sell you a 4060 Ti at $500. The whole objective of making the next generation of consoles is to make money, and Nvidia will not help you do that at all.

With regards to moving to ARM64, that's something I do not think Microsoft should even consider. I don't think Microsoft is competent/capable enough to license Arm and design something performant, and if they use an existing third-party core (not Apple), the performance is just going to pale in comparison. The only logical decision in my opinion is to stick with AMD for the CPU and maybe see if AMD/Intel can collaborate on the GPU portion, license XeSS, etc. Then again, this is the same company that managed to release a console that's "30%" more powerful than its competitor's, yet most of the time its performance is either on par or worse. My confidence in their ability to execute effectively could not be lower.
 
If they use an existing third-party core (not Apple), the performance is just going to pale in comparison.
Eh, the bog-standard off-the-shelf high-end ARM core (Cortex-X3) already has higher IPC than anything AMD makes, and they are iterating faster. The "problem" with ARM is clock speed, but consoles don't push clocks anyway due to power consumption and heat, and the high-end core in phones already reaches ~PS5 clocks.
 
A theoretical console using whatever Apple’s current tech is would likely be untouchable in performance at the TDP consoles play in.
 