Feasibility of an upgradeable or forwards-compatible console *spawn*

Interesting. I didn't create a separate thread for this because I didn't think it was quite at that stage, but it seems to be heading down the path we predicted here.
Speaking to reporters, Phil Spencer said that the Xbox One could see a future in which it is upgraded rather than replaced by new consoles. Spencer was talking about how Microsoft has sought to align its Windows 10 and Xbox One development under the internal "Universal Windows Platform" while offering backwards compatibility for many Xbox 360 games, now playable on Xbox One. He was addressing the concerns of some Xbox One owners that the exclusives destined for that console are appearing on PC, eroding the value of owning a console.

http://www.polygon.com/2016/3/1/11121666/xbox-one-hardware-upgrades-phil-spencer-microsoft

I imagine explicit multi-adapter support must be a factor here.
 
A new device wouldn't need to be pushed out every year - only when there was a point to it. For example, 14 nm and HBM2 would allow a smaller, faster, cooler device, so there would be a reason for that.

When you think about it, 5-8 years is a long time to go without being able to take advantage of architectural advances and, perhaps more importantly, without being able to adapt to changes in user behaviour and display devices.

14 nm, Zen CPUs, and adaptive vsync over HDMI would on their own make for some worthwhile advances in console technology - especially if software support for old and new titles were uninterrupted. Making people wait another three years for adaptive sync and 60 Hz 4K support doesn't seem in a customer's best interests. If other people having access to that makes your life worse, then you're the problem, not the other folks.
 
It can certainly happen; imagine a GPU card switchable system.
Now imagine the GPU being an APU.

The base spec stays the same for 5-7 years; only the number of processing units, TDP (lower), and frequency change with each newer APU. A maximum of three different APUs are released for each base spec.
The first and lowest model is used for development; after that, framerate, resolution, lighting precision, and so on get turned up for the better-specced versions.

This is the only method to minimise both game development and hardware development costs. Users can have their APU upgraded by MS, to ensure the cooling systems are mounted correctly, at reduced cost; the old APU is swapped out and sold in a new casing as the base model.

You do need extra cooling capacity as well as extra PSU headroom at the beginning of each base spec.

Still, the problem is they will never fully target the latest spec; GTA5 was developed for about 192 MB - that is megabytes, I repeat, megabytes - of RAM. Imagine how a properly developed GTA6 would look on PS4 compared with the GTA5 remaster. The new APUs would only receive "remasters" instead of games that actually push the hardware.

So even though I proposed the best possible system, I still think it's really stupid. MS is losing the console battle, hard; soon it won't even be able to buy itself exclusivity. There's no way in hell that Titanfall 2 or Tomb Raider 3 will ever be exclusive, so the idea of a new spec revitalising the brand is, sadly, wishful thinking at best.
 
User-replaceable APUs aren't going to happen in the console space (system integration and risk rule this out), and I don't think that's what Phil Spencer is even suggesting.

Games not targeting the latest system as the "base spec" isn't an issue: if the uprated system didn't exist, the "base spec" wouldn't have improved anyway, so you've lost nothing. Eventually the older systems would no longer be targeted, just as they aren't in the traditional console market, and the "base spec" would move on. It's just that in the meantime, the platform isn't stagnating for 6, 7, 8 years.

Never "fully targeting" the latest spec is still a better situation than being "completely stuck" on the "ancient spec".

And in reality? PS4 has a ~45% faster GPU, and despite the "base spec" being the XBone GPU, games still look and run better on it.

And GTAV (developed for ~500 MB of RAM) is a great example of just how much better a game can look and play when you move beyond its "base spec".

TL;DR: an uprated system would still offer a much better experience than the "base spec"; only offering the "base spec" is not a win; eventually the uprated system can become the "base spec" when a more highly uprated system is introduced. The key is continuity of the software base - something no-one has really been in a position to offer before.
 
Trying to imagine how this would work. With PC, the problem is you can have so many configurations of CPU, memory, and GPU. Someone could have a high-end CPU with 16 GB of RAM and a low-end GPU; someone else could have a low-end CPU, 32 GB of RAM, and a mid-tier GPU. On top of that, you have all kinds of configurations of each part from different vendors. They can't have that on a console. There needs to be a limited number of configurations so it's easy for both the consumer and developers.

So my guess would be you'd need one module that can be upgraded every two to four years. If you start getting into more than one module to swap, you're back into PC territory with permutations. What would need to be included? The CPU + GPU (APU) and RAM, and along with those, cooling and HDMI ports. Cooling would have to change with each upgrade, and HDMI ports need to change to keep up with TV specs. I'm imagining something that looks like a big Titan GPU, fully enclosed with all of those parts inside, that plugs into a slot in your console. Open the case, swap the modules, done.

Seems kind of weird. They end up with a module that replaces most of the major parts in the console; what's left behind is the disc drive and the power supply. How do they size a power supply? At that point, why not just release a new console every three or four years and make sure they're forwards compatible, so devs support two generations at a time? For example, devs would make one version of a game (same executable, same download) that runs on Xbox One and Xbox Two with different graphical settings. Then when Xbox Three comes out, Xbox One gets dropped and devs support Xbox Three and Xbox Two. You buy a console and it's good for 6-8 years, and Microsoft can sell the newest at a premium and the older at a discounted price.
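That rolling support window amounts to a simple policy: always target the newest two generations, dropping the oldest whenever new hardware ships. A minimal sketch (the console names and release order here are purely illustrative, not anything Microsoft has announced):

```python
# Hypothetical sketch of a rolling two-generation support window.
# Console names are illustrative placeholders only.

def supported_targets(released_consoles, window=2):
    """Return the consoles a developer targets: the newest `window`
    generations; anything older is dropped as new hardware ships."""
    return released_consoles[-window:]

timeline = ["Xbox One", "Xbox Two", "Xbox Three"]

# After Xbox Two ships: devs target Xbox One and Xbox Two.
print(supported_targets(timeline[:2]))   # ['Xbox One', 'Xbox Two']

# After Xbox Three ships: Xbox One falls out of the window.
print(supported_targets(timeline))       # ['Xbox Two', 'Xbox Three']
```

The appeal is that the policy is trivially predictable for both buyers and developers, unlike the open-ended permutations of the PC space.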

Upgradeable sounds really cool, but I don't know how it doesn't turn into a mess if they go with more than one upgradeable module. I suppose they could split game hardware from system hardware: a low-end CPU and GPU that run the OS, plus an upgradeable game module that includes all of the gaming hardware. But then you get into the issue of how often you have to upgrade the base unit so they can add more OS features or upgrade the spec of the HDMI port.
 
The Xbox One launched in holiday 2013, so this will be its fourth holiday on sale. If they replace it during its fifth holiday with a new, upgraded console, a consumer should still get a full 5-7 years of life out of the Xbox One, which is industry standard.

What Phil Spencer suggested is that the Xbone will evolve through hardware upgrades, rather than end its lifetime with the introduction of a new console.
This strongly suggests that the 2013 Xbone will be able to run the same games as a 2017 Xbone II, though I imagine at lower resolution, without VR/AR support, and with lower shadow detail, lower framerate, and other things that can be toned down because they're not always crucial for gameplay.
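One way to picture "same games, toned-down settings" is a per-tier preset table: a single build detects the hardware it is running on and picks the matching graphics preset. The tier identifiers and setting values below are made up for illustration:

```python
# Illustrative only: one executable selecting a graphics preset per
# hardware tier. Tier IDs and values are hypothetical placeholders.

PRESETS = {
    "xbone_2013":    {"resolution": (1600, 900),  "fps_cap": 30, "vr": False},
    "xbone_ii_2017": {"resolution": (3840, 2160), "fps_cap": 60, "vr": True},
}

def settings_for(hardware_id):
    # Unknown hardware falls back to the base spec's preset, so the
    # oldest console always defines the guaranteed experience.
    return PRESETS.get(hardware_id, PRESETS["xbone_2013"])

print(settings_for("xbone_ii_2017")["fps_cap"])  # 60
print(settings_for("xbone_2013")["vr"])          # False
```

With only two or three fixed tiers, this stays a lookup rather than the sprawling options screens PC games need.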

The question I raised was where/when the 2013 Xbone is supposed to stop supporting new titles coming for the Xbone II or III or whatever.
 
I think this is more about cross-compatible hardware than upgradeable hardware. There are pros and cons, but I do think cross-compatibility will happen whether the hardware gap is 3 years or a more traditional 5-7 years.

Con
  • Devs need to build for two or more targets. But then, they kind of do that already, targeting X360 and XB1 in 2014-2015 games. In fact, cross-compatibility saves cost.
  • Can't optimize for one set of hardware, coding to the metal. While true, there's nothing stopping a dev from targeting only the latest; they just know that's a bad sales strategy.
  • Might create market confusion. Is my hardware compatible? This might happen and will need clear game box labeling.
  • Resentment from buyers with "old" hardware. Not sure how widespread, given iPad and iPhone have done this for years without a ton of resentment.


Pro
  • No more hard generational restarts. XB2 and PS5 will already come with a huge library of games day one.
  • Old consoles continue to get support well after new hardware is released, essentially becoming min spec machines.
  • Mature OS in new hardware--XB1 and PS4 had sparse features in the beginning because new OS had to be written.
  • No waiting 6-7 years for new hardware while PC is running laps around consoles.
 
The question I raised was where/when the 2013 Xbone is supposed to stop supporting new titles coming for the Xbone II or III or whatever.

This is a cluster fu** waiting to happen. The reasoning behind console gaming was that all users could have (enjoy) the same experience across one configuration/environment; that's why PC gaming is so fractured in that sense. Who really wants to invest money/time into a console with a 3-year (or less) life expectancy? What developer(s) would want to even support this problematic scheme?
 
This is a cluster fu** waiting to happen. The reasoning behind console gaming was that all users could have (enjoy) the same experience without messy configurations. That's why PC gaming is so fractured in that sense. Who really wants to invest money/time into a console with a 3-year (or less) life expectancy? What developer(s) would want to even support this problematic scheme?

Any developers who only want to deal with 2 target specs instead of the 123456789 bazillion different specs that can and do exist in PC land.
 
Any developers who only want to deal with 2 target specs instead of the 123456789 bazillion different specs that can and do exist in PC land.

Which is why PC gaming (the user experience) is so dysfunctional. I can understand "why" Microsoft wants to do it - not having their apps/games tied to one particular hardware configuration, by offering more and more scalable hardware solutions (i.e. PC-style) within a given period (3-year cycle). However, console gamers aren't PC gamers, for those very reasons...
 
Which is why PC gaming (the user experience) is so dysfunctional. I can understand "why" Microsoft wants to do it - not having their apps/games tied to one particular hardware configuration (i.e. PC). However, console gamers aren't PC gamers, for many reasons...

That's why I think it'll be very specific upgrade(s) at specific dates. It won't be like PC, where you can swap parts individually and end up with many permutations of hardware. Otherwise, why would they even make a console at all? But there aren't any details yet, so who knows.
 
Which is why PC gaming (the user experience) is so dysfunctional. I can understand "why" Microsoft wants to do it - not having their apps/games tied to one particular hardware configuration, by offering more and more scalable hardware solutions (i.e. PC-style) within a given period (3-year cycle). However, console gamers aren't PC gamers, for those very reasons...

Console developers are well used to developing for two generations of a vendor's systems at once, even across radically different architectures and no binary compatibility whatsoever (e.g. PS3 -> PS4; 360 -> Xbone).

So long as there is a straightforward, well-communicated roadmap with compatibility, consistent tools, and consistent deployment methods, this would remain a million miles from the confusing, infinite-permutation, driver-breaking world of the PC.

A device's life expectancy is not the same thing as its time at the top of the tree.

Edit: sorry, I thought this was the appropriate thread, but we've gone OT in the exclusives thread again. I'll post this in there; a mod can delete this if they wish (no delete?).
 
That's why I think it'll be very specific upgrade(s) at specific dates. It won't be like PC, where you can swap parts individually and end up with many permutations of hardware. Otherwise, why would they even make a console at all? But there aren't any details yet, so who knows.

Hopefully, they don't follow (go down) the same disastrous path of "upgrades" and "add-ons" that previous console manufacturers took. Console gamers are very fickle about these types of things... especially the associated cost of such upgrades/add-ons.
 
Which is why PC gaming (the user experience) is so dysfunctional. I can understand "why" Microsoft wants to do it - not having their apps/games tied to one particular hardware configuration, by offering more and more scalable hardware solutions (i.e. PC-style) within a given period (3-year cycle). However, console gamers aren't PC gamers, for those very reasons...

Console developers are well used to developing for two generations of a vendor's systems at once, even across radically different architectures with no binary compatibility whatsoever (e.g. PS3 -> PS4; 360 -> Xbone). What P.S. (seems to) propose would be far more straightforward than that.

So long as there is a straightforward, well-communicated roadmap with compatibility, consistent tools, and consistent deployment methods, this would remain a million miles from the confusing, infinite-permutation, driver-breaking world of the PC.

A device's life expectancy is not the same thing as its time at the top of the tree.
 
So long as there is a straightforward, well-communicated roadmap with compatibility, consistent tools, and consistent deployment methods, this would remain a million miles from the confusing, infinite-permutation, driver-breaking world of the PC.

If this is the case (which seems like a solid foundation)... haven't we had developers here argue against this? Arguing that optimizations (sync, timing, debugging, etc.) will still be required no matter which platform revision the game code lands on - in essence driving up development hours, cost, and even delays.

Mind you, I'm not totally against this. I'm just wondering what happened to the mindset (thoughts/opinions) that such a thing would be terrible for the console industry - versus now?!
 
If this is the case (which seems like a solid foundation)... haven't we had developers here argue against this? Arguing that optimizations (sync, timing, debugging, etc.) will still be required no matter which platform revision the game code lands on - in essence driving up development hours, cost, and even delays.

Mind you, I'm not totally against this. I'm just wondering what happened to the mindset (thoughts/opinions) that such a thing would be terrible for the console industry - versus now?!
Is that something they currently do on a PC? If not, then why on a console?
 