Feasibility of an upgradeable or forward-compatible console *spawn*

The market would be fragmented in terms of hardware, but as long as it's not fragmented in terms of software, it doesn't matter.

Most big game devs would target the high end even if that segment is a tenth of the overall userbase. Why? Marketing. It would be no different from the PC arena.

Backward and forward compatibility should be a lot easier if you're talking about just 2-3 different SKUs. The PC does just fine handling thousands of combinations of differing CPUs, GPUs, motherboards, RAM, optical drives, sound cards and so on, all from different vendors, while MS or Sony would assert complete control over just a few iterations of their consoles.

The problem I see with this scenario is that it won't be cheap. You think MS's HDD pricing is horrible? You can only guess what MS or Sony would charge for a CPU-GPU module that sells only 10K-30K a month versus 200K-300K. Hardware costs to consumers would have to go up across the board, because MS and Sony would be selling less of each modularized component, as well as fewer base consoles, in a fragmented environment.

To alleviate that situation, the manufacturers would basically have to sell all new consoles with the upgrades built in. But then you just end up with really short generations, with older-gen owners able to upgrade via modularized components, which again means higher hardware costs to consumers.
 
Look at Crysis. We know what happens when a developer releases a PC game that is "forward compatible" (as in, it can only be maxed out in the future):
it gets heavily criticized for being poorly optimized, because people with $3000 overclocked multi-graphics-card systems can't play it maxed out.

You wouldn't necessarily have that problem on a console with limited fragmentation, because you wouldn't need to give users the ability to tweak settings. It was very obvious that even with the highest-end PC at the time, you couldn't play Crysis with everything at max. In a console setting, everything can be set by the software, because everyone is already accustomed to the limited tweaking available to console users. You can't complain about something you don't know is there, unless publishers are foolish enough to market their console titles at settings for hardware that isn't available on the market.
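To make the "everything can be set by the software" point concrete, here's a minimal sketch of SKU-driven settings selection. The SKU names and preset values are invented for illustration; the only real idea is that the game, not the user, maps detected hardware to a preset, with unknown (future) hardware falling back to the base tier.

```python
# Hypothetical sketch: on a console family with a few fixed SKUs, the game
# picks its quality preset from the detected hardware. All names/values here
# are made up for illustration.

PRESETS = {
    "base":    {"resolution": (1280, 720),  "shadows": "low",  "fps_cap": 30},
    "mid":     {"resolution": (1600, 900),  "shadows": "med",  "fps_cap": 30},
    "premium": {"resolution": (1920, 1080), "shadows": "high", "fps_cap": 60},
}

def settings_for(sku: str) -> dict:
    """Return render settings for a console SKU.

    Unknown SKUs (e.g. hardware released after the game shipped) fall back
    to the base preset -- that fallback is what keeps old games running on
    new iterations without exposing any tweakable settings to the user.
    """
    return PRESETS.get(sku, PRESETS["base"])
```

The user never sees a settings menu, so nobody can complain about an unreachable "max" the way they did with Crysis.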
 
Let me tell you what would happen because the same thing happens now in X platform titles.

Publishers look at which SKU is predominantly bought (I'd bet on the cheaper one), target that for development, and either throw in some marginal improvements for the high end or find some way to hack it down for the low end.

Consoles are consoles because there is no real hardware fragmentation.

There are other issues. Let's say I sell a base console with a slot to plug in a second processor. I'd bet that before year 1 is out, Datel or someone else would be taking the manufacturer to court for the right to provide that processor. Maybe they can sell you a slightly faster processor than the OEM's, and you're off on your way to many different SKUs and a testing nightmare.

Having said all of that I actually believe that the long term future of consoles is a software platform/service, and not hardware based at all.
 
In my world view, it's possible. Times are different now compared to the last decade. Hardware upgrades can be managed if the platform holder has tight software and network policies. It should not be a problem for third parties to offer the upgrades either. I believe MS has strict rules for its new Win8 (tablet?) hardware partners. They want to standardize the user experience too.

Also, I reckon that on the software side people are too ingrained in the existing UI. Something like the Magic Mirror or EyeDentify lifestyle apps should be possible now. Developers offer only incremental software benefits because no one on the market shows them how it's done. :p
 
Let me tell you what would happen because the same thing happens now in X platform titles.

Publishers look at which SKU is predominantly bought (I'd bet on the cheaper one), target that for development, and either throw in some marginal improvements for the high end or find some way to hack it down for the low end.

Developers of cheap wares, or those more interested in mainstream gamers, would probably take such an approach. But how many high-profile PC titles that target the core PC market take that approach? And why would console publishers be any different? How marketable would a COD, Gears, Halo, UC, GT, Forza, Fable, AC, Madden, KZ, or just about any high-profile console title with a huge following look if it ignored the high-end specs?

It's practical to go with the lowest common denominator when the gap is small, as it is with the 360 and the PS3. But if the Wii were a third-party-centric console, do you honestly think Activision and the other big publishers would limit their wares to Wii-type visuals while the GTs, UCs, KZs, Gears, and Halos of the world made their titles look last-gen by comparison?

Consoles are consoles because there is no real hardware fragmentation.

There are other issues. Let's say I sell a base console with a slot to plug in a second processor. I'd bet that before year 1 is out, Datel or someone else would be taking the manufacturer to court for the right to provide that processor. Maybe they can sell you a slightly faster processor than the OEM's, and you're off on your way to many different SKUs and a testing nightmare.

People make console accessories because they are cheap to manufacture and have great profit margins. A third-party GPU or CPU would have major problems: it wouldn't be cheap to reproduce, and reverse engineering would be fairly impractical since the manufacturer would provide no help. A memory card or a controller is one thing; a GPU or CPU is another.
 
If the console uses an off-the-shelf GPU, then it would be OK. If they use a custom GPU, third parties can still license it from the platform holder. As long as the potential to earn $$$ is greater, and the part is fully compliant technically, it may be fine.
 
People make console accessories because they are cheap to manufacture and have great profit margins. A third-party GPU or CPU would have major problems: it wouldn't be cheap to reproduce, and reverse engineering would be fairly impractical since the manufacturer would provide no help. A memory card or a controller is one thing; a GPU or CPU is another.
I don't think 3rd-party upgrade knockoffs would be as big a problem as cheat cards. And an expansion like that, whether for memory or another processor, is wide open for hacking. In theory the console companies could sell their device at lower margins because they make money on the software - they could sell a GT game expansion pack on their network, for example, so you buy the game and then buy an extra five-buck HD texture pack, that sort of thing. That's an option these days, though one they'd have to advertise damned clearly so people don't expect better content from the hardware alone.

The software-only future is definitely the future; I agree with ERP. But that might be far enough off that there'll be a couple more console iterations first. Principally, though, I don't see quite what the console companies would get out of an upgradeable console. I suppose there's competition: hardcore gamers wanting the best experience, and willing to pay for it, will choose the platform they can upgrade. And if 'GT packs' were sold, that'd be a good incentive.
 
There is another movement in server-based computing. The electric bill is getting ludicrously high, so they are looking to push some of the computing power back to the client boxes. ^_^

I wouldn't be surprised with some sort of hybrid P2P + server-based infrastructure in the (near) future.
 
I don't think 3rd party upgrade knockoffs would be as big a problem as cheat cards. And expansion like that, whether for memory or another processor, is wide open for hacking.

Knock-offs of controllers or memory are easy; it's not like the underlying tech IP is owned by the manufacturer. Knocking off a CPU or GPU is another monster altogether. Not only would you incur the wrath of MS, Sony, or Nintendo, you would incur the wrath of ATI, IBM, or Nvidia as well due to IP infringement. Never mind that logistically pulling off a knock-off CPU or GPU is outright impossible for all but a few companies. And most of those companies have relationships with MS, Sony, Nvidia, IBM, or ATI and wouldn't jeopardize those relationships over some knock-off accessories.
 
They do have one option to upgrade their current consoles... they can do an OnLive and give people the option of using their current box to play next-generation titles. So in 2013 or 2014 they can give people the choice: instead of laying out $300-400 to buy a new box, they can subscribe to a new box for, say, $100 a year or whatever, with the games being additional or part of a subscription on top.
 
The idea seems impractical to me. The initial console is going to be some combination of more expensive/less powerful than it would otherwise be due to the inherently increased complexity of the design. Then you add overhead to the console OS to allow for multiple hardware configurations. Then there's all the additional QA testing of software and peripherals to ensure compatibility with all of the different hardware configurations. Most of us are familiar with how much of a nightmare PC support is for games (check any PC title's support forums). Why would any sane console developer want to introduce even more of PC gaming's problems to console gaming than have already been imported during this gen?
 
The idea seems impractical to me. The initial console is going to be some combination of more expensive/less powerful than it would otherwise be due to the inherently increased complexity of the design. Then you add overhead to the console OS to allow for multiple hardware configurations.
Nah, instead of creating an upgradeable console, just create a stackable console. That way you have the same CPU, GPU etc (as well as identical software driving it all) in every configuration, cutting out the need to do lots of specific testing. Adding another console would increase available CPU and GPU processing power in a predictable fashion, and there would be no components with different performance characteristics, nor any 'knock-offs'.

There are already methods to scale CPU and GPU performance with the number of available processors, with good results. In console form, the equivalent of SLI could be even more efficient, since support for it would be built right into the base APIs and games would be written with it in mind.
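The scaling claim is easy to sketch: because every node in a stackable setup is identical, work can be divided in a fully predictable way, with no per-device performance profiling. Below is a toy illustration (the function name and the scanline work model are invented, not from any real console API) that splits a frame's scanlines evenly across N identical nodes.

```python
# Toy sketch of stackable-console work division: N identical nodes each
# take a contiguous band of scanlines. Because the nodes have the same
# performance characteristics, an even split is also a balanced split.

def split_scanlines(height: int, nodes: int) -> list[range]:
    """Assign each node a contiguous band of scanlines, as even as possible."""
    base, extra = divmod(height, nodes)
    bands, start = [], 0
    for i in range(nodes):
        # The first `extra` nodes absorb the remainder, one line each.
        size = base + (1 if i < extra else 0)
        bands.append(range(start, start + size))
        start += size
    return bands

# e.g. three stacked consoles rendering a 1080-line frame:
# split_scanlines(1080, 3) -> [range(0, 360), range(360, 720), range(720, 1080)]
```

A real implementation would split command buffers or tiles rather than raw scanlines, but the predictability argument is the same: identical nodes make static partitioning viable.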

An efficient and fast bus would be needed to tie the consoles together, but that should be solvable too.

Still, you're probably right it's impractical in reality, but it's within the realm of possibilities on a theoretical level.
 
Yap. All the needed components and tech are already shipping in the consumer market. ^_^ More solutions are on the way.
It's more of a business question.
 
Nah, instead of creating an upgradeable console, just create a stackable console.
Yeah. Treat it as distributed computing. Could even sell full consoles with optical drive and stuff, and "processing nodes" that just have the relevant processors. It'd be very expensive though, and a very niche market I'm sure. $150 to play the same games, only with a better framerate, won't likely see mainstream adoption. You'd need a convincing business argument to incorporate this into the design; otherwise the hardware engineers can just throw in a fast network port and say, "if any devs want to try it, there you go," leaving no real implementation.
 
Nah, instead of creating an upgradeable console, just create a stackable console. That way you have the same CPU, GPU etc (as well as identical software driving it all) in every configuration, cutting out the need to do lots of specific testing.

Yeah. Treat it as distributed computing. Could even sell full consoles with optical drive and stuff, and "processing nodes" that just have the relevant processors. It'd be very expensive though, and a very niche market I'm sure.


That's the same approach as the Mega Drive upgrades... It didn't go well exactly because it was expensive.

Developers making console games want to target as large a market as possible, especially the ones developing AAA titles. They won't bother with niche markets on consoles...
For that they already have the PC, which can create a much bigger "halo effect" because of much more powerful hardware.


If a console maker ever tried to make an upgradeable console, the upgrades would have to be as cheap as possible in order to increase adoption and stay relevant.
That's why the 4MB RAM upgrade for the N64 was a mild success and the Mega Drive upgrades were a total failure.

That said, anything above a $100-$120 upgrade (good for a second graphics card matching the internal GPU, a couple of years after the console is released) would be too risky.
 
Nah, instead of creating an upgradeable console, just create a stackable console. Still, you're probably right it's impractical in reality, but it's within the realm of possibilities on a theoretical level.

Yeah. Treat it as distributed computing. Could even sell full consoles with optical drive and stuff, and "processing nodes" that just have the relevant processors. It'd be very expensive though, and a very niche market I'm sure.

Well sure, it's possible. It's just not a very good idea. And the distributed computing model isn't a great fit for many applications, IMO.
 
That's the same approach as the Mega Drive upgrades... It didn't go well exactly because it was expensive.
The Mega Drive upgrades weren't anything like what I described. They were almost, or even entirely, dissimilar in hardware architecture to the base console they attached to, and there was no inherent support for these expansions. Games needed explicit support for the expansion devices to do anything at all, or else they'd just be an expensive plastic box sitting there looking ridiculous.

Well sure, it's possible. It's just not a very good idea. And the distributed computing model isn't a great fit for many applications, IMO.
It's a much better idea than anything else proposed in this thread so far. :p And as for distributed computing, tell that to AMD, for example, who have used distributed computing with great success, as far as scalability is concerned, in their server CPUs. ;)
 
The distributed computing thing is exactly how Nuon was supposed to work, although that was largely to allow DVD player manufacturers to keep costs down and still claim Nuon compatibility.
They had 4-core chips and were later to release 16-core chips, with the software supposed to be designed to scale.
They released 8 titles, the best of which had to be recalled because it wasn't compatible with all configurations.
 
It could be a problem of mismanagement or failed execution though.

I am questioning whether it's a niche concept. There are consistent rumors about iOS and Mac support for Thunderbolt across all SKUs. Recently, Intel declared that it wants to deliver a 50Gbps interconnect within 5 years. IBM announced a memory breakthrough for small and large devices.

It may be possible to deliver half a console for entry-level applications, share computing resources between current- and next-gen consoles, or share h/w resources between PCs, appliances, and consoles - all the way up to a scaled-up homogeneous cluster for media-heavy apps like what Grall mentioned. The concept may be more appealing if all the parts are cheap, high-volume, off-the-shelf components, though.
 