Feasibility of an upgradeable or forwards compatible console *spawn*

where devs already accommodate PC ports

That's a misconception: most devs don't accommodate them at all; they drop the code on the doorstep of an external team and tell them to deal with it.
Obviously PC-centric devs moving to console likely do the work themselves, and I think it's likely that for a lot of next-gen games PC will actually be the primary development SKU, though control schemes will still be tailored towards consoles, where the bulk of the buyers are.

My issue with a spec level that changes annually is that you lose what makes a console a console: optimizing for a single target.

As an aside, it's difficult to compare mobile development and console development. Mobile is much closer to what console development was 20 years ago: games built in weeks or months by small teams, versus games taking years, built by huge teams.
 

Specific console devs might not handle PC ports, but the costs of PC ports are ultimately borne by the publishers. And if porting to the PC were so cost-intensive because of the level of fragmentation, pubs wouldn't greenlight PC ports. The big pubs already support multiple hardware configurations when porting to different consoles and PCs. Providing support for multiple ports within a platform is a lot easier and more cost-friendly than supporting different hardware.

Smartphone development may be akin to console development 20 years ago, but console games never sold new for 99 cents. You may be able to push out apps in relatively short periods of time, but the Android and iOS libraries are orders of magnitude bigger than any console's ever was. You're providing cheaper wares in a more highly competitive market while accommodating annual refreshes.

What makes a console is that it's a game-playing CE device that readily accommodates everyday users and provides an experience that otherwise only a PC can provide, where PCs are either more expensive, require greater user knowledge and/or are more logistically involved than consoles. It's simply not a matter of one SKU per brand. It's not fragmentation of the hardware per se that separates consoles from PCs; it's the fragmentation that exists on multiple levels. A few intra-generational refreshes are not going to destroy the line between a console and a PC.
 
My issue with a spec level that changes annually is that you lose what makes a console a console: optimizing for a single target.
I disagree. A console was a device for playing games. That it was closed hardware meant you could optimise for it, but that was neither the purpose nor the selling point. People bought consoles to play games, and if the new boxes play games, the consumers are happy (and oblivious to what's running under the hood; let's be honest, we've no idea how optimised games are or aren't!).

What you lose in maximum performance, you gain in flexibility. Would anyone want their games to run 20% slower? Nope. Does anyone want to lose access to all their old favourite games? Probably not. Would everyone like to play their existing library at improved quality? Yep! Would they all want to pay for that option? Not all, but some would, and I'm guessing a sizeable portion, too, once they see it in operation. If we imagine XB supports forwards compatibility and PS doesn't, then when both consoles launch, gamers will point at XB1's inferior games and identify its shortcomings. Two years later, when XB1.5 launches, those XB1 gamers who upgrade will laugh at PS owners who are stuck with their inferior experience. There'll no doubt be arguments about spending more money, but the experience will be better for those who make that choice. Two years after that, the XB gamers laugh again. And two years after that, when PS5 releases with no BC, XB just progresses to another iteration.

I think that's the better user experience overall. I think that's also tied in with cross-device applications and services as Joker wants. Those same games run on all your consoles and PC and Android device, etc. The reason consoles don't do this already isn't because society demanded closed boxes with high utilisation, but because of their origins: expensive hardware that needed a long life, had no hardware upgrade path, wasn't fast enough to run middleware or abstraction layers, etc. The technology constrained the experience to a particular implementation without that implementation necessarily being the platform of choice.
 

Actually I don't disagree.
But the primary "to the metal" optimizations that people associate with consoles are largely a function of understanding the balance of the system, and being able to build assets that target a specific piece of hardware. You lose that as soon as you add an extra target.

Having said that, with the prevalence of middleware engines it's becoming much less of an issue. But even then you have to deal with multiple different CPU/GPU splits; probably you end up catering to the lower-end hardware and just try to put in cosmetic additions for the 1.5 SKU. Though I guess it could go the other way.
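As a rough sketch of how that "cater to the lower end, layer on cosmetics" approach might look in practice: the baseline settings are authored for the launch hardware, and the refresh SKU only switches on extras that can't affect gameplay. All of the types, names and numbers here are invented for illustration, not from any real console SDK.

```c
/* Hypothetical two-SKU settings selection: the baseline targets the
 * weakest hardware, and the 1.5 SKU layers on cosmetic-only extras. */

typedef enum { SKU_LAUNCH, SKU_REFRESH } console_sku;

typedef struct {
    int shadow_resolution;  /* baseline quality knob                */
    int extra_particles;    /* cosmetic addition, refresh SKU only  */
} render_settings;

render_settings settings_for(console_sku sku)
{
    render_settings s = { 1024, 0 };  /* authored for the weakest target */
    if (sku == SKU_REFRESH) {
        /* Cosmetic-only upgrades: nothing here may change gameplay,
         * so both SKUs still run the same game content. */
        s.shadow_resolution = 2048;
        s.extra_particles = 1;
    }
    return s;
}
```

The design point is that the extra target costs little as long as the delta stays cosmetic; it is the moment gameplay-affecting assets fork per SKU that the single-target optimization advantage really disappears.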

The other thing I think works against this model on console vs mobile is the relatively slow rate of increase in performance. I'm not sure 12 months or even 2 years from now you'd have something that would demonstrate a clear advantage in games.
 
How does it work with iPads and smartphones, then?

The number of exclusive iPhone 5/iPad 4 games is almost nil. By the time you actually start to use the hardware in those devices to the fullest, two more generations have been released. You've eliminated the biggest upside of having a console: buy the product and have a guarantee that four years down the road a game works exactly the same as it does for someone buying the machine for the first time.

It never ends well, see where Sega is now.
 
Does it need repeating again?! You don't use the hardware to its fullest. You sacrifice peak performance for flexibility!
 

No.

Have you ever looked at the performance increase for an iPhone over time when it comes to annual refreshes? It's almost like going from a PS1 to a PS3 in four years. Refreshes on consoles aren't going to lead to the level of annual performance increase seen in smartphones or tablets.

Is any four-year-old PC obsolete, receiving no support for software released today?
 

I would agree that getting the release cycle right (annual, every 2/3/4 years, etc.) will be important. I was thinking more along the lines of three to four years. That way there would only be one additional target at a time that devs would need to optimize against.

The faster the release cycle the more peak performance you would sacrifice. Finding the correct balance will be important.
 
Your points are somewhat immaterial to the discussion. Everything you say about things valued in consoles or disliked in PCs is true, but that's also not proof that an upgradeable console couldn't work, only that some people would complain (and those same people may also upgrade and still complain). Those PC gamers who dislike that their hardware doesn't get maxed out still choose to play on PC rather than on outdated consoles that are maxed out but produce worse results.
Well, and those who would have preferred better resolutions, better textures and better framerates on their console still play on consoles ;)

Note I have never said the upgradeable console is a perfect solution nor ideal for everyone, only that it could work and the gaming populace wouldn't shun it outright, which is what you've suggested. You also cite unrelated arguments like driver issues. That's not a problem for a console platform as described in this thread. Effectively, consoles give the upgradeable PC experience without the drawbacks. It's the iPad experience in console form.
It could, but there is no clear evidence it would. Android and iOS devices aren't such clear evidence.
So if you don't want to upgrade in three to five years to get the same game at a better experience, you want to deny other people the opportunity to? If Sony had released a PS3.1 last year that could play LoU at 60 fps with high IQ, your PS3 version would be exactly the same as you have now. You've lost nothing from other people having the chance to buy a better machine to play the same game at a better experience. Those people who pay more money than you are entitled to a better experience, no? Bear in mind that the argument isn't for a $500 launch console followed by a $200 replacement. At worst, you'd pay $500 earlier to get more years of play, and a later buyer of the new console would get more hardware for the same money, but that's technology for you. If you buy a $500 PC this year to game, someone who buys a new PC in three years will get a better experience than you. Same for their TV, their mobile, their new car, and their camera. Everyone is acutely aware that their technology will soon be superseded, so it's hardly a concern for consoles.
It doesn't work that way. Since there is no better version of TLOU, in my perception what I get now is the best experience there is of that game. And as you can see, the game is already amazingly successful, widely enjoyed and memorable in its current form. Hypothetical thinking about how it would have been with better IQ does not interfere with the way PS3 owners experience and perceive this well-crafted, well-optimized game, whether they bought the console at launch or a few years later, except for some outliers that always exist. If you introduce a version that is better, then it begins to interfere with that perception. In essence, the current refresh rate of consoles doesn't really create the need to ask for an upgrade every two years. Otherwise people would have been asking for it, and consoles would have been sharply losing appeal two years into their life cycle.

In addition, it's only an assumption that games will be well optimized for all versions. If the developer can cram easier visuals into console version 2.0 three years after the 1.0 release, it doesn't guarantee that the dev will put in the necessary effort, let alone try to find smart solutions and code to the metal to get 100% efficiency out of console 1.0. Even basic optimization may be under-applied. It makes more sense in terms of money and time to take the easier approach provided by the upgraded, more powerful version and simply scale down for the previous console. It also doesn't create an incentive to optimize fully on version 2.0, since it can easily provide better results (whereas current consoles give the incentive to utilize the hardware better to deliver better games than the previous releases). So basically you are screwing the first adopters (and potentially the later adopters) by not giving the most out of the hardware they paid for, all for the sake of an unnecessary flexibility that makes things more complicated and potentially produces games that underperform relative to what the hardware is really capable of, simply because the devs have the easy approach available.

Edit: So in an alternate universe we could have gotten a worse version of TLOU on PS3 v1, and a version of TLOU on PS3 v2 that is somewhat better than on v1 but just isn't as good as it should have been.
PC gaming is not the same as console gaming. You get a different library, different game emphasis, different online structures, a different living-room experience. You also have driver and maintenance issues. A PS3 gamer wanting better visuals for LoU wouldn't be served by a PC, but would be served by a PS3.1.
The same holds for TVs, cameras and Android devices, so again that goes back to what I said earlier: they can't be held up as evidence that it should be applied to consoles ;)
Firstly, I'm not forcing anything. I'm not a policy maker. Secondly, the upgradeable console is an option not forced on anyone even if it was realised. Thirdly, the market has never had an option to try an upgradeable console. Moving to PC isn't the same, so it's never been tested and you can't prove gamers would react against it.

If you really believed that, you'd be in favour of MS or whoever releasing a new, compatible box in 2 years time and letting the consumer decide if they want to buy it or not.
If there were a need for an upgradeable console, it would have happened already, as the market would have been responding accordingly ;)
 
I think the key is the amount of time between consoles. Eight years from the Xbox to the Xbox One is too long, and most of us will agree. Would four years be the sweet spot? That would put us at 2017. By then we will be at sub-10nm, 3D chip stacking will be much more viable, DDR4 will be very common, AMD will most likely have a new low-power core, and GCN will be a thing of the past as they move to a whole new GPU tech.

I would think in 2017 a 32 GB console would be viable, along with maybe 128 MB of ESRAM. That would create another boost in graphics.

Throw it out there for $500 and have it play all Xbox One games. Over the first year or two of the device, devs and gamers will migrate to the new system.

Games that come out in 2016/17 could already have new texture packs on disc to take advantage of the new RAM amounts, giving more incentive for people to upgrade.
 
In essence, the current refresh rate of consoles doesn't really create the need to ask for an upgrade every two years. Otherwise people would have been asking for it, and consoles would have been sharply losing appeal two years into their life cycle.
There was no option before, so people couldn't ask for it. People weren't asking for smartphones in 1990 because the opportunity didn't exist, but now they lap them up. Stick a frequently upgraded console out there and, for all any of us know, it'll outsell the static consoles 5:1.

In addition, it's only an assumption that games will be well optimized for all versions.
But games aren't 100% optimised for static hardware as it is! It doesn't make economic sense. A few first-party AAA titles squeeze every ounce out of the hardware. A few AAA third parties might do the same in a very competitive software marketplace. The rest cross their fingers and hope for the best, because it's not economically viable to go down to the metal across three platforms. The vast majority of games sit on middleware. Do you refuse to buy Unreal-based games on your PS3 because they're not getting 100% utilisation from the hardware? Are you going to refuse to buy games on your PS4 or XB1 that aren't coded low-level enough to get 100% use from the hardware?

Edit: So in an alternate universe we could have gotten a worse version of TLOU on PS3 v1, and a version of TLOU on PS3 v2 that is somewhat better than on v1 but just isn't as good as it should have been.
In the alternate universe I presented, you got exactly the same game on PS3, which is already pushing the limits of the machine in terms of what it can achieve at a smooth framerate, and the same game on another machine capable of playing it at a better framerate, giving you the choice depending on how much you want to spend. I was using it to show people would prefer better machines than the ones they currently have.

If PS3 had been 'soft' and the games sat on an API layer, LoU wouldn't have been as good. But as you say, you won't have anything else to compare it to, so a few cut-backs here and there wouldn't be noticed. You'd then see the PS3.5 version which looks even better, and you have the option to upgrade or not. Will you be disappointed that your machine can't play it as well as the new (more expensive) model? Probably. But then people have to live with the constant disappointment of other people having better cars, phones, houses, computers, etc. I don't see why the console space has to provide an egalitarian society where everyone is equal.

If there were a need for an upgradeable console, it would have happened already, as the market would have been responding accordingly ;)
The technology didn't exist earlier to enable it. It's worth noting that if the market had needed an upgradeable computer when every computer architecture was static, they would have made one, and... oh look, they did. All those static, coded-to-the-metal home computers were replaced with generic PCs. If the market needed upgradeable mobile technology then... yep, that one happened too. Fixed-hardware, discrete products were replaced with versatile ones not coded to the metal. No one to date has offered an upgradeable console lineage because it wasn't feasible (a lack of software technology and of predictable hardware futures). But elsewhere across this industry we're saying that a console twice as powerful in hardware terms probably can't really differentiate itself to end users. Trading some percentage of peak hardware usage that people probably won't notice for full forwards and backwards compatibility, easy development, more consumer choice, and greater opportunity for profits for the console companies (who can sell higher-margin new hardware while still supporting the entry-level version) is hard to argue against as either consumers or businesses.
 
The Vita platform may be heading there.

And of course iOS too, in a different way:
http://www.macrumors.com/2013/09/19/logitech-and-clamcase-teasing-first-two-mfi-game-controllers/

Two controller makers are teasing MFi "Made for iPhone" game controllers following the public release of iOS 7. The new OS includes special APIs for third-party hardware game controllers, turning the iPhone and iPad into gaming systems on par with other handheld consoles.

ClamCase has published a trailer for its GameCase iPhone controller, which connects via Bluetooth, includes its own battery, and supports all iOS 7-compatible iOS devices.

...

At the same time, Logitech is teasing its new hardware controller on its Facebook page. The position of the hands strongly suggests the leaked iPhone enclosure controller that surfaced from Logitech back in June.

...

I haven't tried any of these controllers. I doubt they are as good as the DS4 and Xbox controllers.

EDIT:
Touchscreen latency measurements conducted by Agawi:
http://appglimpse.com/blog/touchmarks-i-smart-phone-touch-screen-latencies/

It looks like Agawi streams games/apps to iOS devices for demo, similar to Gaikai.
 
Whether it happens in two years or six, I do think the next Xbox and PS will be backwards compatible and, in some cases, forwards compatible, where the older consoles can play future-gen games at reduced settings. I think that's the new way of doing a console transition, rather than these hard restarts.

Steamboxes will essentially do this in PC-like fashion, but on a yearly cycle in line with Intel's CPU updates and Nvidia/AMD's GPU updates. Yes, at some point your 2014 mid-range Steambox will not be able to play a future game even at the lowest settings, and that's when you have to upgrade, but that will be staggered in the same way PCs are staggered.

The PS4 should have no problems with backward/forward compatibility, but the Xbox could, because of its more proprietary hardware (ESRAM, "move engines"). That hardware could become a boat anchor that needs to be included in future consoles unless they can emulate the functionality in software/firmware.
 
I think Sony and MS will just sit back and wait to see how much trouble the Steambox is actually going to be for Valve. They're in no hurry as the console business has been working well enough these past years and any screw-ups were usually the result of trying to mess with it.
 
If the RAM of a successor is fast enough, emulating the ESRAM should be easy enough.

This was the biggest question in my eyes. So if the Xbox One+ were to have 8 GB of DDRX RAM with sufficient speed, then any code specifically written for the ESRAM would be supported by virtual device drivers put into place on the new box?
 
Unless there was code dependent on the actual behavior/timing of the ESRAM, yes, it should just work, especially if the other components were faster.

Now, it's likely that some game somewhere will be dependent on RAM timing, but you have the same issue with just increasing RAM speed or even changing disk manufacturer.

One of the downsides of developing and testing on a single hardware target is that you can have timing-related bugs that are never exposed.
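A toy illustration of that last point: game code kicks off an asynchronous copy and, instead of polling a completion flag, just waits a fixed number of frames, an assumption that happens to hold on the one dev kit it was tested on. The "DMA" here is simulated, and all names and numbers are invented; this is only a sketch of how such a latent bug hides until the timing changes.

```c
/* Simulated asynchronous copy with a configurable latency (in frames). */
typedef struct {
    int latency;   /* frames the simulated copy takes to land */
    int elapsed;
    int src, dst;
} sim_dma;

void dma_start(sim_dma *d, int value, int latency)
{
    d->src = value;
    d->dst = 0;
    d->latency = latency;
    d->elapsed = 0;
}

void dma_tick(sim_dma *d)          /* advance one simulated frame */
{
    if (++d->elapsed >= d->latency)
        d->dst = d->src;           /* the copy lands */
}

/* Buggy game code: waits exactly 2 frames because the copy always
 * finished within 2 on the single hardware target it shipped on. */
int read_after_fixed_wait(sim_dma *d)
{
    for (int i = 0; i < 2; i++)
        dma_tick(d);
    return d->dst;                 /* stale if the latency ever grows */
}
```

On hardware where the copy takes two frames the read returns the right value; change the timing to three frames and the same code silently reads a stale zero, which is exactly the class of bug a single fixed target never exposes.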
 

They would just stub out all the ESRAM-related API code on the OS side so that those calls do nothing special, and let everything run from regular RAM if it's fast enough.
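A minimal sketch of that stubbing idea, assuming the successor's main RAM is fast enough: an OS-side shim keeps the shape of an ESRAM allocation API (the function names here are invented, not from any real Xbox SDK) but satisfies every request from the ordinary heap, preserving only the 32 MB capacity limit so titles see the same failure behaviour they would on real hardware.

```c
#include <stdlib.h>

/* Hypothetical OS-side shim: same API surface as the original ESRAM
 * allocator, but backed entirely by fast main RAM on the new box. */

#define ESRAM_SIZE (32u * 1024u * 1024u)   /* 32 MB, as on Xbox One */

static size_t esram_used = 0;

void *esram_alloc(size_t bytes)
{
    if (esram_used + bytes > ESRAM_SIZE)
        return NULL;          /* same "out of ESRAM" result as real HW */
    esram_used += bytes;
    return malloc(bytes);     /* actually ordinary heap memory */
}

void esram_free(void *p, size_t bytes)
{
    esram_used -= bytes;
    free(p);
}
```

The capacity bookkeeping matters: a title that relies on allocation failing past 32 MB keeps behaving identically, even though the backing store changed. What a shim like this cannot preserve is cycle-level timing, which is the caveat raised above.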
 

If you have 16 GB of HMC running at 500+ GB/s, that will provide a lot of leeway, especially with a stacked SoC.
 