Feasibility of an upgradeable or forwards compatible console *spawn*

That's not completely true. Old techniques are used until the new hardware has a large enough install base to be worth targeting directly. Ergo, your new hardware runs the old code just fine, and devs can make new content for it, but it's not used to anything like its full potential for some time after release, as the lowest common denominator is last-gen HW.

Beat me to it. GPGPU springs to mind.
 
That's not completely true. Old techniques are used until the new hardware has a large enough install base to be worth targeting directly. Ergo, your new hardware runs the old code just fine, and devs can make new content for it, but it's not used to anything like its full potential for some time after release, as the lowest common denominator is last-gen HW. That's not true of traditional consoles that, as closed boxes, also get targeted as a complete unit.

Sure, but old techniques all suddenly work and run better, which is still an instant improvement across the board without having to wait for new techniques to be developed. It's still a win. It's not like traditional consoles get all these new features used on day one anyway, since devs have to re-write all their code just to get the basics working first and get games to make the launch schedule; then over time they will get around to making more use of the new hardware. In some cases new hardware features don't get used at all, because maybe their implementation is weak like tessellation on the 360, or maybe it's broken like scaling was on the PS3, or maybe it's too damn difficult to make use of any time soon like SPUs, or maybe it's hobbled by other aspects of the hardware design like limited eDRAM, etc, and now you are stuck another 7 years with nothing. With a more frequent hardware release schedule, if there is a hardware feature that is broken, limited or gimped in some way, at least you'd only have to wait 2 years for it to get fixed rather than 7.


Except that comes at the added cost of buying new hardware. With the traditional console, you certainly get your money's worth when every ounce of performance is squeezed out of it. With progressive hardware, improvements come more from buying new hardware than from devs advancing the software, although of course software developments do help.

It's optional though; those that don't care to spend don't have to, and can be stuck with ancient hardware if they so choose... exactly as it works on current console hardware, except that everyone is forced to stick with ancient hardware for 7 years. The difference is that people would now have the option to upgrade when they want rather than wait 7 years, and those that didn't care could keep their old hardware.


Hypothetically, let's imagine MS released a BC console. Moving to compute-based rendering would probably be slowed down versus releasing completely new hardware with no legacy ties.

Now if continuing consoles are fully BC and get refreshes every two years, as new techniques are developed (like tessellation was) they won't be fully implemented on the new hardware as the old hardware is the primary target. This is the problem with PC and mobiles - new hardware with new techniques goes unused. That's where console's greater hardware utilisation is a big win.

It's not really a problem because at a minimum all existing techniques get improved and run better, whereas worst case new stuff like, say, tessellation won't get used until a later date when rev B of the hardware becomes more prevalent. However, because developers are now dealing with a software platform rather than a fixed-function console, they will likely make use of said features far quicker for the simple reasons that:

1) They know it will not be throwaway code; it will be code that they can continue to use for years on subsequent hardware revisions, so it's more in their interest to support it sooner rather than later.

2) Because rev B of the hardware is so similar to rev A, they don't have to spend 2 years re-writing everything to just have the basics work, so they can look into supporting the new hardware feature far faster than they could on typical consoles.

3) Because it's a software platform of similar architecture, you can be sure that middleware will support it very quickly, so supporting new hardware features will be easier.

You can just look back to last gen tessellation and SPUs to see how long it can take for developers to support custom hardware on fixed-function consoles. Just because they have this freaky new hardware doesn't mean it will get supported right away, or supported at all.


Obviously the change in the market and cross-platform middlewares means new boxes aren't viewed in isolation, and personally I think the move to abstracted hardware makes more sense than the fixed-box paradigm. And as a result, getting better results warrants smaller, more frequent hardware upgrades rather than the 5+ year huge leap. But it'd be wrong to underplay the benefit of targeted boxes versus an ever-moving target. The latest, greatest hardware in this case will never be ideally used. It'll basically be current gen +1, and it'll only play the new, exciting technologies once it's become worn in and a little slow by the latest standards.

I actually think it's the exact opposite: specialized fixed hardware is holding everything back more than it's helping, and we would get better results and better games if we went with abstracted hardware on more regular release cycles and relied on middleware to solve all the implementation crap. It's like the old argument of yesteryear of C vs assembly language. Eventually there came a point that to-the-metal assembly language simply made no sense anymore in the grand scheme of things. Sure, it was to the metal and let you tap into anything and everything on the hardware, but the time to implementation, added complexity, lack of portability, etc eventually pushed it to the side to be replaced by "bloated un-optimized" C (as was so often said back then by the assembly language purists). The same thing needs to happen now but on a grander scale and we would all benefit from it.


Beat me to it. GPGPU springs to mind.

How long will it take for GPGPU to get commonly used on the new machines compared to on a more general software platform? I'd be willing to bet you would have seen 1 or even 2 new hardware revisions of upgradeable, backward compatible consoles in the time it would take for most developers to fully embrace GPGPU on the new fixed machines. That's because they are still too busy just re-writing everything to work on the new machines to have time to support the new features as of yet.
 
How long will it take for GPGPU to get commonly used on the new machines compared to on a more general software platform?
I don't know, but GPGPU isn't new. Hell, it was introduced as a resource framework in OS X 10.6 on the Mac, which was released on 28 August 2009.

Sometimes it just takes a long while for software to catch up to hardware, or rather be worth the effort exploiting.
 
Sure, but old techniques all suddenly work and run better, which is still an instant improvement across the board without having to wait for new techniques to be developed...
I mostly agree with you and I've largely argued the other side of this discussion in this thread. However, it's important to recognise that few things in life are 100% win, and we should note what gets lost, then decide if the gains make up for the losses. I feel you're a little too enthusiastic for the change and are somewhat idealising the outcome on a couple of points.

In some cases new hardware features don't get used at all, because maybe their implementation is weak like tessellation on the 360, or maybe it's broken like scaling was on the PS3
That's far, far truer on PC than it's ever been on console. IHVs have included features that have pretty much never been used over several gens of GPU. Introduce a new technique on one console in 2017 that isn't on the others and it won't get used. In 2019, if the sequel has it, it might start to see use, by which time the first implementation is possibly too long in the tooth to be any use at all (see first efforts at tessellation HW on PC: basically a waste of silicon, as it was unused when released and too impotent to be valid when tessellation became featured in games).

With a more frequent hardware release schedule, if there is a hardware feature that is broken, limited or gimped in some way, at least you'd only have to wait 2 years for it to get fixed rather than 7.
A more frequent update strategy would need a very robust HW basis, which'd basically mean PC. There's no point using an eDRAM-based solution if that's just adding issues to utilising the hardware and to cross-platform development. Which is where consoles have always had an advantage, in actually being able to choose more esoteric solutions that provide better bang per buck than PC's generic solution. I agree that the loss of esoteric hardware and simplification of development is a Good Thing overall, but generic, abstracted hardware will always be potentially at a pure-performance disadvantage.

It's optional though; those that don't care to spend don't have to, and can be stuck with ancient hardware if they so choose... exactly as it works on current console hardware, except that everyone is forced to stick with ancient hardware for 7 years. The difference is that people would now have the option to upgrade when they want rather than wait 7 years, and those that didn't care could keep their old hardware.
I've repeated that argument multiple times in this thread. ;)

You can just look back to last gen tessellation and SPUs to see how long it can take for developers to support custom hardware on fixed-function consoles. Just because they have this freaky new hardware doesn't mean it will get supported right away, or supported at all.
Right, but the argument is that the peak potential is better in fixed hardware. It may take a few years, and by then new hardware will make the old hardware look weak, but you get it all used.

It's something like a choice between:

Fixed hardware -
50% utilisation on day one (5 performance metric units), 80% in year 3 (8 PMUs), and 99% (9.9 PMUs) in year 5.

Abstracted progressive hardware -
70% utilisation on day one (7 PMUs), 75% in year 3 (7.5 PMUs), and 80% (8 PMUs) in year 5.
Then, with added expense, 70% utilisation of the new B spec machine at 2x speed in year 3 (14 PMUs), and 75% utilisation of it by year 5 of the original timeline (15 PMUs).
For someone not wanting/unable to afford to upgrade to the B spec, they get less performance from the hardware they own. For those who do want to upgrade, they'll get way better performance than the old hardware could hope to achieve no matter how efficiently it is used.

Fixed hardware gives better peak, extracted performance for the money spent on it for gamers, at added developer effort. Progressive hardware gives the option for higher peak performance at any given time because you can buy the latest, greatest machine, but its utilisation is lower. You sacrifice peak performance for gains in flexibility, ease of development, and the option to upgrade more frequently.
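
To make that arithmetic concrete, here's a rough sketch in Python using the illustrative figures above (assuming a 10 PMU peak for the fixed box and the A spec, and 20 PMUs for the B spec; none of these are real measurements):

```python
# Rough sketch of the utilisation-vs-peak trade-off described above.
# All figures are the illustrative ones from this post, not benchmarks.

FIXED_PEAK = 10.0   # peak PMUs of the fixed console (assumed)
A_SPEC_PEAK = 10.0  # peak PMUs of the first progressive box (assumed)
B_SPEC_PEAK = 20.0  # B spec arrives in year 3 at roughly 2x the speed

fixed_util = {1: 0.50, 3: 0.80, 5: 0.99}   # fixed box, squeezed harder over time
a_spec_util = {1: 0.70, 3: 0.75, 5: 0.80}  # abstracted box, never fully exploited
b_spec_util = {3: 0.70, 5: 0.75}           # B spec, for those who pay to upgrade

def delivered(peak, util_by_year):
    """Delivered performance = peak hardware x software utilisation, per year."""
    return {year: round(peak * u, 1) for year, u in util_by_year.items()}

print("Fixed console:      ", delivered(FIXED_PEAK, fixed_util))    # {1: 5.0, 3: 8.0, 5: 9.9}
print("Progressive, A spec:", delivered(A_SPEC_PEAK, a_spec_util))  # {1: 7.0, 3: 7.5, 5: 8.0}
print("Progressive, B spec:", delivered(B_SPEC_PEAK, b_spec_util))  # {3: 14.0, 5: 15.0}
```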

Personally I'm in favour, but I recognise the downside as well as the up.

How long will it take for GPGPU to get commonly used on the new machines compared to on a more general software platform?
GPGPU has been around as an option on PCs for ages, but not used. It's now that it's an option on the consoles that it'll get utilised. More evidence that new hardware features get overlooked while the lowest common denominator remains the target. New consoles set a new lowest common denominator, and all that potential that PCs have had is now going to be unlocked. If XBToo is the same as XBOne but more so and with a raytracing unit, will the RT unit actually be used, or will devs just run their XB1 code on the XB2 and save themselves the bother? History suggests the latter. New PC and mobile technologies aren't adopted for quite a while after their introduction.
 
I think those losses already exist in most games, especially if the console is running a degree of VM abstraction. We've been told Sony lets devs hit the metal harder, suggesting HW is already distanced from devs on XB1. Throw in middleware and there's already lots of potential lost to make development and porting easier.

As came up before, you lose say 10% performance from the hardware in order to gain portability of games across generations. Overall I'd say that was a benefit in the same way it is on other devices.

Maybe. But now the middleware has twice the work. ;) Instead of spending 5 years making things run better on the same hardware, they'll spend each year just getting things running on the next hardware. It just moves the problem.
 
I don't know, but GPGPU isn't new. Hell, it was introduced as a resource framework in OS X 10.6 on the Mac, which was released on 28 August 2009.

Sometimes it just takes a long while for software to catch up to hardware, or rather be worth the effort exploiting.

Yes, but even if GPGPU had launched with a perfect implementation in 2009 it would still not have been used. The reason is the consoles as they are built today.

Back in the day PC games were actually PC games, in that they were written with that platform in mind. So when a new hardware feature came out it was used and swallowed up very quickly. That's because the new hardware played all the old games and code, so rather than have to rewrite their games from scratch like consoles force you to, the devs could make use of new features right away, be they an instruction set, GPU or whatever.

Fast forward to today and there really isn't such a thing as a PC game anymore. So while people like to point at the PC and say "look at how hardware features aren't supported", the entire reason they aren't supported is 100% due to consoles. That's because games are written as console games first and foremost, and that is what dictates what hardware features will get used. As such, the reason we are not reaping the benefits of GPGPU today is the fixed, limited, rigid hardware nature of how consoles are currently designed. We have not benefited from the consoles being so rigidly designed, and have actually all lost out when it comes to hardware use.

Now we're in the same boat yet again, as XB1/PS4 will dictate what hardware features will get used for the next 7 years. So if Nvidia comes out with some crazy new GPU hardware features next year, well too bad, it won't get used. Putting consoles on middleware, making them backward compatible and putting them on more frequent release schedules would cure this. Instead, now all you have to look forward to for the next 7 years is hoping that GPGPU will finally get used. Yay.
 
I mostly agree with you and I've largely argued the other side of this discussion in this thread. However, it's important to recognise that few things in life are 100% win, and we should note what gets lost, then decide if the gains make up for the losses. I feel you're a little too enthusiastic for the change and are somewhat idealising the outcome on a couple of points.

It's due to two reasons, one is historical and one is financial.

I don't know if you were a PC gamer back in the day, but back then we used to love any and all new hardware announcements. That's because they would get supported, so a new instruction set was a performance boost, a new GPU feature was a performance boost, etc, because PC games were actually PC games back then, so all that delightful new hardware was used even without the middleware crutches of today. Today it's largely "who cares" when new hardware comes out when it comes to games. AMD and Nvidia can show me a million different new hardware features and I know it won't matter, because none of it will be used because of the consoles, not for 7 years anyway. Improvements in games today are purely software based, not hardware based, so much of the fun and hardware use for games has disappeared. I feel that can come back though, with tweaks to how consoles are handled, and in a way that makes future development cheaper and easier as well.

Secondly, I think what I've suggested is the best thing to financially help the games industry, and as such I kind of feel it will have to happen one way or another. It's easy to say "meh" when yet another game company disappears or has another round of layoffs, but it's a situation that's in dire need of repair. Putting out yet another console is not the right way to go about things in my mind for the health of the industry. And hell, if they ever want to get more women in the industry they will have no choice but to change. The problem we had by and large in retaining females in the biz was the lack of quality of life. Young males tolerated it while females would ultimately quit. So drastic change needs to happen one way or the other: for financial reasons, to simplify development and to diversify the industry, which in turn will lead to new game ideas.
 
Now we're in the same boat yet again, as XB1/PS4 will dictate what hardware features will get used for the next 7 years. So if Nvidia comes out with some crazy new GPU hardware features next year, well too bad, it won't get used.
This is largely my take and why I no longer build my own gaming PCs. Indeed, for the vast majority of games I've been good with an iMac, Boot Camp and Windows (512MB Radeon HD 4850 in 2009, 4GB GeForce 780MX this year). Surprisingly, because of the lack of games chasing the gaming dragon, hardware lasts a lot longer than when I first started gaming on PC, first with an Nvidia Riva 128 chip, then a TNT chip, then a GeForce 256.

It's not as though new GPUs offer no improvements; obviously performance ramps up relative to resolution and AA settings, but genuine leaps in games feel fewer and farther between.
 
I don't know if you were a PC gamer back in the day, but back then we used to love any and all new hardware announcements. That's because they would get supported.
I don't remember it ever being like that. Hardware features like hardware scrolling and sprites and shadowing tech went unused, because the hardware base was fragmented and they weren't worth using. DX decided what features games would have, and anything beyond DX's API went unused.

It's the same on mobile now. iPad 5 has features other iPads don't have. They aren't being targeted by games because iPad 5 represents a small niche in the iPad market and it's far more cost effective to target the OGL2 standards of the mainstream. It'll remain this way until OGL2 devices are superseded in number by OGL3, save for possibly a few titles that try to win over power users.

Of course, middleware that expands to support OGL3 may make some use of the later HW's features, but new hardware isn't targeted. Never has been. It's too new and niche to be economical to spend the extra effort in supporting properly.
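
For what it's worth, the "middleware may make some use of the later HW's features" point boils down to plain capability dispatch. A minimal sketch in Python, with made-up capability names rather than any real engine's API:

```python
# Minimal sketch of capability-based path selection, the kind of thing middleware
# does so the lowest-common-denominator hardware still runs the same game.
# Capability names here are hypothetical, purely for illustration.

def pick_terrain_path(device_caps: set) -> str:
    if "hw_tessellation" in device_caps:   # only the newer hardware advertises this
        return "tessellated_terrain"
    return "static_lod_terrain"            # baseline path every device gets

print(pick_terrain_path({"hw_tessellation", "compute"}))  # tessellated_terrain
print(pick_terrain_path({"compute"}))                     # static_lod_terrain
```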
 
Here we go again...

http://www.dualshockers.com/2015/10...e-release-of-a-ps4-with-improved-performance/

Sony Executive Weighs in on the Possibility of a Future Release of a PS4 With Improved Performance


"During an interview on 4Gamer, Sony Computer Entertainment Senior Vice President Masayasu Ito was asked his take on the possibility of a future release of a PS4 console with improved performance.

Ito-san immediately clarified that he has nothing specific to announce, but the adoption of the X86 architecture made performance improvements in due time possible. Yet, Ito-san continued, the real question is whether those improvements should be made or not.

With the PS3 architecture and the Cell, it was impossible to expand the machine beyond the capacity of the hard disk. On the other hand, with the PS4 adopting a conventional X86 architecture, it’s easy to achieve flexible performance enhancements while using the same game assets.

That’s why providing a standard performance version of the PS4 and a high performance version of the console side-by-side is, according to Ito-san, an idea that might be considered.

It’s worth stressing on the fact that Ito-san was talking hypothetically, prompted by a specific question. So don’t go taking this as a confirmation a PS4.1 is actually in the works."

Original source in Japanese:
http://www.4gamer.net/games/990/G999024/20151021121/


Interesting, not counting on it, but...
 
Isn't it great the way a diplomatic non-answer to a stupid question from a reporter will spread bullshit like wildfire?
 
With PS3, at the 2-year mark Sony was still at the "review what went wrong & right" phase. That was just around the time when they brought in Cerny, and he spent a lot of time after that looking at available tech that would not make developers as mad as before.

This time of course, they probably already have the "vision" of future hardware, but I don't think they are anywhere near starting to think about what that exact hardware will be. They will really start looking at that ~2-3 years before PS5 needs to be out, when AMD can guarantee what parts can be made. If PS5 is aimed at a late 2019 launch, true plans for it will start to form in 2017. They will anyway want to wait a bit to see what AMD can do with mass-produced 14/16nm chips in the near future and how prices of HBM memory will change.
 
This thread is coming up on 3 years. PS4 is coming up on 2 years since release and 3rd Holiday season.

They should be planning the PS5 now, though with the success of the PS4, they may want to milk it for years longer.

They may like to milk it, but then again if PSVR takes off they may be looking to upgrade early to keep that platform relevant. Even then, everything depends on MS and whether they are going to stay in the console market. If they lose this month (November) I can't see the new CEO wanting to stay in it, and even if he does, there's no way they go another 6 yrs of taking it in the sac.
 
With PS3, at the 2-year mark Sony was still at the "review what went wrong & right" phase. That was just around the time when they brought in Cerny, and he spent a lot of time after that looking at available tech that would not make developers as mad as before.

This time of course, they probably already have the "vision" of future hardware, but I don't think they are anywhere near starting to think about what that exact hardware will be. They will really start looking at that ~2-3 years before PS5 needs to be out, when AMD can guarantee what parts can be made. If PS5 is aimed at a late 2019 launch, true plans for it will start to form in 2017. They will anyway want to wait a bit to see what AMD can do with mass-produced 14/16nm chips in the near future and how prices of HBM memory will change.
That could be now though; they don't want to get caught late again like with PS3. 2016 could be a defining year in many ways, and that could include the baseline console hardware to come.
 
If that happens, MS will be the one who will potentially enrage their entire current install base, making users who purchased the original Xbone "second-class citizens". I don't think they are willing to gamble the little goodwill they managed to get back after the Mattrick debacle.

If they want to introduce a new console, they should market it as a clear successor, a 9th gen console. If they want to get it to market a year or two before PS5, so be it, but they need to first make sure that Xbone has had a good lifecycle before it [and that Xbone2 has enough software to fight against the PS4 juggernaut that will by then have an enormous install base].

I agree that it would be a bit of a risk... there are people that would be upset.
But the alternative is to just sit there and let Microsoft get embarrassed every single month for the rest of this generation.
I mean, let's face it... this resolution-gate stuff is a PR nightmare for both the XBOX brand and Microsoft itself, one that keeps sticking into them like a thorn over and over and over with no apparent end.

I'm not saying an enhanced XBOX One absolutely will happen... but I wouldn't really be surprised if it does.
But if it does... I'm taking all "prognosticator credit", you can be sure of that. ;)
 
PS3's inferior resolutions didn't really affect its brand or image. I think 'resolution-gate' is a niche gamer concept with little bearing on the larger market. The reason XB1 isn't selling as well is that it's 'not as good' and costs the same, and PS4 managed to gain the momentum to snowball interest. Had X1 been what it is now but $100 cheaper, say, it'd probably be selling very well despite having lower resolution games.

As for viability and reasons to have an upgraded machine, this thread has already covered it all. ;)
 
PS3's inferior resolutions didn't really affect its brand or image. I think 'resolution-gate' is a niche gamer concept with little bearing on the larger market. The reason XB1 isn't selling as well is that it's 'not as good' and costs the same, and PS4 managed to gain the momentum to snowball interest. Had X1 been what it is now but $100 cheaper, say, it'd probably be selling very well despite having lower resolution games.

As for viability and reasons to have an upgraded machine, this thread has already covered it all. ;)

They fixed the resolution issues with the PS3. The XBOX One's hardware is clearly inferior and that's not likely to get fixed.
I do agree that price is more important than resolution and/or power; it's a "knife to the gut", if you will... But if each and every month a new game comes out that is found to be superior on the PS4 because of power... well, that's a twisting and turning of the knife, for years at that.
The fact that Microsoft officials have commented on it several times shows me that it's bothersome. Hell, it's a PR nightmare, one that I'm sure Microsoft wishes it could awaken from.
 
PS3 still got inferior versions of games. A more direct comparison might be PS2, inferior to XB all its life. It too suffered lower resolutions, but there wasn't an internet back then telling everyone how many pixels were actually being rendered! XB1's lower res isn't really a PR nightmare as it's not damaging to the public face of the brand. It's more a competitive disadvantage.
 
PS3 still got inferior versions of games. A more direct comparison might be PS2, inferior to XB all its life. It too suffered lower resolutions, but there wasn't an internet back then telling everyone how many pixels were actually being rendered! XB1's lower res isn't really a PR nightmare as it's not damaging to the public face of the brand. It's more a competitive disadvantage.


I'm not really sure I see the difference there.
Every time a game comes out that's lower res, it looks bad for the XBOX One.
Again, if it were to end like it did for the PS3, then yeah, it could be more palatable. But we all know it's not likely to end... and that's the worst part, imo.

I bet MS wishes they hadn't skimped on the GPU now.

That's why I think they'll at least consider an enhanced XBOX One.
 