Feasibility of an upgradeable or forwards compatible console *spawn*

Yeah, but what benefit would 60fps deliver if the controls were sluggish and physics couldn't react in time? You should know well enough how hard it is to make a 60fps game, so for most releases this would mean a complete rewrite - or basically locking most of the engine at 30fps independent of the graphics.

And even if they'd decouple everything, it'd still require significant differences between a 30 and a 60fps version. Anything you manually time and fine-tune at 30fps would have to be re-done (for a start, most post-process AA solutions and related stuff)...
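To make concrete what that decoupling means, here's a minimal fixed-timestep sketch (my own toy code - Update/Render are made-up placeholders, not any real engine's API). The gameplay stays locked at 30 Hz and only the render rate floats, which is exactly where all the hand-tuned 30fps assumptions live:

Code:
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

// Stand-in for the whole game state.
static double g_position = 0.0;

void Update(double dt) { g_position += 2.0 * dt; }  // deterministic 30 Hz gameplay step
void Render(double alpha) { std::printf("pos ~ %.3f (alpha %.2f)\n", g_position, alpha); }

int main() {
    const double kTick = 1.0 / 30.0;  // gameplay and physics locked at 30 Hz
    double accumulator = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 300; ++frame) {  // bounded loop for the sketch
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= kTick) {  // catch up in fixed-size steps
            Update(kTick);
            accumulator -= kTick;
        }
        // Render as often as the hardware allows - 30, 60 or 120 fps -
        // interpolating between the last two fixed steps via alpha.
        Render(accumulator / kTick);
    }
}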
 
That doesn't make sense to me. The PC has been able to take the same code and run it at double/quadruple framerates no worries. Now admittedly that's on a very accommodating API, but if it's just a matter of taking a 30 fps game and rendering it at 60 fps on consoles, I can't see super refined timings being a problem. Maybe for the best of the best devs, but the usual 3rd party titles written to more realistic standards should happily run at twice the framerate given twice the clocks or twice the shader cores (and BW of course). I agree with everything else you were saying about RAM etc., but getting a faster framerate by just doubling up the hardware should work, unless console devs are working at a more sophisticated level than everything I've seen and heard suggests!
 
PC games are usually not locked to 60fps; they often have trouble even maintaining 60fps (see Carmack's recent comments!), and there's a lot of other stuff going on (like mouse input being sampled at a very high rate) that's not present on consoles.

A 60fps console game is different: you need to maintain 60+ fps at all times, and you always aim to conserve resources instead of using a general-purpose solution.

Also, simply adding a second CPU is very different compared to having a CPU with double the clock rate on the PC, where things could more or less just scale up. Consider the PS3. You have a PPU handling the main code and 6 SPUs working on elemental tasks. Data sizes and latencies are tweaked for the Cell's internal and external memory sizes, bus widths and clock speeds. If you add a second CPU sitting on the other end of a who-knows-how-wide bus with who-knows-how-many cycles of latency, it won't be able to just run every piece of code two times as fast. You need to manually synchronize the two PPUs, separate and redistribute tasks, so that every game tick can finish twice as fast. Although I'd say just getting two Cells even on the same motherboard would still not be enough to speed up a game from 30fps to 60fps, as the efficiency gain isn't 100%.
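To put a rough number on that (a back-of-envelope guess on my part, not a measurement): say 80% of a frame's work could be spread across the second Cell. Amdahl's law then caps the gain well short of 2x:

Code:
#include <cstdio>

// Amdahl's law: speedup with n processors when a fraction p of the work
// parallelizes. The 0.8 below is an illustrative guess, not measured data.
double amdahl(double p, double n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    double s = amdahl(0.8, 2.0);
    std::printf("speedup: %.2fx\n", s);                   // ~1.67x, not 2x
    std::printf("33.3 ms frame -> %.1f ms\n", 33.3 / s);  // ~20 ms, still > 16.7 ms
}

So even with a generous parallel fraction you don't halve the frame time, and 30fps doesn't become 60fps.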

Same goes for graphics. If you add another memory bank, you need to manually split and move some of the data there so that the second GPU can have fast access to it (the existing memory buses definitely can't feed two GPUs). It's not like a PC where you use SLI or stuff like that and basically duplicate everything in both video cards' local memory...

The only way for this to work more or less smoothly would require you to basically replace the main CPU and GPU and memory banks. Buy a completely new console, in other words.


So no, I don't think it's a simple case of adding more horsepower to get an automatic speedup on your code. It'd require a lot of extra work to utilize more resources. And how much would a second Cell, RSX and an extra 512MB of memory cost anyway? I doubt it'd come out under $100, more like twice as much...
 
On the PC I could swear I've come across situations where there are different gameplay outcomes based on performance.

In Bioshock 2, for instance, I've come across a kind of stuttering related to updates of player position even while frame rates remained very high. In Defence Grid I've also noticed that playing with accelerated time can deliver slightly different results in terms of what-targets-what and what gets through (perhaps a bug rather than a frequency issue).

If you target a certain update rate to maximise simulation complexity at the cost of sampling frequency, changing (in this case increasing) the sampling frequency could potentially yield different results. And with a carefully honed gameplay mechanic or a p2p multiplayer game, this could potentially be detrimental to the game.
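A contrived toy example of what I mean (made-up numbers, nothing from a shipped game): the exact same explicit-Euler fall stepped at 30 Hz vs 60 Hz gives different answers, because the integration error depends on the step size:

Code:
#include <cstdio>

// Explicit Euler integration of a 2-second free fall. The result depends on
// the step size, so a 30 Hz and a 60 Hz run of the "same" simulation diverge.
double fallDistance(double dt, int steps) {
    double pos = 0.0, vel = 0.0;
    for (int i = 0; i < steps; ++i) {
        pos += vel * dt;   // position updated before velocity: order matters too
        vel += 9.81 * dt;
    }
    return pos;
}

int main() {
    std::printf("30 Hz: %.3f m\n", fallDistance(1.0 / 30.0, 60));   // ~19.29 m
    std::printf("60 Hz: %.3f m\n", fallDistance(1.0 / 60.0, 120));  // ~19.46 m
}

Harmless in single player, but in a lockstep p2p game that kind of divergence is a desync.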

You could probably get around this in the case of planning for an upgrade by just targeting the higher level of performance for gameplay-critical components on both versions.
 
Yeah, but in that case we'd still be in a situation where the majority of the user base has to suffer drawbacks because of the hardcore minority who bought the upgrade. I definitely wouldn't want something like that to happen (and yeah, I wouldn't get any such upgrades for my console, ever).
It'd also mean a significant addition to the development effort that wouldn't necessarily result in increased sales or any other advantage.

In fact it could even make the product less competitive, if others weren't constrained by the need to support two versions of the console.
 
Also, simply adding a second CPU is very different compared to having a CPU with double the clock rate on the PC, where things could more or less just scale up...
So no, I don't think it's a simple case of adding more horsepower to get an automatic speedup on your code. It'd require a lot of extra work to utilize more resources.
I agree with all this, though that's slightly different to what I was saying, which wasn't talking in the context of adding a box. Putting a second CPU on an expansion bus will make for troubles, as will a GPU on an expansion bus. Replacing RSX with something meatier shouldn't, though. As for the PC, maybe modern games are pushing limits so hard that you can't get above 60fps, but it's always been the case that you can replace your CPU and/or GPU and get a speedup without the devs having to re-engineer their game substantially for your particular hardware configuration. There are lots of bugs and issues that can appear, but the sentiment - that you can just upgrade your hardware and run the same code faster - still seems valid to me. If PS4/XB3 came with an expansion protocol that the API linked into, games should be able to make use of extra hardware without any bother to the devs (see the sketch below). Without such an API, and if devs had to code on less virtualised hardware, yes it'd be a headache. ;)
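Something like this is all I'm imagining - an entirely hypothetical capability query (none of these names exist in any real console SDK) that the system answers differently on upgraded hardware, with the game scaling its own knobs off the reply:

Code:
#include <cstdio>
#include <cstddef>

// Entirely hypothetical forward-compatibility API: the game asks the system
// what it's running on rather than hard-coding launch-day specs.
struct HardwareCaps {
    int cpuCores;
    int gpuComputeUnits;
    std::size_t memoryMB;
};

HardwareCaps QueryHardwareCaps() {
    return {8, 18, 8192};  // stubbed "launch model" values for this sketch
}

int main() {
    HardwareCaps caps = QueryHardwareCaps();
    int workerThreads = caps.cpuCores - 1;          // scale the job system to the box
    bool premiumTier = caps.gpuComputeUnits >= 24;  // arbitrary cutoff for extra detail
    std::printf("workers=%d, tier=%s, mem=%zu MB\n",
                workerThreads, premiumTier ? "premium" : "base", caps.memoryMB);
}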
 
Yeah, but in that case we'd still be in a situation where the majority of the user base has to suffer drawbacks because of the hardcore minority who bought the upgrade. I definitely wouldn't want something like that to happen (and yeah, I wouldn't get any such upgrades for my console, ever).
It'd also mean a significant addition to the development effort that wouldn't necessarily result in increased sales or any other advantage.

In fact it could even make the product less competitive, if others weren't constrained by the need to support two versions of the console.

I love upgrades!

They are almost always an unmitigated disaster, and at best largely inconsequential footnotes, but there's something about fastening an expensive booster unit to an older system (that would be better off being replaced or allowed to age gracefully and profitably) that's exciting.

There's so much interesting history around upgrades. My favourite upgrade is the 32X; it was like Sega of Japan was a terrorist who wanted to make Sega of America destroy itself. The 32X was the jab to set up the super secret knockout punch, the Saturn.

Upgrades are different to peripherals, of course, which can be very successful.
 
That doesn't make sense to me. The PC has been able to take the same code and run it at double/quadruple framerates no worries. Now admittedly that's on a very accommodating API, but if it's just a matter of taking a 30 fps game and rendering it at 60 fps on consoles, I can't see super refined timings being a problem. Maybe for the best of the best devs, but the usual 3rd party titles written to more realistic standards should happily run at twice the framerate given twice the clocks or twice the shader cores (and BW of course). I agree with everything else you were saying about RAM etc., but getting a faster framerate by just doubling up the hardware should work, unless console devs are working at a more sophisticated level than everything I've seen and heard suggests!

Some games just seem like they're porting code without much rewrite when it comes to running on PC CPUs. It's as if the code they're running is less dependent on per-core performance and more dependent on clock speed and core count. I've noticed this on my older Core 2 Duo powered Asus laptop. Its modest 2.26 GHz Core 2 Duo P8400 should be handily faster than the Xenon, but Call of Duty 4 has difficulties maintaining a constant 60 FPS and benefits noticeably even from a small overclock to around 2.6 GHz. Granted, the higher-speed dual-core Athlon X2s I've had (a 2.8 GHz Windsor and a 3.0 GHz Regor) had no trouble maintaining 60 FPS, though those were XP systems. Maybe it's just Windows Vista and 7 sucking up extra clock time, but honestly I don't think it is. CoD4 seems possibly to be built on a clock-speed premise, in reference to the relatively high clock speeds the Xenon and Cell run at.

I really wish my Sony Vaio had a proper GPU instead of a 310M; that way I could test this theory in a much more controlled manner, as it has a 2.16 GHz Core i3. Of course it schools any Core 2 Duo below 3.0 GHz with extreme prejudice.
 
Any upgrade path (other than storage) in consoles would probably wind up like upgradeable laptops: a few years down the road you get an upgrade option that costs more than just buying a new laptop. It just doesn't work out in the real world. You build in an extra cost up front for every box to support the upgrade, and then you get hit with another charge down the road. It increases the cost to developers as well. I don't think there's enough upside; people who want this dynamic can buy a PC.
 
So Apple released the Thunderbolt-equipped MacBook Air and the Thunderbolt Display in the last few days. It would be interesting to see what comes after Lion.

Apple has been pushing Grand Central Dispatch as a way to scale normal applications from small devices up to multicore professional desktops. You need parallel abstractions at both the software and hardware level to scale, and things like tiles, queues, zones, etc. are familiar developer concepts.
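As a tiny illustration (this assumes an Apple platform with libdispatch; I've used the plain-C function-pointer variant so it builds as ordinary C++): you describe the work and a queue, and GCD decides how many cores to fan it out over, phone or Mac Pro:

Code:
#include <dispatch/dispatch.h>
#include <cstdio>

// One work item; the context pointer carries its index.
static void work(void* ctx) {
    std::printf("task %ld done\n", (long)ctx);
}

int main() {
    // The app describes tasks and a queue; GCD maps them onto however many
    // cores the device actually has.
    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    for (long i = 0; i < 8; ++i)
        dispatch_group_async_f(group, q, (void*)i, work);
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);  // block until all 8 finish
    dispatch_release(group);
}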

As for the business issues, the key thing for laptop makers is that they probably earn more by selling a complete laptop as opposed to modules. For Sony, since they sell so many piecemeal devices, such a Lego-block approach may benefit them more by consolidating their base.

Anyway, it was one of Kutaragi's visions when he pondered Sony's future. Now it depends on Hirai, Stringer and others. It's also quite interesting to think about hardware partitioning for network platforms.
 
The idea that a console has to be identical in features and performance for its entire life doesn't apply anymore; iOS proves it. Every year a faster iPhone/Pod/Pad comes out, the games keep working just fine, and many get patched to take advantage of the new model.

If the first version of the xbox/ps4 comes out "gimpy", that doesn't mean it has to stay that way for 5+ years. Nothing is stopping them from scaling CUs and memory as falling costs begin to allow it.
 
There's a reason it has rarely happened for consoles, the Saturn and N64 aside. Why would you expect devs to code for constantly changing specs when the whole strength of a console is fixed specifications?

And also, why would they change the components to any dramatic effect and leave the customers who bought the original version out in the cold?

They spend a shitload of money on this stuff. This isn't a goddarned iPad; not everybody is Apple or a cellphone client who can just throw away models every year.
 
The idea that a console has to be identical in features and performance for its entire life doesn't apply anymore; iOS proves it. Every year a faster iPhone/Pod/Pad comes out, the games keep working just fine, and many get patched to take advantage of the new model.

If the first version of the xbox/ps4 comes out "gimpy", that doesn't mean it has to stay that way for 5+ years. Nothing is stopping them from scaling CUs and memory as falling costs begin to allow it.
It's not a back-compatibility issue, it's a forward-compatibility one. These are platforms that most people expect not to purchase often and which are typically sold at some loss early on. And the major programs for them are designed to push them to their limits.

Consumers will be ticked off when their console has to be constantly traded in or 32X'd, and third-party AAA devs may see the advanced models as a chicken-and-egg problem when contemplating whether THEY should support them or just target the present "everyone."
 
iPads and iPhones don't get thrown away; they are resold or handed down to family and friends. If the basic architecture is the same - Jaguar x86 and AMD GCN - why not release a premium model as a mid-cycle refresh? Most of these games will already have higher-fidelity assets anyway; devs can just crank up the settings like a PC version would. I don't see a problem with it, and I don't hear iOS users or devs complaining either.
 
Consoles are not iOS. There are rules for this market and strict boundaries. Why would devs go back and refresh games they already released for the weaker hardware, for a halfway refresh or whatever? That makes no sense if they have already spent their allotted budget making all the multiplatform versions of the game.

They don't have the time and effort to throw away remaking games every five seconds because the hardware manufacturer decided it was time to refresh their product. Devs have priorities too.
 
Consoles are not iOS. There are rules for this market and strict boundaries. Why would devs go back and refresh games they already released for the weaker hardware, for a halfway refresh or whatever? That makes no sense if they have already spent their allotted budget making all the multiplatform versions of the game.

They don't have the time and effort to throw away remaking games every five seconds because the hardware manufacturer decided it was time to refresh their product. Devs have priorities too.
Simple: you patch the game and then run a sale on the app store. "Now optimized for Xbox 3.5!" And free-to-play games, which we will see more and more of, are constantly being updated.

I'll stop now, I don't think we will see eye to eye before the mods get annoyed. Maybe in another thread.
 
iPads and iPhones don't get thrown away; they are resold or handed down to family and friends. If the basic architecture is the same - Jaguar x86 and AMD GCN - why not release a premium model as a mid-cycle refresh? Most of these games will already have higher-fidelity assets anyway; devs can just crank up the settings like a PC version would. I don't see a problem with it, and I don't hear iOS users or devs complaining either.
We're talking about dedicated game platforms. Games that target the low-end platform most likely aren't going to be all that much better on the high-end platform, and games that target the high-end platform are going to be Halo 4 split-screen on the low-end platform. Unless work is put in to mitigate those issues, which makes dev costs rise.

There's also the issue of console profits. The big console makers like the consoles themselves being profitable, and that will be harder to pull off when they're constantly upgrading the system and doing R&D upgrade work.
 
These are huge, slow-moving companies, and they need to be to put out a huge product like a console and have devs continuously support it for years at a time, with a monetary system and routine that took years and years and huge effort on all sides to build.

They can't suddenly downsize and act like a startup making an Android-based product like Ouya or something.
 
We're talking about dedicated game platforms. Games that target the low-end platform most likely aren't going to be all that much better on the high-end platform, and games that target the high-end platform are going to be Halo 4 split-screen on the low-end platform. Unless work is put in to mitigate those issues, which makes dev costs rise.

There's also the issue of console profits. The big console makers like the consoles themselves being profitable, and that will be harder to pull off when they're constantly upgrading the system and doing R&D upgrade work.
I'm not a dev, so correct me if I'm wrong, but I thought game assets are created in much higher detail than what ends up in the game. I don't see where the extra costs come in; you just enable more detail.

The difference between the two could be 720p and 1080p, or something in between for the former. Or the quality of the global illumination in UE4: if I remember correctly, the accuracy of their algorithm depends on the width of the cone, which takes flops. They do some tricks later to hide the artifacts, but with a more powerful model maybe they wouldn't have to.
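To sketch what I'm picturing (contrived numbers, nothing to do with UE4's actual code): the same game ships one set of assets and simply picks bigger knobs on a premium box:

Code:
#include <cstdio>

// Contrived quality-tier sketch, not any engine's real settings code.
struct QualitySettings {
    int renderWidth, renderHeight;
    float giConeAngleDeg;  // narrower cone = more accurate GI, more flops
    int shadowMapSize;
};

QualitySettings ForTier(bool premiumModel) {
    if (premiumModel) return {1920, 1080, 10.0f, 2048};  // made-up premium knobs
    return {1280, 720, 20.0f, 1024};                     // made-up base knobs
}

int main() {
    QualitySettings s = ForTier(true);
    std::printf("%dx%d, GI cone %.0f deg, shadow map %d\n",
                s.renderWidth, s.renderHeight, s.giConeAngleDeg, s.shadowMapSize);
}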

As for profits, the original model can keep selling at a lower price, making money; I just don't see why you can't sell a premium model 2-3 years later at the original launch price. Just like Apple adds more (or upgraded) Imagination GPU cores to its Ax processors and increases memory bandwidth when necessary, MS could do the same without any forward-compatibility issues. Shrink the design, add more CUs (and memory channels if necessary). The original model stays as the base spec. Have you seen how fast iOS games get patched to take advantage of the hardware when a new model comes out? I haven't seen anything where devs are complaining; if anything it gives them a chance at new publicity for an older game.
 
The upgradeable-hardware idea has been hashed out before; it's still a terrible idea. Consoles make most of their money from software, so streamlining development is important. A single target for the development of multiple games is important for publishers to be profitable.

As for iOS, please point me to the games on that platform with a $10 million budget.
 