Feasibility of an upgradeable or forwards-compatible console *spawn*

It could be a problem of mismanagement or failed execution though.

I am questioning whether it's a niche concept. There are consistent rumors about iOS and Mac support for Thunderbolt across all SKUs. Recently, Intel declared that they want to deliver a 50Gbps interconnect within 5 years. IBM announced a memory breakthrough for small and large devices.

It may be possible to deliver half a console for entry-level applications, to share computing resources between current and next-gen consoles, or to share h/w resources between PCs, appliances and consoles, all the way up to a scaled-up homogeneous cluster for media-heavy apps like what Grall mentioned. The concept may be more appealing if all the parts are cheap, high-volume, off-the-shelf components though.



I don't think you can reach that kind of mass adoption with wired connections, though.
 
It all depends on the end-user benefits and cost. If the vendors can demonstrate a clear value proposition, it may be OK. From the first wave of implementations, we know they didn't and can't run away from quick, wired charging. So there have to be wires and h/w connectors somewhere.

You won't use the interconnect to hook up the controllers in general... if that's what you meant. But hooking up an HD 3D camera to a PSn via a high-speed link may be OK.

If I were Sony, I'd implement the first (or rather second, after the Vaio Z) wave this gen, and also do more aggressive R&D in P2P networking.
 
It's a much better idea than anything else proposed in this thread so far. :p And as for distributed computing, tell your opinions about that to AMD, for example, who have used distributed computing with great success as far as scalability is concerned in their server CPUs. ;)

Do you see a sudden demand for the types of applications that run optimally on these servers in the home? Or alternately, what applications do home users regularly use that would run more optimally on this setup?
 
In the future?

It may be useful to smooth the transition for consoles between generations.

The computing power and high-speed interconnect may be more suited for natural interface and AR games, basically games that require complex/huge input. The lag in these games would be shorter.

It'd be possible to design half a console too (e.g., separate the Blu-ray drive and HDD from the compute unit so that we can have a cheap, fully streaming solution, play PS* games on laptops, or take part of your console outside the house like the Vaio Z, etc.)
 
Do you see a sudden demand for the types of applications that run optimally on these servers in the home?
My point was obviously that there are already systems that handle distributed/scalable processing well (NUMA architecture, implemented cheaply in hardware with AMD's high-speed point-to-point datalink interfaces), not that home users would suddenly have a need to run server applications.

Please try to remain rational when discussing, alright.
 
My point was obviously that there are already systems that handle distributed/scalable processing well (NUMA architecture, implemented cheaply in hardware with AMD's high-speed point-to-point datalink interfaces), not that home users would suddenly have a need to run server applications.

Please try to remain rational when discussing, alright.

I think it's perfectly rational to expect you to make points that actually apply to the subject at hand.

I'm very well aware of AMD's history WRT NUMA and point-to-point processor interfaces (where Opteron was way ahead of Xeon for quite some time). MY point was that the thread is about consoles and you brought up servers. Unless you can provide examples of console or other consumer-use applications where a distributed-computing model is more efficient than a single device with the same (or even somewhat less) overall computing resources, or even than a basic client-server model, you really haven't refuted my original point about limited applications at all.
 
I think it's perfectly rational to expect you to make points that actually apply to the subject at hand.

I'm very well aware of AMD's history WRT NUMA and point-to-point processor interfaces (where Opteron was way ahead of Xeon for quite some time). MY point was that the thread is about consoles and you brought up servers. Unless you can provide examples of console or other consumer-use applications where a distributed-computing model is more efficient than a single device with the same (or even somewhat less) overall computing resources, or even than a basic client-server model, you really haven't refuted my original point about limited applications at all.

In general, a cluster approach won't be more efficient than a single device because of communication overhead. However, they answer different questions and are not mutually exclusive. ^_^

You can have both together in a consumer or enterprise setup -- as long as it's cheap and robust enough to implement. *If* indeed all Apple h/w will have Thunderbolt built-in, we may be able to see early implementations soon enough.
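To put a rough number on that overhead, here's a quick back-of-envelope sketch in C. All of the link figures (bandwidth, latency, payload per frame, sync points) are assumptions made up for illustration, loosely in early-Thunderbolt territory, not measurements of any real setup:

```c
/* Back-of-envelope sketch: how much of a 60 fps frame budget a
 * console-to-console link could eat.  The link figures below are
 * assumptions for illustration only, not measured numbers. */
#include <stdio.h>

int main(void)
{
    const double frame_ms    = 1000.0 / 60.0;   /* ~16.7 ms per frame         */
    const double link_gbps   = 10.0;            /* assumed usable bandwidth   */
    const double round_trips = 4.0;             /* assumed sync points/frame  */
    const double latency_us  = 10.0;            /* assumed per-hop latency    */
    const double payload_mb  = 8.0;             /* assumed data shipped/frame */

    double transfer_ms = payload_mb * 8.0 / link_gbps;       /* MB -> Mbit; Mbit/Gbps = ms */
    double latency_ms  = round_trips * 2.0 * latency_us / 1000.0;
    double overhead_ms = transfer_ms + latency_ms;

    printf("transfer: %.2f ms, latency: %.3f ms, total: %.2f ms (%.0f%% of frame)\n",
           transfer_ms, latency_ms, overhead_ms, 100.0 * overhead_ms / frame_ms);
    return 0;
}
```

Even with fairly generous assumptions, shipping a few MB per frame between boxes eats a meaningful chunk of the 16.7 ms frame budget, which is the efficiency hit I mean.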
 
It'd be possible to design half a console too (e.g., separate the Blu-ray drive and HDD from the compute unit so that we can have a cheap, fully streaming solution, play PS* games on laptops, or take part of your console outside the house like the Vaio Z, etc.)

This doesn't work for retailers unless you provide higher margins than are currently typical. The "compute box" wouldn't otherwise make them any money since there are no follow-up software sales. Further, I'd guess that retailers have multi-SKU fatigue from this gen as it is. This would be even worse. And if it is sold with higher margins, either the manufacturer makes less money or the consumer pays more. Executed poorly enough, it may even be both.
 
Hmm... the retailers won't factor too much in this equation. They will have to face digital distribution anyway. If implemented correctly, there should not be too many SKUs. In fact, there may be more use for each unit.

If you look at Vaio Z, the dock is a built-to-order option. Something like this can be done in the early stage.
 
In general, a cluster approach won't be more efficient than a single device because of communication overhead. However, they answer different questions. ^_^

You can have both together in a consumer or enterprise setup -- as long as it's cheap and robust enough to implement. *If* indeed all Apple h/w will have Thunderbolt built-in, we may be able to see early implementations soon enough.

I believe no one looking to buy a console is asking the questions a distributed design is the best answer for.

The console market is MUCH more sensitive to initial cost than the traditional PC market, and exponentially more sensitive than the server market, which conversely is much more sensitive to the continuing cost of operation over the product's operating life and to scalability, neither of which I believe is a consideration in the console market at all. Those latter two considerations are addressed quite well by a distributed design, hence why it is a good fit in that market.
 
Hmm... the retailers won't factor too much in this equation. They will have to face digital distribution anyway. If implemented correctly, there should not be too many SKUs. In fact, there may be more use for each unit.

If you look at Vaio Z, the dock is a built-to-order option. Something like this can be done in the early stage.

I am extremely dubious that many will be looking to buy a console this way.
 
Most consumers won't ask technical questions. They will look at the content, entertainment value, price, utility, etc. to decide.

Who asked to swing an accelerometer in front of the TV? Yet they bought the Wii.

The model will make the most sense using cheap off-the-shelf parts.


I am extremely dubious that many will be looking to buy a console this way.

BTO? Naturally not. But it's a good way to test the waters and address niches.
 
Muahaha... I'm actually daydreaming of a way to release half a PS4 console using our existing Blu-ray drive via the gigabit link. The existing HDD can be moved to the new drive bay if necessary. This half a console can sport a new CPU (that is not Cell) if Sony wants. ^_^
 
Then why do PC developers support them when not every PC gamer will have the best hardware on the planet? Why do PC games only have minimum system requirements and not maximum?

Think Outside The Box.....

Your head is still wrapped around the Fragmentation Myth that doesn't apply to this type of model.

Look at games like BF3 and Rage...those are good examples of games that would benefit from this console business model.

Also allow developers to put a special stamp on their games just like they do today, i.e., instead of stuff like 1080p and DD 5.1ch they could add "Enhanced Performance mode when used with X option".

PC games are wrapped around an API paradigm as much as a hardware one. Graphics hardware aims for APIs as much as for performance. Thankfully that means a vast amount of hardware can support specific software thanks to API compatibility, but it doesn't mean it will run the software at a pace necessary to facilitate smooth gameplay or responsiveness.

Also, PC games are user-configurable to a certain degree. While a game might not run well at 1080p with 4x AA on a specific video card, it can be reduced to 720p with 2x AA or none at all, and then get good, playable performance. Consoles don't have this issue; everyone gets guaranteed levels of performance, which can be taken for better or for worse, depending on your views.

At least Sandy Bridge (then hopefully Ivy Bridge) and especially Llano are pushing the IGP boundary, but adoption will take time, and even then most people with dedicated graphics cards probably don't have 8600GT-level performance, which doesn't put a computer on parity with the 360 or PS3 when you consider console optimizations and API abstraction-layer losses. Sandy Bridge is close to the 8600GT threshold, though.
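A toy sketch in C of the kind of user-driven scaling I mean: the same game just picks a lighter preset on weaker hardware instead of targeting one fixed spec. The tiers, resolutions and "detected" hardware here are made up purely for illustration, not from any real game:

```c
/* Sketch of user-configurable quality scaling: one game, several
 * presets, chosen by a (here hard-coded) detected GPU tier.
 * All names and numbers are illustrative placeholders. */
#include <stdio.h>

struct preset { const char *name; int width, height, msaa; };

int main(void)
{
    const struct preset presets[] = {
        { "High",   1920, 1080, 4 },   /* fast discrete GPU          */
        { "Medium", 1280,  720, 2 },   /* mid-range / older discrete */
        { "Low",    1280,  720, 0 },   /* IGP-class hardware         */
    };

    int gpu_tier = 1;  /* assume a mid-range card was detected */
    const struct preset *p = &presets[gpu_tier];

    printf("Running at %dx%d with %dx MSAA (%s preset)\n",
           p->width, p->height, p->msaa, p->name);
    return 0;
}
```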
 
Maybe because no one with at least half a brain would waste time and resources on things that most likely won't pay off? How many old games are there for PC that are half-decently "forwards compatible"? Sure, you can run them at a ludicrous resolution with the highest AA, but the underlying art and shaders are stuck on whatever they were at release.
Crysis.

We're seeing something along the lines of "forward compatibility" on iOS devices. Upgrading from a 3GS or iPad to an iPhone 4 or iPad 2 respectively will net better IQ in stuff like RAGE and Infinity Blade.

Had the DVD drive not been the weak link against piracy, I'd expect the Wii U to keep the same drive as in the Wii and GC. The Wii U could have been marketed as an expansion to the Wii, with the USB ports providing enough bandwidth to simply use the Wii's disc drive, BT radios and GC ports, while housing an all new chipset, wireless G and WiDi/New Controller I/O radios. Cover the whole back panel of the device to re-route power to a new input, and video to a new HDMI output, and you have a system that maintains BC with the Wii and GC.
 
Has anyone ever posted anything on online passes? With Ubisoft going this route, I thought I'd see at least a thread on it or used-games sales. I just wonder how many here talk about this particular subject, and the future of gaming as a business.
 
Has anyone ever posted anything on online passes? With Ubisoft going this route, I thought I'd see at least a thread on it or used-games sales. I just wonder how many here talk about this particular subject, and the future of gaming as a business.
It's been discussed. ;)
 
Crysis.

We're seeing something along the lines of "forward compatibility" on iOS devices. Upgrading from a 3GS or iPad to an iPhone 4 or iPad 2 respectively will net better IQ in stuff like RAGE and Infinity Blade.

Had the DVD drive not been the weak link against piracy, I'd expect the Wii U to keep the same drive as in the Wii and GC. The Wii U could have been marketed as an expansion to the Wii, with the USB ports providing enough bandwidth to simply use the Wii's disc drive, BT radios and GC ports, while housing an all new chipset, wireless G and WiDi/New Controller I/O radios. Cover the whole back panel of the device to re-route power to a new input, and video to a new HDMI output, and you have a system that maintains BC with the Wii and GC.

Yeah, reusing existing resources would be great. I have already paid for a Blu-ray drive and HDD that are still functioning. It would be great if they could put lotsa memory in the compute unit and even reuse the existing CPU and GPU where appropriate.
 
I don't see how anyone would benefit from such a system.

Developers and publishers would only suffer an increased budget and longer schedules for their titles.
Higher-resolution assets take more time; not everyone paints textures at twice the resolution, and even those who do don't paint detail that'd actually hold up at that resolution. Besides, bumping 1K textures to 2K would quadruple the memory requirements, not just double them (see the quick arithmetic below).
Significantly more QA would have to be done because the entire test matrix is expanded. And they'd need to basically build and maintain two versions of their game, as moving to 60fps isn't just about rendering more frames; game logic, physics, AI, I/O, networking and everything else would have to be modified as well. But even a simple resolution increase would mean lots of extra work.
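Quick arithmetic on the texture point, assuming plain uncompressed RGBA8 for simplicity (block-compressed formats shrink both numbers, but the 4x ratio stays the same):

```c
/* Quick check of the "2K textures quadruple memory" point.
 * Uncompressed RGBA8 is assumed purely for simplicity. */
#include <stdio.h>

int main(void)
{
    const unsigned bpp = 4;                              /* bytes per RGBA8 texel */
    unsigned long long mem_1k = 1024ULL * 1024 * bpp;    /* 1K x 1K texture       */
    unsigned long long mem_2k = 2048ULL * 2048 * bpp;    /* 2K x 2K texture       */

    /* Prints: 1K: 4 MiB, 2K: 16 MiB, ratio: 4x */
    printf("1K: %llu MiB, 2K: %llu MiB, ratio: %llux\n",
           mem_1k >> 20, mem_2k >> 20, mem_2k / mem_1k);
    return 0;
}
```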

All these would cost a lot of money, but they wouldn't really sell that much more of the same game and it wouldn't really provide a competitive edge either. Most casual console gamers I know still think that COD Black Ops is actually rendering at 1080p...


Console vendors wouldn't really benefit either. You can't count on the majority of your user base to upgrade, as most people want cheap consoles, so the economics of the expansion set would not allow for a good price/performance ratio. Either it'd have to be hideously expensive (and also increase the cost of the base system even if it never gets upgraded) or it'd not have enough computing power to make a difference. It'd also take away engineering resources from more important projects, make the replacement generation of hw somewhat harder to differentiate, complicate hw revisions, and mess with the initial design in terms of power consumption and cooling.
And most importantly, they wouldn't make any more money with such an upgrade either. If the kit is sold at a profit, it'd be too expensive; if it were a loss leader, it'd have to move significantly more games, which it couldn't on its own, and it'd also cost a lot of R&D money.


And finally, how would the average customer benefit from this? No one likes to be forced to upgrade, and as I've said, most people wouldn't even notice small-scale differences.
Sure, maybe 5-10% of the PS3/X360 market would even go out and buy this thing, but they still wouldn't get enough games that'd be significantly upgraded, for all the reasons mentioned above.


Stuff like Move/Kinect or online features are far, far better investments for everyone in every possible way, whereas a hw upgrade doesn't make any sense at all...
Oh, and connecting multiple consoles is silly as well; it has the same drawbacks on the dev/publisher/manufacturer side. Develop two different versions, market it somehow, modify the base design just for at most 5-10% of your user base, and so on...
 
And they'd need to basically build and maintain two versions of their game, as moving to 60fps isn't just about rendering more frames; game logic, physics, AI, I/O, networking and everything else would have to be modified as well.
They shouldn't. Time-critical functions like physics and I/O are typically decoupled from the framerate already. Less demanding engine functions like global AI can be kept at 30 fps.
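A minimal sketch of that decoupling, in C: the simulation ticks at a fixed rate and AI at a coarser fixed rate, regardless of what the render framerate does. The rates, step sizes and simulated frame times are illustrative placeholders, not any particular engine's numbers:

```c
/* Minimal fixed-timestep sketch: physics steps at a fixed 120 Hz and
 * AI at 30 Hz no matter how fast or unevenly frames come in.
 * Everything here is an illustrative toy (simulated clock, counters),
 * not any particular engine's code. */
#include <stdio.h>

int main(void)
{
    const double physics_dt = 1.0 / 120.0;  /* fixed simulation step      */
    const double ai_dt      = 1.0 / 30.0;   /* coarser fixed step for AI  */
    double physics_acc = 0.0, ai_acc = 0.0;
    int physics_steps = 0, ai_steps = 0, frames = 0;

    /* Simulate one second of uneven frame times (~40 fps average). */
    for (double t = 0.0; t < 1.0; ) {
        double frame_time = (frames % 2) ? 1.0 / 30.0 : 1.0 / 60.0;
        t += frame_time;
        frames++;

        physics_acc += frame_time;
        ai_acc      += frame_time;

        /* Catch up in fixed steps; the render rate never changes these. */
        while (physics_acc >= physics_dt) { physics_steps++; physics_acc -= physics_dt; }
        while (ai_acc      >= ai_dt)      { ai_steps++;      ai_acc      -= ai_dt; }

        /* render_frame() would go here, at whatever rate the GPU allows. */
    }

    printf("%d frames rendered, %d physics steps, %d AI steps\n",
           frames, physics_steps, ai_steps);
    return 0;
}
```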
 