Predict: The Next Generation Console Tech

I see them putting less effort into the CPU next gen. I bet it will just be a 2x18 Cell chip with some improvements to each of the cores, perhaps more local store for the SPEs.

That would put them at around 500M transistors for Cell 2 on a 32nm process maybe?

They would def go big on the gpu and I think they would switch to simply one large pool of ram.

Going with two pools will add to the cost

The original design for the Cell processor was 1+6. Considering the trouble developers had in porting their code, I can see them going with a more PPC-heavy architecture in the next generation to simplify coding, perhaps going as far as reducing the ratio of PPE to SPE cores from 1:8 back to the original design specification of 1:6, and then implementing a more balanced 2:12, 3:18 or 4:24 CPU. It makes sense considering how little use the final SPEs seem to be getting at this point in time, even within Sony's own 1st party.
 
How can that be good in a next-gen console?

Maybe for the Wii 2... but I'm sure there are plenty of better custom solutions out there.

That pretty much depends on their overall strategy: if they can "afford" two bigger discrete chips (CPU and GPU), and maybe an "APU", that will have some advantages from a tech POV.

Otherwise a Fusion-like part will have its advantages too, and it doesn't seem like it would be so underpowered (although a discrete CPU + discrete GPU would be better).

Anyway, in terms of custom solutions, what is better? Maybe in the CPU cores, but what about the rest?
 
They would def go big on the gpu and I think they would switch to simply one large pool of ram.

Going with two pools will add to the cost

The only problem with having one pool of RAM is that you may become bandwidth limited. Your CPU and GPU will fight like dogs for access, and from my limited knowledge one or the other is usually given priority, meaning one loses in the end, potentially causing performance issues. So to solve this you have to implement some sort of costly on/off-die EDRAM-based solution a la Xbox 360. Not saying that it is an inferior solution or anything, just that it means high cost eventually anyway.
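For a sense of scale, here is a hedged back-of-the-envelope sketch -- every figure below is an assumption for illustration, not a real console spec -- of how much of a single shared pool the GPU's framebuffer traffic alone can claim, which is exactly the pressure an on/off-die EDRAM buffer is meant to relieve:

```cpp
// Back-of-the-envelope framebuffer traffic for a 720p/60 game with 4x MSAA
// and alpha blending, to show how quickly a single shared memory pool gets
// contended. All numbers are assumed for illustration, not real hardware specs.
#include <cstdio>

int main() {
    const double pixels   = 1280.0 * 720.0; // 720p
    const double samples  = 4.0;            // 4x MSAA
    const double overdraw = 3.0;            // assumed average passes per pixel
    const double fps      = 60.0;

    // Per sample, per pass: colour write + blend read + depth read + depth write.
    const double bytes_per_sample_pass = 4.0 + 4.0 + 4.0 + 4.0;

    double fb_bytes_per_sec = pixels * samples * overdraw * bytes_per_sample_pass * fps;
    double fb_gb_per_sec    = fb_bytes_per_sec / 1e9;

    const double shared_pool_gb = 25.0; // hypothetical single-pool bandwidth (GB/s)
    std::printf("Framebuffer alone: %.1f GB/s, i.e. %.0f%% of a %.0f GB/s shared pool,\n"
                "before texturing, vertex fetch or any CPU traffic.\n",
                fb_gb_per_sec, 100.0 * fb_gb_per_sec / shared_pool_gb, shared_pool_gb);
    return 0;
}
```

With those assumed numbers the framebuffer eats roughly 40% of the pool on its own, which is why moving it into its own dedicated memory looks attractive despite the cost.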

In addition, Sony knows that any future Nvidia-based GPU will be designed out of the box to work with its own unique pool of graphics RAM. This also means less development cost when getting said GPU to work with PS4, which of course will also have its own pool of graphics RAM as well... Less messing around to get the graphics chip to play nicely with the rest of the system.
 
The only problem with having one pool of RAM is that you may become bandwidth limited.
For next-gen, if promises from the RAM companies are to be believed, that won't be an issue. We'll have many hundreds of GB/s. I don't know if contention has any impact, but it doesn't look like BW should be a problem to me.
 
http://www.pcworld.com/article/156820/stream_this_amd_teases_cloud_computing_game_revolution.html/

Stream This: AMD Teases Cloud Computing Game Revolution

Matt Peckham

Jan 9, 2009 3:19 pm

Remember last March when I mentioned video games you could stream from a server to a thin client? Here's some of what I wrote:

Imagine buying or subscribing to a game online that pipes nothing more than visual information to your local view screen, reconfigures the interface dynamics of the game to match the size and interactive capacity of said interface, then lets you engage at whatever level you like without worrying whether you have the latest graphics processor or sound card or CPU.

"This is likely to never happen," responded one user.

"There is no way that there will be a company that can provide bandwidth of that magnitude," warned another.

Ready for infernal regions to get frosty and battalions of pigs with wings?

Looks like it is going to happen after all, courtesy a little cloud supercomputing wizardry at AMD.

At the Consumer Electronics Show presently unfolding in Las Vegas, AMD divulged plans for a client-server solution that would deliver "graphically-intensive applications" to "virtually any type of mobile device with a web browser without making the device rapidly deplete battery life or struggle to process the content."

According to AMD:

The AMD Fusion Render Cloud will transform movie and gaming experiences through server-side rendering -- which stores visually rich content in a compute cloud, compresses it, and streams it in real-time over a wireless or broadband connection to a variety of devices such as smart phones, set-top boxes and ultra-thin notebooks.

By delivering remotely rendered content to devices that are unable to store and process HD content due to such constraints as device size, battery capacity, and processing power, HD cloud computing represents the capability to bring HD entertainment to mobile users virtually anywhere.

What's that mean to gamers like you and me?

Streaming video games would upend gaming as we know it. For starters, the technology would challenge the need for offline retail sales, eliminate lengthy software downloads, spiraling local storage requirements, messy DRM software, expensive computer components, and reduce PC hardware driver and code compatibility quirks.

It would theoretically decrease game bugs (see again: "reduce PC hardware driver and code compatibility quirks"), scupper the distinction between "PC" and "console" games entirely, and arguably relegate standalone consoles to dumb set top boxes.

Imagine walking between electronic displays in your house, delineated only by the peripherals (keyboards, mice, joysticks, motion-controllers) you've plugged into them, each one capable of running whatever game you've elected to stream through an inbuilt browser.

What's more, every game could be a demo, allowing players to try any game on a time-limited basis. No download queues and lengthy waits, no more sardonic grumbling on message boards about a developer's lack of interest in your wallet because they couldn't be bothered to chisel a try-before-you-buy hunk of code off their product.

But the single most important super-secret undesignated feature of technology like AMD's Fusion Render Cloud? It's pirate-proof.

Why? Because it eliminates the very thing bootleggers need to do their dirty work -- physical media -- and adds an online requirement in the bargain. At best, you'd have a nominal number of illicit accounts in circulation, but we've already seen how simple it is for companies like Blizzard to wave a digital wand and topple thousands of felonious players like tenpins.

That's not to say there wouldn't be serious potential downsides to game streaming we'd have to sort out.

You'd need uninterrupted online access, to begin with, or minimally some sort of fault-proof mechanism to reconstitute ditched games if a router inopportunely coughs. You'd also be handing scads of personal information over to publishers, whether you want to or not. Do you care if a company's silently accreting data about your play habits and -- with your consent, because you'll have to give it to play the game -- passing it along to third-party vendors and/or using it to pester you about their latest Next Best Thing?

Of course there's also the incredibly complex mod scene to consider, say whether that crowd could ever buy into something like a "development" server farm that allowed tinkering with publisher code, but on 100% publisher-regulated terms.

And don't forget pricing and ownership issues. Buy a game today, pay once and hypothetically still play it two decades from now, whether in emulation or natively as long as you protect the media. Buy a streaming game and are you buying it outright? Paying a subscription? And what happens if the company you bought it from goes under?

What's likely to happen: Assuming AMD's solution has teeth (though whether it's AMD or someone else, I believe we're talking "when," not "if"), expect to see experiments with mobile devices that tentatively spread into other mediums. Slowly. This is test-the-water time, with all sorts of unseen hurdles. Don't expect radical changes to the way you game today -- "game-streaming" won't be arriving en masse before Nintendo Wii2 or Sony's PS4 or Microsoft's Xbox Whatever get here.

I don't think this can happen in time for next-gen Xbox3|Wii2|PS4 but perhaps the next-next generation Xbox4|Wii3|PS5.
 
http://www.pcworld.com/article/156820/stream_this_amd_teases_cloud_computing_game_revolution.html/



I don't think this can happen in time for next-gen Xbox3|Wii2|PS4 but perhaps the next-next generation Xbox4|Wii3|PS5.

Maybe once the next gen arrives they won't need to upgrade to the generation after...

One problem with this cloud rendering is that I don't see it as being able to deal with any more than 33ms of latency (round trip), otherwise they couldn't deliver 30fps. How can this workflow work: input -> transmit -> process -> transmit -> display in less than 33ms?
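To make that concrete, here is a hypothetical stage-by-stage budget; every figure is an assumption chosen only to show how tight 33ms is. One caveat: because frames can be pipelined, a round trip longer than one frame time doesn't actually stop the service delivering 30fps -- it shows up as input lag instead -- but the responsiveness problem is the same.

```cpp
// Hypothetical round-trip budget for a streamed game displayed at 30fps.
// All stage costs below are assumptions, purely for illustration.
#include <cstdio>

int main() {
    const double frame_budget_ms = 1000.0 / 30.0;  // ~33.3 ms per frame

    const double input_to_server_ms  = 15.0;  // assumed upstream network latency
    const double simulate_render_ms  = 16.7;  // server simulates and renders one frame
    const double encode_ms           = 5.0;   // video compression
    const double server_to_client_ms = 15.0;  // assumed downstream network latency
    const double decode_display_ms   = 8.0;   // client decode plus display

    double total_ms = input_to_server_ms + simulate_render_ms + encode_ms +
                      server_to_client_ms + decode_display_ms;

    std::printf("Round trip %.1f ms vs. %.1f ms frame budget (input lag of ~%.1f frames)\n",
                total_ms, frame_budget_ms, total_ms / frame_budget_ms);
    return 0;
}
```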
 
Seems doomed to me without dedicated connections. Lag in game is bad enough when characters pop about, but if the actual screen update were to suffer that sort of lag and jump..! Plus the footage we'd get back would be compressed, I presume. No one will be sending 720p+ uncompressed, or even lossless, 30fps video very well. Case in point: I can't stream HD video files of gameplay off the internet in realtime, and that's with disgusting web compression! The end result would look terrible. Which I guess would save the developers - why fuss over texture detail if the end result will be all blotchy anyway :mrgreen:
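The raw arithmetic backs that up. A quick sketch, assuming 720p, 24-bit colour and 30fps:

```cpp
// Quick check of why uncompressed HD streaming is a non-starter on consumer
// connections. This is just the arithmetic, not a description of any real service.
#include <cstdio>

int main() {
    const double width = 1280.0, height = 720.0;  // 720p
    const double bytes_per_pixel = 3.0;           // 24-bit colour, no alpha
    const double fps = 30.0;

    double bytes_per_sec = width * height * bytes_per_pixel * fps;
    double mbit_per_sec  = bytes_per_sec * 8.0 / 1e6;

    std::printf("Uncompressed 720p30: %.0f Mbit/s downstream, sustained\n", mbit_per_sec);
    // Roughly 660 Mbit/s -- so any real service has to compress heavily,
    // which is where the image-quality worries above come from.
    return 0;
}
```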

IMO the concept should be limited to game processing on the server, graphics rendering on the client with an appropriate renderer.
 
@ Shifty, but even with processing on the server, wouldn't it still suffer the problem of latency?
 
Yes, but only as gameplay lag, just as is already being managed in online games. Assuming some improvements in network comms, it may be workable, and any lag will at least be less intrusive than pauses in the screen update. Whereas rendering the visuals on the server seems to me totally impractical.

It still may be totally impractical in the long run. I can't see a mobile connection ever being robust enough to play fast-paced streamed games reliably. Plus, calculating gameplay seems the least demanding part of games anyway! Is there really much point?
 
Increasingly, I think Sony's going to be very conservative next gen in terms of increased technical headroom.

I think the argument that developers find it increasingly difficult to make tangibly beneficial use of an enormous amount of extra processing power is starting to win out. Just reading about Killzone 2..we see a very, very sophisticated use of the hardware with appreciable results, but that level of hardware usage seems beyond the vast majority of devs, and even Guerrilla claim that they can do even more within PS3. Guerrilla may not even hit the ceiling with their second PS3 project!

When I look at PS3, and I consider that it seems only now in its third year to be hitting its technical stride - and thanks really to only a handful of developers - the argument that even the systems as they are now have enough power for devs to be going on for even longer into the future seems more and more convincing.

Some developers might run into the ceiling, but I'm not sure if Sony needs to raise the ceiling that high with their next system to give 90% of developers enough room to work within happily through to 2014/2015.

Looking at PS3, they spent so much money raising the bar as high as they could, but it seems few developers know how to really even begin to meet that bar in a tangibly beneficial way. Did they overshoot requirements for this gen (and I particularly mean on the CPU side), and if so, would they be inclined toward only shooting modestly beyond for PS4? I'm beginning to think so.

I think Sony may be seriously considering a very modest improvement to the amount of processing power available to devs, while investing more in improvements that are almost 'automatic' for developers and games - in things like IQ and so forth. Very strategic improvements, designed to be readily usable by all games without much effort from the developers themselves. With the bulk of their new platform investment next time around going into tools and software designed to balance developer and hardware capabilities (which are arguably totally out of whack right now?), and directly leveraging developer experience with PS3 toward a higher average standard for PS4.

With that in mind, would a 12-16 SPU Cell + 1-2GB of RAM + a healthily powerful GPU remain 'overkill' for 90%+ of developers through 2011-2015/2016?

edit - and I am very conscious of the fact that I would often scold people at the beginning of this generation for expressing this kind of thought :) And I do think the last transition was too early to think like this..there was a large enough improvement that was so clearly necessary that it was hard to say what was 'too much', when to 'say stop'. I would scold people for lack of ambition, but now with the hardware here and entrenched, it's clear to see many developers are (through no fault of their own perhaps) not very ambitious with their usage of more processing power..that they could often make do with much less, and they seem almost paralysed by the abundance of it (again, I mean particularly on the CPU side).
 
Yes, that really seems the balance. The sweet spot is as much useable power as the devs will make use of. There's no point providing more potential if it'll go wasted.
 
That is way too conservative. Secondly, I don't understand why we should have expected full utilization of these consoles at year 2 or 3 of their lives. That has never happened in any console generation so the expectation that it should happen now is something I don't understand.

Through 2011-2015, 1-2 GB of RAM would be pitiful IMO. RAM space will be crucial given the amount of data next-gen assets will drag around with them. I see an explosion coming in the amount of metadata describing assets.
 
I wonder if Sony is planning for the future already with some of its tech demonstrations. The use of networked PS3s for a 4k Gran Turismo 5 demo, for example. Quaz51 reckons that the stereoscopic 3D 1080p60 demos of WipEout HD and GT5 at CES were produced using the same technique and I reckon he's right.

How viable would this brute-force parallelisation be as the basis for the future platform? Literally 3-4 PS3s in a single box, each of them working on consecutive frames? I mean, conceivably, developers could be working on PS4 games *now*. Would I be right in assuming that this would be a woefully inefficient use of the silicon?
 
Not if the engines are well implemented. Cell is intrinsically scalable, an architecture designed exactly for this sort of parallelism. Unless far greater efficiencies are found to speed up the general operation of processors, the only real way forward is to throw more transistors at processing. So a Cell 2 with a better PPE, near enough the same SPUs, and lots of them, should be a reasonable use of silicon versus other systems out there at the time. As long as the code is efficiently written to use the architecture.
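For what it's worth, here is a purely hypothetical sketch of the 'each unit works on consecutive frames' scheme from the post above (the names and the four-node count are made up): frame n simply goes to unit n mod N, and the obvious cost is that what you see trails your input by up to N frame times.

```cpp
// Sketch of frame-interleaved ("alternate frame") parallelism: frame n goes to
// processor (n % N). Trivially scalable, but it adds up to N frames of latency
// before a frame reaches the display. Names here are illustrative only.
#include <cstdio>
#include <vector>

struct CellNode { int id; };

// Stand-in for kicking off simulation + rendering of one frame on one unit.
void render_frame_on(const CellNode& node, int frame) {
    std::printf("frame %d -> Cell %d\n", frame, node.id);
}

int main() {
    std::vector<CellNode> nodes = {{0}, {1}, {2}, {3}};  // e.g. four Cells / PS3s

    for (int frame = 0; frame < 8; ++frame) {
        // Round-robin assignment of consecutive frames.
        render_frame_on(nodes[frame % nodes.size()], frame);
    }
    return 0;
}
```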
 
My concern about all these conservative estimations is that if the programming model and how you must craft your content pipeline don't change, then how much processing power you get doesn't affect your cost of development at all.

If you still need to program SPUs, and your content pipeline still needs to be able to generate high-quality HD assets, then getting less processing power or RAM to dump data into isn't a help to your cause at all. The complexity of the task has not been lowered, yet the task must be accomplished with less -- which further complicates things.

Lesser hardware would help Sony's bottom line but not developers at all. What will help developers most are concrete expectations (for planning and early execution), little need to re-invent the wheel (so that the focus is on making games not learning how to again), and more to work with than they have now in "relative" terms (HW truly capable of handling what is being marketed - HD - and please don't invent some ULTRA HD crap to once again place the expectations beyond what the HW is capable of delivering by the "common" developer).
 
I don't agree at all; plenty of power is not overkill, the lack of proper tools to extract it is.
And I feel like devs are begging for more CPU power, but what they would really want is an insane/not achievable amount of serial performance.

Edit: I was answering Titanio; looks like he deleted his post.
 
I agree Scificube. My understanding of multi core programming is that once you have made the admittedly painful transition both physically (your tools, game engine etc) and mentally (a developer adjusting his thinking) scaling up from 4 - 8 cores to 16/24/32+ cores is not a major problem. It's just getting things started in the first place that seems to be the real headache. As time passes the tools and knowledge gained should get better and better, allowing developers to extract more from each core as well as scaling to an increased number of cores with little extra in development costs.
What developers appear to need now is time. And I think that is just what they are going to get... Neither Microsoft nor Sony appears to be in any mood or rush to release PS4 or XBox720 any time soon. And when they do, I think many will be surprised at just how closely related they are to the current generation in terms of basic hardware design, compatibility with the existing software tool chain, etc. Which leads nicely back to my original point about Sony going with and continuing to implement the 'platform' strategy.
 
I agree Scificube. My understanding of multi core programming is that once you have made the admittedly painful transition both physically (your tools, game engine etc) and mentally (a developer adjusting his thinking) scaling up from 4 - 8 cores to 16/24/32+ cores is not a major problem.
If the jump from single-core to multicore were a complete 'best practice' transition, that'd be true. However, I think a lot of current development isn't working on virtualised, scalable engines, but instead writing specifically to the available cores. Thus with 6 SPEs, I think a lot of the PS3 codebase is divvied up to say 'this SPU does this, that one does that', and adding more SPUs would gain nothing. We know people like Guerrilla are writing an engine that grabs from available resources, and if the same game were to suddenly find itself running on a PS3 with 16 SPEs it would, I guess, scale to use them. But I doubt the number of game engines out there that do this at all well is that high. I wouldn't be surprised if, going into many-core beyond our current multi-core model, there'll be just as many headaches. A lot depends on how much real paradigm-shift progress has been made this gen. In a few years' time, the state of things on PS3 should be a lot better.
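To make the distinction concrete, here is a minimal sketch of the scalable approach, using std::thread as a stand-in for SPU job lists (obviously not how a real Cell engine is written): work goes into a queue, and however many workers the hardware offers drain it, rather than each task being welded to a particular SPU.

```cpp
// Minimal job-queue sketch: the engine enqueues frame work and lets whatever
// number of workers exist drain the queue, instead of hard-wiring
// "this SPU does audio, that SPU does physics".
#include <algorithm>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::function<void()>> jobs;
    std::mutex jobs_mutex;

    // Fill the queue with this frame's work; the engine doesn't care who runs what.
    for (int i = 0; i < 32; ++i) {
        jobs.push([i] { std::printf("job %d done\n", i); });
    }

    // Scale to whatever the hardware offers: 6 cores today, 16 tomorrow.
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            for (;;) {
                std::function<void()> job;
                {
                    std::lock_guard<std::mutex> lock(jobs_mutex);
                    if (jobs.empty()) return;  // no more work for this worker
                    job = std::move(jobs.front());
                    jobs.pop();
                }
                job();  // run outside the lock
            }
        });
    }
    for (auto& t : pool) t.join();
    return 0;
}
```

A real engine would also need dependencies and priorities between jobs, but the scaling property is the same: more cores simply drain the queue faster.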
 
Regardless of the HW MS, Sony or Nintendo provides, scalable code is where everyone needs to be heading, if not there already. That is a separate discussion though. What I feel is inclusive to this discussion is tools.

Tools are console tech. -> Tools. Are. Console. Tech. <-

Good tech in tools will have just as much effect on common developers as the hardware itself. Good tools will help "everyone" by reducing complexity and/or recurring development costs.

Good tools are not an option - they are imperative.
 
That is way too conservative. Secondly, I don't understand why we should have expected full utilization of these consoles at year 2 or 3 of their lives. That has never happened in any console generation so the expectation that it should happen now is something I don't understand.


I'm not pointing at the state of the art today and saying 'that's it', in terms of development exploitation of the system..it'll obviously get better. But I kind of think it is slower on the CPU side this generation than last..and that ultimately, perhaps very few games will come close to using available CPU resources to the full extent, in a way that's actually appreciable to the game experience.

Even with PS2, mid-cycle, it was obvious what most developers might do with yet more processing power.

But now with PS3, I don't know if there's such an obvious roadmap? Maybe I'm wrong. But it seems like that.

It would be interesting to quiz developers on that. I'm sure Sony, MS et al are.

If you still need to program SPUs, and your content pipeline still needs to be able to generate high-quality HD assets, then getting less processing power or RAM to dump data into isn't a help to your cause at all. The complexity of the task has not been lowered, yet the task must be accomplished with less -- which further complicates things.

Are developers on average really making tangibly beneficial use of the SPUs, to a degree that really makes the processor sweat? Some might be, but I think they're probably in the minority..

I might be wrong, though, again, but it's just the impression I get right now..that there's something of a slowdown in how quickly developers are consuming processing resources.


Lesser hardware would help Sony's bottom line but not developers at all. What will help developers most are concrete expectations (for planning and early execution), little need to re-invent the wheel (so that the focus is on making games not learning how to again), and more to work with than they have now in "relative" terms (HW truly capable of handling what is being marketed - HD - and please don't invent some ULTRA HD crap to once again place the expectations beyond what the HW is capable of delivering by the "common" developer).

A mid or even low-range GPU in 2011/2012 would likely be very capable of doing 'nice' HD. I don't see them upping resolution..but things like 3D may become the next '1080p'. I don't think they'll hinge their 'novelty' on 3D though..unless they find a way to make it work as well on existing displays. I'm personally really excited by 3D, I wish I could see those CES demos for myself, but I have to be realistic about them making that a standard feature on games if it requires a display upgrade.

I also think on the GPU side, processing resources are becoming so abundant that developers don't really know how to make 'non-lazy' use of them. On the PC side, software seems to be lagging hardware more than ever. Consumers are left to up resolution, AA and AF to try and make use of their cards. How many developers know what they'd do with 1 Tflop of processing power on the GPU - what to do with it such that all the processing makes a tangible difference to a (720p/1080p) image onscreen (i.e. not doing things inefficiently just to consume resources)?

I'd love to hear developers' thoughts on this, love for someone to lay out a roadmap of a compelling usage of an amount of processing power that would be 10 or 20 or 30 times PS3 or 360, in the same way PS3 was 35x PS2 or whatever. But I feel like in some ways, particularly CPU-wise, PS3 has even overshot a typical developer's capabilities..if the next generation were to move the bar up by a similar degree, I think it'd overshoot to an even larger degree.

Again though, just the impression I get right now.. maybe in the next couple of years if we see many more developers using CPU well and hurting for CPU resources, my mind might change on that..

And quite aside from developer capability to consume these resources, there's the question of how much the consumer notices. I know we had these arguments at the beginning of this gen, and I still think they weren't too valid then..I do think Wii undershot in that regard..the consumer does see a difference there because of processing power vs the other systems..but in terms of what's simply on the screen, that's an argument that may become more convincing next generation. As a matter of strategy I think both MS and Sony have to be considering how they will make the experiences on their consoles universally different from the other systems, and I'm not sure how much that'll have to do with what's on your screen..
 