Predict: The Next Generation Console Tech

Developers and Sony survived the first two or three PlayStations, which back then had SDKs and tools that could be used to torture programmers. The only reason you see this in the spotlight so much this time around is because Microsoft is leaps and bounds beyond that "antique" stage of software and documentation. Also look at what Itagaki has recently said: it was a lot harder to make games for the old Nintendo systems than it is for the PS3, for example.

Things were always this FUBAR, it's just that Microsoft made a difference. In any case, I doubt you will see Sony come out and admit a bad decision, at least officially in a business sort of way.

Are you sure about the PS1?

If I recall correctly, the PS1 had better tools than the Saturn, and I think better than the N64 too.

The PS2, though, was indeed a "disaster".
 
8-16 SP elements. Less than that and you end up with high control overhead per flop.

I don't disagree with that, but it's a balancing act.
If the cores are higher throughput, you need to support them with higher memory bandwidth.
Disregarding bandwidth, the value of trading 1 or 2 cores for higher throughput on the remaining 6 cores (in the case of Cell) is also entirely dependent on the software that will be running on it. I can only really speak of my experience from games, and in this space there's definitely a good bunch of scalar code that needs to run on the SPUs to extract enough performance out of Cell.
On the practical side, another issue is that writing SoA-style SIMD code isn't (or wasn't until recently) a common skill in the average game programmer's repertoire. Instead, having a 4-wide SIMD architecture has made it easier to port existing vector math libraries to the architecture. While it's not the optimal way of exploiting SIMD, it has been more familiar to developers and has eased the already difficult transition.
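
To make the distinction concrete, here's a rough sketch in plain C++ (illustrative names, not anyone's actual library) of the AoS layout most ported vector math libraries use versus the SoA layout that wide SIMD units prefer:

Code:
#include <cstddef>

// AoS: one struct per vector, the layout most ported math libraries use.
struct Vec3 { float x, y, z; };

void scale_aos(Vec3* v, std::size_t n, float s) {
    for (std::size_t i = 0; i < n; ++i) {       // each iteration touches x, y, z of one vector
        v[i].x *= s; v[i].y *= s; v[i].z *= s;
    }
}

// SoA: one array per component, so a 4-wide (or wider) SIMD unit can
// process four x's (or y's, or z's) with a single vector multiply.
struct Vec3SoA { float* x; float* y; float* z; };

void scale_soa(Vec3SoA v, std::size_t n, float s) {
    for (std::size_t i = 0; i < n; ++i) {       // stride-1 loads, trivially vectorizable
        v.x[i] *= s; v.y[i] *= s; v.z[i] *= s;
    }
}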

Looking forward to the next gen, it might make sense to go with wider vector units as the ratio of vectorizable to scalar code changes. For this to work on a Cell v2, I think we need to extend the ISA with better scatter/gather support, and bandwidth needs to meet the throughput growth. Intel have stated that going beyond 16-wide SIMD started to yield diminishing returns (presumably measured on their Direct3D pipeline), so I think something like 8-wide sounds more reasonable for a next-gen general purpose CPU.
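
For anyone wondering what gather buys you: without it, a wide SIMD load from non-contiguous addresses degenerates into one scalar load per lane, roughly like this (hypothetical sketch):

Code:
// Emulating an 8-wide gather with scalar loads; a native gather
// instruction would issue this as a single vector operation.
void gather8(const float* base, const int idx[8], float out[8]) {
    for (int lane = 0; lane < 8; ++lane)
        out[lane] = base[idx[lane]];
}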
 
NVIDIA's approach with G8x showed that we can do a lot of cool stuff by letting the compiler (the shader or CUDA compiler in this case) handle the SoA-like processing. It's obviously not a coincidence that LRB uses the very same approach to shade vertices and pixels.

Average game programmers don't even know what SoA or AoS mean, and I don't see hordes of programmers suddenly getting familiar with these concepts anytime soon.

It has to be stressed that how successful these new many-core architectures are going to be will also be determined by how easy (or complicated) it is to write and debug code for them. Whoever has the better tools, libraries, IDEs and support might very well 'win' over faster hardware.
 
Sony has the largest number of 1st party studios. These teams are building up knowledge of CELL. To throw all of that away for the PS4 would be borderline insane from a financial perspective. While it's nice to have 3rd parties sell software, it is way more beneficial for Sony's bottom line to have their 1st parties dominate the sales charts.

When the PS4 launches with CELLv2, all the 1st party studios will be building upon their experience. It will give them an edge over 3rd parties. It is the nature of the beast. As 3rd parties struggle with CELLv1, that can be damage-controlled so long as Sony's 1st parties deliver good titles. The same will apply with CELLv2.

For evidence of how strong 3rd party support doesn't translate into dominance and profit, just look at the XB360. RRoD certainly hasn't helped, but where is the domination?
 
NVIDIA's approach with G8x showed that we can do a lot of cool stuff by letting the compiler (the shader or CUDA compiler in this case) handle the SoA-like processing. It's obviously not a coincidence that LRB uses the very same approach to shade vertices and pixels.

Average game programmers don't even know what SoA or AoS mean, and I don't see hordes of programmers suddenly getting familiar with these concepts anytime soon.

Exactly, most have no clue how to vectorize code, or hide latency, or control caches either. So my guess is any Larrabee magic will be lost on most. IMO, Larrabee really needs something on the order of a GPU/shader programming model for GP and CPU code for it to really take off as a CPU+GPU console package.

Interestingly, if Larrabee lacked general vector scatter/gather it would also easily be a dead-end architecture with such wide SIMD units, regardless of its programmer-friendly cache coherency (the dedicated texture vector gather speaks volumes to this). Also, cache coherency and fine-grained locking come at a bandwidth+latency cost and won't scale. Even with Larrabee one still needs to solve the same core problems as one does with either a LS arch or a GPU arch.

Somehow I still see a PS3 core developer advantage regardless of PS4 architecture, just because many of those developers have been forced to learn how to solve problems in a more parallel, future-friendly way, while 360 programmers have lazily gotten away again with programming practices which won't fly on future hardware (including Larrabee). Ironically, the SPU model applied to Larrabee (program-managed, locked local L2s) might end up being the ideal programming model for the platform.

I could really warm up to the idea of a Cell PS4 + NV GPU if Sony could scale up SPU performance/bandwidth and extend the SPUs to support vector gather and scatter from the LS. Given that DX11 compute is NOT designed to generate GPU command buffers, DX11 + bad serial x86 code is bound to be the typical Larrabee 720 programming result. Whereas SPU + gather/scatter still seems to me a great way to build command buffers... and this is something I had not considered in my previous post.
 
Exactly, most have no clue how to vectorize code, or hide latency, or control caches either.

I got so happy reading that; it just means it's not just me who is totally clueless about it. Although in my defense, I don't work as a programmer, and any programming I do these days is web stuff with PHP or simple applications in Python.
 
Exactly, most have no clue how to vectorize code, or hide latency, or control caches either. So my guess is any Larrabee magic will be lost on most. IMO, Larrabee really needs something on the order of a GPU/shader programming model for GP and CPU code for it to really take off as a CPU+GPU console package.

I've been throwing that around in my head for a while: currently, writing high-performance SoA code is a bit of a pain. Basically, on Cell, you need to come up with a data fetch/store loop and some operation to perform on a data block. Now, because you need to fetch ahead and because of alignment constraints, you need to add a prologue that usually takes more code than the actual loop. Same for the epilogue. Most of this is straight copy&paste with slightly different DMAs. Very redundant, but you can't just add some ifs into your loop for performance reasons.
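
The shape of it is roughly this (a hedged sketch; dma_get/dma_put/dma_wait are hypothetical stand-ins for the real DMA intrinsics, stubbed with memcpy so the skeleton compiles):

Code:
#include <cstring>

// Hypothetical stand-ins for the asynchronous DMA commands; on real
// hardware these would queue transfers on a tag and complete later.
static void dma_get(void* ls, const void* ea, unsigned size, int) { std::memcpy(ls, ea, size); }
static void dma_put(const void* ls, void* ea, unsigned size, int) { std::memcpy(ea, ls, size); }
static void dma_wait(int) {}   // block until all transfers on this tag finish

enum { BLOCK = 4096 };
alignas(16) static char buf[2][BLOCK];

void process_blocks(char* src, char* dst, unsigned nblocks) {
    dma_get(buf[0], src, BLOCK, 0);                      // prologue: prime the first buffer
    for (unsigned i = 0; i < nblocks; ++i) {
        int cur = i & 1, nxt = cur ^ 1;
        if (i + 1 < nblocks) {
            dma_wait(nxt);                               // previous store from buf[nxt] must finish
            dma_get(buf[nxt], src + (i + 1) * BLOCK, BLOCK, nxt);  // fetch ahead
        }
        dma_wait(cur);                                   // wait for this block's load
        // ... transform buf[cur] in place with SIMD ...
        dma_put(buf[cur], dst + i * BLOCK, BLOCK, cur);  // store the result
    }
    dma_wait(0); dma_wait(1);                            // epilogue: drain outstanding stores
}

And that's before handling inputs that aren't a multiple of the block size, or unaligned heads and tails, which is where most of the copy&paste lives.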

Then you write the actual operation. Say you have a pretty simple piece of vector math (3-component vectors, for instance), so you load the x, y, z components of four vectors into three vectors. This allows you to deal with 3 cycles of latency in straight vector code. Not good enough, you need at least 7. So you take 16 vectors at once, which gives you some more headroom, but forces you to write down every operation 12 times (4 source vectors * 3 components). Again, stupid c&p stuff. If it wasn't for Visual Studio's ALT-select, this would be the moment you'd look for another job.
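
In other words, the inner operation ends up shaped something like this (illustrative only, using the GCC/Clang vector extension in place of real SPU intrinsics):

Code:
typedef float vec4 __attribute__((vector_size(16)));  // a 4-wide float, standing in for vec_float4

// 16 source vectors in SoA form = 4 vec4 registers per component; the same
// multiply is written out 12 times so independent ops fill the latency window.
void scale16(vec4* x, vec4* y, vec4* z, vec4 s) {
    x[0] *= s; x[1] *= s; x[2] *= s; x[3] *= s;
    y[0] *= s; y[1] *= s; y[2] *= s; y[3] *= s;
    z[0] *= s; z[1] *= s; z[2] *= s; z[3] *= s;
}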

So what if I have some compression in there, mapping several source vectors into single destination vectors? Easy: more copy&paste.

Loops? Well, 17 cycles latency for a branch hint, so you need to unroll. Which causes.... more copy & paste.

Seriously, I have a nice little piece of SPU code, hellishly efficient, 95% load on the even pipe (the missing 5% are register dependencies). The main loop does a very simple job. Stuff I can write in 5 lines of shader code turned into 1003 lines of SPU intrinsics. Sure, it processes 64 elements at once, but why do I have to do all of this manually? We should have a way to tell the compiler to do these kinds of things. Manual loop unrolling is really, really stupid. All it gives you are copy&paste errors and totally unreadable code*.

In general, I'd like to see the ability to direct the optimization phase. For example if I have a branch inside a loop-nest, I'd like to be able to tell the compiler to move that up the hierarchy, replicating the inner loops for me (there is a name for this...).
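
The transform being described is usually called loop unswitching; done by hand it looks roughly like this (illustrative example):

Code:
// Before: the loop-invariant branch is re-tested every iteration.
void blend(float* dst, const float* src, int n, bool additive) {
    for (int i = 0; i < n; ++i) {
        if (additive) dst[i] += src[i];
        else          dst[i]  = src[i];
    }
}

// After unswitching: the branch moves out and the loop body is replicated.
void blend_unswitched(float* dst, const float* src, int n, bool additive) {
    if (additive) { for (int i = 0; i < n; ++i) dst[i] += src[i]; }
    else          { for (int i = 0; i < n; ++i) dst[i]  = src[i]; }
}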

But I'll stop ranting and get back on topic: if any of these architectures are to be used by the regular content programmer, we need to find more elegant ways to write efficient code for them. One of the great advantages of Cell is that everything on the SPUs is so deterministic, so you know all of your latencies. But if the compiler were able to do the unrolling, the transposition and all that, you wouldn't have to know these things. You'd just have to give it a parallelizable loop, like in OpenMP to be honest, and it could go as wide as it wants.


(* OK, no it doesn't. A bit of naming convention can go a long way. Still, if you have to scroll down pages upon pages of similar looking stuff, try to find the moment where I switched an x and a y.)
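
Something like the OpenMP-style annotation mentioned above, where the programmer only states that the loop is parallelizable and the compiler/runtime pick the width (minimal illustration):

Code:
void transform(float* out, const float* in, int n) {
    #pragma omp parallel for            // the tools decide how wide to go
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * 2.0f + 1.0f;   // only the data-parallel loop is stated
}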
 
While we still don't know what the future could be made of, it sounds like MS may have made its choice:
http://www.eurogamer.net/article.php?article_id=234458
Reading this article, I can't help but think that by now MS already knows where they are heading.
If Intel won the contract we should know soon.
Otherwise it will be ATI/IBM.

I believe this puts the next Xbox in 2011 or 2012. Recall that Halo 3 (and almost any big-budget current-gen game started from scratch, AFAIK) was worked on for some 4+ years. And that time is certainly not going to go DOWN next time (if anything, up).

Depending on whether the "next Halo" team has already been working since 2007, or you count 2008 as the start year, that's 2011 or 2012... which is fine by me either way. Even if it's 2011, that's two years longer than the Xbox lasted, and I definitely think cycles need to lengthen a bit this time around.

Definitely the first concrete indication of next gen we have.
 
It would be very smart for Microsoft to know the exact specification so far in advance, as well as to get very early exposure to the architecture. Hell, imagine if Microsoft bought some (a lot of) Intel stock, and then they went together and built an entire OS around this Larrabee thing for computer-based solutions. Their head start over AMD (not to mention the likely better performance vs. cost) could result in very high market share in the PC CPU market.

Their dev tools will be nothing short of godlike! (They are already amazing compared to any other console right now!)
 
I can't get into specifics, because I'm not that technical. The difference I see between now and then is full native 1080p support in every game, with graphics capable of being on par with Crysis.
 
I can't get into specifics, because I'm not that technical. The difference I see between now and then is full native 1080p support in every game, with graphics capable of being on par with Crysis.

That's a rather low estimate. Crysis in 3 years will be old, outdated and ugly.
 
The big revolution has to be on the software side here I think, regardless of whether it's going to be Larrabee or Cell+/Cell 2.0 whatever.

And that's the tricky part right there. But Nvidia and Cell/IBM/Sony working together may well be able to come up with something that will rival Intel's Larrabee efforts.

I have no idea what Microsoft/AMD will do, if anything.

It's going to be very exciting times again, that's for sure. But because the changes are potentially so radical this time around, I think the current generation of consoles may be around longer, depending of course on what the console makers choose to do. It could be a pretty good idea to do something 'light' in between, something experimental like Nintendo did, although it won't be easy.

It's going to be more about software than ever though, that's one thing that I'm 100% sure about.
 
It's going to be more about software than ever though, that's one thing that I'm 100% sure about.

I agree, but that was this generation's mantra coming from MS and, in a twisted way, Nintendo.

I sometimes ponder the possibility of a console maker pushing out middleware, renderers, procedural generation tools, even full engines, "packaged" with the devkits. Not just demos, but full licenses where they say "we provide the tools, if you wish, now go make the games!" Considering the royalty fees they charge, this could be bundled as part of the "platform experience" and a way to entice development, especially from indie devs. This may not make certain middleware devs happy, but look at Havok and AGEIA, who are already hardware-aligned, so any HW design choices already bias them. Software, as a complete package from developers to user experiences, will be much more important than how many extra flops you can juice from a processor, that is for sure.
 
I think the current generation of consoles may be around longer, depending of course on what the console makers choose to do.

Activision's CEO Robert Kotick predicts a long life cycle for the current generation. link.
"The better news for us right now is it's going to happen a lot longer from now than we've seen with prior generations, because the power and the capability of the hardware we have today is so strong, and the differentiation between the devices is so great that you're likely to see this cycle last a lot longer than we've seen in the past," continued Kotick.

I agree with you that it will be more about software than ever. There is still plenty of creative software to be made that is not bottlenecked by the current console hardware, and then you can just add some new peripherals and you will have even more room to grow the business.
 
Activision's CEO Robert Kotick predicts a long life cycle for the current generation. link.

Define "long." Using US dates:

Xbox to 360 = 4 years (2001, 2005)
PS2 to PS3 = 6 years (2000, 2006)
GCN to Wii = 5 years (2001, 2006)

PS1 to PS2 = 5 years (1995, 2000)
N64 to GCN = 5 years (1996, 2001)
Saturn to Dreamcast = 4 years (1995, 1999)

SNES to N64 = 5 years (1991, 1996)
Genesis to Saturn = 6 years (1989, 1995; Sega CD and 32-X in there too though)

NES to SNES = 6 years (1985, 1991)
Master System to Genesis = 3 years (1986, 1989)

Of course you have a number of variables in there (divergent Japanese release dates, various other competitors, massive shortfalls in shipping units, strategic surprise releases/blunders, etc), but in the end 4-6 years is typical, with a number of market leaders tending toward the longer span (i.e. if you are making money, why slow up the gravy train and compete against yourself?).

With Moore's law slowing slightly (non-Intel companies seem to be having a harder time with this), design complexity increasing, diminishing returns in some areas, questions about future process reductions, basic design hurdles (how to get GBs off of a slow disc to memory, for example), and so forth, who is expecting a console from MS, Sony, or Nintendo in 2010? That would leave this and next holiday and BOOM! A new console.

And Activision has a point: why would publishers even support a new console right now? They are trying to recoup their current investments. Of course that is the very reason some would jump (if they feel it gives them a better position than the current hardware does... the same reason one of the big 3 may jump early with a new console as well). But are any of the console makers willing to ship without the support of the likes of EA and Activision?

I have been writing 2011-2012 for a while for this very reason: getting a design that will compel consumers to purchase, as well as getting developer support, for 2010 seems nearly impossible right now, especially with the market still gaining momentum. When total annual sales of consoles level off or slow is when I would expect someone to jump... unless someone is feeling a pinch and feels strategically ($) their best outlook is to jump early and invest losses in a fresh start.

Every generation is a gamble, but with process transitions looking to get more difficult, and expensive, down the road, a misstep in regard to design and partners could be very hurtful. It also makes it seem likely the N6/PS4/X3 consoles could be around even longer.
 
That's a rather low estimate. Crysis in 3 years will be old, outdated and ugly.

Unlikely, seeing as nobody, but nobody, seems to target high-end PCs anymore. Crysis seemingly was the last such gamble; as such it could retain the graphics crown for years. Heck, it has already held it for what, one year? Nobody has yet matched Crysis, and Crysis Warhead hits today.

I used to think Crysis at 1080p with 4xAA and 60fps would be what next-gen consoles could do. After all, even the highest-end graphics cards cannot meet those specs. But the engine is so poorly optimized, perhaps I was wrong.
 
If id Software hits 60fps at 720p on the consoles with the quality they are showing, I think that would give a solid gauge, along with titles from other PC devs (like Epic), of where the current hardware stands and where future HW may take these sorts of devs.
 
Remember when we saw the first demo of Unreal Engine 3, back in the NV40 launch period? Retrospectively, that became the benchmark standard for today's generation of console graphics. It gave us an unprecedentedly early preview of what future console tech would deliver.

I predict:

In March 2010 at the Game Developers Conference, Epic will fully demo its UE4 system on cutting-edge DX11 hardware. By then the next-gen architectures, including Larrabee, will have shaken out (Q4 2009). Console producers will have chosen their technology partners and will be finalising specs. Work on next-gen games will be in full swing for a 2012 launch (Xbox possibly 2011).
 