Predict: The Next Generation Console Tech

I took the previous generation (2000-2001) and compared it to the current one (Wii/360/PS3). I then projected that onto the future.
Which ignores all the problems that are throttling back the progression of technology.
PS2: 6.2 gigaflops; PS3: 1 teraflop (single precision). So PS4 should (could) be about 125 teraflops, give or take.
That teraflop performance was actually claimed as 2 teraflops for PS3 and 1 teraflop for XB360, both being total system FLOPS and both being rather meaningless numbers. Cell achieves 200 GFLOPS with 7 execution units at 3 GHz (give or take). The only advances that can be made are increasing the number of execution units and the clock speed. 16 SPUs at 4 GHz would be 3x the performance - not even a teraflop. Throw in some optimizations and, if we're lucky, it'll hit a teraflop. And of course that's theoretical peak. One of the keys to Cell's success is being able to sustain processing. So it's not all about peak figures, but how much you can actually do at a given moment.
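For concreteness, here is a back-of-the-envelope sketch of those peak numbers in Python, purely illustrative. It assumes the usual 8 single-precision flops per SPE per cycle (a 4-wide fused multiply-add) and ignores the PPE's contribution:

# Rough peak-FLOPS sketch. Assumptions: 8 single-precision flops per SPE
# per cycle (4-wide FMA), PPE ignored, clocks as discussed above.

def spe_peak_gflops(n_spes, clock_ghz, flops_per_cycle=8):
    """Theoretical peak in GFLOPS for a Cell-style SPE array."""
    return n_spes * clock_ghz * flops_per_cycle

ps3_cell = spe_peak_gflops(7, 3.2)    # ~179 GFLOPS -> "200" with rounding and the PPE
future   = spe_peak_gflops(16, 4.0)   # ~512 GFLOPS -> roughly 3x, still well under 1 TFLOPS

print(ps3_cell, future, future / ps3_cell)

So even a doubled-and-overclocked SPE array lands around half a theoretical teraflop, which is why the naive 125-teraflop extrapolation above doesn't survive contact with the silicon.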
 
Thanks for the info!
I read about a 'new' Cell a few years ago. You could say it's a modified design or something: same clock speed but about 10 times faster at double-precision floating point. Maybe they'll add a bit of that :) Plus branch prediction. Maybe add some quad hyper-threading. Anyway, it would be lame to just up the clock speed and add some more SPUs. But I didn't know about it being only 200 GFLOPS.
Anyway, if it progresses the same way it did from PS2 to PS3 (even though you say it's not possible), roughly a 32x jump from ~6.2 GFLOPS to ~200 GFLOPS, it could be 6.4 teraflops, which seems like a nice number.
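For what it's worth, that 6.4 figure is just the PS2-to-PS3 CPU jump applied once more; a minimal sketch, assuming the ~6.2 GFLOPS Emotion Engine and ~200 GFLOPS Cell numbers quoted above:

# Naive generational extrapolation, using the CPU figures quoted above
# (PS2 Emotion Engine ~6.2 GFLOPS, PS3 Cell ~200 GFLOPS).

ps2_gflops, ps3_gflops = 6.2, 200.0
gen_factor = ps3_gflops / ps2_gflops    # ~32x per generation
ps4_gflops = ps3_gflops * gen_factor    # ~6,450 GFLOPS, i.e. roughly 6.4-6.5 TFLOPS

print(round(gen_factor, 1), round(ps4_gflops / 1000.0, 1))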
 

The double precision enhancements that IBM has folded into their Cell chips for supercomputing wouldn't be particularly advantageous for gaming, really.

IBM has talked about a teraflop Cell chip with 2 PPEs and 32 SPEs, and maybe that'll come around, but I can't imagine Sony making the same mistake they made this generation with emphasizing CPU flops over GPU bandwidth.

I suppose it would depend on what the competition does, but I would think Sony could field an impressively 'next-gen' system by simply re-using a die-shrunk Cell with a much better GPU and more RAM.
 
The double precision enhancements that IBM has folded into their Cell chips for supercomputing wouldn't be particularly advantageous for gaming, really.

Everybody says that (DP is useless for games), but is it really the case? I still see plenty of z-fighting and color banding (even on my trusty CRT).
 
Also, if every game on PS3 in two or three years looks like Killzone 2 (i.e. all devs get some talent),
they could just "pull a Wii" and only slightly update the system.
Some more RAM (as it's cheap now) plus a better GPU (more 1080p and more 60fps). Et voilà!
 
I'm expecting a beefed up PS3.

Cell 2 with around 16 SPUs, more LS per SPU, and a couple of improved PPUs with more L2, all at something like 3.8 GHz. Freed of the burden of propping up the GPU and leveraging the experience devs will gain from Cell 1, I think that will be more than powerful enough to produce the kind of results expected from the next generation.

RAM will likely be 2GB with an outside chance of 3 or 4GB.

The GPU will be a modified version of whatever the latest PC architecture of the day is, most likely NV but possibly AMD. I think Larrabee is unlikely given my assumptions about using a Cell-based CPU. I expect something more akin to NV2A than Xenos, although I doubt it will be as relatively high-end as NV2A was.
 
I see a smaller, more optimized NVIDIA GPU, similar to what ATI did with the 4xxx series: they didn't compete for the biggest chip, only the most optimized per area and production cost. It'd have maybe 1.5x-2x the performance of a 4870 and be cheap to produce. No expensive eDRAM, since aliasing is not so easy to notice at 1080p60, especially on the average 42" 1080p TV. Coupled with Cell 2, you'd have all games at 1080p60 and enough room left over for complex AI, etc.
 
Does anyone want to hazard a guess at the power consumption per 100mm^2 of GPU/CPU one might expect on the 32nm process?

It may be one of the most important considerations, because the silicon budgets didn't increase at all between last gen and this gen, and yet the power consumption increased severalfold.

That's actually a pretty smart way of looking at it. And as Shifty points out, only looking at past trends doesn't take into account what is currently bottlenecking chip production (heat, power).

So by early 2010 we will have the exact answer to that question and will be able to extrapolate from it what transistor budget a console could likely afford within a given power envelope.

Larrabee will, if nothing else, become a useful guide to power consumption on the 32nm process. So we could take one of the standard DX11 GPUs (of 2010) and scale it according to that budget to guess performance. (A bit hit and miss, but reasonable, I would say?)
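To illustrate the kind of extrapolation being suggested, here is a minimal sketch; every number in it (the power envelope, the watts per 100 mm^2 at 32nm, and the logic density) is a placeholder assumption chosen for the example, not a real datasheet figure:

# Illustrative only: power envelope -> die-area budget -> rough transistor budget.
# All three inputs are assumptions for the example, not measured 32nm values.

power_budget_w       = 150.0  # assumed combined CPU+GPU budget for a console
watts_per_100mm2     = 60.0   # assumed power density at console-friendly clocks on 32nm
mtransistors_per_mm2 = 8.0    # assumed logic density on 32nm (order-of-magnitude guess)

die_area_mm2  = power_budget_w / (watts_per_100mm2 / 100.0)
transistors_m = die_area_mm2 * mtransistors_per_mm2

print(f"{die_area_mm2:.0f} mm^2 of silicon, roughly {transistors_m / 1000.0:.1f}B transistors")

Once real 32nm parts (Larrabee or the 2010 DX11 GPUs) ship, the two density numbers could be replaced with measured ones and the same arithmetic re-run.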
 

That's very true; we should see some indication at least before the end of the year?

I would say that the consoles of the next generation may use far less silicon, but may be designed in other ways to maximise efficiency and data throughput. If we assume the designs will have less silicon than the current generation's, does that make a generalised single-chip design like Larrabee a winner, as there is absolutely no duplication of silicon?
 
Does the next generation need to stretch? I would think that with the 1080p/60Hz wall, there just wouldn't be the need for as big a technical leap as the HD consoles made this generation.

I can see defaulting to 1080p/60, I can see having faster distribution media, more RAM, more storage, and especially more networked services, but I can't see the need to increase FLOPS by the same factor as last time, nor GPU.

The kind of GPU that can drive 150fps at super-HD resolutions today should be trivially affordable in 3-4 years for consoles, modulo bus pin count/cost, I guess.

What's left to demand a big jump? 120Hz 3d displays? VR goggles? Full ray-tracing? How many years would it take to master the kind of resources that would be brought about by another XBox->360/PS2->PS3 style jump?

I guess the answer is, 'see the rest of this site'. :LOL:
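As a rough way to size up those "what's left" targets against the 1080p/60 wall, here is a raw pixel-rate comparison. It deliberately ignores per-pixel shading cost, and the 2560x1600 at 150 fps case is just a stand-in for the "super-HD" scenario mentioned above:

# Raw pixel-rate comparison for the "1080p/60 wall" argument.
# Per-pixel shading cost varies wildly; this only compares fill requirements.

def pixels_per_second(width, height, fps):
    return width * height * fps

base_1080p60 = pixels_per_second(1920, 1080, 60)
stereo_120hz = pixels_per_second(1920, 1080, 120)   # e.g. 60 Hz per eye for 3D displays
super_hd_150 = pixels_per_second(2560, 1600, 150)   # stand-in for "super-HD at 150 fps"

print(stereo_120hz / base_1080p60, super_hd_150 / base_1080p60)  # ~2x and ~5x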
 
I think 60fps will never become a standard on console while more bling can be extracted from a 30fps refresh rate. However, I do agree that 1080p will become the natural standard, but I would be very surprised if today's GTX 280/4870 level of GPU power is bettered in the next-gen consoles.
 
Everybody says that (DP is useless for games), but is it really the case? I still see plenty of z-fighting and color banding (even on my trusty CRT).

There are limitations other than the arithmetic calculations. Even for trusty CRTs. :)
A more relevant question regarding DP and games (and, for that matter, a lot of the GPGPU stuff) would be "Is it worth it?".
There are associated consequences in both gate count and complexity, leading to higher cost, larger die size and higher power draw, with the drawbacks that entails. If the benefit is too small, was it really worth it over a simpler/smaller/cooler/cheaper design?

If the target market is playing console games, I think the answer is clear.
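On the z-fighting point raised earlier: that artifact is usually down to depth-buffer quantization and the near/far plane choice rather than FP32-vs-FP64 ALUs, and colour banding is similarly a framebuffer/output precision question. A minimal sketch, assuming the common d = far*(z-near)/(z*(far-near)) depth mapping and a 24-bit depth buffer (both assumptions about the renderer, not facts about any particular console):

# Sketch of depth-buffer resolution for a perspective depth mapping
# d(z) = far*(z - near) / (z*(far - near)) stored in a 24-bit buffer.
# The near/far values below are illustrative assumptions.

def depth_step(z, near, far, bits=24):
    """Approximate smallest resolvable eye-space depth difference at distance z."""
    ddepth_dz = (far * near) / (z * z * (far - near))
    return (1.0 / (1 << bits)) / ddepth_dz

# With a 0.1 m near plane and a 1000 m far plane, surfaces at 100 m that are
# closer together than ~6 mm collapse to the same depth value -> z-fighting.
print(depth_step(100.0, 0.1, 1000.0))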
 
I think 60fps will never become a standard on console while more bling can be extracted from a 30fps refresh rate.

You are probably right that 60 fps will not be the minimum spec required for games in general, but I do hope it will become some sort of standard for racing games.
 
If *I* were developing a console for Microsoft, I honestly couldn't care less about LRB/Cell. What's with the obsession with making sure your entire library and compiler need to be rewritten every generation, pouring millions into education and R&D, and making sure you look arse-faced when your console launches and underperforms?

What's wrong with staying with PPC/x86 and NV/ATI graphics? Does a developer really care whether a console runs the latest and greatest parallel processor? Heck no. Portability, low development costs. Maybe I'm looking at this too much as a PC gamer, but don't you want to see World of Warcraft on PCs AND consoles? One codepath that runs on your latest console and a PC, whether it's Mac or Windows. I'm not talking about that Phantom POS, but a custom PC in a box with WinCE. That offers value for developers, which in return allows you to harvest the royalties of a cross-platform gaming brand and network infrastructure.
 


I agree with you. I've always felt the next Xbox would have a modified Waternoose (more cores and more cache per core) and a new ATI chip.

If they go that route, MS can already have dev kits in use. If they go with an improved 9-core Waternoose, IBM could have actually delivered prototypes of the chip to MS already, and if they're going with a DX11 (or DX11.X) ATI GPU, I'm sure ATI could have provided DX10 GPUs for now and, in a few months, their first DX11 parts.

It would give them a great advantage over, say, Sony going with Larrabee. However, if Sony were to go with a 2x16 or 2x32 Cell and a new NVIDIA GPU, they might be right at the same place MS is at.

Who knows, even the internal team restructuring MS is doing could be to create more dev teams and switch them to the next Xbox platform.
 
Well, the dev hardware is not the problem. Heck, the first 360 devkits were G4s with an X1800. But if you want to actually reap the fruits of your labor, you have to stop changing your complete environment every 5 years (PSX/PS2/PS3). M$ now has a platform that's largely binary compatible with the Wii. Why waste that? You can change your video card without too many problems, but with G4WL just kicking in, why throw everything away?
 
If *I* were developing a console for Microsoft, I honestly couldn't care less about LRB/Cell. What's with the obsession with making sure your entire library and compiler need to be rewritten every generation, pouring millions into education and R&D, and making sure you look arse-faced when your console launches and underperforms?

What's wrong with staying with PPC/x86 and NV/ATI graphics? Does a developer really care whether a console runs the latest and greatest parallel processor? Heck no. Portability, low development costs. Maybe I'm looking at this too much as a PC gamer, but don't you want to see World of Warcraft on PCs AND consoles? One codepath that runs on your latest console and a PC, whether it's Mac or Windows. I'm not talking about that Phantom POS, but a custom PC in a box with WinCE. That offers value for developers, which in return allows you to harvest the royalties of a cross-platform gaming brand and network infrastructure.


Although I generally agree with you (because of the costs and SW problems), a console does have some advantages that it would be a crime not to exploit.

So I don't think one should go to too much trouble/cost to have better HW, but having custom work done on the processors/board can do marvels for the performance/cost/power ratio (Gekko for the GC is almost a perfect example, although a little more work wouldn't have been bad), even giving opportunities that you just don't have on the PC (e.g. eDRAM comes to mind).

So I don't think one should worry about having the most floating point/dot products per transistor, but a straight Core Duo/i7/Phenom would be a bad choice.
 
Does the next generation need to stretch? I would think that with the 1080p/60Hz wall, there just wouldn't be the need for as big a technical leap as the HD consoles made this generation.
If just getting up to 1080p/60 is enough for you, you are much easier to please than me!
 
Although I generally agree with you (because of the costs and SW problems), a console does have some advantages that it would be a crime not to exploit.

So I don't think one should go to too much trouble/cost to have better HW, but having custom work done on the processors/board can do marvels for the performance/cost/power ratio (Gekko for the GC is almost a perfect example, although a little more work wouldn't have been bad), even giving opportunities that you just don't have on the PC (e.g. eDRAM comes to mind).

So I don't think one should worry about having the most floating point/dot products per transistor, but a straight Core Duo/i7/Phenom would be a bad choice.

Oh, sure. I'm all for eDRAM: it's not cheap (hardware-wise) but great for performance. So no, you don't need a full PC in a Mac-like housing. Throw in the eDRAM and a lean OS and you're set. You don't need a $500 proprietary graphics processor for that.
 