Predict: The Next Generation Console Tech

Status
Not open for further replies.
Has there ever been a high end integrated GPU/CPU?

The next Xbox IS going to be a high end console, I promise. All this talk of integrated graphics and ARM chips is a non-starter.


But I guess it's a speculation thread, not so much realistic speculation :p

High end, yes. I don't think there are many people here who would expect the system to draw more than 150W at the wall, however. Working backwards from there, the CPU and GPU combined are realistically not going to draw more than 100W at the utmost, and many would peg the overall figure much lower, at 100W or even 75W at the wall.

The major difference between an Xbox CGPU and, say, Llano is that the former would be GPU dominated and therefore designed to maximise the performance/watt and utility of the GPU component, whilst the latter is designed more as a desktop processor and so is CPU dominated if you count the die space dedicated to each. There may not really be enough CPU to even justify a separate die.

Suffice to say, I expect to see an attempt to reduce the quantity of off-die memory transactions for the GPU to conserve power and boost efficiency; I would expect to see a certain quantity of on-die framebuffer.
 
I've been Googling the subject but having little luck, so I'd like to ask here since it has a large bearing on what's possible next generation.

What's the roadmap for the introduction of 4Gb GDDR5 chips? I know Elpida are just introducing 2Gb chips to mass production now, but I wondered if there's any recognised roadmap for the introduction of 4Gb chips, as their availability could have a huge impact on the memory capacity of next generation consoles.

Oh, and since I'm a little confused: am I right in believing that if Microsoft or Sony were to use a 256 bit bus with GDDR5, it would require them to use 8 chips? So assuming the use of a 256 bit bus (quite a leap, I know), 2GB would seem the most they could currently pack in without exorbitant costs (16 RAM chips in a console isn't exactly realistic now, is it?), right? If 4Gb chips are introduced in time, that would raise the realistic maximum to 4GB.

When looking into this I noticed that Rambus has already specified 4Gb XDR2 chips. Obviously going with a bit of an unknown quantity like XDR2 brings increased risk, but the paper specs look very appealing. It'd probably be pricier per chip than GDDR5 since it's not already in mass production, but the availability of 4Gb chips could be appealing as it opens up a lot of options.

It could allow the use of a cheap 128 bit bus and just 4 RAM chips while still providing a decent 2GB of RAM. If you're trying to drive down costs, the reduced board complexity could be appealing, and since XDR2 should provide more bandwidth than GDDR5, pairing it with something like IBM's eDRAM L3 cache tech may mean that a 128 bit bus to main memory is enough.

It also gives the option of including 4GB without using a silly number (16) of RAM chips, and if it's felt that 4GB of RAM would offer a real competitive advantage, then that obviously makes XDR2 a lot more appealing.

Of course, if 4Gb GDDR5 chips are available in good supply well before the launch of these next generation consoles, then these advantages of XDR2 are somewhat negated.
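The chip count and capacity arithmetic being discussed can be sketched in a few lines (a rough sketch, assuming each GDDR5/XDR2 chip presents a 32 bit interface, which is what makes a 256 bit bus imply 8 chips):

```python
def memory_config(bus_bits, chip_density_gbit, chip_io_bits=32):
    """Return (chip_count, capacity_GB) for a given bus width and chip density.

    Assumes every chip contributes chip_io_bits to the bus, so the chip
    count is fixed by the bus width, and total capacity is chips * density.
    """
    chips = bus_bits // chip_io_bits
    capacity_gb = chips * chip_density_gbit / 8  # gigabits -> gigabytes
    return chips, capacity_gb

# 256-bit GDDR5 with today's 2Gb chips: 8 chips, 2GB total
print(memory_config(256, 2))  # (8, 2.0)
# 128-bit XDR2 with 4Gb chips: same 2GB from half the chips
print(memory_config(128, 4))  # (4, 2.0)
# A cheap 64-bit bus with 2Gb chips: just 2 chips, 512MB
print(memory_config(64, 2))   # (2, 0.5)
```

Swapping in 4Gb chips doubles each capacity figure for the same bus width, which is the whole appeal of the denser dies.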
 
I've been Googling the subject but having little luck, so I'd like to ask here since it has a large bearing on what's possible next generation.

What's the roadmap for the introduction of 4Gb GDDR5 chips? I know Elpida are just introducing 2Gb chips to mass production now, but I wondered if there's any recognised roadmap for the introduction of 4Gb chips, as their availability could have a huge impact on the memory capacity of next generation consoles.

Well, it currently seems that the new consoles are still a pretty long way from coming to market, so I'd say it's likely that 4Gb chips will be in production by then. It didn't take too long to go from 1Gb to 2Gb chips; Elpida only introduced its 1Gb chip late last year, and a brief look at Wikipedia says 512Mb chips were in production in mid-2008, so progress has been quite fast.
 
Well, it currently seems that the new consoles are still a pretty long way from coming to market, so I'd say it's likely that 4Gb chips will be in production by then. It didn't take too long to go from 1Gb to 2Gb chips; Elpida only introduced its 1Gb chip late last year, and a brief look at Wikipedia says 512Mb chips were in production in mid-2008, so progress has been quite fast.

My main concern is really Nintendo, as I'm expecting a late '11/early '12 launch from them. I can see them being drawn to a dual GDDR5 chip, 64 bit bus solution, as that would still offer a lot more bandwidth than either RSX or the 360 has, which would make it "enough" for Nintendo, and it'd be super cheap. It'd be a decent enough solution if 4Gb chips are in mass production by late '11, but if they're not then we'd be stuck with only 512MB of RAM.

Porting games from 2GB/4GB machines to a 1GB box that's restricted to 720p should be easy enough, but squeezing those titles down to fit into 512MB of RAM could prove extremely difficult.
 
For a 2013 release my guess would be 4GB of total RAM (an 8x4Gb chip configuration). I am not sure about the bandwidth though. I am hoping for 256 bit, but 128 bit seems more likely to me. Let's say 7Gb/s rated chips become widely available by then, which would put total GPU bandwidth at 112 GB/s. Quite a jump from the Xbox 360's 22.4 GB/s.
 
For a 2013 release my guess would be 4GB of total RAM (an 8x4Gb chip configuration). I am not sure about the bandwidth though. I am hoping for 256 bit, but 128 bit seems more likely to me. Let's say 7Gb/s rated chips become widely available by then, which would put total GPU bandwidth at 112 GB/s. Quite a jump from the Xbox 360's 22.4 GB/s.

XDR2 that can offer >200GB/s over a 128 bit bus should be available by 2012, and that would represent a very nice upgrade while still remaining relatively cheap.

http://www.rambus.com/us/technology/solutions/xdr2/xdr2_vs_gddr5.html

It comes in 4Gb chips as well, which makes 4GB possible and 2GB genuinely cheap. I think developers would be able to do some pretty incredible things with even 2GB of RAM if it comes with >200GB/s of bandwidth.
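The bandwidth figures being traded here all come from the same formula, bus width times per-pin data rate; a quick sketch using the rates assumed in the posts above (7Gb/s GDDR5, ~12.8Gb/s XDR2, and the 360's 1.4Gb/s GDDR3):

```python
def bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_bits * data_rate_gbps / 8

print(bandwidth_gbs(128, 7))     # 112.0 -- 128-bit GDDR5 at 7Gb/s
print(bandwidth_gbs(128, 12.8))  # 204.8 -- 128-bit XDR2, the ">200GB/s" figure
print(bandwidth_gbs(128, 1.4))   # 22.4  -- the Xbox 360's 128-bit GDDR3
```

The same formula is why a 256 bit GDDR5 bus at 7Gb/s would land at 224 GB/s, roughly matching 128 bit XDR2 but at double the board complexity.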
 
My main concern is really Nintendo, as I'm expecting a late '11/early '12 launch from them. I can see them being drawn to a dual GDDR5 chip, 64 bit bus solution, as that would still offer a lot more bandwidth than either RSX or the 360 has, which would make it "enough" for Nintendo, and it'd be super cheap. It'd be a decent enough solution if 4Gb chips are in mass production by late '11, but if they're not then we'd be stuck with only 512MB of RAM.
By the same logic, the Wii would have had 152MB RAM instead of 88, and twice the bandwidth to the GDDR chunk. Watch Nintendo go with a single 32 bit chip again. It's what they do.
 
By the same logic, the Wii would have had 152MB RAM instead of 88, and twice the bandwidth to the GDDR chunk. Watch Nintendo go with a single 32 bit chip again. It's what they do.

152MB in the Wii would have been a waste; 88MB was already a very healthy amount for a dedicated gaming system of its standing, and the bandwidth it provided was a huge increase over the ARAM used previously. 1Gb GDDR3 chips weren't available in time for the 360's launch (and even supply of 512Mb chips was super tight), so using a 1Gb chip could also have led to supply issues at launch. It just wasn't a good fit, and the single 512Mb GDDR3 chip wasn't an example of Nintendo cheaping out; in fact it was the one area of the system that saw a major upgrade. If anything, they overspent in this area compared to the rest of the design.

The Wii was an anomaly, and with the 3DS they've demonstrated that going forward they'll pursue their traditional approach to system design. The Wii being so shitty has seriously distorted people's memories of what their previous systems were like: the GCN was easily the best designed and most efficient system of its generation and no slouch in the graphics department either, and neither are the 3DS and NDS, or the N64 before them.

They simply can't rehash the tired old design again, and if they're going with a modern design they're not going to intentionally gimp it "just because." 512MB is the minimum memory amount they'll use, but if 4Gb GDDR5 chips are available in time then 1GB is a very real possibility. Under no circumstance do I see them going with an oddly sized main memory pool.
 
High end, yes. I don't think there are many people here who would expect the system to draw more than 150W at the wall, however. Working backwards from there, the CPU and GPU combined are realistically not going to draw more than 100W at the utmost, and many would peg the overall figure much lower, at 100W or even 75W at the wall.

The major difference between an Xbox CGPU and, say, Llano is that the former would be GPU dominated and therefore designed to maximise the performance/watt and utility of the GPU component, whilst the latter is designed more as a desktop processor and so is CPU dominated if you count the die space dedicated to each. There may not really be enough CPU to even justify a separate die.

Suffice to say, I expect to see an attempt to reduce the quantity of off-die memory transactions for the GPU to conserve power and boost efficiency; I would expect to see a certain quantity of on-die framebuffer.

The original Xbox 360 pushed much more wattage at the wall.

http://www.pcgameshardware.com/aid,...onsumption-put-older-models-to-shame/Reviews/

You can do a lot with a 150-200 watt envelope. The Bobcat cores tested were 2x2 and used 18 watts; assuming equal power usage for all cores, you could do a 12 core Bobcat at 54 watts, leaving 100-150 watts for the rest of the system. A 5770 with RAM and everything else uses a maximum of 110 watts.

http://www.tomshardware.com/reviews/radeon-hd-5770,2446-15.html


So you might be able to do a 12 core Bobcat coupled with a 5770, at 28nm and 40nm respectively.

A 5770 would give you over 100 fps at 1920x1200 in Batman with settings far surpassing current gen systems running the same game at 720p. Games designed around the 5770 would simply be amazing. If you go with a full 28nm console, you could most likely put 5870 class hardware in it without going over 150-175 watts.
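The budget arithmetic in this post can be checked quickly (a sketch using the post's own figures as assumptions: the 18W "2x2" Bobcat part read as 4 cores, and 110W maximum board power for a 5770):

```python
# Rough power-budget check using the figures quoted in the post above.
# These are the post's assumptions, not measured numbers.
BOBCAT_W_PER_CORE = 18 / 4   # 18W for the "2x2" part, read as 4 cores -> 4.5W/core
GPU_5770_BOARD_W = 110       # maximum board power including RAM, per the linked review

cpu_w = 12 * BOBCAT_W_PER_CORE      # hypothetical 12 core Bobcat
total_w = cpu_w + GPU_5770_BOARD_W  # CPU + GPU before the rest of the system
print(cpu_w, total_w)               # 54.0 164.0
```

At 164W before the rest of the system is counted, this lands near the top of the 150-200W envelope, which is why anything bigger than a 5770 gets tied to a full 28nm process in the post.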


I just don't see the point of integrated video on a next gen console. It wouldn't make sense for anything but a Wii 2 type setup. You can make something much better with a two chip approach.
 
I just don't see the point of integrated video on a next gen console. It wouldn't make sense for anything but a Wii 2 type setup. You can make something much better with a two chip approach.
Because plenty of developers have stated that's what they want ;)
The ugly latencies between the GPU and CPU seem to be the culprit.
 
High end, yes. I don't think there are many people here who would expect the system to draw more than 150W at the wall, however. Working backwards from there, the CPU and GPU combined are realistically not going to draw more than 100W at the utmost, and many would peg the overall figure much lower, at 100W or even 75W at the wall.

The major difference between an Xbox CGPU and, say, Llano is that the former would be GPU dominated and therefore designed to maximise the performance/watt and utility of the GPU component, whilst the latter is designed more as a desktop processor and so is CPU dominated if you count the die space dedicated to each. There may not really be enough CPU to even justify a separate die.

The above makes a lot of sense.
Just about all of the better reasoning in this thread tries to optimise the cost/performance/power draw triad. But that really requires setting some (case by case) limits before you can get to specifics. Time of availability, and thus process technology, needs to be set as well.

One thing speaking against a single chip solution is if console manufacturers go with IBM as a CPU supplier again. If so, we are talking about separate entities designing the CPU and the GPU, in which case having them as separate chips communicating via some bus protocol makes a lot of real life sense.

Console vs. general purpose computing means that main memory bandwidth will be higher, which isn't much of a problem since you don't have to deal with socketable standard parts. Like many here, I believe 128 bits' worth of GDDR5 is likely to be utilised if the console is launched in the next couple of years. Realistically, any console from now on will target 1920x1080, and no more; this is one of the hard limits in terms of next generation targets, so the above should be both relatively cheap and sufficient.

Squilliam's 150W at the wall seems like a reasonable upper limit for MS and Sony unless they surprise us, most likely downwards, which would indicate an upper bound of roughly 100W drawn by the CPU and GPU combined. Lithographic process? Assuming 32/28nm would seem prudent if we assume decent release volumes up until and into 2013. If these assumptions are correct, it's not too difficult to make a ballpark prediction of performance: somewhere between a middle of the road CPU with an HD5770 and a high end CPU with an HD5870, simply going by power draw and available bandwidth. Which is quite decent for 1920x1080 3D at $300.

Nintendo walks to the beat of a different drummer. They are interesting simply because they aren't very easy to predict.
 
I doubt they go back down to 100W. The present gen pushed 200W at release, and while that caused reliability problems for both consoles, I don't think they are going to cut it in half just like that. They will probably try to keep it around 150W this gen: safer than before, but not as restricting.
 
Because plenty of developers have stated that's what they want ;)
The ugly latencies between the GPU and CPU seem to be the culprit.

The problem is that you'd end up with a huge chip, and as we see from today's examples (Bobcat, Sandy Bridge), what you get is a sub par graphics portion. Unless you want a single huge chip, which will drive up costs and bring down yields, you're going to be stuck with a two chip design.

You should also be able to remove bottlenecks between the chips by moving more work off the CPU. For instance, tessellation will hopefully be fast enough in an Xbox Next/PS4 that you wouldn't need the CPU to focus on that aspect.
 
I don't understand the big appeal of low power consoles. It's not like they run on batteries. As long as the form factor is reasonably attractive and you can limit the noise level somehow, why should there be consumer appeal in lower power consoles?
 
The problem is that you'd end up with a huge chip, and as we see from today's examples (Bobcat, Sandy Bridge), what you get is a sub par graphics portion.

I'd hardly call the 400SP DX11 GPU in Llano a "sub par graphics portion"; it's certainly a huge step up from both RSX and Xenos. Even in Ontario, a chip targeted at netbooks, the GPU is a monster relatively speaking; low end discrete performance in a netbook is insane. Llano will be bandwidth limited in the real world of course, but in a console that's less of an issue because you can use faster RAM like GDDR5 or XDR2 and don't have to target particularly high rendering resolutions.

In both Ontario and Llano, roughly 2/3 or more of the die is dedicated to the GPU portion, and in both cases the GPU is going to decimate everything else in its target market.
 
The reason people are worried about power is simply that cooling it in a console-sized box gets more and more expensive and leads to more failure prone consoles. So do things like lead free solder, which the manufacturers were working with for the first time on such a scale at the launch of the current consoles. The more heat produced, the more expensive the cooling becomes, and the louder it gets as well. I think they are going to stay the same or cut back a little, but not all the way to 100W.

Too many people seem to be under the impression that just because Nintendo succeeded in building a low power, cheap console sold at no loss at launch, everyone's suddenly going to go that way. Nintendo has been doing that for years with varying levels of success, and it hasn't changed Microsoft and Sony's strategy. Why would they go for it now, especially when doing so could cede a significant power gap to their rival? While I assume the lessons from Sony's and Microsoft's consoles will be learned, and especially in Sony's case they won't lose as much on initial manufacture, I don't see them suddenly changing their business model just like that.
 