Predict: The Next Generation Console Tech

Status
Not open for further replies.
We should try to quantify what 4x 5970 actually means; here is my attempt:

4x 2x 1600 Stream Processors, or 12,800 Stream Processors
4x 2x 80 Texture Units, or 640 Texture Units
4x 2x 32 ROPs, or 256 ROPs
4x 2x 1 GB Frame Buffer, or 8 GB Frame Buffer
4x 2x 725 MHz Cores, or 8 x 725 MHz cores
4x 2x 256-bit Memory Bus Width, or 8 x 256-bit Memory Bus
4x 2x 128 GB/s memory bandwidth, or 1024 GB/s
4x 2x 2.32 TFLOPS, or 18.56 TFLOPS (1600 ALUs x 2 ops x 725 MHz per GPU)

And at 40 nm it would dissipate approximately 4x 300 Watts, or 1200 Watts.
Again at 40 nm, it would have a die size of 4x 2x 334 mm², or a total area of 2672 mm².
It would also be 4x 4.3 billion transistors, or 17.2 billion transistors.
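As a sanity check, here's a quick Python sketch of the same multiplication, using the commonly cited HD 5970 per-GPU figures (note the 5970's GPUs run at 725 MHz, so per-GPU peak is 2.32 TFLOPS rather than the 5870's 2.72):

```python
# Rough sanity check of the 4x 5970 totals above. Per-GPU figures are the
# commonly cited HD 5970 specs (each card carries two Cypress GPUs).
GPUS = 4 * 2  # four cards, two GPUs each

per_gpu = {
    "stream_processors": 1600,
    "texture_units": 80,
    "rops": 32,
    "framebuffer_gb": 1,
    "bandwidth_gb_s": 128,
    # 1600 ALUs x 2 ops (MAD) x 0.725 GHz = 2320 GFLOPS per GPU
    "gflops": 1600 * 2 * 0.725,
}

totals = {k: v * GPUS for k, v in per_gpu.items()}
for k, v in totals.items():
    print(f"{k}: {v}")
```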

It is funny that some microwaves do not even output 1200 Watts.
Now, even at 22nm I think we may have a slight problem. Slightly off topic: is it possible to build a PC with Quad CrossFire 5970 (4x 5970 cards)?

Food for thought, Clueless?
 
Lol wait a minute, 4x5970!? This kind of powa won't even be available on massive PC chips for at least a couple of years...

Tahir, there are many reasons why it would not be practical to build a 4x5970 rig (would not fit on any motherboard, no processor powerful enough to feed it, would require astronomical amounts of power, would generate solar amounts of heat, etc.). So regardless of whether or not the drivers support it I see no possible way to build such a machine. You were being facetious?

Yes upon closer inspection I see that I just answered a rhetorical question :cry:
 
We should try to quantify what 4x 5970 actually means... at 40 nm it would dissipate approximately 4x 300 Watts, or 1200 Watts.

God damn, those are some great points, especially the 300 Watts part.

I was referring to possible specs by 2012, or maybe a 5970 x2 in one chip (power/tech-wise, whatever).

I definitely know by 2012 there has to be a leap over the 5970.
 
There's something that has been gnawing at the back of my mind and I finally figured out what it was:

With IBM not continuing to develop the Cell, but saying they will apply lessons learned and with Sony having a seemingly vested interest in continuing to get money out of their investment, is there any reason why SPEs couldn't be adapted to other processors?

Kind of a 'best of both worlds' approach?
 
I don't know why, but ever since that quasi-news about Cell/IBM came out, the forum has been overrun with redundant posts and disparate thoughts.

Sony is not getting any money back on any investment - the investments made in the technology up until now may be what leads them to choose it once again vs an alternative architecture: that is the way to think about it. But the decision will be made on its own merits rather than to validate past decisions. If Kutaragi was in charge, well I think Cell would actually be something of a lock - it does have a lot of merit. But he was removed in part to hedge against that very form of "romantic" decision making, so I think it is going to be a straight cost/benefit analysis.

As for putting SPEs onto other chips, yes it would be possible, but in a sense that would be continuing the Cell architecture by default. I mean, I don't see Sony putting SPEs on a non-STI 'Cell' chip without at the same time assuring B/C with the former iteration of the architecture, right? And if B/C were maintained, then it becomes de facto Cell '2,' regardless of factors related to its origins.
 
I definitely know by 2012 there has to be a leap over the 5970.

In the PC space, sure. On consoles? Well, I wouldn't hold my breath; I think you're setting yourself up for serious disappointment here. Microsoft have already shown they're unwilling to take heavy losses on hardware, and a bigger GPU than a 5970 puts us near the 5 billion transistor mark. Even at 22nm (which is a long shot), that's still going to be a large, pricey and toasty chip. Have you seen the size of the 5870's stock cooler? That thing's huge and doesn't come cheap to boot. There's also the fact that process shrinks after 22nm are less of a sure thing than they once were, so they've got to factor in the possibility of minor die shrinks as well. Xenos has only had one die shrink in three years, for instance, and we can expect slower progress than that post-2012.

I'd say a transistor budget of around 1 billion for the CPU and 2 billion for the GPU is a lot more realistic, and could still deliver some decent hardware for 1080p games. I'd expect the chip to be DX12 compliant (if the API is ready in time) and based off ATI's latest generation with a few tweaks, so it should be able to eke out more efficiency than a 5870, but don't expect any great 2x performance leaps above it. 2 GB of GDDR5+ UMA, 4 GB if we're lucky. You're still getting an order of magnitude (just) leap forward with that sort of setup, and with only a 2x resolution upgrade to contend with this time around, that's really not too bad imo.

That sucks, but if it is as big as the first Xbox then I would have no problem.

Try as big as your refrigerator! :p

with Quad Crossfire 5970 (4x 5970 cards)?

No, you can only have 4 individual GPUs in CrossFire, and since a 5970 is two 5870-class chips on a single PCB, you can only CrossFire two of the cards together, and the scaling is atrocious.
 
In the PC space sure, on consoles? Well I wouldn't hold my breath...

Try as big as your refrigerator! :p

Exactly what I was thinking for the PC, but the console part is some serious hope on my part.
I did not know it would be that big!!

BTW, "if" the 5870 is 10x Xenos, would that mean the 5970 is approx 20x, in theory or in actual graphics?

As in a leap like the one seen from PS2 to PS3, or Xbox to 360, etc.
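Since the question is about theoretical numbers, here's a rough Python back-of-envelope using commonly cited peak-FLOPS figures. This is theoretical shader throughput only; real-world gains, especially across multi-GPU scaling, are far smaller:

```python
# Back-of-envelope: theoretical shader throughput only; says nothing about
# real-world performance, where CrossFire scaling in particular falls short.
XENOS_GFLOPS = 240.0    # commonly cited Xenos peak (48 ALUs x 10 flops x 500 MHz)
HD5870_GFLOPS = 2720.0  # 1600 ALUs x 2 ops x 850 MHz
HD5970_GFLOPS = 2 * 1600 * 2 * 725 / 1000.0  # two Cypress GPUs at 725 MHz

print(f"5870 vs Xenos: {HD5870_GFLOPS / XENOS_GFLOPS:.1f}x")
print(f"5970 vs Xenos: {HD5970_GFLOPS / XENOS_GFLOPS:.1f}x")
```

So on paper the 5970 is closer to ~19x Xenos than 20x, because its GPUs are clocked below the 5870's.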
 
Sony is not getting any money back on any investment - the investments made in the technology up until now may be what leads them to choose it once again vs an alternative architecture: that is the way to think about it. But the decision will be made on its own merits rather than to validate past decisions.

So you say I'm wrong by repeating what I said with more words?
 
I say we'd be lucky to get a single 5850's performance in the next gen. Semiconductor processes aren't improving as fast as in the previous two decades, and power consumption especially isn't improving much.
Based on how close to the limit the 5970 is (it's clocked slower than dual 5870s due to the 300W limit), GPU design will have to get much smarter to keep up.
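A crude way to see why the 5970 had to be downclocked: treat dynamic power as scaling with f·V², with voltage roughly tracking frequency (so ~f³ overall). This is a toy model with approximate TDP figures, not how AMD actually binned the chips:

```python
# Toy model of why the 5970 runs its two GPUs below 5870 clocks: dynamic
# power scales roughly with f * V^2, and voltage tends to track frequency,
# so treat board power as proportional to f^3. TDP figures are approximate.
HD5870_TDP_W = 188.0             # single Cypress GPU board at 850 MHz
PCIe_LIMIT_W = 300.0             # spec ceiling for one add-in card
naive_dual_w = 2 * HD5870_TDP_W  # 376 W: two full-clock GPUs blow the budget

# Frequency scale factor needed to fit under the budget in the f^3 model
scale = (PCIe_LIMIT_W / naive_dual_w) ** (1 / 3)
print(f"estimated max clock: {850 * scale:.0f} MHz")
# ~790 MHz under this model; the real 5970 ships at 725 MHz, so AMD left
# extra margin on top of what this crude estimate suggests.
```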
 
I say we'd be lucky to get a single 5850's performance in the next gen...

I think we may get something better; I can recall an article on N4G (within the last 5 months or so) in which some ATI scientist or tech guru was showing off an upcoming chipset and remarked, "if you think that is impressive, wait till you see our next consoles" (or console chips).

I tried to find the article again but with zero results (it was one line within an entire topic).
 
So you say I'm wrong by repeating what I said with more words?

Your post was phrased as a question - as is this one - mine was phrased as a statement. My point was to answer you that no, Sony has no "vested interest in continuing to get money out of their investment." But yes the SPEs could be ported to another processor (in theory). In reality I think it is going to depend on who the primary source of the PS4 CPU ends up being as to how likely that would even be, but in truth Sony probably already knows whether SPEs in the next system is something they are interested in, and is proceeding accordingly. And of course I'd like to point out that a true 'Cell 2' isn't 100% ruled out at this point either, nor is IBM's own participation in such a project.
 
It is funny that some microwaves do not even output 1200 Watts.
Now, even at 22nm I think we may have a slight problem. Slightly off topic: is it possible to build a PC with Quad CrossFire 5970 (4x 5970 cards)?

Food for thought, Clueless?

You can build that kind of PC; it will be useful for GPGPU, though not for graphics.

Have a look at this: the "FASTRA" computer from the University of Antwerp was/is a Phenom with four 9800 GX2 cards, meant as a cheap "desktop supercomputer".
http://fastra.ua.ac.be/en/index.html

Here is the FASTRA II, built on an ASUS mobo with seven PCIe 16x slots. It's a monster sporting thirteen GPUs: six GTX 295 boards for CUDA plus a GTX 275 for display.
http://fastra2.ua.ac.be/
 
I definitely know by 2012 there has to be a leap over the 5970.

ATI in 28 months (May 2007 - Sept. 2009) went from 500 GFLOPS at 80 nm to around 3000 GFLOPS at 40 nm. If they keep this pace, by July 2012 (which is 28 months away) we should have 15000+ GFLOPS GPUs, and a 30 TFLOPS dual-GPU card. But I don't think it will happen. :D
 
ATI in 28 months (May 2007 - Sept. 2009) went from 500 GFLOPS at 80 nm to around 3000 GFLOPS at 40 nm...

I am not expecting an increase of that size, but still a large one.


Anyone care to explain GPU vs. GPGPU and the importance of tech going down that route?
 
Anyone care to explain GPU vs. GPGPU and the importance of tech going down that route?

Clueless, I'm not trying to be rude, but I think you should probably spend some time searching threads and reading more while you familiarize yourself with the landscape and some of what's going on in the console technology realm. ;)

All of the questions you are asking in my mind are being answered at this very moment, if not in this thread, then in others. I would also remind all posters that the console technology sub-forum is held to a slightly higher standard of discourse than the regular console forum, and is more in line with the larger B3D environment, which assumes basic knowledge of industry parlance and trends.

But to answer the question: GPGPU (General-Purpose computing on GPUs) is the trend of offloading traditionally CPU-bound tasks onto the GPU. GPUs increasingly have logic built in to support a more general instruction set, and are thus ideally suited for computational tasks that benefit from high degrees of parallelism. The way to think of it, for those who keep their focus more squarely on the modern console space, might be to say that the space for which Cell's SPEs were highly suited is also the space in which modern GPUs are increasingly competitive.
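To make the idea concrete, here's a toy Python sketch of the GPGPU programming model: the work is written as a per-element "kernel" (SAXPY here), and a GPU runtime would fan those invocations out across thousands of threads. The `launch` helper is a made-up stand-in for that runtime and just maps serially:

```python
# Toy illustration of the GPGPU programming model: express work as a
# "kernel" applied independently to every element, so thousands of GPU
# threads (faked here with a plain loop) can run it in parallel.
def saxpy_kernel(i, a, x, y):
    """One thread's worth of work: y[i] = a * x[i] + y[i]."""
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    # A real runtime (CUDA, OpenCL, DirectCompute) would schedule these n
    # independent invocations across the GPU's SIMD units; we map serially.
    return [kernel(i, *args) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
result = launch(saxpy_kernel, len(x), 2.0, x, y)
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

Because each element's result depends on no other element, the work scales with however many ALUs you can throw at it, which is exactly the shape of problem both SPEs and modern GPUs chase.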
 
Clueless, I'm not trying to be rude, but I think you should probably spend some time searching threads and reading more...

Thanks. I will try to look around the site.

BTW, I tried the wiki and it was a mess of tech jargon (imo).
 
I can make a supposition:
In ATI chips, performance has scaled roughly linearly with the number of SIMDs and with frequency, so soon they will hit a thermal/size wall, and once that happens they will change to a more efficient architecture that makes a jump, accelerating the GFLOPS race.
So maybe, in a bit more than 2 years from now, 15 TFLOPS could be realistic for a top-of-the-line GPU, at least if all goes right...
 
I think we may get something better; I can recall an article on N4G in which some ATI scientist or tech guru remarked "if you think that is impressive, wait till you see our next consoles"...

An ATI rep pimped the Wii's graphical capabilities pre-launch; it means nothing. I'm in agreement with corduroygt: I don't think we'll see anything much beyond a 5850, at least in terms of die size, though I'm sure with specific tweaks and a newer architecture they can eke out some more efficiency.

So 2 billion transistors and a ~3 TFLOPS chip is the sort of ballpark I'd be expecting. That's still going to be pretty big for a console chip, even at 22nm; expecting anything more out of a small $300 box is just wishful thinking.
 