Please Clear This Up - What PC GPU Does the XBOX 360 & PS3 Use?

Going off on a tangent, it seems to me that rather than stressing over silicon fabrication, whoever can get a breakthrough in board and sundry manufacturing to enable high-end performance on the cheap will have the upper hand. If, for example, Sony could get RSX on a 256-bit bus for little more cash than RSX costs now, they'd have a huge advantage. Going forwards, if one console is price-constrained to a 128-bit bus and another can double or triple that, the advantage would be huge. We keep hearing about node reductions and all that jazz, but the basic production techniques never get a word. Are they pretty static? Same track-laying tech as yesteryear with no room for improvement?



Huh? I don't believe Sony can muck around with PS3 internals now, given it's out the door.

I too really wonder what the console makers are going to do next time round to dodge this whole 128-bit bus limitation. EDRAM seems a given, but it's not going to solve your texture bandwidth needs. Perhaps simply scaling RAM speeds + 128-bit bus + EDRAM will be enough in the next gen?
 
Huh? I don't believe Sony can muck around with PS3 internals now, given it's out the door.
Not PS3! Next-gen. PS4 and XB3000 etc. The mobos seem the biggest restriction these days, and it'll be worse going forwards. 128-bit buses are going to look very long in the tooth next gen. Even 256-bit is going to be slow.
 
Not PS3! Next-gen. PS4 and XB3000 etc. The mobos seem the biggest restriction these days, and it'll be worse going forwards. 128-bit buses are going to look very long in the tooth next gen. Even 256-bit is going to be slow.

Is there a way they could do a "riser" board that only has the CPU, GPU and RAM on it?

If Sony were to, say, go with a form of the G80, wouldn't they need stupid amounts of bandwidth to feed it (or would they make all the stream processors run pixel code, leaving vertex stuff to Cell)? I can see MS going with some (another) form of the R600, and probably utilizing even larger amounts of EDRAM to ease the bandwidth pain.
 
G80 will be ancient history by then. We'd be looking at G110 or G120, probably. And looking at how BW is increasing, perhaps BW on PC GPU boards by then will be 200+ GB/s? I don't know what production limits might hit before then. At the moment, we've got 384-bit buses. 512-bit by 2011 seems probable. On PC GPUs you can charge for that. Consoles are more price constrained.
 
G80 will be ancient history by then. We'd be looking at G110 or G120, probably. And looking at how BW is increasing, perhaps BW on PC GPU boards by then will be 200+ GB/s? I don't know what production limits might hit before then. At the moment, we've got 384-bit buses. 512-bit by 2011 seems probable. On PC GPUs you can charge for that. Consoles are more price constrained.

I was thinking that consoles are more price constrained. But it doesn't look like there will be any new media formats to worry about then, and storage is only getting cheaper. Sony will more than likely use Cell (or its successor), so those costs shouldn't be too high. It is about time the GPU gets all the loving in the next systems, especially since 400-600 dollars has been (more or less) accepted by the market. I think both sides could afford to use a G120/R800 by then.
 
G80 will be ancient history by then. We'd be looking at G110 or G120, probably. And looking at how BW is increasing, perhaps BW on PC GPU boards by then will be 200+ GB/s? I don't know what production limits might hit before then. At the moment, we've got 384-bit buses. 512-bit by 2011 seems probable. On PC GPUs you can charge for that. Consoles are more price constrained.

512-bit GPUs should be with us in a matter of weeks with R600!

It would require a GDDR speed of 3.2 GHz to achieve 200 GB/s, which I expect is achievable by 2011, but I couldn't say for sure as I don't know what the roadmaps look like.
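
For anyone wanting to double-check that sort of figure: peak bandwidth is just bus width times effective data rate. A minimal Python sketch, assuming the 512-bit bus and 3.2 GHz effective GDDR being discussed here (the second line is just today's 8800GTX for comparison):

Code:
def peak_bandwidth_gb_per_s(bus_width_bits, effective_data_rate_ghz):
    # bytes per transfer * billions of transfers per second = GB/s
    return (bus_width_bits / 8) * effective_data_rate_ghz

print(peak_bandwidth_gb_per_s(512, 3.2))  # 204.8 -> the ~200 GB/s mentioned above
print(peak_bandwidth_gb_per_s(384, 1.8))  # 86.4  -> roughly an 8800GTX today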
 
I'd think the next consoles will finally have 256-bit buses. And I think that they will, yet again, use a bit of EDRAM that's sized just right for their output resolution (probably 1080p) to get around bandwidth limits of the general RAM pool. Like the GameCube, PS2, Wii, and 360.
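
To put a rough number on "sized just right for 1080p", here's a back-of-the-envelope sketch, assuming a 32-bit colour buffer plus a 32-bit depth/stencil buffer (the per-pixel sizes and sample counts are illustrative assumptions, not any console's actual format):

Code:
def framebuffer_mib(width, height, colour_bytes=4, depth_bytes=4, samples=1):
    # colour + depth/stencil per pixel, scaled by the multisample count
    total_bytes = width * height * (colour_bytes + depth_bytes) * samples
    return total_bytes / (1024 * 1024)

print(framebuffer_mib(1280, 720))              # ~7.0  MiB - fits 360's 10 MB EDRAM without AA
print(framebuffer_mib(1920, 1080))             # ~15.8 MiB - 1080p with no AA
print(framebuffer_mib(1920, 1080, samples=4))  # ~63.3 MiB - 1080p with 4xAA needs far more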
 
So RSX has more FLOPS than G80, according to those specs.

Let's just say that RSX (by itself) couldn't even hope to begin to compare to the G80. You do realize that one 8800GTX in most situations is faster than a 7950GX2 (which AFAIK is two 7900s on one board in SLI).

There would be no contest if the PS3 had a G80 in it. No one in their right mind would even look at the 360 (well, at least no one here).

That is kinda why we are all hoping either MS or Sony goes with at least a 256-bit bus, although 384 or 512 would be nice as well.
 
There would be no contest if the PS3 had a G80 in it. No one in their right mind would even look at the 360 (well, at least no one here).

A PS3 with the high-end G80 rather than RSX would be sweet! I wonder how it would change the dynamic of the relationship between Cell and the GPU, though, as the way I understand it, one of Cell's primary benefits in PS3 is to make up for certain areas of weakness in RSX that G80 would overcome, e.g. vertex shading, geometry shading, etc.
 
A PS3 with the high-end G80 rather than RSX would be sweet! I wonder how it would change the dynamic of the relationship between Cell and the GPU, though, as the way I understand it, one of Cell's primary benefits in PS3 is to make up for certain areas of weakness in RSX that G80 would overcome, e.g. vertex shading, geometry shading, etc.

One thing's for sure: I doubt the G80 would be CPU-limited very often. There's not really anything Cell can help G80 with, so it would have a lot of spare cycles to force-feed the G80 with :D
 
A PS3 with the high-end G80 rather than RSX would be sweet! I wonder how it would change the dynamic of the relationship between Cell and the GPU, though, as the way I understand it, one of Cell's primary benefits in PS3 is to make up for certain areas of weakness in RSX that G80 would overcome, e.g. vertex shading, geometry shading, etc.

Well, if it had a G80 instead of RSX, then it would be able to compete with high-end PCs! ;)

And I doubt that Cell would need to help the G80 the way it has to help RSX. More CPU time for better physics and AI.
 
!!Warning noob question!! :D

What do more transistors mean for a GPU? A lot of people say that RSX is close to the 7600GT, but RSX has 300 million transistors versus the 7600GT's 177 million... and RSX doesn't even have PureVideo... What are those extra ~130-140 million transistors for?
 
:???: Wow, we're back to "if the PS3 had a G80...". That would be great, but it would make it cost $1000.
 
So RSX has more FLOPS than G80, according to those specs.

And since when are FLOPS relevant for comparing GPU power?

X1900XTX: 554 GFLOPS

7900GTX: 250 GFLOPS

I wouldn't say that the X1900XTX is 2x as powerful @ rendering vs a 7900GTX. Would you?

I also would like to know how Sony calculated that 1.8 TFLOPS, when all I can get when I try to do the math is:

( ((27 FLOPS x 24 pixel pipelines) + (10 FLOPS x 8 vertex pipelines)) x 500 MHz ) = 364 GFLOPS
 
And since when are FLOPS relevant for comparing GPU power?

X1900XTX: 554 GFLOPS

7900GTX: 250 GFLOPS

I wouldn't say that the X1900XTX is 2x as powerful @ rendering vs a 7900GTX. Would you?

I also would like to know how Sony calculated that 1.8 TFLOPS, when all I can get when I try to do the math is:

( ((27 FLOPS x 24 pixel pipelines) + (10 FLOPS x 8 vertex pipelines)) x 500 MHz ) = 364 GFLOPS

Didn't Sony include all the fixed-function stuff in their figure?
 
And since when are FLOPS relevant for comparing GPU power?

X1900XTX: 554 GFLOPS

7900GTX: 250 GFLOPS

I wouldn't say that the X1900XTX is 2x as powerful @ rendering vs a 7900GTX. Would you?

I also would like to know how Sony calculated that 1.8 TFLOPS, when all I can get when I try to do the math is:

( ((27 FLOPS x 24 pixel pipelines) + (10 FLOPS x 8 vertex pipelines)) x 500 MHz ) = 364 GFLOPS

Not sure where those figures came from, but they are inaccurate. It's:

X1900XTX: 426.4 GFLOPS

7900GTX: 301.6 GFLOPS

I think most would agree that the XTX has quite a bit more pixel shader power than the GTX, but overall performance will be held back by other areas, e.g. texturing, bandwidth, vertex shading, drivers?

On the same scale RSX would be 232 GFLOPs and the 8800GTX would be 518.4 GFLOPs.

On their own, I don't think FLOPS are a good measure of overall performance, but they're certainly an indicator and, when looked at alongside other factors, a very relevant measure (a quick sanity check of the arithmetic behind those figures is sketched below).

EDIT: BTW, the 1.8 TFLOPs figure included all the fixed-function logic (and probably some very creative maths). I wouldn't be surprised if G80 were hitting 2 or 4 TFLOPs using the same calculations.
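
For what it's worth, here's the arithmetic I assume is behind those programmable-shader figures: units × FLOPs per unit per clock × clock. The per-unit FLOP counts used below (12 per R580 pixel shader, 16 per G71/RSX pixel pipe, 10 per vertex pipe, 3 per G80 scalar ALU) are the commonly quoted ones, so treat this as a sketch rather than official numbers:

Code:
def shader_gflops(units, flops_per_unit_per_clock, clock_ghz):
    return units * flops_per_unit_per_clock * clock_ghz

# X1900XTX: 48 pixel shaders + 8 vertex shaders at 650 MHz
print(shader_gflops(48, 12, 0.65) + shader_gflops(8, 10, 0.65))  # 426.4
# 7900GTX: 24 pixel pipes + 8 vertex pipes at 650 MHz
print(shader_gflops(24, 16, 0.65) + shader_gflops(8, 10, 0.65))  # 301.6
# RSX: same layout as G71, but at 500 MHz
print(shader_gflops(24, 16, 0.50) + shader_gflops(8, 10, 0.50))  # 232.0
# 8800GTX: 128 scalar ALUs, MADD + MUL = 3 FLOPs, at the 1.35 GHz shader clock
print(shader_gflops(128, 3, 1.35))                               # 518.4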
 
EDIT: BTW, the 1.8 TFLOPs figure included all the fixed-function logic (and probably some very creative maths). I wouldn't be surprised if G80 were hitting 2 or 4 TFLOPs using the same calculations.

No, it wasn't.

The 1.8 TFLOPS figure was for the entire system (Cell + RSX), as opposed to the GPU only.

Hence the reason the initial figure of 2 TFLOPS went down after Sony announced they would be making 1 SPE redundant.
 
No, it wasn't.

The 1.8 TFLOPS figure was for the entire system (Cell + RSX), as opposed to the GPU only.

Hence the reason the initial figure of 2 TFLOPS went down after Sony announced they would be making 1 SPE redundant.

Wrong.

~2 TFLOPS was announced by Sony, 1.8 TFLOPS for RSX and 0.256 TFLOPS for Cell (E3 2005).

1 TFLOP = 1000 GFLOPS
 