RSX in the same league as Quad SLI?

Jaws said:
It's a matter of optimising a cost/ performance curve. See my reply above, it's not totally unfeasible...
What's the largest plausible chip you can make? I don't know of anything much over 300 million transistors and they're costly. Would a dual core 600 million transistor processor even be possible, at least for <$1000 a chip? Those 300mm wafers aren't cheap and if you can only fit a hundred chips in, and 3 quarters of those are dead because of defects, prices will be insane. And that's only for a dual-core. How much would be needed in a single core to attain the levels of a 4 G70 SLI rig?
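The back-of-envelope arithmetic behind "prices will be insane" can be made concrete. A minimal sketch, with a wholly assumed wafer cost (the die count and yield figures are the ones from the post):

```python
# Rough cost-per-good-die arithmetic. The wafer cost is a hypothetical
# illustration, not a real fab number.
wafer_cost = 5000.0       # assumed cost of one 300mm wafer, USD
dies_per_wafer = 100      # "only fit a hundred chips in"
yield_rate = 0.25         # "3 quarters of those are dead"

good_dies = dies_per_wafer * yield_rate
cost_per_good_die = wafer_cost / good_dies
print(f"{good_dies:.0f} good dies -> ${cost_per_good_die:.2f} each")
# -> 25 good dies -> $200.00 each
```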

Plus we've been given a transistor count of >300 million. That's unlikely to be much over, as they'd have said something like >400 million transistors in the presentation to play up the numbers. So, have Sony managed to get 2, 3 or 4x the performance of G70 into about the same number of transistors? Does the removal of PureVideo really free up enough silicon real-estate to fit another job lot of V and P shaders?!

I don't see how a chip of that performance can be made cost effective, nor in a package that works with a CE product that isn't liquid nitrogen cooled, in the given transistor count. Though I'll be happy to be proven wrong! ;)
 
Shifty Geezer said:
What's the largest plausible chip you can make?

I've heard of one of the Itaniums being north of a billion transistors...

Shifty Geezer said:
I don't know of anything much over 300 million transistors and they're costly. Would a dual core 600 million transistor processor even be possible, at least for <$1000 a chip?

To clarify, I was referring to dual 'die' not dual 'core' (implied single die). Also, I wasn't referring to 2x300 mil transistors, the optimal curve could well be 2x175 mil transistors being cheaper than 1x300 mil transistor die etc...

Cost-wise I couldn't tell you, but I'm sure retail prices have a high markup over manufacturing prices. A 600 mil transistor die should be possible; the yield/clocking will determine a major cost/performance factor...
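The intuition that two smaller dies can out-price one big die follows from how yield falls with die area. A minimal sketch using the classic Poisson yield model, with made-up defect-density and area figures:

```python
import math

# Classic Poisson yield model: yield = exp(-D * A).
# All numbers are illustrative guesses, not real process data.
D = 0.5              # defect density, defects per cm^2 (assumed)
CM2_PER_100M = 1.0   # die area per 100M transistors (assumed)

def area_for(millions):
    return millions / 100.0 * CM2_PER_100M

def yield_for(millions):
    return math.exp(-D * area_for(millions))

# Silicon area burned per *working* part (area / yield) is a rough
# proxy for manufacturing cost.
cost_big = area_for(300) / yield_for(300)        # one 300M die
cost_pair = 2 * area_for(175) / yield_for(175)   # two 175M dies

print(f"1x300M: {cost_big:.1f} cm^2 of wafer per good part")
print(f"2x175M: {cost_pair:.1f} cm^2 of wafer per good pair")
# The two smaller dies come out cheaper even with more total transistors,
# because each failed small die wastes far less silicon.
```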

Shifty Geezer said:
Those 300mm wafers aren't cheap and if you can only fit a hundred chips in, and 3 quarters of those are dead because of defects, prices will be insane. And that's only for a dual-core.

Again, I'm referring to dual 'dies' not 'cores'...

Shifty Geezer said:
How much would be needed in a single core to attain the levels of a 4 G70 SLI rig?

All things being equal, dual 'die' would be less efficient than 'dual core'... but the cost/ performance curve will be different. And you'll have to look at the 'whole' picture. E.g. dual dies may allow a GDDR pool off each die, therefore increasing bandwidth so that the saving can be applied to costs elsewhere...

Shifty Geezer said:
Plus we've been given a transistor count of >300 million. That's unlikely to be much over, as they'd have said something like >400 million transistors in the presentation to play up the numbers. So, have Sony managed to get 2, 3 or 4x the performance of G70 into about the same number of transistors?

I'm not talking about 2x, 3x, 4x G70 performance. I'm talking about cost/performance curves being optimised for certain limits, not about absolute performance. There's a difference. Like I said, 2x175 may be better than 1x300...

Shifty Geezer said:
Does the removal of PureVideo really free up enough silicon real-estate to fit another job lot of V and P shaders?!

I think you've missed my point here... I'm talking about dual dies and cost/performance optimisations. The removal of PureVideo would still apply to single/dual dies...

Shifty Geezer said:
I don't see how a chip of that performance can be made cost effective, nor in a package that works with a CE product that isn't liquid nitrogen cooled, in the given transistor count. Though I'll be happy to be proven wrong! ;)

Just look at the Xenos parent/daughter die for an example of my point...
 
Shifty Geezer said:
No, no and no. That would be an insanely large and costly chip. Insanely. Could a multicore G70 actually be fabricated without defects? You'd be looking at 600+ million transistors


I don't think it would be 600+ million transistors. They would strip out some of the G70 features that aren't required for the PS3. I'm not sure how much that would save, and it may still not be plausible, though.

Speng.
 
Strange

Slay said:
This is from ps3-live, and here is the Google Translation.


Here is the link to the video interview from GameTrailers:
http://www.gametrailers.com/gamepage.php?id=2337

His statement that PS3 graphics are not so different from this machine is very strange. Maybe he thinks this machine is not so efficient.

Also, the Tflop and fill-rate claims are very strange. If RSX is a 550MHz G70 with 13.2 Gpixels and 1.8 Tflops, then for 5.2 Tflops the quad-SLI must be 400MHz, but for 41 Gpixels the quad-SLI must be 430MHz. So someone has made mistakes, or maybe RSX is not a 550MHz G70, but maybe that is not so "likely". Maybe a lack of "communication" between the marketing people for both products.
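That arithmetic checks out: scaling the quoted RSX figures (550MHz, 13.2 Gpixels/s, 1.8 Tflops) linearly with clock, the per-card clocks implied by the quad-SLI totals land roughly where the post says:

```python
# Back out the per-card clock implied by the quad-SLI marketing totals,
# assuming throughput scales linearly with clock (all figures from the thread).
rsx_clock = 550.0        # MHz
rsx_tflops = 1.8         # per chip at 550MHz
rsx_gpixels = 13.2       # per chip at 550MHz

quad_tflops = 5.2        # quoted quad-SLI total
quad_gpixels = 41.0      # quoted quad-SLI total

clock_from_tflops = rsx_clock * (quad_tflops / 4) / rsx_tflops
clock_from_gpixels = rsx_clock * (quad_gpixels / 4) / rsx_gpixels

print(f"implied clock from Tflops:    {clock_from_tflops:.0f} MHz")   # ~397
print(f"implied clock from fillrate:  {clock_from_gpixels:.0f} MHz")  # ~427
# The two claims imply different clocks (~400 vs ~430), so the quoted
# totals can't both describe the same configuration.
```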

This is a crazy world of advertising, no?
 
Jaws said:
To clarify, I was referring to dual 'die' not dual 'core' (implied single die).
...chop...
Also, I wasn't referring to 2x300 mil transistors, the optimal curve could well be 2x175 mil transistors being cheaper than 1x300 mil transistor die etc.
I appreciate what you're saying and agree with the logic of your points. However, the topic wasn't 'is a multi-core GPU possible?' but 'is RSX equivalent to a quad SLI'd G70 setup?'. So my questions were about how that much power could be squeezed into RSX in an affordable way. Even dual-core G70s would be around the 600 million transistor mark.
 
Shifty Geezer said:
I appreciate what you're saying and agree with the logic of your points. However, the topic wasn't 'is a multi-core GPU possible?' but 'is RSX equivalent to a quad SLI'd G70 setup?'. So my questions were about how that much power could be squeezed into RSX in an affordable way. Even dual-core G70s would be around the 600 million transistor mark.

I guess it doesn't have to be the highest end does it? You could go dual core 6600s and stay under 300million. :)
 
I wouldn't read TOO much into his comments..

Although from the GameTrailers vid, I did like that we got what I think was a tiny glimpse of footage of the PS3 Blu-ray demo (it seemed to be showing Memoirs of a Geisha video)...?
 
AlphaWolf said:
Which is mostly cache.

Yeah, it was Montecito, IIRC, 1.7-1.8 billion transistors and around 20MB+ of cache (around 1 billion trans.)

Shifty Geezer said:
I appreciate what you're saying and agree with the logic of your points. However, the topic wasn't 'is a multi-core GPU possible?' but 'is RSX equivalent to a quad SLI'd G70 setup?'. So my questions were about how could that much power be squeezed in RSX in an affordable way?

Okay, from your questions, it didn't sound like you got my point, as it can also be extended to quad 'dies' too. And I stress 'dies', not cores. It's obviously nonsense for four 7800 GTXs to be economical in the PS3.

Shifty Geezer said:
Even dual-core G70's would be around the 600 milion transistor mark.

G70 != 7800 GTX, so it can mean a different pipeline config, as I mentioned earlier, with fewer VS/PS/ROPs etc... Though I don't expect the RSX net transistor count to be far from 300 million...
 
Just to point out, a dual-core GPU makes very little sense. GPUs are already massively parallelized, so if you can manufacture a dual-core version, you could just as well manufacture a single-core version with double the number of VS, PS, etc. at the same clockspeed. That's not true of CPUs, as the bottleneck there is the inherent parallelism in the code and not the number of execution units. The only advantage I could see for a dual-core GPU is if you wanted to drive two separate displays with equal power; then it makes slightly more sense, although you still have to share the same GPU memory, bus, and bandwidth, so there'd still be a pretty big bottleneck.
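The CPU side of that argument is just Amdahl's law: extra cores only help up to the serial fraction of the code, whereas a pixel workload is close to perfectly parallel, so a GPU can simply be built wider. A toy illustration with assumed parallel fractions:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# parallel fraction of the workload. The p values are illustrative only.
def speedup(parallel_fraction, n_units):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# CPU-like code: say 80% parallelisable -> a second core gives well
# under 2x, and no core count can ever exceed 5x.
print(f"CPU-like (p=0.8), 2 cores:  {speedup(0.8, 2):.2f}x")

# GPU-like pixel work: essentially 100% parallel -> doubling the
# execution units roughly doubles throughput, which is why you'd just
# build a wider single core instead of a dual-core GPU.
print(f"GPU-like (p=1.0), 2x units: {speedup(1.0, 2):.2f}x")
```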
 
Sethamin said:
Just to point out, a dual-core GPU makes very little sense. GPUs are already massively parallelized, so if you can manufacture a dual-core version, you could just as well manufacture a single-core version with double the number of VS, PS, etc. at the same clockspeed. That's not true of CPUs, as the bottleneck there is the inherent parallelism in the code and not the number of execution units. The only advantage I could see for a dual-core GPU is if you wanted to drive two separate displays with equal power; then it makes slightly more sense, although you still have to share the same GPU memory, bus, and bandwidth, so there'd still be a pretty big bottleneck.

I pretty much agree with this, if you're referring to 'dual cores' on 'one' die. However, I'd like to point out that dual cores != dual dies (single core each)...
 
Jaws said:
I pretty much agree with this, if you're referring to 'dual cores' on 'one' die. However, I'd like to point out that dual cores != dual dies (single core each)...

Absolutely. And dual-die is an extremely viable way to get a lot of power into a transistor count that's actually manufacturable. There's obviously some overhead logic in divvying up the workload and then recombining the data, but nothing that hasn't already been explored in SLI technology. Of course, just increasing the execution units by itself doesn't help; you need the bandwidth to feed both those dies, but from what I've read it seems like the PS3 should have that to spare.

The biggest reason I'd be dubious about this approach is the costs. Most estimates put the initial Xenos costs at somewhere between $100 and $150 per unit, and I don't think it's unreasonable to use that as a starting point for the RSX. Given the BD drive and the Cell that are already going to cost a ton at launch for Sony, and given their cash flow situation, I can't see them throwing another extra $100 part in there, even if it will cost-reduce very well. Perhaps they could throw in 2 (or more) smaller GPUs, but they need to be fairly new to get the latest programmable shaders and other such goodies. I still rate it as unlikely, although certainly not impossible.
 
Sethamin said:
...
Perhaps they could throw in 2 (or more) smaller GPUs,...

That's the only alternative scenario that would be feasible, IMO. If it's cheaper than a single die, the saving can be shifted towards more GDDR bandwidth off each die. Though 2x 128-bit buses and 256+256 GDDR seems way too much of an offset...
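For a sense of what a 128-bit bus per die buys, here is a rough bandwidth sketch; the GDDR3 data rate is an assumed figure, not a quoted spec:

```python
# Rough GDDR bandwidth math for the dual-die scenario: one 128-bit
# bus per die. The data rate is an assumption (700MHz GDDR3, DDR).
bus_bits = 128
data_rate_gtps = 1.4     # giga-transfers/sec, assumed

per_die_gbps = bus_bits / 8 * data_rate_gtps   # bytes/transfer * GT/s
total_gbps = 2 * per_die_gbps                  # two independent buses

print(f"per die: {per_die_gbps:.1f} GB/s, dual-die total: {total_gbps:.1f} GB/s")
# -> per die: 22.4 GB/s, dual-die total: 44.8 GB/s
```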

Sethamin said:
...but they need to be fairly new to get the latest programmable shaders and other such goodies. I still rate it as unlikely, although certainly not impossible.

I'm not sure why they would need to be fairly "new"? It would still be based on G7x tech...

It was something DeanoC said about reverse engineering the G70 driver and NV use an evolutionary methodology that made me think along these lines...
 
When I say they have to be "new", I only mean that they can't just grab a bunch of NV30s and stick them in there, since there are still a lot of fixed-function pipelines in those. If they want SM3.0 or HDR or any other of the newer graphical capabilities, they'll have to grab something that's been designed in the past year, at least. That's all. It could absolutely be based on the G7x line, so long as they can make it cheap enough.
 