Bill said:
The point is TWO 512's would offer a step beyond anything else currently achievable.

And you're not the first to put forth that "point", and not the first who apparently does not understand the issues involved in such an undertaking. Just for starters, imagine the board area a dual-GPU setup would take up. Have you seen pics of ASUS's and Gigabyte's twin-6600GT vidcards? They're absolutely enormous, and that's just the space needed for the GPUs, their memory and power regulation. Add to that Cell itself, its memory and power regulation, the southbridge and auxiliary chips (of which there'll be a bunch in the first PS3 revision no doubt, judging by Sony's past hardware designs), and so on. Plus the XIO-to-XIO bridge needed to connect two GPUs rather than just one, I might add (serial links aren't PCI buses; you can't just arbitrarily chain devices onto them).
There's simply no way you're going to fit all of that into the current PS3 case. And no, before you think the thought, you can't put the chips on alternate sides of the mobo either, because there isn't space in the board layers for all the extra traces that would require. Besides, the reverse side of the mobo is where the BDROM, the hard drive bay and possibly the internal power supply will go. No room for such ideas.
Bill said:
I know SLI in a PC uses redundant RAM. This COULD be a big enough issue to kill the idea. However, isn't there a way to virtualize two GPU's as one?

You could say a 7800GTX is a "virtualized" SLI pair of vanilla 6800s. That isn't SLI in any way, shape or form though; it's just expanding the chip to include twice as many rendering pipelines, with the cost drawbacks that come with manufacturing really huge silicon dies. So the short answer to your question is going to be "no, not really". Not with an immediate-mode renderer, anyway. Had Nvidia been making a tile-based deferred renderer for PS3, two or more GPUs might have been able to share memory at least somewhat efficiently. An immediate-mode renderer accesses memory in a very chaotic manner (one of the reasons MS didn't go with a fully UMA design this time 'round); with two of them sharing the same RAM, they'd end up stepping on each other's feet constantly, not to mention likely starving for bandwidth even if memory contention weren't an issue.
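To put rough numbers on the bandwidth point, here's a back-of-envelope sketch. All figures are illustrative assumptions, not actual PS3 specs:

# Back-of-envelope: per-GPU bandwidth when two chips share one memory
# pool. Both numbers below are assumptions for illustration only.

pool_bandwidth_gb_s = 22.4   # hypothetical single GDDR3 pool
contention_penalty = 0.15    # assumed loss from two chips interleaving
                             # chaotic immediate-mode access patterns

one_gpu = pool_bandwidth_gb_s
two_gpus_each = (pool_bandwidth_gb_s / 2) * (1 - contention_penalty)

print(f"one GPU sees:          {one_gpu:.1f} GB/s")
print(f"each of two GPUs sees: {two_gpus_each:.1f} GB/s")

# Doubling the GPUs halves per-chip bandwidth before contention is even
# counted; with it, each chip gets less than half.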
Bill said:
SLI is just a term I'm using. It could be implemented differently. SLI has limitations based on the PC.

It's not limitations based on the PC per se. They're limitations inherent in rendering different pixels on different GPUs into different memory, and they'd appear on any platform using SLI, not just PCs.
In the case of render-to-texture effects, there's no way to guarantee that the pixels you'll need on GPU A will actually have been rendered by GPU A. They might sit in GPU B's memory and do you absolutely zero good.
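A toy illustration of that ownership problem under a split-frame arrangement (the scanline split and all names here are made up for this sketch):

# Toy model: split-frame rendering where GPU A owns the top half of
# every render target and GPU B owns the bottom half. Everything here
# is hypothetical, purely to illustrate the ownership problem.

HEIGHT = 512
SPLIT = HEIGHT // 2   # A renders rows [0, SPLIT), B renders the rest

def owner(row: int) -> str:
    return "A" if row < SPLIT else "B"

# A later pass on GPU A samples the render target at arbitrary rows,
# e.g. an environment-map or distortion lookup:
rows_sampled_by_A = [10, 200, 300, 480]

for row in rows_sampled_by_A:
    if owner(row) != "A":
        # The texel A needs was rendered into B's local memory.
        print(f"row {row}: lives on GPU B, useless to A without a copy")
    else:
        print(f"row {row}: local to GPU A, no problem")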
There are ways around that, as I implied in my original reply. You can duplicate all render-to-texture work on both GPUs, for example, but that would destroy the efficiency you claimed would magically appear from doing SLI in a console.
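To see how fast duplication eats the gain, a hedged Amdahl-style estimate (the render-to-texture shares below are assumptions, not measured figures):

# Amdahl-style back-of-envelope: if a fraction f of frame time is
# render-to-texture work duplicated on BOTH GPUs, only the remaining
# (1 - f) is actually split between them. The f values are assumptions.

def sli_speedup(f: float) -> float:
    # duplicated part runs at 1x, the split remainder at 2x
    return 1.0 / (f + (1.0 - f) / 2.0)

for f in (0.0, 0.15, 0.30, 0.50):
    print(f"RTT share {f:.0%}: {sli_speedup(f):.2f}x of a single GPU")

# At a 30% duplicated share the second GPU buys ~1.54x rather than 2x,
# and that's before any memory contention is counted.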
Bill said:
And Guden Oden, funny, I just read Nvidia CEO claiming "SLI will become more important going forward", not less.

Your reply shows you simply did not understand the point I was making; the non sequitur you present has no bearing on anything I said.
You might also want to stop and think for yourself for a second before blindly accepting the word of the CEO of a company with a vested interest in seeing SLI become prevalent. That's not to say the tech doesn't have a future, just that you should train yourself to recognize propaganda when you see it.
Bill said:
I agree SLI has problems in a PC setup, again that's not really my point.

And my point is that you merely theorize that the problems of SLI are bound to the PC platform (while presenting no evidence to support that claim), when in actuality they're not.