Josh378 said:
Each of the Realizm 800’s visual processors interfaces with a 256-bit memory bus populated by 256MB of GDDR-3 memory, forming what 3DLabs likes to call a 512-bit bus.
...
So this means that we're getting a TRUE 512-bit GDDR3 interface, or what? This will be my last question... I apologize if I'm wasting your time, guys...
I think you are confusing memory density (Mbit) with memory interface width (bits). 3DLabs is saying that this graphics processor, which is really two processors, each with its own 256-bit memory interface, combines to form what they call a 512-bit memory interface. So this is not a true 512-bit interface as we usually understand it. It would be something like Nvidia claiming a 512-bit memory interface for two cards running in SLI; it's not really true in practice. However, it is true that the combined width of all (in this case, two) memory interfaces on the device amounts to 512 bits. (* See EDIT)
Just to iron this out for you: when DRAM manufacturers speak about a 512-Mbit memory chip, they are describing its capacity in bits. Divide by 8 (8 bits/byte) and you get bytes.
512 Mbit / 8 = 64 MB
In a 256-bit memory crossbar where each 'channel' or 'lane' is 32 bits wide, you need 8 chips (8 × 32 = 256). Multiply 8 × 64 MB and you see that the maximum configuration with this 512-Mbit chip works out to 512 MB.
So these GDDR3 chips are 32 bits wide, and you need 8 of them to fully populate a 256-bit memory interface. For a 512-bit interface you would need either 16 such DRAM chips or parts with a wider interface, such as 64-bit-wide chips (you could imagine each chip being dual-channel).
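The arithmetic above can be sketched in a few lines of Python. The chip specs are just the hypothetical 512-Mbit, 32-bit-wide GDDR3 part used as the example, not the actual parts on the Realizm 800:

```python
# Sketch of the DRAM math above, assuming a hypothetical
# 512-Mbit, 32-bit-wide GDDR3 chip on a 256-bit bus.

BITS_PER_BYTE = 8

chip_density_mbit = 512   # per-chip capacity in megabits
chip_width_bits = 32      # per-chip interface width
bus_width_bits = 256      # GPU memory interface width

# Megabits -> megabytes: 512 Mbit / 8 = 64 MB per chip
chip_capacity_mb = chip_density_mbit // BITS_PER_BYTE

# Chips needed to fill the bus: 256 / 32 = 8 lanes
chips_needed = bus_width_bits // chip_width_bits

# Maximum memory with one chip per 32-bit lane: 8 * 64 MB = 512 MB
max_memory_mb = chips_needed * chip_capacity_mb

print(chip_capacity_mb, chips_needed, max_memory_mb)  # 64 8 512
```

Doubling `bus_width_bits` to 512 doubles `chips_needed` to 16, which is the 16-chip case mentioned above.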
EDIT:
It should probably be considered that the modern 3D chip is already partitioned to some extent into groups of 4 pixel pipelines called quads (the Radeon 9700 has 8 pipelines, the GeForce 6800 GT/Ultra have 16). One could imagine the memory interface being designed to service these quads, so the fact that the chip is two 'units' 'glued' together may not matter when speaking about the width of the bus. I was a bit hasty and rough in comparing this situation to Nvidia SLI, because SLI has lots of other issues that may not be present on the Realizm. So it's probably best to concede that this is a 512-bit interface, unless some information particular to this design reveals that it never really acts the way a unified 512-bit memory interface would. I'm also considering the possibility that a non-unified memory interface may be just as good as, or perhaps even better than, a unified one in these modern, highly scalable architectures.