Threaded rendering

It's not the memory BW that's the issue as much as the BW between the GPUs. Making a 50-100 GB/s connection between two GPUs or having a "northbridge" with 100GB/s connections to the RAM and each GPU isn't particularly easy, and the latter would waste gobs of silicon. You need a lot of pins running at a very high speed to get that kind of a connection.
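
As a rough sanity check, here is a back-of-the-envelope sketch in Python. The 100 GB/s target comes from the post above; the 5 Gb/s per-pin rate is an assumed GDDR5-class figure borrowed from later in the thread, and clock/control/ground pins are ignored, so the result is only a floor:

    # Rough pin-count floor for a 100 GB/s GPU-to-GPU link.
    # Assumes 5 Gb/s per data pin (GDDR5-class signalling, an assumption);
    # real links also need clock, control, and ground pins on top of this.
    target_gbps = 100 * 8          # 100 GB/s expressed in Gb/s
    per_pin_gbps = 5               # assumed per-pin data rate
    data_pins = target_gbps / per_pin_gbps
    print(data_pins)               # -> 160 data pins, before any overhead
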

I do think it will eventually happen, though. Maybe we'll see fibre-optic GPU interconnects in a few generations.

I think it's technically possible to use the GDDR protocol between memory controllers on adjacent GPUs. It may also be possible to put switch chips between the DRAM channels of each GPU to create a UMA (unified memory architecture). Either of these solutions could potentially solve the persistent-surfaces issue.
 
Either of the solutions I suggested would use no more pins than a standard 256-bit bus does. The switch-chip solution would add a lot of cost and power draw to the design, though.
 
Yes, but if you are already space-limited and don't have room for a 512-bit bus, then you still won't have room for two 256-bit buses.

Is there a possibility of going with a high speed serial interface instead of the wide parallel interface that's now in use?

Regards,
SB
 
Is there a possibility of going with a high speed serial interface instead of the wide parallel interface that's now in use?
Yes, if you're willing to give up bandwidth. ;)

GDDR5 at 5 Gbps per pin is already closing in on what's physically possible on PCBs. If your question is whether we can replace it with a serial bus that runs at 40 Gbps, then the answer is clearly 'no'.
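
For comparison, a hedged sketch of the numbers in Python (the 256-bit width, 5 Gbps per pin, and 40 Gbps serial rate are the figures mentioned in the thread; line coding and protocol overhead are ignored):

    # Aggregate bandwidth of a 256-bit GDDR5 bus at 5 Gb/s per pin,
    # versus how many hypothetical 40 Gb/s serial lanes would be needed to match it.
    bus_width_bits = 256
    per_pin_gbps = 5
    total_gbps = bus_width_bits * per_pin_gbps   # 1280 Gb/s aggregate
    total_gbytes = total_gbps / 8                # 160 GB/s
    serial_lanes = total_gbps / 40               # 32 lanes at 40 Gb/s each to match
    print(total_gbytes, serial_lanes)            # -> 160.0 GB/s, 32.0 lanes

So a single fast serial link doesn't come close; you would need dozens of such lanes just to break even, before any overhead.
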
 
This is what I said they should try ages and ages ago: basically, end up with a package like Intel's first dual cores, where the dies were separate. Then you could get better yields on monster chips, but apparently it's a poor idea.
 