I was reading a Comdex article at Tom's Hardware and stumbled upon this:
"There was a lot of confusion about the memory bandwidth of the DDR2 Modules NVIDIA is using on their new GeForce FX card and how to calculate it. Most information available on DDR2 explains that DDR2 has twice the data-bandwidth than DDR - made possible by a 4-bit prefetch instead of 2-bit used with DDR.
Now let's look at the GeForce FX. The card uses DDR2 memory, which means it uses a prefetch of 4 and, in theory, doubles the amount of data transferred again. If a card runs at a 1 GHz DDR2 data rate, the modules can be run at a quarter of that: a moderate 250 MHz. That's what people mean when they say that DDR2 is a cheap solution with a lot of headroom. You can also read that in this JEDEC whitepaper on page 6.
But NVIDIA is using Samsung DDR2 modules with a DRAM cell frequency of 500 MHz, only half the data frequency. This means the DDR2 memory on the GeForce FX behaves just like DDR memory, only at higher clock frequencies.
So here we go:
16 bytes * 500 MHz * 2 = 16 GB/s
This agrees with a Samsung whitepaper on the DDR2 modules NVIDIA is using for the GeForce FX. It says that one module (32-bit) has a bandwidth of 4 GB/s, which means 16 GB/s for 128 bits. The GeForce FX uses 2 banks with 4 modules each, in case you wondered after counting the number of chips on the card."
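
To sanity-check the numbers for myself, here's a quick Python sketch of the quoted figures. The bandwidth_gbps helper and its name are purely my own invention for illustration; the bus widths and clock speeds are the ones the article gives.

    def bandwidth_gbps(bus_bits, clock_mhz, transfers_per_clock):
        # peak bandwidth in GB/s = bus width in bytes * clock in MHz * transfers per clock
        return (bus_bits / 8) * clock_mhz * transfers_per_clock / 1000

    # GeForce FX as shipped: 128-bit bus, 500 MHz DRAM cells, DDR-style
    # double data rate (the quoted "16 bytes * 500 MHz * 2"):
    print(bandwidth_gbps(128, 500, 2))      # 16.0 GB/s

    # "True" 4-bit-prefetch DDR2 at a 1 GHz data rate would need only
    # 250 MHz DRAM cells for the same result:
    print(bandwidth_gbps(128, 250, 4))      # 16.0 GB/s

    # Per-module check against the Samsung figure: one 32-bit module at
    # 4 GB/s, four modules across the 128-bit bus:
    print(bandwidth_gbps(32, 500, 2) * 4)   # 16.0 GB/s

All three come out to the same 16 GB/s, so the article's arithmetic holds up: the headline data rate is the same whether the chips run 2-bit prefetch at 500 MHz or 4-bit prefetch at 250 MHz.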
So, my question is: is this true? Does this mean the GeForce FX could use a 4-bit-prefetch DDR2 interface in the future? Could the NV35, or even a refresh of the NV30, be released without much reworking of the original core, provided the memory becomes available sometime next year? So perhaps a 128-bit interface could be overkill for now?