Is this true?

bdmosky

Newcomer
I was reading a Comdex article at Tom's Hardware and stumbled upon this:

"There was a lot of confusion about the memory bandwidth of the DDR2 Modules NVIDIA is using on their new GeForce FX card and how to calculate it. Most information available on DDR2 explains that DDR2 has twice the data-bandwidth than DDR - made possible by a 4-bit prefetch instead of 2-bit used with DDR.

Now let's face the GeForce FX. The card is using DDR2 memory, which means it's using a prefetch of 4 and doubles the amount of data transferred again - in theory. If a card is running at a 1 GHz DDR2 data rate, the modules can be run at a quarter of that: a moderate 250 MHz. That's what people mean when they say that DDR2 is a cheap solution with a lot of headroom. You can also read that in this JEDEC whitepaper on page 6.

But NVIDIA is using Samsung DDR2 modules with a DRAM cell frequency of 500 MHz - only half the data frequency. This means that the DDR2 memory on the GeForce FX behaves just like DDR memory, only with higher clock frequencies.

So here we go:

16 Bytes * 500 MHz * 2 = 16 GB/s

This agrees with a Samsung whitepaper on the DDR2 modules NVIDIA is using for the GeForce FX. It says that a single module (32 bit) has a bandwidth of 4 GB/s. This means 16 GB/s for 128 bit. The GeForce FX is using 2 banks with 4 modules each - if you wondered after counting the number of chips on the card."
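
Checking the per-module figure against the total myself (using only the numbers in the quote):

32 bit = 4 Bytes, so 4 Bytes * 500 MHz * 2 = 4 GB/s per module
4 modules * 32 bit = 128 bit, so 4 * 4 GB/s = 16 GB/s

So the bandwidth math is consistent either way; it's the prefetch / cell-frequency part I can't verify.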

So, my question is... is this true? Does this mean the GeForce FX could use a 4-bit-prefetch DDR II interface in the future? Could the NV35 or even a refresh of the NV30, provided the memory becomes available sometime next year, be released without much reworking of the original core? So perhaps a 128-bit interface could be overkill for now?
 
Could the NV35 or even a refresh of the NV30

wouldn't the NV35 *be* a refresh of NV30?

a faster/tweaked NV30, even with faster memory, would still be an NV30.

I thought a refresh is an architecturally enhanced core, like the GF2 over the GF1, the GF4 Ti over the GF3/Ti200/Ti500, the TNT2 over the TNT, or the Riva 128ZX over the Riva 128.


speed bumps or tweaks like GeForce2 Ultra, GF3 Ti500 and NV28 aren't refreshes (correct?) even though Nvidia would like everyone to believe they are.
 
Whoever wrote that piece is very confused.

There are no "different versions" of DDR-II. I don't know the best way to explain it, but I'll try in a simplistic way (and someone correct me if I'm really off base here...)

Think of DDR-II as having three different frequencies:

1) "Internal" frequency (Or DRAM Cell frequency).
2) "Interface" frequency. (Or "frequency of the module")
3) "Effective" frequency. (Referred to as "data frequency" in that article)

For both the original DDR and DDR-II, effective frequency is double that of interface frequency. (Hence the "double" in "double data rate".)

The difference between DDR and DDR-II is that the "internal" frequency of DDR-II is 1/2 the interface frequency, whereas for DDR-I, internal frequency = interface frequency.

So, 500 MHz DDR-II (ALL 500 MHz DDR-II) runs "internally" at 250 MHz and has a "data rate frequency" of 1 GHz - including the Samsung memory that the GeForce FX uses. You never see the 250 MHz (internal) figure quoted in specs, though. You either see 500 MHz (the "interface") or 1 GHz (the "effective").
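
To put numbers on it, start from the 1 GHz data rate quoted for the GeForce FX memory (my own arithmetic, not something out of a Samsung datasheet):

Effective (data) frequency: 1 GHz
Interface frequency: 1 GHz / 2 = 500 MHz
Internal (DRAM cell) frequency: 500 MHz / 2 = 250 MHz (data rate / 4 - the 4-bit prefetch)

The bandwidth comes out the same whichever clock you quote (16 Bytes * 500 MHz * 2 = 16 GB/s on a 128-bit bus); the deeper prefetch only changes how slowly the DRAM cells themselves have to run.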

Did that clear things up?
 
Well, we're splitting hairs here on word choice. I meant a refresh to basically be a clock-bumped, faster card. The NV35 I expect to be a new core with the same basic infrastructure as the NV30, but with a few enhancements to design and features. I think in the past a refresh did refer to a new core based on the older design, a la TNT 1 to 2, GeForce 1 to 2, and GeForce 3 to 4. Now, though, I think refresh is being used to describe bumps of the same basic core, just improving clock and memory speeds, a la GeForce 2 to GeForce 2 Ultra, GeForce 3 to GeForce 3 Ti 500. Anyways, who really cares?
 
That makes sense, although I haven't read up on the whitepapers, mostly because I probably couldn't understand them.
 
Interestingly enough, the Samsung whitepapers refer to it as GDDR II SDRAM. I remember some discussion on a few subtle differences between this version and the GDDR3 that ATI was touting. I wasn't really aware that the DDR II that nVidia is using is actually GDDR2.

*Edit*
Oh, and from quickly thumbing through the Samsung whitepapers, I did see a 4-bit prefetch mentioned in there as well, so it appears you are right, Joe.
 