900 MHz DDR

Actually the clock rate of the memory is 450 MHz. Double Data Rate means that information is passed at two different points each clock cycle, and does not mean the clock rate is doubled.
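The marketing arithmetic behind the "900" figure is simple enough to sketch (numbers taken from the thread; this is just the multiplication, not anything about how DDR actually works electrically):

```python
# DDR "effective" rate: the clock itself runs at 450 MHz, but data moves
# on both the rising and falling edge, so marketing doubles the number.
clock_mhz = 450
transfers_per_cycle = 2  # DDR = Double Data Rate

effective_mhz = clock_mhz * transfers_per_cycle
print(effective_mhz)  # 900 -> the "900 MHz DDR" in the thread title
```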

I believe data is passed on the rising and falling edges of the clock cycle?

:D
 
Yup, you're right.
But this is the 900 DDR memory that's rumoured to be on the NV30.
What's the GeForce4 got, 700? Or 350x2?
So this is 450x2, which should be what the NV30 needs, right?
 
it only does 3.6 GB/s?

Yes, but isn't it 32-bit?
450 MHz * 2 (DDR) * 4 bytes = 3.6 GB/s.
Put 4 of them in parallel for a 128-bit interface, and there you are - 14.4 GB/s.

Or have I misunderstood something?
To be honest, I'm not quite sure exactly what "arranged as 4M x 32" means.
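The per-chip and per-bus arithmetic from the post above can be written out step by step (all numbers come from the thread; the "four chips in parallel" layout is the poster's assumption, not a confirmed NV30 design):

```python
# Bandwidth of one "4M x 32" DDR chip and of a 128-bit interface built
# from four of them, using the thread's numbers.
clock_hz = 450e6          # 450 MHz memory clock
ddr = 2                   # two transfers per clock cycle
chip_width_bits = 32      # one x32 chip

chip_gbps = clock_hz * ddr * chip_width_bits / 8 / 1e9   # GB/s per chip
bus_gbps_128 = chip_gbps * 4                             # 4 chips -> 128-bit bus

print(chip_gbps)     # 3.6
print(bus_gbps_128)  # 14.4
```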
 
horvendile said:
To be honest, I'm not quite sure exactly what "arranged as 4M x 32" means.
I would assume 4 "million" (where million is 2^20) rows each 32 bits wide.
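Under that reading ("4M" meaning 2^22 addressable words, each 32 bits wide), the chip's capacity falls out directly. This is an interpretation of the datasheet shorthand, not something confirmed in the thread:

```python
# "4M x 32": assume 4 binary-million (4 * 2^20) addressable words,
# each 32 bits wide.
words = 4 * 2**20
width_bits = 32

capacity_bits = words * width_bits
capacity_mib = capacity_bits / 8 / 2**20
print(capacity_mib)  # 16.0 -> a 16 MB chip; four in parallel give 64 MB
```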
 
So if the NV30 has a 256-bit memory interface, does it mean 28.8 GB/sec bandwidth + LMA???
Wowww. :)
 
Could somebody actually explain "the rising and the falling edge" of the clock cycle? (Is the bit coded into voltage differences, what's the waveform like, etc.) Every hardware website and their niece was hasty to mention these edges when DDR came about, but nobody took the trouble to make it clear enough to actually understand... and I don't mean any EE grade stuff, just a layman explanation of how the bits are written to and read off the current. Hope you get what LOD I'm targeting here :)
 
opy said:
So if the NV30 has a 256-bit memory interface, does it mean 28.8 GB/sec bandwidth + LMA???

Hm... to be finicky, I'd say not 28.8 + LMA, but rather approaching 28.8 with the help of LMA.
Whether the NV30 will have what amounts to a 256 bit interface is still up in the air (our air, not nVidia's), I believe.
 
Whether the NV30 will have what amounts to a 256 bit interface is still up in the air (our air, not nVidia's), I believe.

I was under the impression that Kirk had already stated that nVIDIA weren't looking for wider busses but to 'increase the quality of pixels' - that was a pretty good indication that they weren't going 256.
 
Gunhead said:
Could somebody actually explain "the rising and the falling edge" of the clock cycle? (Is the bit coded into voltage differences, what's the waveform like, etc.) Every hardware website and their niece was hasty to mention these edges when DDR came about, but nobody took the trouble to make it clear enough to actually understand... and I don't mean any EE grade stuff, just a layman explanation of how the bits are written to and read off the current. Hope you get what LOD I'm targeting here :)

When two electronic devices are communicating, some method of synchronization is required so that you know when data is valid. If you look at the data lines at the wrong time, the data on them may not be what you are expecting. This is handled using a clock signal which toggles between high and low. In a single-edge clock scheme, you know that when the clock changes from low to high (or high to low, depending on the way things are wired up), that is when it is safe to look at the data. In a dual-edge clock scheme, the data is safe to look at on both transitions.
In memory, the clock is probably switching as fast as the signals can change, so the data is valid on only half the transitions, and your data rate is half the switching rate of the memory device. If you can use both edges, then you get a data rate equal to the switching rate.

There is probably a bit more to it than this, but this is the way I understand it.

CC
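The single- versus dual-edge idea above can be illustrated with a toy simulation. This is just counting transitions on a square wave, not a model of real DRAM signalling:

```python
# Toy square-wave clock: each element is the clock level at one sample point.
clock = [0, 1, 0, 1, 0, 1, 0, 1]

# Single-edge (SDR-style): only low-to-high transitions carry data.
rising_edges = sum(1 for a, b in zip(clock, clock[1:]) if a == 0 and b == 1)

# Dual-edge (DDR-style): every transition, rising or falling, carries data.
all_edges = sum(1 for a, b in zip(clock, clock[1:]) if a != b)

print(rising_edges)  # 4 transfer opportunities in this window
print(all_edges)     # 7 -> roughly double, one per edge instead of one per cycle
```

With a longer clock trace the ratio approaches exactly 2:1, which is where the "double" in Double Data Rate comes from.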
 
phynicle said:
DDR 2 running at 1 gigahertz could only do 4 gigabytes per second?????

I really don't know anything about DDRII (Perhaps someone else here could enlighten us?), but it is still DDR, as in Double Data Rate, isn't it? Not QDR. I suppose the difference the end user will see is higher frequencies.
 
horvendile said:
I really don't know anything about DDRII (Perhaps someone else here could enlighten us?), but it is still DDR, as in Double Data Rate, isn't it?

Yep, it's still DDR. The main difference is a change in the signalling technology used between the Memory and Chipset/Graphics chip. The net effect is greater frequencies become possible.
 