Bigus Dickus
After seeing calculations done here and everywhere, I have a couple of questions.
This is my understanding of bandwidth calculation, using 100MHz SDRAM as an easy example:
Clock speed * bus width = bits per second; bits per second / 8 bits per byte = bytes per second.
So, for 100MHz SDRAM on a 128-bit bus, this gives: (100e6 cycles/sec * 128 bits) / 8 bits/byte = 1.6 billion bytes per second.
Now, this is where I'm a little confused. It seems common practice to call this 1.6 GB/s, but I was under the impression that 1024 bytes = 1 kilobyte, 1024 kilobytes = 1 megabyte, etc. Therefore, the 100MHz SDRAM on a 128-bit bus would give 1.5 GB/s, not 1.6 GB/s.
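As a minimal sketch of the two conversions (this is just the example above rerun in Python, not numbers from any spec):

```python
# Bandwidth of the 100 MHz SDRAM / 128-bit bus example, converted
# with decimal (1000-based) and binary (1024-based) prefixes.
clock_hz = 100e6   # 100 MHz
bus_bits = 128     # 128-bit bus

bytes_per_sec = clock_hz * bus_bits / 8   # 8 bits per byte -> 1.6e9 B/s

print(bytes_per_sec / 1000**3)   # 1.6   GB/s with decimal prefixes
print(bytes_per_sec / 1024**3)   # ~1.49 GB/s with binary prefixes
```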
I noticed in a calculation Tom Pabst made that he specifically noted using 1024 megabytes = 1 gigabyte, but he neglected all the 1024's when converting B to KB to MB, using 1000 instead.
So, what websites are correct? Are they all using a standard way of calculating memory bandwidth? Is it really correct to say that 100 MHz = 100 Mcycles/second = 100 Mbits/second = 12.5 MB/second instead of 100,000,000 bits/second = 97656 kilobits/second = 95.4 megabits/second = 11.9 MB/second? I would think the latter is the correct way of doing it.
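For what it's worth, here is that same disagreement worked on a single 100 MHz signal carrying 1 bit per cycle (my assumption, purely to show where 12.5 vs. 11.9 MB/s comes from):

```python
bits_per_sec = 100_000_000   # 100 MHz, assuming 1 bit per cycle

# All-decimal chain: 100 Mbit/s -> MB/s
print(bits_per_sec / 8 / 1000**2)    # 12.5 MB/s

# All-binary chain: kilobits -> megabits -> MB
print(bits_per_sec / 1024)           # 97656.25 kilobits/s
print(bits_per_sec / 1024**2)        # ~95.37 megabits/s
print(bits_per_sec / 1024**2 / 8)    # ~11.92 MB/s
```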
So, if the R300 uses 620 MHz memory on a 256-bit bus, is it:
(620 Megahertz * 256) / 8 = 19840 MB/s = 19.8 GB/second
(620 Megahertz * 256) / 8 = 19840 MB/s, 19840 MB/s / 1024 = 19.4 GB/s (Tom's method)
[(620e6 * 256) / 8] / (1024 * 1024 * 1024) = 18.5 GB/s (my method)
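The three candidates above, run side by side (just the arithmetic, assuming 620 MHz effective on a 256-bit bus):

```python
bytes_per_sec = 620e6 * 256 / 8   # 1.984e10 bytes/s

print(bytes_per_sec / 1000**3)          # 19.84  GB/s, all-decimal
print(bytes_per_sec / 1000**2 / 1024)   # ~19.38 GB/s, Tom's mixed method
print(bytes_per_sec / 1024**3)          # ~18.48 GB/s, all-binary
```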
Second question: what actual clock speed is DDR-II 1GHz running? From reading several technical descriptions, it seems that DDR-II doubles the prefetch, fetching 4 bits per pin per access instead of 2, and so has twice the bandwidth per pin of standard DDR memory. It also seems accepted that DDR 400 and DDR-II 400 have the same bandwidth. This would lead to the conclusion that DDR-II 400 is running an actual clock of 100 MHz, which would make DDR-II 1GHz an actual clock of 250 MHz.
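If I have the prefetch depths right (2n for DDR, 4n for DDR-II; my assumption, not taken from any datasheet), the actual core clocks fall out like this:

```python
def core_clock_mhz(data_rate_mt_s, prefetch):
    # Effective data rate = actual core clock * prefetch depth
    return data_rate_mt_s / prefetch

print(core_clock_mhz(400, 2))    # DDR 400:      200 MHz actual clock
print(core_clock_mhz(400, 4))    # DDR-II 400:   100 MHz actual clock
print(core_clock_mhz(1000, 4))   # DDR-II 1GHz:  250 MHz actual clock
```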
It seems rather common for people to refer to DDR-II as being equivalent to DDR but retooled to run at higher clock speeds. I find this rather misleading, unless I am simply misinformed. It is running at a higher effective clock, due to the doubling of bandwidth per pin, but at a slower actual clock. Yes, this does make it easier to reach higher effective clocks, since a lower actual speed is required, but I don't think that amounts to being retooled to run at a higher clock. It is retooled to double bandwidth at an equivalent clock, and initial offerings are clocked below DDR.
Or have I really missed something vital here?