First off, please excuse the piss-poor thread title... I couldn't think of anything much better. A while ago, I was tidying up a little chart of chip specs of mine when a thought popped into my head about how bandwidth figures are calculated for the local memory on video cards. Just bear with me on this one...
Ignoring all the stuff about latencies, instruction lengths and so on, I'm talking about the "standard" theoretical maximum memory bandwidth that's calculated on the basis that so many bits of data can be transferred per clock cycle of the memory bus. Here's one such example:
NVIDIA's GeForce4 Ti4600
From their own site (http://www.nvidia.com/view.asp?PAGE=geforce4ti), the bandwidth is given as 10.4GB/sec.
Although ATI are careful not to display the theoretical bandwidth on their website for the 9700 Pro, most review sites aren't so coy and happily show 19.8GB/sec. Now, the Ti4600 has a memory bus speed of 325MHz and can transfer 128 bits of data twice per cycle, so NVIDIA get their figure like this: (325 x 2 x 128) / (8 x 1000) = 10.4GB/sec - that's MHz x transfers per cycle x bus width in bits, divided by 8 to give MB/sec and then by 1000 to give GB/sec. The 9700 Pro has a bus speed of 310MHz and can transfer 256 bits of data twice per cycle: (310 x 2 x 256) / (8 x 1000) = 19.84, which gets rounded to 19.8GB/sec.
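Just to make the arithmetic explicit, here's a quick Python sketch (the function name and structure are mine; only the clock, transfers-per-cycle and bus-width numbers come from the specs above):

    def bandwidth_gb_per_sec(clock_mhz, transfers_per_cycle, bus_width_bits):
        # MHz x transfers/cycle x bus width in bits, /8 for bytes,
        # /1e9 for the vendors' decimal GB
        bytes_per_sec = clock_mhz * 1e6 * transfers_per_cycle * bus_width_bits / 8
        return bytes_per_sec / 1e9

    print(bandwidth_gb_per_sec(325, 2, 128))  # GeForce4 Ti4600 -> 10.4
    print(bandwidth_gb_per_sec(310, 2, 256))  # Radeon 9700 Pro -> 19.84 (~19.8)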
Anyone else spot the oddity that caught my attention? That's right: 1GB = 1000MB according to NVIDIA et al, not 1GB = 1024MB. They seem quite happy to count bits and bytes properly when it comes to "256-bit GPU" or "32-bit colour", but not with bandwidth. It's just like hard drive manufacturers and their capacity figures. I guess in the past it's been no big deal, but it surprises me that nobody else seems bothered by it, considering the hoo-hah we see over things like pipeline configuration or the number of vertex pipelines. Calculated properly (i.e. taking the raw bytes-per-second figure from the actual clock in Hz and dividing by 1024 x 1024 x 1024 to express it in true GB), the Ti4600 has a bandwidth of 9.69GB/sec and the 9700 Pro 18.48GB/sec... yes, yes, I know I'm being highly pedantic; after all, it's only a loss of about 7%, but hey - I like things to be right!
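If you want to check my "proper" figures, the conversion is just a matter of swapping 10^9 for 1024^3 (Python again; the function name is mine):

    def decimal_to_binary_gb(gb_decimal):
        # vendor GB/sec assumes 10^9 bytes per GB; "real" GB/sec uses 2^30
        return gb_decimal * 1e9 / 1024**3

    print(decimal_to_binary_gb(10.4))   # Ti4600   -> ~9.69
    print(decimal_to_binary_gb(19.84))  # 9700 Pro -> ~18.48
    print(1 - 1e9 / 1024**3)            # the shortfall: ~0.069, i.e. my ~7%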
I have a rather odd feeling right now... kinda like the one where you step into a party and suddenly everyone stops, stares right through you for 10 seconds and then carries on as before... I'll just go and linger in a corner somewhere (with my Smirnoff Ice)...