Guys, get a load of this Q&A segment I just saw on nvNews....OMG...*this* is the kind of PR nVidia is doing these days...?? BTW, "A" below comes from an official nVidia spokesperson...
Q: We all know by now that the GeForceFX is equipped with 128 bit DDR II. What exactly are the technical benefits of running 128 bit DDR II over 256 bit DDR?
A: From a technical design complexity point of view, fewer pins are better. The wider the bus, the more pins are required on the GPU, the more traces you have to route across the board and the more often you have a memory granularity issue. A 128-bit bus requires fewer connections than a 256-bit bus. Of course, bandwidth is important too. For situations where you cannot raise your clock rates or cannot improve your data compression to get more effective bandwidth, going to a wider bus is a clear method to increase bandwidth. [emphasis mine] However, if you can run a narrower bus at a faster clock rate, you can get just as much raw bandwidth. This is exactly what we did with GeForce FX. We chose to use DDR2 because we could run it at 500MHz! Here's a pop quiz question--which is faster, half the width at twice the speed or twice the width at half the clock rate? Mathematically the raw bandwidth is the same for those two cases.
*chuckle*....Unfortunately, 500MHz wasn't enough to catch the 9700P's raw bandwidth--on a 128-bit bus they'd need to be running the memory at 620MHz (1240MHz DDR effective) to be able to state that. But of course with nVidia they'll state it anyway...
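Just to show my work on that 620MHz figure--here's the raw-bandwidth arithmetic as a quick sketch, assuming the commonly cited specs (GeForce FX: 128-bit DDR2 at 500MHz, i.e. 1000MHz effective; 9700 Pro: 256-bit DDR at 310MHz, i.e. 620MHz effective):

```python
def bandwidth_gbps(bus_bits: int, effective_mhz: float) -> float:
    """Raw memory bandwidth in GB/s (1 GB = 1e9 bytes).

    bus_bits / 8 gives bytes transferred per clock edge;
    effective_mhz is the DDR-doubled transfer rate.
    """
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

# GeForce FX: 128-bit DDR2 at 500MHz (1000MHz effective)
gffx = bandwidth_gbps(128, 1000)    # 16.0 GB/s
# Radeon 9700 Pro: 256-bit DDR at 310MHz (620MHz effective)
r9700 = bandwidth_gbps(256, 620)    # 19.84 GB/s

# What clock would a 128-bit bus need to match the 9700P?
# 19.84 GB/s / 16 bytes-per-transfer = 1240MHz effective = 620MHz DDR
match_mhz = r9700 * 1e9 / (128 / 8) / 1e6
print(gffx, r9700, match_mhz)
```

So "half the width at twice the speed" only matches a 256-bit competitor if you actually reach twice *its* clock--500MHz vs. 310MHz isn't twice.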
I love the "in situations where you cannot raise your clock rates or improve your data compression, going to a wider bus is a clear method to increase bandwidth" line....drivel...ah...don't tell me that some people actually *listen* to garbage like this? Remarkable--I had no idea it was this bad...(first time I've paged to nvnews in years, I admit)....whew! Reading this stuff would make me feel like part of the bubblegum and skateboard generation, if you know what I mean.