I'm suggesting that around the time ATI was deciding the fundamental specs of R600 (I'm guessing around R420's launch?), ATI didn't expect GDDR3 to clock as high as it did by the time R600 launched, nor GDDR4 to be around until the (smaller process, smaller MC) refresh.
There were posts here at B3D from ATI guys around then, complaining that core processing power was vastly outstripping the scaling of memory.
It appears they were out by a factor of 2. That's such a ludicrous margin that "didn't understand the memory market" doesn't explain it all.
Particularly as little of R600 delivers 2x the performance of its predecessor - it really is a feature-inflection GPU (D3D10), not a performance inflection.
fp16-based data is a major focus of the architecture, though, and that does like bandwidth. HDR+AA render targets chew 2x the bandwidth of non-HDR+AA. And fp16 texture formats (which aren't compressed like their int8 brethren) consume way, way more bandwidth per texel - but they're barely used, e.g. only for render-target tone-mapping, a fraction of the total frame-rendering workload. So there are spikes in R600's usage that could well chew through all its instantaneous bandwidth.
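Back-of-the-envelope, with my own assumed byte counts (not anything from ATI), the per-pixel/per-texel ratios look like this:

```python
# Rough bytes-per-pixel/texel comparison - assumed figures, not vendor data.
BYTES_INT8_RGBA  = 4    # 8 bits x 4 channels, the common render-target format
BYTES_FP16_RGBA  = 8    # 16 bits x 4 channels - 2x the render-target traffic
BYTES_DXT1_TEXEL = 0.5  # 8:1 block compression; fp16 textures get no equivalent

print(f"fp16 vs int8 render target: {BYTES_FP16_RGBA / BYTES_INT8_RGBA:.0f}x bandwidth")
print(f"fp16 fetch vs DXT1 fetch: {BYTES_FP16_RGBA / BYTES_DXT1_TEXEL:.0f}x per texel")
```

Those 2x and 16x factors are exactly the kind of spikes I mean.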
I agree that memory has been surprisingly kind in its capabilities these last 2-3 years. RV530 had a hell of a lot of extra bandwidth, too - I guesstimate X1600XT would have performed about the same with 16GB/s instead of its 22.4GB/s, i.e. it had a "40% surplus".
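The "40% surplus" is just shipped bandwidth over sufficient bandwidth, minus one - with my 16GB/s guesstimate standing in for "sufficient":

```python
# Surplus = shipped bandwidth over what the core can actually use.
actual_gbps = 22.4  # X1600XT's shipped bandwidth
enough_gbps = 16.0  # my guesstimate of "enough" - not a measured figure
print(f"surplus: {actual_gbps / enough_gbps - 1:.0%}")  # -> 40%
```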
HD2600XT should provide a control point in this argument. Its design is barely younger than R600's, yet with GDDR4 it hasn't got a "100% surplus". I haven't tried to work out what kind of surplus it does appear to have, but at a rough guess, 20-40%?
Jawed