Ok, nice interview with interesting comments, especially the one about the 512-bit memory interface. This is clearly one advantage of the R600 over the competition, but my question is: where will all this bandwidth be used? Before the release, speculation was that there was some new form of AA or IQ-increasing feature that would use all this bandwidth. Was it a checkbox feature, or is there something in the future that we can expect from AMD/ATi that is going to surprise us?
Moving on to a more serious matter, and picking up my two round objects packed in a sack in the process, I want to ask you about a subject that's so delicate I've seen threads burn to smithereens.
Now, I'm not sure if this is going to get answered or not, but I can't resist asking the lead of the R600 project directly on this one (and that's you, Sir Eric).
When Lindholm and his team produced what we now know as G80, what were your initial impressions? What was the reaction from you guys? Was it one of those "...congrats to the other camp..." kind of reactions, or did all hell break loose on your side of the camp?
To be honest, what did ATi expect from NVIDIA? Did you guys have any idea that G80 would be a unified shader architecture?
Looking at the benchmarks, the 2900XT is very competitive with the 8800GTS 640MB, winning more often than losing. However, everybody on this board knows that the R600 wasn't aimed at the GTS, but rather at the full-fledged G80, i.e. the 8800GTX/Ultra. The R600 also doesn't consistently outperform the last generation by 2x. The interview makes the R600 sound all nice and dandy, but real-life, concrete benchmarks/performance don't quite illustrate that.
Maybe I'll stop there, but I hope it gets answered.
Although it's not flawless, well done to you guys! Competition is great.