Mintmaster said:
As for NVidia's much touted 128-bit rendering, that will need even more bandwidth.
...
In the end, ATI has a very good point. NVidia's pipes are quite unbalanced and bandwidth-starved nearly all the time.
I'll ask my question again, since no one answered last time: is the GFFX aimed primarily at performance desktop computing (e.g. games), or is it equally (or even more so) aimed at digital cinema (Pixar, Square)?
So far, all the debate has focused on the GFFX's performance relative to the 9700 in real-time games, where having enough bandwidth to maintain high frame rates is crucial. As many have pointed out, the GFFX has some nice new features, but it appears to lack the bandwidth to use any of them at frame rates high enough for games. ATI has said that the GFFX looks poorly balanced: a little heavy on features and too light on bandwidth.
Now correct me if I'm wrong, but CGI creation does not require high frame rates, and therefore does not require extremely high bandwidth. Of course, production time and cost are probably roughly inversely proportional to bandwidth, but the CGI can still be created without it.
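To put a rough number on that (my own back-of-the-envelope figures, not anything from a spec sheet), here is how framebuffer traffic alone compares between a real-time game target and an offline CGI pace, assuming a 1600x1200 frame with 128-bit (16-byte) color and just one read plus one write per pixel:

```python
# Hypothetical numbers for illustration only: framebuffer traffic at 1600x1200
# with 128-bit (4 x FP32 = 16-byte) color, counting one read and one write per
# pixel and ignoring texture and Z traffic entirely.

PIXELS = 1600 * 1200
BYTES_PER_PIXEL = 16                               # 4 channels x 32-bit float
TRAFFIC_PER_FRAME = PIXELS * BYTES_PER_PIXEL * 2   # one read + one write

def bandwidth_gb_per_s(frames_per_second):
    """Sustained framebuffer bandwidth needed at a given frame rate."""
    return TRAFFIC_PER_FRAME * frames_per_second / 1e9

print(bandwidth_gb_per_s(60))    # real-time game at 60 fps: ~3.7 GB/s
print(bandwidth_gb_per_s(0.1))   # offline render at 10 s/frame: ~0.006 GB/s
```

The absolute numbers are made up, but the ratio is the point: dropping from 60 fps to one frame every ten seconds cuts the per-second bandwidth demand by a factor of 600.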
However, doesn't digital animation benefit greatly from some of the GFFX's new features? For example, don't color and image quality improve with a full 128-bit FP pipeline? Won't the quality of digital animation and CGI improve with some of the GFFX's other unique features, bandwidth-limited as they may be? I'm no animator, so I won't guess, but hopefully someone can make some informed comments on this.
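Here is one concrete way I understand the precision difference could show up (a toy sketch of my own, not how any actual hardware pipeline is wired): when intermediate results are quantized to 8 bits per channel after every blend pass, dim contributions can round away entirely, while a floating-point pipeline keeps them.

```python
# Toy illustration (mine, not the GFFX pipeline): accumulating many dim blend
# passes in 8-bit integer precision versus floating point.

def blend_8bit(base, layer, alpha, passes):
    """Quantize to an integer 0-255 after every pass, like an 8-bit framebuffer."""
    value = base
    for _ in range(passes):
        value = round(value + alpha * layer)
        value = max(0, min(255, value))
    return value

def blend_fp(base, layer, alpha, passes):
    """Keep the running value in floating point; no per-pass quantization."""
    value = float(base)
    for _ in range(passes):
        value += alpha * layer
    return value

# A dim layer (1 out of 255) blended at 25% opacity over 20 passes:
print(blend_8bit(0, 1, 0.25, 20))   # 0   -- each pass rounds the contribution away
print(blend_fp(0, 1, 0.25, 20))     # 5.0 -- the accumulated detail survives
```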
I think Nvidia knew this was going to be an issue, but wanted to go ahead and make a card that included all the features needed to power the next generation of CGI and digital animation. If you were Pixar and were in the process of upgrading your entire render farm, which solution would you pick: a farm based on the GFFX or one based on the 9700, all else being equal?