Joe DeFuria said:
Of course, there are too many unknowns to do much of any real speculating. Maybe nVidia IS doing a "gigapixel" like design, at which point the age old question of "can deferred renderers really compete in the PC market" might finally be answered. Maybe nVidia IS planning on multi-chip boards for the high-end gamer. Maybe they invented their own brand new memory type that delivers 300 GB/sec bandwidth. Who knows.
I consider this very likely (like that would mean much to anyone)... I've been thinking that working out the issues in such a design is why nVidia has been delayed, ever since there was so much uncertainty and so little concrete information coming from the nVidia hype machine as the R300 approached. I don't see them having wasted their cash inflow on reinventing the IMR with more brute force while ignoring the resources they acquired from 3dfx. It seems to me that the NV30 is most likely:
A deferred renderer with caches out the yin-yang (focused on the high end, and perhaps necessary for high-speed triangle setup)... in other words, a "brute force" deferred renderer (compared to the Kyro, for example).
Trying to capitalize on the benefits you and others have listed as possible for a deferred renderer (AA, alternate-tile rendering for multichip, and other enhancements and features enabled by the architecture shift); see the sketch after this list for the kind of thing I mean.
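To be clear about what I mean by deferred here, this is a toy Python sketch of my own, not anything from nVidia or ImgTech; the tile size, data layout, and point-in-triangle test are all made up just to show the flow. Geometry gets binned into screen tiles first, then each tile is resolved in a small on-chip-sized buffer, and only the final pixels ever go out to memory. Tiles being independent of each other is also why the alternate-tile multichip split is so natural.

# Toy sketch of tile-based deferred rendering: bin triangles into screen
# tiles, then resolve depth per tile in a small on-chip-sized buffer and
# write each final pixel to "memory" exactly once. All names and sizes
# here are assumptions for illustration only.

TILE = 32                 # assumed tile size in pixels
WIDTH, HEIGHT = 256, 128  # tiny "screen" for the example

def tile_bounds(tri):
    # tri = ((x0,y0,z0), (x1,y1,z1), (x2,y2,z2), color)
    xs = [v[0] for v in tri[:3]]
    ys = [v[1] for v in tri[:3]]
    return (int(min(xs)) // TILE, int(min(ys)) // TILE,
            int(max(xs)) // TILE, int(max(ys)) // TILE)

def bin_triangles(tris):
    # Pass 1: build per-tile triangle lists (the "scene buffer").
    bins = {}
    for tri in tris:
        tx0, ty0, tx1, ty1 = tile_bounds(tri)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

def inside(tri, x, y):
    # Crude point-in-triangle test via signed edge functions.
    (x0, y0, _), (x1, y1, _), (x2, y2, _) = tri[:3]
    def edge(ax, ay, bx, by):
        return (x - ax) * (by - ay) - (y - ay) * (bx - ax)
    e0, e1, e2 = edge(x0, y0, x1, y1), edge(x1, y1, x2, y2), edge(x2, y2, x0, y0)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def render(tris):
    framebuffer = {}  # stands in for external memory
    # Tiles are independent, which is why handing alternate tiles to
    # different chips is straightforward.
    for (tx, ty), tile_tris in bin_triangles(tris).items():
        # Pass 2: depth and color for this tile live in a tiny on-chip buffer.
        depth = [[float("inf")] * TILE for _ in range(TILE)]
        color = [[None] * TILE for _ in range(TILE)]
        for tri in tile_tris:
            for j in range(TILE):
                for i in range(TILE):
                    x, y = tx * TILE + i, ty * TILE + j
                    if x >= WIDTH or y >= HEIGHT or not inside(tri, x, y):
                        continue
                    z = tri[0][2]          # flat depth per triangle, for brevity
                    if z < depth[j][i]:    # only the front-most fragment survives
                        depth[j][i], color[j][i] = z, tri[3]
        # Only now does anything touch "external memory": one write per pixel.
        for j in range(TILE):
            for i in range(TILE):
                if color[j][i] is not None:
                    framebuffer[(tx * TILE + i, ty * TILE + j)] = color[j][i]
    return framebuffer

if __name__ == "__main__":
    tris = [((10, 10, 0.8), (120, 20, 0.8), (30, 100, 0.8), "red"),
            ((20, 15, 0.3), (110, 25, 0.3), (40, 90, 0.3), "blue")]  # blue occludes red
    fb = render(tris)
    print(len(fb), "pixels written once each;",
          sum(1 for c in fb.values() if c == "blue"), "are blue")

The point of the toy is that all the overdraw and Z traffic stays inside the tile buffers; external memory only ever sees the binned geometry and the final pixels.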
It seems to me that doing the above while hitting the DX9 specs would explain the delays in the nVidia engineering team's execution, and why they have been stuck on brute-force enhancements to the base GF3 technology for so long.
I think this is the "revolution" nVidia intends (the fact that ImgTech has been advocating deferred rendering wouldn't matter from their perspective; ImgTech has yet to deliver a high-end-focused part).
I think this would explain how they could compete performance-wise with a 128-bit bus, and why there has been no noise (that I know of) from them about a 256-bit bus. My understanding is that a deferred renderer could open up the bandwidth of even a 128-bit DDR interface to some pretty flagrant pixel-pushing accomplishments.
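Some very rough numbers to show why I think the bandwidth argument holds up. This is my own back-of-the-envelope Python, and every figure in it (bus clock, overdraw, AA samples, scene-buffer size) is an assumption, not a spec:

# Back-of-the-envelope framebuffer traffic, just to show why a tiler could
# get away with a 128-bit bus. All numbers below are assumed, not specs.

BUS_BYTES_PER_SEC = 128 / 8 * 650e6   # 128-bit DDR at ~650 MHz effective ~= 10.4 GB/s
PIXELS = 1024 * 768
FPS = 60
OVERDRAW = 3                          # assumed average depth complexity
AA_SAMPLES = 4

# Immediate-mode renderer: every fragment does a Z read, Z write, and color
# write to external memory, multiplied by overdraw and (for a multisampled
# framebuffer) by the AA sample count.
imr_bytes = PIXELS * OVERDRAW * AA_SAMPLES * (4 + 4 + 4) * FPS

# Tile-based deferred renderer: Z and AA samples live in on-chip tile
# buffers; external memory sees roughly one final color write per pixel,
# plus the binned scene buffer (guessing ~50 bytes per triangle, 100k
# triangles, written once and read back once).
tbdr_bytes = (PIXELS * 4 + 100_000 * 50 * 2) * FPS

print("IMR framebuffer traffic : %5.2f GB/s" % (imr_bytes / 1e9))
print("TBDR framebuffer traffic: %5.2f GB/s" % (tbdr_bytes / 1e9))
print("128-bit DDR budget      : %5.2f GB/s" % (BUS_BYTES_PER_SEC / 1e9))

With those (debatable) assumptions the IMR burns roughly 6.8 GB/s of a ~10.4 GB/s bus on framebuffer traffic alone at 4x AA, before a single texel is fetched, while the tiler stays under 1 GB/s. Texture traffic comes on top of both figures, but the headroom difference is the whole point.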
That said, I could be Completely Wrong (Duh!), but this is the specific theory I'm sticking with, based on my admittedly limited knowledge of 3D technology... I simply can't think of anything else that better fits nVidia's past engineering track record, their behavior and lack of innovation in the NV2x family, or that even begins to fulfill their claims and the hints of their cheerleaders (Anand's comments come to mind; to be fair, he did mention ATi's upcoming part in his GF4 preview, though positive statements for nVidia are always emphasized more than those for ATi in both previews).
So, tell me why I'm wrong, and maybe some better theories will arise.