Johnathan256 said:
If Nvidia felt that they needed the 256-bit bus on NV30, then I think it is safe to say that they would have used it (if they didn't, that is). Though it might be only speculation until next week, NV30's effective bandwidth will more than likely exceed R300's due to some type of bandwidth-saving technique, and memory speed will not matter for Nvidia as much as it will for Ati. My guess is that David Kirk doesn't want to let any detailed info slip at this point.
You are forgetting that aside from the raw bandwidth numbers, ATI is also using "bandwidth reducing" technologies, but that's really beside the point. The problem is that everyone "needs" a 256-bit bus, but not everyone wants to pay for one. A 256-bit bus costs more because it uses more pins, though I doubt the cost is prohibitive (obviously). What nVidia's doing is clear: they want the fastest off-the-shelf components (read: DDR II) to drive their chip on the least expensive PCB they can design as a reference for OEMs. There's nothing wrong with that, unless a competitor comes along, invests the time to design a 256-bit bus, and manages to ship it at a price point competitive with nVidia's 128-bit products. At that point I'd say nVidia would have a problem from a competitive standpoint.
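Just to put rough numbers on it: the 9700 Pro's 310 MHz DDR on a 256-bit bus works out to about 19.8 GB/s, and for a 128-bit part to even approach that, the memory clock has to do all the work. Here's the back-of-envelope math (the NV30 memory clock below is purely my guess, since nobody outside nVidia knows it yet):

    # Peak bandwidth = bus width in bytes x effective (DDR) clock.
    def peak_gb_per_s(bus_bits, mem_mhz, data_rate=2):
        return bus_bits / 8 * mem_mhz * data_rate / 1000

    print(peak_gb_per_s(256, 310))  # R300, 310 MHz DDR: ~19.8 GB/s
    print(peak_gb_per_s(128, 500))  # NV30 if DDR II hits 500 MHz: 16.0 GB/s

So even at 500 MHz DDR II (a clock that would be bleeding-edge), the 128-bit part still comes up short on paper.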
As far as "letting things slip," it seems that Kirk has let an awful lot of things slip lately...
Like making it crystal clear that nv30 is using a 128-bit memory bus, boasting that it will have a 128-bit color pipeline while declaring that ATI "only" has a 96-bit one (and trying to make a difference out of it), and claiming that its shader pipelines can run "thousands" of instructions while the R300 is limited to far fewer, etc. While I think it's possible nVidia might include some sort of bandwidth-saving technique, I don't expect it to be any more earth-shattering than ATI's in that regard, although I do expect nVidia to market it with the usual hyperbole.
Basically, it sounds to me like Kirk knows full well that because of the 9700 Pro, the nv30 won't have anywhere near the impact it would have had in a competition-free environment (which I honestly think is what nVidia expected). Therefore, he's already started nitpicking long before the nv30 ships. Frankly, I can't see what good nVidia thinks "thousands of shader instructions" will do for DX9 software, since the R300 already supports the official DX9 limit (ps_2_0 allows 64 arithmetic plus 32 texture instructions per pixel shader, if memory serves). Maybe in custom software? But all told, these strike me as extremely weak defenses of a product that is still a few months from shipping. And his comments on the difference between 96 bits and 128 bits in the color pipeline are not worth repeating. Ditto his comments on DDR II, because ATI can use it as well and still maintain a 256-bit bus. Or does Kirk think people will be so enamored of the RAM's clock that they will forget the width of the bus...?
Any way you slice it, though, for high color depths plus FSAA and AF, you need *real* bandwidth, not assumed, "effective," guesstimated bandwidth (which is good only for marketing). And right now, based on what Kirk has let slip, it sure looks like ATI will remain the bandwidth king for the foreseeable future.
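To illustrate why: here's a crude estimate of framebuffer traffic alone at 1600x1200 with 4x AA, assuming 4 bytes of color write plus a 4-byte Z read and 4-byte Z write per sample, and ignoring any compression. All the per-pixel figures here are my own rough assumptions, not anyone's published numbers:

    # Crude framebuffer-traffic model: per sample, 4 B color write
    # plus 4 B Z read and 4 B Z write; compression ignored.
    def fb_traffic_gb_s(w, h, fps, overdraw, samples, bytes_per_sample=12):
        return w * h * fps * overdraw * samples * bytes_per_sample / 1e9

    print(fb_traffic_gb_s(1600, 1200, 60, 2, 4))  # ~11.1 GB/s before a single texel is fetched

Texture fetches come on top of that, so "effective" bandwidth tricks can only hide so much; at these settings, the raw number is what counts.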
I know, of course, that I could be all wrong here and that nVidia could have some other-than-brute-force technology up its sleeve; I just don't believe that to be the case, however. nVidia has little experience doing anything other than brute force--indeed, their entire GPU line to date has been based on brute force. 3dfx didn't really give them much, and I recall that at the time nVidia said it would never use any of the GigaPixel technology, as it felt it already had better alternatives on the back burner. I guess we'll all know in a couple of weeks or less...