FrameBuffer
Banned
Not trying to sound like an @ss, but really!? No shit, Sherlock. Compared to the G200, which has yet to deliver any semblance of a real derivative, the G80/G92 was NV's best/last successful full generational part. The G200 has got to be one of the worst parts to compare a future part against; meanwhile, ATI is on their 3rd generation of top-to-bottom derivatives.

What I meant is that the GF100 architecture lends itself more readily to scaling down into lower-end designs in a timely, cost-effective, and efficient manner once the high-end GPU is ready, compared to GT200. After all, each GPC in GF100 is reportedly nearly a full GPU in and of itself, right? Wasn't it ATI/AMD who decided in recent years to move away from monolithic GPUs in part because time to market for lower-end derivatives was very poor compared to the introduction of new GPUs at the high end? With GF100, it seems (at least on the surface) that once the high-end GPU is ready, time to market for the lower-end derivatives stands to be significantly better than before. Also, if NVIDIA can make a balanced high-end GPU, then by definition the lower-end derivatives should be balanced too. Is it really balanced to have lower-end derivatives with the same geometry throughput as the higher-end models?

Of course, ATI/AMD's strategy will always have some merit. NVIDIA cannot easily get around the fact that monolithic GPUs take a long time to bring to market and are very difficult to engineer. That said, the proof is in the pudding, and the results later this year will speak for themselves. I guess we'll learn a lot more in the coming months as we see how everything plays out.