Sorry, I replied in the wrong thread. I was replying to the "tape out" thread, where someone claimed NVidia will be shipping a "barely working" NV30 this Christmas, which seems like a ridiculous comment to make. I mean, do we have any figures on Radeon 9700 PRO yields?
And for the record, I don't think ATI's decision to use .15um is any better than 3dfx's decision to stick with SDR memory and older processes. ATI really pushed the limits of .15um: their design is a huge, customized, power-hungry die. Clearly, they are going to ship a .13um R300 and follow-on products that use less power, clock higher, and are cheaper.
NVidia has always targeted memory technology and processes ahead of time and made big bets. Their schedule might be slightly delayed, but I think it was fundamentally the right thing to do, and they gained valuable experience doing it. Eventually, the design and process will mature, and NVidia will benefit from it, the same way they benefited when the DDR market matured.
Back in the 3dfx days, NVidia was accused of the same thing: DDR was a "risky" bet, multichip was better and more reliable, DDR was expensive, supply was low, etc. Now, only a few years later, DDR is everywhere and racing towards 1GHz.
Six months from now, all of this whining about NVidia's .13um yield problems will seem like a thing of the past, and in hindsight it will be seen as a bold move.
(Precedent: on B3D, there were oodles of rumors about GF3 yield problems, including claims that GF3 boards would cost as much as $800 because of bad yields and still-expensive DDR.)