GF100 gives the appearance of needing a B-revision refresh to achieve decent performance/yields. Such a refresh (if it happens) would make the chip 3 to 4 quarters late.
Anyway, I'm not counting chickens till the damned thing has been on the market a while. Demand will be "insane" if it's at all good, so it'll be a while before we know whether NVidia can keep up with demand. Then we'll get a feel for whether it's yielding well.
Of course, if the reviewed chips are as bad as Charlie asserts, then the case will be closed. I don't believe the "5% on current games" thing.
(I don't think texturing capability is going to kill performance, though sitting at 59% of HD5870's theoretical texel rate does cause some qualms - I'm assuming NVidia has managed a monster boost in efficiency there, and most games seem to show little dependency on texturing anyway. Also, ROP performance - Z rate specifically - appears to be considerably better in GF100, and current games tend to indicate that's where most of the pain lies.)
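(For scale, a back-of-envelope sketch of how a ratio like that falls out - the GF100 clock here is purely hypothetical, since shipping clocks aren't public: 64 TMUs x ~625MHz gives roughly 40 GTexels/s, against Cypress's 80 x 850MHz = 68 GTexels/s, which works out to ~59%. Final clocks and any disabled units would move that figure around.)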
NVidia's architecture, with its hot clock, seems to require custom implementation of the hot-clocked portions of the die at TSMC, though I'm not sure of the extent of that. That's more difficult than going fully synthesisable, is it not?
G94 is the only chip from the last few years that NVidia has apparently delivered "on time". NVidia has also cancelled two chips (GT212 and GT214 - a third if we count G88, which I'm still not sure about). The hot-clock-based architecture appears to be making things quite difficult for NVidia. In the same period, ATI chips with greater feature increments (D3D10.1, two variations of LDS, GDDR5) and higher performance have shown considerably less susceptibility to delays, with RV740 having the worst problems.
You like to assert there's no causation. Well, feel free to provide an argument against the repeated pattern, rather than hand-waving.
I never said it was unmanufacturable; I said Charlie's theory appears to hold some water, emphasis on "some".
I'll ask again: feel free to explain why NVidia has consistently struggled with chips that aren't even feature increments (e.g. GT200b needed an A3 revision), let alone the feature-incrementing chips, in the same period and on the same fab's nodes where AMD has executed, usually in advance of NVidia.
Apart from the difficulties of custom design, the other factors I can think of include packaging-related issues (bump-gate) and NVidia's apparent reluctance (or inability) to be first to a node. NVidia did boast that it would be first to 40nm, though, and I'm not quite sure why - unless it was an attempt to assuage rumblings that 40nm was going to be a problem, with NVidia keeping Wall Street off its back by claiming it was ahead of AMD for 40nm.
Jawed