Charlie, why do you insist on taking the most dire interpretation of every scrap of information? Rackmounted computing devices are limited by heat and power. You do not always want to install the highest performing part in a rack; it's a function of rack cost, power density, and cooling. Sometimes it is more cost efficient to buy two less powerful boxes than to buy one powerful box. Even if NVidia could manufacture 512SP devices without issues, they would not be my first choice for sticking in a rack.
Yes, one can draw the conclusion that Nvidia is having problems with Fermi, but the hyperbole and sheer negativity of the conclusions you draw from every bit of information are thoroughly intellectually dishonest.
Let's look at these in order:
1) I totally agree, it is much saner to use a lower clocked, wider part, or potentially two lower clocked, wider parts, when looking at power use. That is my argument. Razor1 seems to be arguing that NV disabled two clusters on Fermi for power efficiency reasons, not manufacturing, something that doesn't really mesh with the physics of the situation.
Assuming a linear relationship between clocks and performance, a 448SP Fermi running at X MHz would have the same performance as a 512SP Fermi running at 14/16ths of X MHz (assuming there are no problems feeding the extra SPs). Now we know the relationship between clocks and power is not linear, so it isn't much of a stretch to say that the slower 512 shader part would be the more power efficient of the two.
If the claims from NV/Razor1/others that the castration was done for power reasons are true, it doesn't make sense to disable shaders instead of simply downclocking a 512SP part.
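The argument above can be sketched with a toy model. The SP counts and the 14/16 clock ratio come from the post; the cubic power model (dynamic power ~ N_sp * f * V^2, with voltage assumed to scale roughly linearly with frequency) is a common back-of-the-envelope assumption, not measured Fermi data:

```python
def throughput(sps, clock):
    """Idealized shader throughput: SP count times clock (arbitrary units)."""
    return sps * clock

def dynamic_power(sps, clock):
    """Toy dynamic power model: P ~ sps * f * V^2, and assuming V scales
    with f, P scales as sps * f**3 (arbitrary units). An assumption for
    illustration, not a measured curve."""
    return sps * clock ** 3

# 448SP part at X MHz vs 512SP part at 14/16ths of X MHz
narrow = throughput(448, 1.0)
wide = throughput(512, 14 / 16)
print(narrow == wide)  # True: identical idealized performance

p_narrow = dynamic_power(448, 1.0)
p_wide = dynamic_power(512, 14 / 16)
print(round(p_wide / p_narrow, 3))  # 0.766: the wide, slow part draws less
```

Under this (assumed) cubic scaling, the downclocked 512SP part delivers the same throughput for roughly three quarters of the dynamic power, which is why disabling shaders "for power reasons" doesn't add up.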
2) The arguments put forth by many are that it is downclocked AND has shaders disabled. Given the volumes of high-bin Fermis compared to consumer parts, and the margins the one brings in versus the other, I suspect you could make a VERY strong case for picking low leakage, 'perfect' chips for even the low end Fermis.
Do you think they did this, or handed the rejects to the GPGPU team? I would bet that the Fermis are both binned for low leakage and have shaders disabled for manufacturing reasons. Do you disagree? If not, binned 14/16ths Fermis consuming 190W 'typical' and 225W TDP is quite alarming, don't you think?
3) The fact remains that the chip is hugely late, hard to manufacture, and consumes a ton of power. Last spring, NV promised AIBs that they would have cards on Oct 15th, 2009. They didn't. When the parts come out, let's see what they can manage to make in volume.
Everything I have said about them I can back up, though some of it I choose not to present publicly. I have explained several times why I find NV impossible to work with; you can search those posts out here if you are bored, but I am not going to type it all in again. Most of the people countering what I say can't come up with a decent argument, much less a technical one.
-Charlie