Obviously, the reason I gave that low-k example is that NVIDIA presumably concluded the extra performance/power efficiency wasn't worth the cost. Whether that was the right call is another question, but they certainly took their sweet time adopting it - in fact, they only did so when TSMC made it mandatory!
You're perfectly right that the performance advantage has to be included in the cost-efficiency calculations though, which I forgot to do in my previous post (oops!)
I mostly agree with your skepticism there, so let me take another example: the 111mm² Allendale used in the $113 and $133 E4400 and E4500 SKUs. AFAIK, the E2xxx series uses a >=80mm² chip with even less cache, so I don't think there are many (if any) other SKUs using Allendale.
Assuming an average ASP of $120 and gross margins of 60% (the latter is highly optimistic, I suspect it's nearer 50%, but so be it...), that puts the per-chip cost at ~40% of the ASP, i.e. roughly $48. Which is still ~3x higher...
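Just to make the implied per-chip cost explicit, here's a trivial sketch of that arithmetic (the $120 ASP and the 60%/50% gross margins are simply my assumptions above, nothing more):

```python
# Per-chip cost implied by an ASP and an assumed gross margin.
# The ASP and margins are just the assumptions from this post, not real Intel data.
asp = 120.0
for gross_margin in (0.60, 0.50):
    cost = asp * (1.0 - gross_margin)
    print(f"ASP ${asp:.0f} at {gross_margin:.0%} gross margin -> ~${cost:.0f} per chip")
# -> ~$48/chip at a 60% margin, ~$60/chip at 50%
```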
I agree with the ~1.3x frequency advantage (although I'm not sure 'at the same power' is accurate, but no matter), but I'd wager that a 2x cost benefit from better defect management is really optimistic in this case.
With Allendale, we're talking about a relatively small chip on an extremely mature process (more so than TSMC's 65nm) that still has a fair bit of cache, and it isn't really pushing the envelope in terms of clock speeds or TDP binning. On top of that, my 2x estimate was (as I mentioned when I gave it) probably substantially too optimistic.
All in all, I'd certainly expect the cost difference to be much smaller than my back-of-the-envelope 5x calculation - perhaps in the '20-50% less expensive' kind of range for TSMC wafers, excluding the performance disadvantage. This is all highly approximate and really hard to tell though, so take it with a big grain of salt and please don't kill me if you disagree with those numbers!
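In case it helps relate the two ways of putting it, here's a trivial bit of arithmetic converting between 'Nx higher cost' and 'X% less expensive' (purely illustrative, no new data beyond the numbers already in this thread):

```python
# Convert a cost ratio ("more expensive side / cheaper side") into
# "the cheaper side is X% less expensive", and vice versa.
def pct_less_expensive(ratio):
    return (1.0 - 1.0 / ratio) * 100.0

def ratio_from_pct(pct):
    return 1.0 / (1.0 - pct / 100.0)

for r in (5.0, 3.0):
    print(f"{r:.0f}x higher cost -> other side is ~{pct_less_expensive(r):.0f}% less expensive")
for p in (20.0, 50.0):
    print(f"{p:.0f}% less expensive -> only a ~{ratio_from_pct(p):.2f}x cost gap")
```

In other words, '20-50% less expensive' corresponds to only a ~1.25-2x gap, which is why I say it's much smaller than my earlier 3-5x figures.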