You're fixating on the TFLOP count, but a higher GPU clock would improve everything on the GPU side except main memory bandwidth, and since it would probably speed up the eSRAM as well, it would scale pretty well.
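As a rough back-of-envelope illustration (a minimal sketch with hypothetical round numbers, assuming ALU throughput and eSRAM bandwidth scale linearly with the GPU clock while the DDR3 bandwidth is set by the memory clock instead):

```python
# Toy scaling sketch; all figures are hypothetical round numbers, not confirmed specs.
base_clock, bumped_clock = 800e6, 900e6      # Hz, assumed clocks
alu_lanes = 12 * 64                          # assumed 12 CUs x 64 lanes
esram_bytes_per_cycle = 128                  # assumed eSRAM bus width
ddr3_bw = 68e9                               # bytes/s, set by the DRAM clock, not the GPU clock

def tflops(clk):
    return alu_lanes * 2 * clk / 1e12        # 2 FLOPs per lane per cycle (FMA)

def esram_gbps(clk):
    return esram_bytes_per_cycle * clk / 1e9

gain = bumped_clock / base_clock - 1
print(f"ALU:   {tflops(base_clock):.2f} -> {tflops(bumped_clock):.2f} TFLOPs (+{gain:.1%})")
print(f"eSRAM: {esram_gbps(base_clock):.0f} -> {esram_gbps(bumped_clock):.0f} GB/s    (+{gain:.1%})")
print(f"DDR3:  {ddr3_bw / 1e9:.0f} GB/s either way")
```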
The eSRAM's power draw would be affected by the clock speed as well. The Jaguar modules run their 2MB of L2 SRAM at half clock for power reasons, and the eSRAM's activity level is on the order of those L2s.
These numbers are completely arbitrary. Nobody knows what the actual impact on yield is, or whether 800MHz really sits at any kind of inflection point. I've seen products where pretty much all of the parts will clock significantly higher than what they ship at without any problems. Initial specifications can be overly conservative; you can't know the exact manufacturing limits years in advance.
My pessimistic reading of the rumors about Durango's warm beta kits and the rumored downgrade in the various interconnect speeds for Orbis is that things went a little less than perfectly.
Hell, let's look at your argument in reverse: if fusing off a couple of CUs increases yields by 1% for Sony, and no one cares anyway, why shouldn't they do it? Because they already released specs? Guess they should have kept it open then, huh? But I don't think Sony agrees with you.
Taking things back after promising them has burnt Sony before.
That, and fusing things off just because doesn't eliminate their die-size contribution, which reduces the number of candidate dies per wafer and can hurt you with intra-die variation.
I suppose you also think that AMD wasted resources on its 1GHz-edition discrete GPUs, which just featured a modest clock bump (much more modest than what we're considering here). Yes, I know it's not the same situation, since those parts fall naturally out of binning, but my point is that if only a few hundred forum readers cared about THAT, it wouldn't even have been worth creating the bin.
Those GPUs are priced much higher than what Microsoft is probably paying for a console component. AMD is free to sell its cards and let the PC platform take care of the rest; it offers no software, no exclusives, no services, nothing beyond the GPU itself.
Then there's chasing 3DMark scores and benchmarks, which enough gamers in the PC space apparently care about.
Console comparisons are a lot sparser by nature, and countless gamers buy against the performance grain anyway.
For the Xbox One, I'm more concerned by the alleged thicker API layer and the more complex memory setup than by an incremental clock adjustment.
More than they respect them for being completely coy about specs, and certainly more than they'd respect them for trying to pass off the two consoles as having basically the same capabilities when everyone in the gaming media is saying otherwise. But that seemed to be what you were suggesting they do.
Many of the people who care enough to respect such actions don't actually pay money for that respect, sadly, and way more people haven't a clue either way.
And until we see actual games with severe discrepancies, the two can be treated as equivalent in the functionality governed by core measures like ALU throughput and bandwidth.
I read the graphics comparison articles, but too many of the people I know don't see the difference. Too many other factors get in the way.
Do you have a source for this? I couldn't find anything.
I added an edit for that. I stated it with more certainty than I should have, given that there's little to go on beyond the history of eventual salvage SKUs that never register in Western markets.
I don't think GF's 28nm is going to have anywhere close to the density of TSMC's, but Bonaire's die size fits in with the others exactly where you'd expect it to. The very idea that a refresh chip like this would be made on a different process would be mind-boggling if not for AMD's massive obligation to GF.
That was poor wording on my part. My question is whether the console chips use GF's process or the TSMC process used for Bonaire.
They may not be going for thermals; like you say, this may just come down to yields.
This is for parametric yields, not defect yields.
Functional chips that can't hit spec within the desired TDP get rejected. That's a bigger problem these days than defects.
Don't forget it's not just the PS4 having 50% more compute power; it's also rumoured to have double the ROPs and 64 compute queues, not to mention likely more system-wide bandwidth.
The shared GDDR bus is a double-edged sword. It's way easier to thrash a DRAM bus in certain situations.
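To illustrate what "thrash" means here, a toy single-bank model with made-up timing numbers (real GDDR5 controllers have many banks, request reordering, and priority arbitration, so this deliberately exaggerates the effect):

```python
import random

# Toy single-bank DRAM model: hitting the currently open row is cheap,
# switching rows costs a precharge + activate. All numbers are made up.
ROW_BYTES, HIT_CYCLES, MISS_CYCLES = 2048, 4, 40

def cost(addresses):
    open_row, cycles = None, 0
    for addr in addresses:
        row = addr // ROW_BYTES
        cycles += HIT_CYCLES if row == open_row else MISS_CYCLES
        open_row = row
    return cycles

# A streaming GPU client alone: mostly row hits.
gpu = [i * 64 for i in range(4096)]
# The same stream interleaved with a random-ish CPU client on the shared bus.
cpu = [random.randrange(0, 1 << 28) for _ in range(4096)]
mixed = [a for pair in zip(gpu, cpu) for a in pair]

print("GPU stream alone   :", cost(gpu), "cycles for", len(gpu), "accesses")
print("interleaved CPU+GPU:", cost(mixed), "cycles for", len(mixed), "accesses")
```

The point is just that two clients with very different access patterns sharing one bus can cost you more than adding up their bandwidth needs would suggest.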