Precisely my point.
ATI's design doesn't generate reams of salvage parts.
Jawed
And now something to wake everyone up...
[turtle=major digression AND friendly jest]
Actually, I didn't want to go backwards when you first mentioned it, but you keep bringing it up...there IS a 4750 based on RV740. You could call it a salvage part...I call it a "fixed" part, and the 4770 was the salvage part, and I'll tell ya why.
The 4750 didn't come along that long ago, and it really is just a fixed 4770: no power connector required, same clocks (including GDDR5). It even overclocks the same (haven't seen it at higher voltages, though) and AFAIK uses the same stock voltage as the 4770. I imagine that once TSMC got their leakage under control, AMD was happy to release a leakage-stable part with the "4770's" true TDP of ~50W (per Xbitlabs' review of the 4770) and get rid of a part from the BOM.
Granted, it's only available in China...but it could just as well exist elsewhere, and probably would have, had 40nm been fixed earlier.
It doesn't, because you know as well as I do: trying to wrangle perception of RV740 back from the weirdness of its initial (sometimes?) leaky incarnation, with its oddly-needed power connector, would be counter-intuitive this close to the launch of its replacement, which will fetch a slightly better margin.
The point is, RV740 didn't have a salvage part because it initially WAS a salvage part. Its rarity due to yield, the corresponding price, and the leakage that required the power connector made it a redundant, if not worse, option than the 4850, not to mention its less-than-ideal TDP for notebooks, which is why we never really saw a mobile version. Now that it's fixed, it's irrelevant because of the impending Juniper (also on the fixed 40nm process) and the clearing of RV770 stock.
I STILL stand by the notion that the initial parts were meant to be what the 4750 is, with the 4770 getting higher clockspeeds and 5Gbps GDDR5, but the time for that argument is long past. I believe RV740 became a (much-needed) experiment on 40nm rather than the truly competitive (and replacement) part it was meant to be.
At any rate...yeah. The 4770, in short, is a salvage part because it needed >75W to guarantee 750MHz on a die that is only 137mm². Hardly outstanding for what it was, looking in the rear-view mirror and ahead at Juniper.
Juniper is supposedly what, 60W TDP and 181mm²? That alone should tell you something. While that's for the mobile parts, and may be for the 'pro' version, I'll be surprised if the "xt" version is much above the 4770's listed TDP of ~80W.
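To put those two chips side by side, here's a quick back-of-the-envelope sketch using only the figures quoted in this thread (the 4770's listed ~80W on 137mm² vs the rumored ~60W on 181mm² for Juniper); these are rumor-mill numbers, not official specs:

```python
def watts_per_mm2(tdp_watts: float, die_area_mm2: float) -> float:
    """Rough power density: TDP spread over die area."""
    return tdp_watts / die_area_mm2

# Figures as quoted in this thread (not official numbers)
rv740_density   = watts_per_mm2(80, 137)  # ~0.58 W/mm^2 for the 4770
juniper_density = watts_per_mm2(60, 181)  # ~0.33 W/mm^2 for the rumored mobile Juniper

print(f"RV740/4770:  {rv740_density:.2f} W/mm^2")
print(f"Juniper (?): {juniper_density:.2f} W/mm^2")
```

Roughly half the power per square millimetre on the same (now fixed) 40nm process, which is the point I'm making about the early RV740 silicon.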
Granted, your definition of 'salvage' might be limited to completely defective units...mine includes units that may require a (comparatively) shit ton of power to work reliably. While you could say the TDP that resulted was impressive for the transistor count, it was not impressive for the 40nm process, and I believe it was a failed product for AMD, especially when you think about how its now-completed ~50W 4750 brethren would've competed in the mobile and performance markets...and especially how a higher-clocked version could've effectively replaced RV770 and made oodles for the company.
[/major digression and bitching]
On a related note, I still can't help but wonder if those rumors from Nappy @ Chiphell about the buses had a hint of truth. If Juniper and Cypress were originally meant to be 960 SP and 1920 SP parts with 192-bit and 384-bit buses, it's not unreasonable to think that those parts would be replaced on 32nm (GloFo) with 128-bit and 256-bit parts using 7Gbps GDDR5. They would seem to fall into place together.
This makes sense not only because (256*7000)/8 ≈ (384*5000)/8, but because it would allow ATi to essentially do an optimized version of R600->RV670 or G80->G92: start off not quite so big (205mm² for Juniper, as was rumored, which became ~180mm², which fits), and end up with a superior product not quite so small, probably around the size of RV740, on 32nm, with higher core clocks. That would've meant a ~400mm²+ Cypress that could've met/exceeded GTX295/G300, with a refresh part around the size of RV670-RV770. That'd make sense, no?
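For anyone who wants to check that bandwidth equivalence, here's the arithmetic spelled out, using just the bus widths and data rates from the rumor above:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Rumored original config vs the hypothetical 32nm shrink
print(peak_bandwidth_gbs(384, 5.0))  # 240.0 GB/s -- 384-bit @ 5Gbps GDDR5
print(peak_bandwidth_gbs(256, 7.0))  # 224.0 GB/s -- 256-bit @ 7Gbps GDDR5, within ~7%
```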
I'm starting to think this was AMD's game plan, and it could've been a good one had TSMC fixed 40nm earlier.