Salvaged Nextgen APUs

500MHz GPU + 1000MHz CPU, with turbo modes for heavy single-thread apps.

As I said, Jaguar is already a netbook/tablet CPU, and Pitcairn (8970M) is already squeezed down into laptops.

I don't think TDP would be a reason for defective chips not being used in mobile/SFF x86 designs.

You're talking about taking parts that fail to meet a performance level and then using them as replacements for parts that are binned for good performance. (Mobile chips aren't failed desktop parts; they are the best-performing bins in terms of TDP.)

It might be possible for AMD to laser-disable portions of these parts and use them rather than chuck them, but it's not likely you'll see them in a high-performance laptop. And that's assuming the contract even allows AMD to salvage them.
 
There are many reasons one of these apus might not be ok for console use, but would be fine for typical compute uses.

Consoles cannot have variance in performance, whereas a 750/1500MHz mix would be rather high-end for a mobile part. The same holds true for CU count, CPU core count, eDRAM size, the SHAPE processor, etc.


There's a lot of wiggle room between what will go into the PS4/xbone and a typical Jaguar setup.

The CPU could have anywhere from 2 working cores all the way up to 8, but only a full 8 will work for the console. Anything less would still be usable in a typical compute environment.


Same for the GPU. If only 11 CUs work, and the xbone needs 12, well, that is still a hell of a lot better than any other APU on the market. Same for Sony's 18-CU setup.
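To give a feel for the numbers, here's a rough binning sketch in Python. The 14-physical/12-enabled CU layout and the 90% per-CU yield are purely hypothetical, and the model assumes CU defects strike independently:

```python
from math import comb

def p_at_least(total, needed, p_good):
    """P(at least `needed` of `total` units are defect-free), with each
    unit failing independently with probability 1 - p_good (binomial)."""
    return sum(comb(total, k) * p_good**k * (1 - p_good)**(total - k)
               for k in range(needed, total + 1))

P_GOOD = 0.90  # assumed chance that any individual CU is defect-free

full_spec = p_at_least(14, 12, P_GOOD)            # console-grade: >= 12 good CUs
salvage = p_at_least(14, 11, P_GOOD) - full_spec  # exactly 11 good CUs

print(f"meets 12-CU console spec: {full_spec:.1%}")  # ~84.2%
print(f"11-CU salvage bin:        {salvage:.1%}")    # ~11.4%
```

Under those made-up numbers, roughly one die in nine lands in an 11-CU salvage bin: worthless as a console part, but perfectly serviceable for compute.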

Point is, if they can get some value for a part which previously had zero value (any of these chips that fails to hit 100% of the designed spec), then that revenue can offset the losses and mitigate the "need" to alter the spec for yield purposes.
 
James, if the PS4 APU is a 100W part, then how do you get down to, say, the 15 or 30 watts most want for a laptop? You're going to have to cut the power by more than half, and you're already talking about parts that are poor bins.

Then add that Sony's part requires GDDR5, which is extremely expensive. Who is going to make a laptop based on that?

And the last part is that Sony/MS want yields to go up, so you're looking at a tiny window of sales here.
 
1600/800MHz = 100W

1300/650MHz = ~50W

P = C·V²·f


"...the power consumed by a CPU with a capacitance C, running at frequency f and voltage V is approximately..."
link

Obviously it will depend on the chip, but there is room to scale down, and at lower frequencies voltage might also be scaled down, which would further reduce TDP.
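Here's a minimal sketch of that scaling argument, using the P ≈ C·V²·f relation quoted above. The 100W baseline is the figure from the post above, and the assumption that voltage can fall roughly in step with clock is mine, not a measured value:

```python
def scaled_dynamic_power(base_w, f_ratio, v_ratio):
    """Scale a baseline dynamic power using P ~ C * V^2 * f;
    the capacitance C cancels out when taking the ratio."""
    return base_w * f_ratio * v_ratio**2

base_w = 100.0           # hypothetical 100W APU at 1600/800MHz
f_ratio = 1300 / 1600    # downclocked to 1300/650MHz
v_ratio = f_ratio        # assume voltage can fall roughly in step with clock

print(f"{scaled_dynamic_power(base_w, f_ratio, v_ratio):.0f} W")  # ~54 W
```

If voltage has to stay fixed, the same model gives ~81W, so how close one gets to ~50W hinges entirely on how far the voltage can drop.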

I'm not saying it's ideal; I'm saying that there should be an avenue to salvage chips where frequency targets were missed or where not all of the compute resources function properly.

That is one of the few advantages of sticking with the x86 architecture. If I were in Sony's/MS's shoes, I'd be sure to take advantage of this fact, either by re-purposing the APUs myself or by accounting for it in the contract with AMD.

In either case, it should offset what would otherwise be a very questionable decision to go with such large dies, especially in MS' case, where the performance/mm² doesn't quite inspire.
 
That's an equation for active power, which ignores a sizeable leakage component for chips of this size.
The static leakage component is something of an unknown for the GPU and Jaguar modules, and we don't know the process details, but it should be noted that another large AMD chip with a 100W TDP, Bulldozer, has 30% of its non-power-gated TDP eaten by leakage.
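As a back-of-the-envelope illustration of that point (the 30% split is borrowed from the Bulldozer figure above and is only an assumption for this chip; it also ignores that leakage itself falls somewhat at lower voltage):

```python
tdp_w = 100.0
static_w = 0.30 * tdp_w        # ~30W leakage, largely insensitive to clock speed
dynamic_w = tdp_w - static_w   # ~70W dynamic, scaling with V^2 * f

# Same hypothetical downclock as before: 1600 -> 1300MHz, voltage in step.
f_ratio = 1300 / 1600
v_ratio = f_ratio

total_w = static_w + dynamic_w * f_ratio * v_ratio**2
print(f"{total_w:.0f} W")      # ~68 W rather than ~54 W once leakage is counted
```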
 
Running at 2-4x higher clocks, using an older process...
 
Static leakage isn't directly affected by clocks, and the process is older, but it's pretty much the same generation as the 28nm node in question.

The leakage allocation across generations has been relatively constant, as they size their large chip bins to put 30% into leakage. Mobile chips may permit a smaller allocation.
 
Why not simply do the obvious and release a console SKU without the OS features, with a disabled core and some disabled CUs?
 
Microsoft is deadly serious about the revenue stream from the additional services and media portal functionality those OS functions enable.
Even if the stripped-down console chip sold well, it would wind up costing them money if it displaces too many full-service devices.
 
They can release a more 'pure' gaming SKU which doesn't need as many background processes: no HDMI in, no multi-tasking. In function it would be identical in every way, except that it cannot do as many of the 'nice to have' features that the full version has. They could then sell it for $100 less than the full version. From a developer's perspective it would be functionally identical to the full version, and it could meet the needs of some markets which, for a variety of reasons, cannot benefit from the extras on offer.
 
Dave seemed to (unofficially) semi-confirm that MS, at least, won't be responsible for fabbing these themselves. So depending on said contract, there might not be any defective parts for them to worry about in the first place.

Even with his two heads, everybody seems to be ignoring Beeblebrox. Why would Sony or MS be responsible for failed chips? How would it not be on AMD to eat that cost, since they're the ones providing chips that must actually function according to their customers' specifications?

Now, what AMD is going to do with all those defective chips might be interesting, but I don't see how it would affect MS or Sony. I doubt either one would enter into a contract where they had to pay for chips whether they met the designated performance threshold or not, especially considering how aggressive AMD was in order to secure the contracts.
 
Well, if AMD were entirely responsible for both the yield and the design of the chips, then they would have a good incentive to create a design which is robust and hence profitable.
 
And haven't they? Isn't that why both the PS4 and the One are using essentially the same chips, chips whose performance we can already gauge because their building blocks are available in other products?
 
Well, there is a difference in incentives between being only a chip designer who has the designs fabbed by your customer, and taking on the fabbing contract yourself. In the latter scenario, AMD would be responsible for the product while it is in continual production.
 
Even with his two heads, everybody seems to be ignoring Beeblebrox. Why would Sony or MS be responsible for failed chips? How would it not be on AMD to eat that cost, since they're the ones providing chips that must actually function according to their customers' specifications?
That would depend on how the deal is structured. If, as some allege, one chip has additional features that raised the design's complexity and increased the difficulty in manufacturing, it would be prudent for the chip provider to either put that into their per-chip price, or try to negotiate for terms taking into account the increased ramp-up time.

There may not be a direct payment for every bad die, or for every chip that doesn't meet spec and can't be binned lower, but it should be factored into the overall agreement.

It may not insulate against unforeseen serious yield problems, but it also means decent money can be made if yields can be improved beyond the curve put on paper.
 
They can release a more 'pure' gaming SKU which doesn't need as many background processes: no HDMI in, no multi-tasking. In function it would be identical in every way, except that it cannot do as many of the 'nice to have' features that the full version has. They could then sell it for $100 less than the full version. From a developer's perspective it would be functionally identical to the full version, and it could meet the needs of some markets which, for a variety of reasons, cannot benefit from the extras on offer.

What you have just described is a QA nightmare.
 
Why? The system runs 3 OSes, so if it helps yields, why not make a system which only runs 2 of them at once? I.e. game OR multimedia, not both.
 