Fusion die-shot - 2009 Analyst Day

Density and efficiency sort of go hand in hand, and the idea that Intel's processes are superior to anyone else's simply because "they are Intel" is a pitfall. IIRC there was a lengthy discussion on the SA forums centering on Fusion vs. Atom that highlighted this, in particular Hans' and Krumme's posts.

edit:
What does density have to do with power/performance? The discussion was on the TDP of Atom vs. Bobcat.

A problem in comparing the TDP of Atom vs. Bobcat (Fusion) is also that Intel and AMD measure TDP differently. IIRC Intel (and NV, I think) treats TDP as an average, while AMD treats it as an absolute maximum (again, IIRC).

edit edit:

(if I may add a borrowed pic)
AMD_Ontario_Bobcat_vs_Intel_Pineview_Atom.jpg


TSMC is able to pack over 2x as many transistors into a smaller area (higher density) while the Fusion part offers 2-3x the performance of Atom at less than 2-3x the TDP (efficiency vs. performance/power). While much of that performance might be attributed to the GPU differences, each Bobcat core is also roughly half the size of an Atom core. These differences are largely architectural (Atom vs. Fusion) rather than process-related (TSMC vs. Intel), until you look at more basic structures such as the L2, where TSMC's "inferior" smaller process manages to match Intel's "superior" larger one.
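To make the efficiency-vs.-performance/power point concrete, here's a minimal perf/W sketch in Python; the specific numbers are placeholders picked from the rough ratios above, not measurements:

```python
# Rough perf-per-watt arithmetic using the ratios claimed above. The concrete
# numbers are illustrative midpoints of those ranges, not measured data.

atom   = {"perf": 1.0, "tdp_w": 1.0}   # normalized Atom baseline
bobcat = {"perf": 2.5, "tdp_w": 2.0}   # ~2-3x the performance at less than 2-3x the TDP

def perf_per_watt(part):
    """Performance divided by TDP; higher is better."""
    return part["perf"] / part["tdp_w"]

ratio = perf_per_watt(bobcat) / perf_per_watt(atom)
print(f"Bobcat perf/W relative to Atom: {ratio:.2f}x")  # 1.25x with these placeholder ratios
```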

I respect the RWT articles and their unbiased reporting; they just use a different means to calculate process efficiency, whereas most end users/customers care about perf/thermals/cost.
 
Is it certain that AMD's TDP definition hasn't changed?
I think their definition is not quite the same as it once was, and while it is potentially more conservative than Intel's, I don't think it is the same as a maximum.
 
Is it certain that AMD's TDP definition hasn't changed?
I think their definition is not quite the same as it once was, and while it is potentially more conservative than Intel's, I don't think it is the same as a maximum.
Intel's definition of TDP isn't really the same as it used to be either (well, maybe it is on paper). I think people are still thinking of the Pentium 4, which could exceed its TDP very easily (to the point that you'd get thermal throttling if your cooling solution was designed to be just good enough) even with "normal", though not what Intel considered "typical", workloads. But those times are long gone.
 
If a chip throttles to stay under TDP, it is still valid.
TDP is normally a useful proxy for efficiency because excessive throttling is undesirable in a product.
Both companies, to my knowledge, now define TDP in terms of a maximum output over a thermally significant period, using some amorphous set of commercially available programs. The exact parameters may vary, but that is not the same as an absolute maximum on AMD's part.
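A minimal sketch, assuming a made-up power trace and window length, of the difference between an absolute peak and the windowed figure described above:

```python
# Minimal sketch (made-up power trace, made-up window length) of the distinction
# being discussed: absolute instantaneous peak power (Pmax) vs. the maximum power
# averaged over a "thermally significant" window, which is roughly what both
# vendors' current TDP definitions describe.

power_trace_w = [6, 7, 18, 18, 9, 8, 17, 18, 18, 7, 6, 6]  # sampled power draw in watts
window = 4                                                  # "thermally significant" number of samples

p_max = max(power_trace_w)  # absolute instantaneous maximum

# Highest average over any contiguous window of the chosen length.
sustained_max = max(
    sum(power_trace_w[i:i + window]) / window
    for i in range(len(power_trace_w) - window + 1)
)

print(f"Pmax: {p_max} W, max sustained over the window: {sustained_max:.1f} W")
# A TDP defined on the windowed figure can sit well below Pmax, which is why
# "TDP" and "absolute maximum power" shouldn't be treated as interchangeable.
```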
 
Is it certain that AMD's TDP definition hasn't changed?
I think their definition is not quite the same as it once was, and while it is potentially more conservative than Intel's, I don't think it is the same as a maximum.

I think AMD's current definition of TDP is something like "maximum power draw that can be sustained for a thermally significant amount of time", which makes sense.

And as far as I recall, it's not very different from Intel's current definition.
 
I consulted AMD's Phenom datasheet:

• TDP. Thermal Design Power. The thermal design power is the maximum power a processor can draw for a thermally significant period while running commercially useful software. The constraining conditions for TDP are specified in the notes in the thermal and power tables.
 
A problem in comparing the TDP of Atom vs. Bobcat (Fusion) is also that Intel and AMD measure TDP differently. IIRC Intel (and NV, I think) treats TDP as an average, while AMD treats it as an absolute maximum (again, IIRC).

AMD TDP has never been Pmax and is not Pmax. AMD simply doesn't report enough information publicly to generate values for Pmax.

TSMC is able to pack over 2x as many transistors into a smaller area (higher density) while the Fusion part offers 2-3x the performance of Atom at less than 2-3x the TDP (efficiency vs. performance/power). While much of that performance might be attributed to the GPU differences, each Bobcat core is also roughly half the size of an Atom core. These differences are largely architectural (Atom vs. Fusion) rather than process-related (TSMC vs. Intel), until you look at more basic structures such as the L2, where TSMC's "inferior" smaller process manages to match Intel's "superior" larger one.

The picture, and hence any data based on the picture, rests on many assumptions which cannot currently be proven true or untrue. Trying to draw conclusions from a full-stack, potentially Photoshopped die shot is conjecture at best.
 
The GPU-less parts are… intriguing, to say the least. What's the point of those? It means you have to use discrete graphics, which is going to kill power efficiency. You could argue that it could be a more performance-oriented solution, but the fastest GPU-less part is clocked at 1.4GHz, while the fastest APU is clocked at 1.6GHz.

I guess AMD wants to salvage defective parts this way, but who's going to buy them? Maybe they're going to try to sell them for $5 or something…

Also, there's no mention of any kind of "Turbo" technology. Do you guys think it might still be there?

You can say what you want, but 18W at 1.4GHz is far too much for an Atom killer :(

Yeah, but 1.0GHz dual-core + GPU at 9W sounds perfect.
 
A 45nm dual-core Athlon II K325 (2x1MB L2) @ 1.3GHz is listed at 12W.

I know there's the GPU too, but I was expecting something better from a new architecture and was holding off on buying my netbook; at this point an Atom is sadly good enough.
 
Guys... when they said 90% of mainstream performance, did they mean overall performance or clock vs. clock? I.e., is Bobcat that much better on a per-clock basis compared with the AMD Athlon X2, or were they merely implying that if the chip were clocked comparably it would have similar performance?
 
Guys... when they said 90% of mainstream performance, did they mean overall performance or clock vs. clock? I.e., is Bobcat that much better on a per-clock basis compared with the AMD Athlon X2, or were they merely implying that if the chip were clocked comparably it would have similar performance?

Clock for clock, most likely. Given what we know about Bobcat's architecture, I can't see how it could be faster than Stars, clock for clock.
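For what it's worth, here's a small Python sketch of the two readings of that claim, with entirely hypothetical scores and clocks:

```python
# Purely hypothetical scores and clocks to show how "overall performance" and
# "clock for clock" give very different answers; none of these are real benchmarks.

athlon_x2 = {"score": 100.0, "clock_ghz": 2.8}  # "mainstream" baseline (hypothetical)
bobcat    = {"score": 55.0,  "clock_ghz": 1.6}  # hypothetical Bobcat result

overall_ratio = bobcat["score"] / athlon_x2["score"]
per_ghz_ratio = (bobcat["score"] / bobcat["clock_ghz"]) / (athlon_x2["score"] / athlon_x2["clock_ghz"])

print(f"Overall: {overall_ratio:.0%} of mainstream")  # 55% with these made-up numbers
print(f"Clock for clock: {per_ghz_ratio:.0%}")        # ~96%, i.e. closer to the "90%" framing
```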
 
A 45nm dual-core Athlon II K325 (2x1MB L2) @ 1.3GHz is listed at 12W.

I know there's the GPU too, but I was expecting something better from a new architecture and was holding off on buying my netbook; at this point an Atom is sadly good enough.

Add the 10-20W from integrated gfx and you're quite a bit higher than these already ;)
 
I suppose (but don't know for sure) that the 55nm IGP is closer to 10W than 20W, so for a total of ~25W you'd have something faster than Ontario. Shrink the IGP to 40nm and shed the bus, and you're in the Nile range.
So where's the revolution? It's barely evolution at this point.
Sure, Ontario has a better GPU and lower overall TDP, but coming from a new architecture developed with Atom in mind, it's a little disappointing :(
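To put the arithmetic in this exchange in one place, a quick sketch using only the figures quoted above (the IGP wattage is a guess from the thread, not a spec):

```python
# Adding up the figures being quoted in this exchange: a discrete Athlon II K325
# plus a separate IGP, versus a single Ontario package. The IGP number is the
# poster's guess and the Ontario figures are the rough values from the thread,
# not official specs.

athlon_ii_k325_w = 12   # 45nm dual-core @ 1.3 GHz, as listed above
igp_estimate_w   = 10   # "closer to 10W than 20W" guess for a 55nm IGP

ontario_18w = {"tdp_w": 18, "clock_ghz": 1.4}  # dual-core APU bin, GPU on-die
ontario_9w  = {"tdp_w": 9,  "clock_ghz": 1.0}  # lower-power APU bin

cpu_plus_igp = athlon_ii_k325_w + igp_estimate_w
print(f"Athlon II + IGP: ~{cpu_plus_igp} W (the '~25W' above rounds this up)")
print(f"Ontario: {ontario_18w['tdp_w']} W or {ontario_9w['tdp_w']} W with the GPU included")
```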
 
desktop HD5450 @650 MHz has TDP of 19.1W (DDR3)
mobile HD5470 @750 MHz has TDP of 15W (GDDR5) or 13W (DDR3)
mobile HD5450 @675 MHz has TDP of 11W (DDR3)
mobile HD5430 @550 MHz has TDP of 7W (DDR3)

What's Ontario's GPU clock?
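Since the clock is undisclosed, about all you can do is reason from first-order dynamic-power scaling; a sketch, with the baseline taken from the list above and everything else assumed:

```python
# With the clock undisclosed, the usual first-order dynamic-power relation
# P_dyn ≈ C * V^2 * f is about the only handle available. Everything below is
# illustrative: the baseline is the mobile HD5430-class figure listed above,
# and the target clock and voltage drop are arbitrary.

def scaled_power(base_power_w, freq_ratio, volt_ratio):
    """Scale dynamic power linearly with frequency and with the square of voltage."""
    return base_power_w * freq_ratio * (volt_ratio ** 2)

base_w, base_mhz = 7.0, 550.0   # mobile HD5430 @ 550 MHz, 7 W (from the list above)
target_mhz = 400.0              # arbitrary lower clock to illustrate the scaling
estimate_w = scaled_power(base_w, target_mhz / base_mhz, volt_ratio=0.9)

print(f"~{estimate_w:.1f} W at {target_mhz:.0f} MHz with a ~10% voltage drop (rough estimate only)")
```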
 
GPU clocks are still undisclosed.

Are those TDPs for the GPU alone, or for the entire desktop board / mobile GPU + memory?
 
The GPU specs can't be that high if we look at that table.

Didn't anyone have back-of-the-mind doubts that some of the specs seemed way too good to be true?

Perf/W is still competitive with Atom, which is a vast improvement over where AMD's chips are now.
 