Fusion die-shot - 2009 Analyst Day

My statement "IGPs don't sell platforms" means customers don't shop for systems based upon what IGP they use, generally speaking. Sure there are a few informed buyers out there looking for a system on a budget that may choose an NV or AMD IGP over Intel, but the vast majority of the market does no such thing.

You ask most people what an IGP or a GPU is and they won't have a clue what you're talking about. I used to sell computers, and I went through that crap all the time. Such information just flies over customers' heads when you're trying to explain why they should get a computer (back in Fall 2007) with an Intel Q6600, 4 GB DDR2, and an 8500GT as opposed to the crappy Compaq with an E21xx, 2 GB DDR2, and Intel GMA when it's video editing they have in mind. Disaster abounds! :LOL:
 
Exactly. People know name brands and price. Performance isn't even a concern for the vast majority of users, let alone 3d performance.
 
Yep, a friend of mine chooses all her electronics based on whether they come in pink and look nice. It's surprising how little people care about the awesome IQ of some new camera technology when all they want to do is take random pics of their lives to upload to Facebook. Same goes for computers.
 
Consumers don't, but OEMs do. Intel (and more recently AMD) have a bit of an upper hand here, as they can assure OEMs that packaging their CPU with their IGP is "guaranteed" to work without compatibility problems. Nvidia was facing an uphill battle, especially with both CPU makers increasingly moving more and more, including entry-level graphics, onto the CPU.

So yes and no: the market doesn't care what brand the IGP is. Consumers buy from OEMs and OEMs buy from the hardware vendors. And while the end consumer doesn't care or have much impact (other than enthusiasts or hobbyists), the OEMs most certainly do...

Regards,
SB
 
OEMs care about the bottom line. Brand name, perceived product quality, reliability, and the ability to supply parts on time and in quantity are key considerations for OEMs, more so than performance ever will be, particularly at the entry level.
 
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3736

Today AMD is announcing that the first Llano samples, built on Global Foundries 32nm high-k + metal gate, SOI process will be sampling to partners in the first half of this year.

http://www.pcauthority.com.au/News/166727,amd-details-fusion-innovations-at-isscc.aspx

Codenamed Llano, the first Fusion APU will sample in the first half of this year and be commercially available in 2011, AMD said. This is set to have four CPU cores plus a GPU supporting Microsoft's DirectX 11 APIs, and will also be the firm's first 32nm processor.

"We are not using a low-end GPU, we're taking our leading state-of-the-art GPU and integrating it on the same chip so it shares the high-bandwidth DDR3 memory and features a high-speed communications channel between all the cores," he said.

Core size including 1MB L2 cache seems to be 16.5-17 mm².
 
SOI + bulk? :unsure:

Can AMD really manage to have Athlon/Phenom, Bulldozer, Bobcat, and this all built on completely different processes?
 
Hmm? Of those, only Bobcat is rumoured to be bulk... and then only because they are going to license it for SoCs. Considering they are chatting up their SOI-based power gating, it would seem strange if their lowest-power x86 cores didn't use it, though. They could always license it for bulk and do their own in SOI.
 
~225 mm² for the whole APU die.

Really? Why so big?

I have read in the Anandtech forum [page 4 of the article comments] that it could be ~169 mm² (13 x 13 mm).

That would work out perfectly, I think:

4 cores at 16.7 mm² each = 66.8 mm²
1 GPU with 480 shaders (going with the larger size) ~ 75 mm² [Redwood @ 32nm, 20% bigger]

If the GPU has only 320 shaders, then Llano will be even smaller.

Llano has no L3 cache, so 145-170 mm² should be possible.
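
For anyone who wants to sanity-check that, here's a quick back-of-the-envelope sketch. The per-core and GPU areas are just the estimates above, and the ~25 mm² for the northbridge, memory controller, I/O and pads is purely my own guess:

```python
# Back-of-the-envelope Llano die-area estimate; every input here is a thread guess,
# not an official figure.
core_area = 16.7        # mm² per core incl. 1MB L2, from the die-shot estimate above
num_cores = 4
gpu_area = 75.0         # mm², assumed: Redwood shrunk to 32nm plus ~20% for 480 shaders
misc_area = 25.0        # mm², pure guess for northbridge, memory controller, I/O and pads

cpu_total = num_cores * core_area             # 66.8 mm²
die_total = cpu_total + gpu_area + misc_area

print(f"CPU cores: {cpu_total:.1f} mm²")
print(f"Estimated die: {die_total:.1f} mm²")  # ~167 mm², inside the 145-170 mm² range
```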
 
GPUs (especially the lower-end ones) have a lot more in them than ALUs.

Doesn't matter. :)

Redwood is 100 mm² @ 40nm => at 32nm it should be only 64 mm² (with perfect scaling).

Therefore, 75 mm² for a GPU with 400-480 shaders @ 32nm should be possible.

Also:

Llano has, according to AMD, ~1 billion transistors (or slightly more, according to c't's news section).

4 cores with 1MB L2 cache each should be about 340 million transistors. That leaves ~660 million transistors for the GPU + everything else. Redwood has 627 million transistors. So this works out quite well, no?

So maybe not 480 shaders but only 400, but IMHO all the information points to a midrange GPU.
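
Spelling the scaling and the transistor budget out as a little sketch (nothing here is official, and perfect 40nm -> 32nm area scaling is an idealisation):

```python
# Sketch of the scaling and transistor maths above; all figures are thread estimates.
redwood_40nm_area = 100.0                     # mm² at 40nm
scale = (32 / 40) ** 2                        # ideal area scaling factor = 0.64
redwood_32nm_area = redwood_40nm_area * scale # 64 mm² with perfect scaling
gpu_estimate = redwood_32nm_area * 1.2        # ~77 mm² if ~20% bigger for 480 shaders

llano_transistors = 1000e6                    # ~1 billion, per AMD
cpu_transistors = 340e6                       # 4 cores + 4x 1MB L2, estimate from above
left_for_gpu = llano_transistors - cpu_transistors  # ~660M, vs. Redwood's 627M

print(f"Redwood at 32nm: {redwood_32nm_area:.0f} mm², +20%: {gpu_estimate:.0f} mm²")
print(f"Left for GPU + everything else: {left_for_gpu / 1e6:.0f}M transistors")
```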
 
Llano has, according to AMD, ~1 billion transistors (or slightly more, according to c't's news section).

4 cores with 1MB L2 cache each should be about 340 million transistors. That leaves ~660 million transistors for the GPU + everything else. Redwood has 627 million transistors. So this works out quite well, no?
And what about the pads?

So maybe not 480 shaders but only 400, but IMHO all the information points to a midrange GPU.

All info except the die shots we have seen so far. While 480 ALUs would be better than Redwood, don't forget that this part is likely to be memory-bandwidth starved, as it will likely use just DDR3 and share it with the CPU cores.
 
The ALU frequency is still unknown, and it could be quite high, so ALU count isn't the most important factor.
On the other hand, the question is how useful it can be alongside discrete graphics. It should be clear that it can't take on 32/28nm discrete cards.
Will it offer something like combined 2D/3D rendering with a standalone card, or will it run the desktop in 2D mode and put the other card to sleep?
And of course, will CPU/GPU OpenCL code run much faster on Llano than on a similar discrete card plus CPU? (Actually it should, since it could use the same memory space as the CPU.)
The question is whether Nvidia will permit such a multi-GPU setup with the integrated graphics. :)
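
Just to illustrate why the clock matters as much as the ALU count, here's a tiny throughput sketch. The shader counts and clocks are guesses (the 800MHz case matches the base clock rumoured further down), not confirmed specs:

```python
# Theoretical single-precision throughput for a few hypothetical Llano GPU configs.
# AMD's VLIW ALUs each do one MAD (2 FLOPs) per clock; counts and clocks are guesses.
def gflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz

for alus in (320, 400, 480):
    for clock_ghz in (0.6, 0.8):              # 0.8 GHz matches the rumoured base clock
        print(f"{alus} ALUs @ {clock_ghz * 1000:.0f} MHz -> {gflops(alus, clock_ghz):.0f} GFLOPS")
```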
 
Maybe the difference in size is accounted for by a large pool of on-die memory to substitute for the terribly poor bandwidth the GPU will suffer from? A 320+ stream processor GPU would be sharing the same memory bandwidth with a CPU that an 80SP card like the 5450 has the luxury of keeping to itself. As an embedded solution, it doesn't need Eyefinity and it doesn't need massive resolutions. If they are targeting at most 1920x1080 in a laptop, then surely devoting 40-50 mm² of the die area to this would be worthwhile!
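
Some illustrative numbers on the sharing problem; the memory configurations below are my own assumptions for the sake of the comparison, nothing announced:

```python
# Rough bandwidth comparison: shared system DDR3 vs. a low-end card's dedicated memory.
# Both configurations are assumptions for illustration, not announced Llano specs.
def bandwidth_gb_s(bus_width_bits: int, transfers_per_s: float) -> float:
    return bus_width_bits / 8 * transfers_per_s / 1e9

shared_ddr3 = bandwidth_gb_s(128, 1333e6)     # dual-channel DDR3-1333, split with the CPU cores
dedicated_ddr3 = bandwidth_gb_s(64, 1600e6)   # 64-bit DDR3 on a 5450-class card, GPU-only

print(f"Shared system DDR3:    {shared_ddr3:.1f} GB/s")     # ~21 GB/s, minus whatever the CPU eats
print(f"Dedicated 64-bit DDR3: {dedicated_ddr3:.1f} GB/s")  # ~13 GB/s, all for an 80SP GPU
```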
 
Llano has, according to AMD, ~1 billion transistors (or slightly more, according to c't's news section).

4 cores with 1MB L2 cache each should be about 340 million transistors. That leaves ~660 million transistors for the GPU + everything else. Redwood has 627 million transistors. So this works out quite well, no?
4 cores is >440M transistors according to the presentation.

I think they want to design a ~200 mm² die, which points towards Redwood too, but with a 4MB L3.

4 cores = 440M transistors & 71 mm².
4MB L3 = 200M transistors & ~36 mm² (based on L2 density, with the power-gating ring added).

That's 107 mm² for the CPU, which leaves ~93 mm² for the NB and the GPU. So with the 128-bit bus Redwood has (I count that one and discard the CPU's) and the NB logic added, it all sums up to ~1.3G transistors in a ~200 mm² die.

L3 seems mandatory with the GPU using the same RAM pool, even if they didn't advertise it.
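
Written out as a sketch (same caveats: the L3 density, NB share and 200 mm² target are extrapolations, not AMD figures):

```python
# Hypothetical ~200 mm² Llano budget following the breakdown above; L3 and NB figures
# are extrapolations, not AMD numbers.
cores_transistors, cores_area = 440e6, 71.0   # 4 cores incl. L2, per the presentation
l3_transistors, l3_area = 200e6, 36.0         # 4MB L3, extrapolated from L2 density
redwood_transistors = 627e6                   # Redwood-class GPU, 128-bit bus counted here
target_die_area = 200.0                       # mm², assumed design target

cpu_area = cores_area + l3_area               # ~107 mm²
gpu_nb_area = target_die_area - cpu_area      # ~93 mm² left for GPU + northbridge
total = cores_transistors + l3_transistors + redwood_transistors  # ~1.27G before NB logic

print(f"CPU side: {cpu_area:.0f} mm², GPU + NB budget: {gpu_nb_area:.0f} mm²")
print(f"Transistors before NB logic: {total / 1e9:.2f}G")  # NB logic tops it up to ~1.3G
```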
 
Is it confirmed that the Fusion GPU will run at a frequency similar to the CPU's?
I recall that it will use the base clock, so ~800MHz.
 
Maybe the difference in size is accounted for by a large pool of on-die memory to substitute for the terribly poor bandwidth the GPU will suffer from? A 320+ stream processor GPU would be sharing the same memory bandwidth with a CPU that an 80SP card like the 5450 has the luxury of keeping to itself. As an embedded solution, it doesn't need Eyefinity and it doesn't need massive resolutions. If they are targeting at most 1920x1080 in a laptop, then surely devoting 40-50 mm² of the die area to this would be worthwhile!

I don't see any useful amount of on-die GPU cache in the die shot.
 