Llano IGP vs SNB IGP vs IVB IGP

But it's on the same process as Llano so that extra performance won't come for free - it will be a pretty big chip. I wouldn't be surprised if AMD kept selling Llano for quite some time.

Agreed. Trinity probably isn't going to hit the 110-150€ price points. When it comes out, Llano will probably drop to 80-130€, and Trinity will fill the gap between Llano and high-clocked AM3+ Bulldozers, so maybe it's more like 140-180€.
 
But it's on the same process as Llano so that extra performance won't come for free - it will be a pretty big chip. I wouldn't be surprised if AMD kept selling Llano for quite some time.

Also, Trinity I presume is a two-module, four-"core" design. I'm not sure they have a four-module, 8-core Trinity planned, as that would have an extremely large die size. Llano is available in a "true" four-core design. Now, while AMD marketing might say both have four "cores", at the same clocks Llano might actually be faster than Trinity in situations that don't take advantage of BD's cores. In that case Llano might actually be the better alternative!

And as you said, I expect they will continue to sell Llano for a while until Trinity production ramps up, and as the process matures they'll be able to up the clocks as well. But in terms of compute, Trinity should be a LOT faster, and given how heavily they're promoting GPGPU/Fusion, you'd expect that they'd want to ramp it up fast.
 
But it's on the same process as Llano so that extra performance won't come for free - it will be a pretty big chip. I wouldn't be surprised if AMD kept selling Llano for quite some time.

That's a valid point, but on the other hand, Llano is the very first GPU ever on a GloFo SOI process. So I wouldn't rule out some density improvements in Trinity.

Also, Trinity I presume is a two-module, four-"core" design. I'm not sure they have a four-module, 8-core Trinity planned, as that would have an extremely large die size. Llano is available in a "true" four-core design. Now, while AMD marketing might say both have four "cores", at the same clocks Llano might actually be faster than Trinity in situations that don't take advantage of BD's cores. In that case Llano might actually be the better alternative!

4 Bulldozer cores > 4 Stars cores, AMD has been pretty clear about that. And clocks don't matter, only power does.

And as you said, I expect they will continue to sell Llano for a while until Trinity production ramps up, and as the process matures they'll be able to up the clocks as well. But in terms of compute, Trinity should be a LOT faster, and given how heavily they're promoting GPGPU/Fusion, you'd expect that they'd want to ramp it up fast.

I think dual- and quad-core versions of Trinity should be able to fill the entire notebook and mainstream desktop market without any need for Llano. Of course they'll need some time to ramp up production, but that's always the case with new chips.
 
Llano could end up being like the Core Duo - replaced by a far superior chip after a short period of time on the market. Llano does feel like a semi-failure in notebooks because of its ineffectual Turbo Core. Maybe Trinity will be much more capable in that respect.
 
Llano could end up being like the Core Duo - replaced by a far superior chip after a short period of time on the market. Llano does feel like a semi-failure in notebooks because of its ineffectual Turbo Core. Maybe Trinity will be much more capable in that respect.

Have you seen any review of retail laptops to back that statement up?
I know judging Llano CPU performance by SB performance will paint this picture, but no one expected Llano to win there.
If Turbo is as 'broken' in retail parts as it was in review samples then I'm with you on that for 35W parts. 45W parts are clocked much higher which gives a bit of hope for reasonable single thread performance.
 
4 Bulldozer cores > 4 Stars cores, AMD has been pretty clear about that. And clocks don't matter, only power does.

Source? I've never seen a claim like that. IIRC it was up to 80% improvement for a BD module over a single Phenom core for optimal workloads, and far less for non-optimal workloads.

Have you seen any review of retail laptops to back that statement up?
I know judging Llano CPU performance by SB performance will paint this picture, but no one expected Llano to win there.
If Turbo is as 'broken' in retail parts as it was in review samples then I'm with you on that for 35W parts. 45W parts are clocked much higher which gives a bit of hope for reasonable single thread performance.

Whatever hardware was shipped to reviewers is the same as the retail hardware. It's not going to magically change between the review and retail availability; it's likely been in production for months already. But all the reviews I've seen so far are for the 35W A8-3500M. It would have been better for AMD to send out the higher-performance 45W A8-3510MX CPUs to reviewers (of course the battery life would have been worse, so maybe that's why they didn't).

And the turbo can't be "broken" in either; it's a hardware-based turbo which measures power consumption in the chip.
 
Yeah I don't think the Turbo is broken. I think that with the 35W chips it is so restricted that it can barely do anything. Hopefully they can get more from 32nm with new chips down the road. Maybe dual cores are the answer or perhaps Trinity will be more power efficient.

Then again, since I haven't seen a review of a Llano with both Turbo Core and a higher TDP, maybe the Turbo is just not very efficient/effective. It's possible. The Anandtech desktop test uses the top-end Llano, which lacks Turbo Core and has a 100W TDP.

Another question is why did they stop at 2.9 GHz? And why not give the 2.9 GHz desktop chip Turbo?
 
4 Bulldozer cores > 4 Stars cores, AMD has been pretty clear about that.

About as clear as mud. Their design targets sure as hell don't imply that (speed racer, reduced FO4, knee of the curve in IPC etc.). Some of the stuff they've unveiled would hint at BD being better in the cases in which K8L was really crap (branch-prediction, coherence mechanisms, prefetching, front-end etc.), but how that impacts the average case is unclear. So at this point, you might say...maybe it is indeed better per clock, or maybe it's not.
 
Source? I've never seen a claim like that. IIRC it was up to 80% improvement for a BD module over a single Phenom core for optimal workloads, and far less for non-optimal workloads.

No. It was a single BD module at 180% of a single BD core. AMD never gave any performance estimate of BD vs. Thuban, other than that on the server side you will get 50% more throughput from 33% more cores.
What John Fruehe said privately on one of the forums was that BD IPC > Phenom II IPC. Nothing more, nothing less.
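Just to put numbers on both claims (a quick back-of-the-envelope sketch; the Interlagos vs. Magny-Cours core counts are my reading of the server claim, not something AMD spelled out in that slide):

Code:
    # Rough arithmetic on the two claims quoted above (illustrative only).

    # One BD module (two "cores") said to be ~180% of a single BD core:
    module_vs_core = 1.80
    second_core_gain = module_vs_core - 1.0   # ~0.80 -> ~80% scaling from the 2nd core

    # Server claim: ~50% more throughput from ~33% more cores
    # (assumed to mean 16-core Interlagos vs. 12-core Magny-Cours):
    per_core_gain = 1.50 / (16 / 12)          # ~1.125 -> ~12-13% more throughput per core

    print(second_core_gain, per_core_gain)

Taken at face value, both are throughput/scaling figures rather than a per-core IPC comparison.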

Whatever hardware was shipped to reviewers is the same as the retail hardware. It's not going to magically change between the review and retail availability; it's likely been in production for months already. But all the reviews I've seen so far are for the 35W A8-3500M. It would have been better for AMD to send out the higher-performance 45W A8-3510MX CPUs to reviewers (of course the battery life would have been worse, so maybe that's why they didn't).

And the turbo can't be "broken" in either; it's a hardware-based turbo which measures power consumption in the chip.

Nitpicking, but no again :p.
What do you mean by a hardware-based turbo? That it can't be set to behave in a specific way? Is it hard-locked for some reason? It must be programmable, as Intel's solution is.
Most of the reviews openly said AMD told them the test platforms were not perfect. They had a buggy BIOS causing problems and corruption with CrossFire modes, for instance.
Besides, Llano's Turbo behaviour is also regulated by the BIOS. If, for instance, AMD wanted to look good in tests where its previous mobile platforms failed miserably - battery life - they could tweak the BIOS Turbo settings to conserve as much power as possible and run a very conservative Turbo.
Another possibility would be that Turbo Core was not taking fGPU load into account when calculating the available thermal headroom. That could be observed in tests where the fGPU is disabled and the system runs on a dGPU: CPU performance is not affected.
I'm not expecting all this to be the case, and retail systems might behave identically turbo-wise to the preview systems, but simply saying that what we saw in previews is what we will get from retail machines is not wise.
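FWIW, this is something reviewers could check directly instead of guessing: log the reported core clock while a single-threaded load runs and see whether it ever rises above the base clock. A minimal sketch (assuming a Linux machine exposing the cpufreq sysfs interface; the path and sampling interval are just illustrative):

Code:
    # Sample the reported clock of core 0 for a while, e.g. during a
    # single-threaded benchmark, to see whether Turbo Core ever engages.
    # Assumes Linux with cpufreq sysfs support; adjust the path elsewhere.
    import time

    FREQ_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"  # value in kHz

    def sample_clocks(seconds=30, interval=0.5):
        samples = []
        end = time.time() + seconds
        while time.time() < end:
            with open(FREQ_PATH) as f:
                samples.append(int(f.read().strip()) / 1000.0)  # MHz
            time.sleep(interval)
        return samples

    clocks = sample_clocks()
    print("min/avg/max MHz:", min(clocks), sum(clocks) / len(clocks), max(clocks))

If the max never rises above the base clock under a light load, Turbo is effectively doing nothing on that machine, whatever the cause.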
 
That's a valid point, but on the other hand, Llano is the very first GPU ever on a GloFo SOI process. So I wouldn't rule out some density improvements in Trinity.
GPUs are synthesized blocks. So I wouldn't expect any non-trivial density improvement.
 
But it's on the same process as Llano so that extra performance won't come for free - it will be a pretty big chip. I wouldn't be surprised if AMD kept selling Llano for quite some time.

Why not sell both? Just replace the A8 Llano with Trinity.
 
That's unfortunate. If Trinity does indeed have 512 VLIW4 shaders, unless it runs into severe memory bandwidth limitations, it might actually widen the graphics performance gap with Ivy Bridge.

Then again, 22nm may allow for significantly higher clocks.

They need to figure out how to break past the bandwidth barrier with Trinity (rough numbers below).
I'm hoping for 896 VLIW4 shaders.

No one should really care if Trinity's CPU power is barely more than Llano's.
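To put that bandwidth barrier in numbers (rough figures; the DDR3 speed and the comparison card are just illustrative picks, not anything confirmed for Trinity):

Code:
    # Back-of-the-envelope memory bandwidth comparison (illustrative only).
    # Dual-channel DDR3-1866: 2 channels x 64 bits x 1866 MT/s, shared with the CPU.
    apu_bw = 2 * (64 / 8) * 1866e6 / 1e9      # ~29.9 GB/s
    # A low-end discrete card with a 128-bit bus and 4 Gbps GDDR5, all for the GPU:
    dgpu_bw = (128 / 8) * 4e9 / 1e9           # ~64 GB/s
    print(apu_bw, dgpu_bw)

Even a doubled shader count doesn't help much if the whole APU has to share less than half the bandwidth a cheap discrete card gets to itself.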
 
I wonder if stronger graphics are really worth the hassle. I mean, at the very least you have to tape out different chips, plan supply accordingly and have a multitude not only of SKUs but also production lines/wafer allocations. All of that costs money - and APUs are all about reduced cost and TCO, so far.

edit:
To be more precise: you'll never catch up to discrete graphics and CPUs in terms of performance, so that market is not in danger from APUs. You'll have to cater to the peeps requiring "just enough" potential in CPU/GPU to browse the web and accelerate maybe a few things like HTML5 - and that's a pretty small target group, sitting between what is sufficiently handled by a traditional IGP and what requires a dedicated GPU (like gaming outside of Facebook). Especially since most customers are not (willing to be) educated enough to do product research before buying - at least that's my experience with people outside the tech-savvy crowd, like my mother-in-law and the like.
 
I wonder if stronger graphics are really worth the hassle. I mean, at the very least you have to tape out different chips, plan supply accordingly and have a multitude not only of SKUs but also production lines/wafer allocations. All of that costs money - and APUs are all about reduced cost and TCO, so far.

edit:
To be more precise: you'll never catch up to discrete graphics and CPUs in terms of performance, so that market is not in danger from APUs. You'll have to cater to the peeps requiring "just enough" potential in CPU/GPU to browse the web and accelerate maybe a few things like HTML5 - and that's a pretty small target group, sitting between what is sufficiently handled by a traditional IGP and what requires a dedicated GPU (like gaming outside of Facebook). Especially since most customers are not (willing to be) educated enough to do product research before buying - at least that's my experience with people outside the tech-savvy crowd, like my mother-in-law and the like.
I think you are asking the wrong question; it should be "is a stronger CPU really worth the hassle?" ;) For quite some time CPUs (even cheap-to-mid-range ones) have been good enough for the majority of users. Integrated GPUs, however, while sufficient for part of the market (namely office use), were barely good enough for anything more demanding than Flash games.

It's changed now: you can buy an APU which covers 80+% of users needs, including gaming (outside of the Crysis fan base :smile:), and GPU acceleration will be handy too. Even 1st-generation APUs made low-to-mid-range(!) discrete graphics obsolete; a few generations later, APUs will cover 95% of users' needs. Discrete graphics will only be for high-end gaming and the professional market.
 
It's changed now: you can buy an APU which covers 80+% of users needs, including gaming (outside of the Crysis fan base :smile:), and GPU acceleration will be handy too. Even 1st-generation APUs made low-to-mid-range(!) discrete graphics obsolete; a few generations later, APUs will cover 95% of users' needs. Discrete graphics will only be for high-end gaming and the professional market.

It depends what you mean by high-end gaming. Llano is insufficient on the graphics side for most modern console ports at higher graphical settings. Even the CPU side of it is going to struggle with a lot of games.

I would define high-end PC gaming as gaming with a discrete GPU that is powerful enough to max out virtually every game available at 1080p.

There's still a big gap between that and Flash gamers, consisting of people who want to be able to play most PC games and get them looking/running well without maxing everything out or spending a ton on hardware.
 
I think you are asking the wrong question; it should be "is a stronger CPU really worth the hassle?" ;) For quite some time CPUs (even cheap-to-mid-range ones) have been good enough for the majority of users. Integrated GPUs, however, while sufficient for part of the market (namely office use), were barely good enough for anything more demanding than Flash games.

It's changed now: you can buy an APU which covers 80+% of users needs, including gaming (outside of the Crysis fan base :smile:), and GPU acceleration will be handy too. Even 1st-generation APUs made low-to-mid-range(!) discrete graphics obsolete; a few generations later, APUs will cover 95% of users' needs. Discrete graphics will only be for high-end gaming and the professional market.

80%+ of users needs or the needs of 80+% of the users? :)
Anyway, I've seen nice HTML5/WebGL benchmarks showing 50+ fish animated on screen, and I've seen nicely animated preview walls of pictures displayed as HTML5 demos. But when it comes to the kind of web browsing experience real people are having, it mostly looks like Facebook, YouTube and your news portal of choice.

So, while I won't argue your point of 80+ to 95+ percent, I am still questioning how large the percentage is that's already content with what traditional but modern IGP-style (DX9/10-capable) graphics delivers - which I reckon to be north of 80% as well. And that would really narrow the window of opportunity for AMD if I am right.

What AMD needs for the APU to be a great product is nothing less than a massive change in the software landscape - they've realized that themselves, having established their own developer conference for the first time. But things change slowly in software; Nvidia, for example, has pushed GPU computing really hard for - what, four, five years? And still, despite heterogeneous computing now crawling into supercomputers, for the end user all those efforts are basically exhausted by your stereotypical video conversion implementation via the driver's built-in libraries.

The problem is: as long as chipzilla decides to do nothing about it, developers won't be tempted to work on complicated heterogeneous computing models for their products when only one quarter of the market will profit from it.
 
It depends what you mean by high-end gaming. Llano is insufficient on the graphics side for most modern console ports at higher graphical settings. Even the CPU side of it is going to struggle with a lot of games.
Consoles themselves have weaker graphics than the faster Fusion APUs, and those can run most games just fine at 1680x and even 1920x. Not at high details or with AA cranked up, but good enough for the majority of casual gamers, let alone office/home needs.

I would define high-end PC gaming as gaming with a discrete GPU that is powerful enough to max out virtually every game available at 1080p.

There's still a big gap between that and Flash gamers, consisting of people who want to be able to play most PC games and get them looking/running well without maxing everything out or spending a ton on hardware.
A few generations later, APUs will make inroads into higher-end gaming too, but for the most hardcore players there will always be a discrete option, as I mentioned. Low-to-mid-range discrete GPUs will be dead.
 
80%+ of users needs or the needs of 80+% of the users? :)
Anyway, I've seen nice HTML5/WebGL benchmarks showing 50+ fish animated on screen, and I've seen nicely animated preview walls of pictures displayed as HTML5 demos. But when it comes to the kind of web browsing experience real people are having, it mostly looks like Facebook, YouTube and your news portal of choice.

So, while I won't argue your point of 80+ to 95+ percent, I am still questioning how large the percentage is that's already content with what traditional but modern IGP-style (DX9/10-capable) graphics delivers - which I reckon to be north of 80% as well. And that would really narrow the window of opportunity for AMD if I am right.

Sorry for my imperfect English, I meant "needs of 80+% of the users" :smile:

For office/internet use, of course, a weak/average CPU + IGP was enough - even ARMs will be :devilish: That's why I mentioned that the majority of users don't even need a faster CPU; if anything, the IGP was the weak link. Have you ever bought a PC or notebook with an IGP and said "that's enough for my gaming needs"? I seriously doubt it.

APUs are changing that. There is a massive demand for low-to-mid-range graphics (actually, it's the highest-volume discrete GPU market), and those needs will be met by APUs from now on. If 80% of the market were satisfied by an IGP, there wouldn't be such demand for discrete cards.

What AMD needs for the APU to be a great product is nothing less than a massive change in the software landscape - they've realized that themselves, having established their own developer conference for the first time. But things change slowly in software; Nvidia, for example, has pushed GPU computing really hard for - what, four, five years? And still, despite heterogeneous computing now crawling into supercomputers, for the end user all those efforts are basically exhausted by your stereotypical video conversion implementation via the driver's built-in libraries.

The problem is: as long as chipzilla decides to do nothing about it, developers won't be tempted to work on complicated heterogeneous computing models for their products when only one quarter of the market will profit from it.
That's why AMD is working closely with Microsoft, Apple, etc. to make it happen. MS has already presented tools for Fusion. Nvidia failed to make CUDA widespread because it was closed and exclusive to NV cards, so none of the big sharks cared about it. However, MS and Apple are pushing OpenCL/DirectCompute hard, and will make sure they become standards for the whole industry.

AMD or NV by themselves can't do that; if anything, AMD's limited endeavors will just help smooth the transition in the direction the market is already moving.
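For anyone wondering what this actually looks like in code, here's a minimal OpenCL vector-add using the pyopencl bindings - just an illustrative sketch of the programming model being pushed, not anything AMD- or Fusion-specific:

Code:
    # Minimal OpenCL example via pyopencl: add two vectors on whatever
    # OpenCL device is available (GPU, APU or CPU fallback).
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()          # pick an available OpenCL device
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    print(np.allclose(result, a + b))       # True if the kernel ran correctly

The point being: the same kernel runs unmodified on AMD, NV or Intel hardware, which is exactly why MS/Apple backing OpenCL/DirectCompute matters more than any vendor-specific push.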
 
It depends what you mean by high-end gaming. Llano is insufficient on the graphics side for most modern console ports at higher graphical settings. Even the CPU side of it is going to struggle with a lot of games.

I would define high-end PC gaming as gaming with a discrete GPU that is powerful enough to max out virtually every game available at 1080p.

There's still a big gap between that and Flash gamers, consisting of people who want to be able to play most PC games and get them looking/running well without maxing everything out or spending a ton on hardware.


I think Llano's main target is ~1366x768 13-15" laptop screens, not 23" 1080p desktop screens.
From what we've seen, even mid-priced 15" laptops (~750€) will bundle discrete Whistlers for CrossFire goodness.
Performance is just right for the targeted resolution, as an "HD6755G2" will be able to max out most games - especially console ports - at 720/768p (they just need to get DX9 compatibility working, fast!).

For desktops, Llano is replacing the Athlon II, so the iGPU won't be of much more use than the current HD3200/HD4200 IGP on most motherboards: an integrated GPU that provides "flawless" video playback and browser acceleration. At least until APP really starts to spread its wings, of course.
 