Haswell vs Kaveri

I don't know how ULV GT3 will perform, since low-power models are usually TDP limited, but the higher-TDP GT3 variants from Haswell are surely much faster than Richland's top mobile GPU.

I doubt that very much. The graph shows the A10-5757M being only 16% slower than the 6800K.

[image: 3DMark11 chart comparing the A10-5757M and the 6800K]
 
I hadn't realised GT3 was mobile only. That's pretty crap and basically eliminates Intel as any kind of competitor to AMD on the desktop APU front.

EDIT: To put it into perspective, GT2 will probably be competitive with Llano on the desktop, which in turn will likely be only around half the speed (maybe a little more) of Kaveri, but without the full HSA implementation. Blugh!

If there's a GT3 without eDRAM and a GT3 with eDRAM, then the former will probably be seen on desktops, on a limited selection of chips like those you can find the HD 4000 on today.
Then the majority of i3/i5 chips would run GT2, and GT1 may be the designation for Celeron/Pentium chips.

Even today, the i3-3225 is sort of a competitor for desktop Trinity: slower graphics, but a better CPU and lower wattage.
 
I doubt that very much. The graph shows the A10-5757M being only 16% slower than the 6800K.

[image: 3DMark11 chart comparing the A10-5757M and the 6800K]


That score is just 22% above the A10-4600M, and we can't expect more than +20% on average from the fastest mobile Richland. That means the fastest Richland might be 50-60% faster than the current fastest mobile Ivy Bridge, which is not enough against a quad-core GT3.
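
A quick back-of-envelope of that chain (the Trinity-over-Ivy-Bridge ratio here is an assumption inferred from the discussion above, not a measured figure):

```python
# Back-of-envelope for the percentage chain above; the Trinity-vs-Ivy-Bridge
# ratio is an assumption, not measured data.
trinity_over_ivb = 1.30       # assumed: A10-4600M ~30% faster than the fastest mobile HD 4000
richland_over_trinity = 1.20  # the post's ceiling: +20% for the fastest mobile Richland

richland_over_ivb = trinity_over_ivb * richland_over_trinity
print(f"Fastest Richland vs fastest mobile IVB: +{(richland_over_ivb - 1) * 100:.0f}%")
# prints +56%, i.e. inside the 50-60% range arrived at above
```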
 
Well, I haven't seen any benchmarks of GT3 to be sure, but I have my doubts. By OBR's numbers the 5757M is 72% faster than the 3770K, which is faster than any Intel mobile chip as far as I'm aware.

It's unlikely that any Haswell GT3 mobile chip will double performance over the 3770K, with both being on the same process and a lower TDP figure to be met. Even the 55W GT3 is unlikely to best the 5757M in games that are GPU bound.
 
Well, I haven't seen any benchmarks of GT3 to be sure, but I have my doubts. By OBR's numbers the 5757M is 72% faster than the 3770K, which is faster than any Intel mobile chip as far as I'm aware.

It's unlikely that any Haswell GT3 mobile chip will double performance over the 3770K, with both being on the same process and a lower TDP figure to be met. Even the 55W GT3 is unlikely to best the 5757M in games that are GPU bound.


3DMark11 != average

3DMark11 isn't representative of average gaming performance; it's just one benchmark, and 3DMark11 tends to run better on AMD than most games do. And maybe you didn't know: the HD 4000 is faster on mobile than on desktop. The fastest desktop part clocks at 1150 MHz, while the fastest mobile part clocks at 1350 MHz.
 
3DMark11 != average

3DMark11 isn't representative of average gaming performance; it's just one benchmark, and 3DMark11 tends to run better on AMD than most games do. And maybe you didn't know: the HD 4000 is faster on mobile than on desktop. The fastest desktop part clocks at 1150 MHz, while the fastest mobile part clocks at 1350 MHz.

And the IGP clock speeds are meaningless, as maintaining turbo is harder on the mobile chips. I'm well aware that there are differences in the average gaming benchmarks, which is why I said "GPU-bound" games. The GT3 will look really good in some games and will win "on average", but that will mostly be down to the CPU cores, and it will still struggle to beat the 5757M in graphically bound games at higher settings.
 
And the IGP clock speeds are meaningless, as maintaining turbo is harder on the mobile chips.

Not for higher TDP variants. This is only an issue for ULV models.

The GT3 will look really good in some games and will win "on average", but that will mostly be down to the CPU cores, and it will still struggle to beat the 5757M in graphically bound games at higher settings.

No chance, GT3 will easily beat that. Even a conservative 75% increase over IVB GT2 would be enough.
 
@Paran - Only if you believe 75% is conservative. It's a ridiculous argument regardless, as we're pitting a 35W AMD chip against a 55W Intel chip. If AMD had a 55W mobile APU and 2400 DDR3, it would be a clear win at 1/10th the price.
 
You call +75% conservative?

Most rumours seem to be pointing at that. Bear in mind you're going from 16 EUs to 40, and memory bandwidth from 25.6 GB/s shared with the CPU to 64 GB/s dedicated to the GPU, plus system memory bandwidth.

If anything 2x seems conservative, but I expect it'll be clocked lower than the HD 4000.
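
For what it's worth, a rough sketch of the paper scaling, assuming the commonly cited ~16 flops per EU per clock for this Intel graphics generation; the 1 GHz GT3 clock is purely a guess:

```python
# Paper maths for the 16 EU -> 40 EU jump, assuming ~16 flops per EU per
# clock for this Intel graphics generation. The GT3 clock is a guess,
# reflecting the expectation above that it will be clocked below HD 4000.
def peak_gflops(eus, mhz, flops_per_eu_per_clock=16):
    return eus * mhz * flops_per_eu_per_clock / 1000.0

hd4000_mobile = peak_gflops(16, 1350)  # fastest mobile HD 4000
gt3_guess = peak_gflops(40, 1000)      # hypothetical lower-clocked 40-EU GT3

print(f"HD 4000 mobile peak: {hd4000_mobile:.0f} GFLOPS")
print(f"40-EU GT3 @ 1 GHz:   {gt3_guess:.0f} GFLOPS ({gt3_guess / hd4000_mobile:.2f}x)")
# ~346 vs ~640 GFLOPS: close to 2x on paper even at a lower clock
```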
 
LV/ULV Trinity was much better placed against Ivy Bridge for ultrabooks - since it had superior GPU performance and a good-enough CPU - and all it got was a couple of design wins that don't even appear in most stores.
I'd be interested in seeing better benchmarks on this front. I've seen almost no 17W Trinity benchmarks, and the ones that I have seen show it pretty much neck and neck on GPU performance with 17W Ivy Bridge. i.e. the lead that Trinity has at high TDPs seems to disappear the lower it goes (which is not too surprising given process tech).

I hadn't realised GT3 was mobile only. That's pretty crap and basically eliminates Intel as any kind of competitor to AMD on the desktop APU front.
Even if true, who cares? There's really no reason to buy a desktop chip, care about graphics, and not get a discrete card. Basically, if you are form factor/TDP limited, it's gonna be all mobile variants, and if you're not, a discrete card is always going to be a much better deal. I think Trinity desktop confirms this pretty well.

The "desktop APU" market is made-up IMO. Really the only interesting race here is mobile/similar TDPs.
 
I'd be interested in seeing better benchmarks on this front. I've seen almost no 17W Trinity benchmarks, and the ones that I have seen show it pretty much neck and neck on GPU performance with 17W Ivy Bridge. i.e. the lead that Trinity has at high TDPs seems to disappear the lower it goes (which is not too surprising given process tech).


Even if true, who cares? There's really no reason to buy a desktop chip, care about graphics, and not get a discrete card. Basically, if you are form factor/TDP limited, it's gonna be all mobile variants, and if you're not, a discrete card is always going to be a much better deal. I think Trinity desktop confirms this pretty well.

The "desktop APU" market is made-up IMO. Really the only interesting race here is mobile/similar TDPs.

One good reason to do it is cost. The desktop APU is a good solution for cheap gaming rigs.
 
Even if true, who cares? There's really no reason to buy a desktop chip, care about graphics, and not get a discrete card. Basically, if you are form factor/TDP limited, it's gonna be all mobile variants, and if you're not, a discrete card is always going to be a much better deal. I think Trinity desktop confirms this pretty well.

The "desktop APU" market is made-up IMO. Really the only interesting race here is mobile/similar TDPs.

I think the relevance of APUs may come from the new generation of consoles. If they make heavy use of gameplay-affecting GPGPU, then PC CPUs and discrete GPUs may not be able to compete (due to the overheads of GPGPU on a discrete GPU).

So the only option in the PC space may be to farm off time-sensitive (or even all) GPGPU calcs to an on-die GPU like GT2/GT3 or AMD's equivalents.

There have been slides from AMD at least suggesting the GPU portion of Kaveri could be treated as a dedicated GPGPU processor in combination with a discrete GPU.
 
One good reason to do it is cost. The desktop APU is a good solution for cheap gaming rigs.
Meh, it never has been any time that I've looked. You still end up with a better overall gaming system by getting a cheaper CPU with a discrete GPU. And you end up *way* better if you just spend an extra $50-100... at those low price points even $10 shows big increases.

I think the relevance of APUs may come from the new generation of consoles. If they make heavy use of gameplay-affecting GPGPU, then PC CPUs and discrete GPUs may not be able to compete (due to the overheads of GPGPU on a discrete GPU).
That's a valid point, but there are still two factors working against it. First, PC CPUs are still going to be able to handle significantly more number crunching than the console ones, so even where console games offload bits to the GPU, it may not be necessary on PCs. (Remember, quad-core Haswell is going to be capable of somewhere near 500 GFLOPS just on the *CPU*...) Second, it's not clear that the APIs on PCs are going to be ready for that sort of low(er)-latency interop in the relevant time frame. You sort of have to choose to design an API around discrete memories or shared, and the reality is right now they're all designed around discrete. I think that'll shift long term, but probably not in the next few years.
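
As a rough sketch of where that ~500 GFLOPS figure comes from (the clock speed here is an assumption): Haswell's two 256-bit FMA units per core give 32 single-precision flops per core per clock:

```python
# Rough arithmetic behind the ~500 GFLOPS CPU figure. Haswell adds two
# 256-bit FMA units per core; each FMA across 8 single-precision lanes
# counts as 16 flops per unit per clock. The clock speed is an assumption.
cores = 4
fma_units_per_core = 2
sp_lanes = 8        # 256-bit AVX2 registers / 32-bit floats
flops_per_fma = 2   # a fused multiply-add counts as two flops
ghz = 3.5           # assumed sustained all-core clock

peak_gflops = cores * fma_units_per_core * sp_lanes * flops_per_fma * ghz
print(f"Peak SP throughput: {peak_gflops:.0f} GFLOPS")  # 448 GFLOPS
```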
 
Second, it's not clear that the APIs on PCs are going to be ready for that sort of low(er)-latency interop in the relevant time frame. You sort of have to choose to design an API around discrete memories or shared, and the reality is right now they're all designed around discrete. I think that'll shift long term, but probably not in the next few years.

AMD do suggest that Kaveri can be used as a dedicated GPGPU processor, in combination with a discrete GPU for graphics, in one of their HSA presentations though.

In theory that could work the same way that NV can send PhysX work to a dedicated secondary GPU.

Assuming a game has GPGPU features that require lower latency than a discrete GPU can provide, you could simply have a player-controlled setting in the game that lets users select between "software compute" and "hardware compute" (on systems that support HSA or a similar level of CPU/GPU communication). If you have a high-end Haswell with GT1, then software will be much faster, while a high-end Kaveri would be considerably faster using the hardware option. On the other hand, if you only have a Sandy Bridge or older CPU with no GPGPU-capable GPU on die, then you are stuck with software compute only, at lower performance levels than what would be available on the consoles, and thus you have to reduce quality/detail settings to maintain performance.
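
As a very rough sketch of what that setting could look like (all names here are made up for illustration, not any real engine or driver API):

```python
# Hypothetical sketch of the player-facing setting described above. All
# names (ComputeBackend, has_hsa_capable_igp, pick_backend) are made up
# for illustration; no real API is implied.
from enum import Enum

class ComputeBackend(Enum):
    SOFTWARE = "software"  # run the GPGPU-style work on the CPU cores
    HARDWARE = "hardware"  # offload to a low-latency on-die GPU

def has_hsa_capable_igp() -> bool:
    # Placeholder: a real game would query the platform/driver here.
    return False

def pick_backend(user_choice: ComputeBackend) -> ComputeBackend:
    # Honour the player's choice only when the hardware can deliver it;
    # otherwise fall back to software compute on the CPU cores.
    if user_choice is ComputeBackend.HARDWARE and has_hsa_capable_igp():
        return ComputeBackend.HARDWARE
    return ComputeBackend.SOFTWARE

print(pick_backend(ComputeBackend.HARDWARE))  # SOFTWARE on non-HSA systems
```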
 
Meh, it never has been any time that I've looked. You still end up with a better overall gaming system by getting a cheaper CPU with a discrete GPU. And you end up *way* better if you just spend an extra $50-100... at those low price points even $10 shows big increases.

The enthusiast mindset just doesn't really understand it, but the facts speak for themselves. Look at this:

[image: chart of discrete GPU sales share by price bracket]


You can see that 30% of the entire market in 2008/2009 was buying GPUs under $50. They could get much better performance for another $50, sure, but they're not paying it, for one reason or another.

If you want a step up from Trinity, you'd buy an i3-3220 ($130) + a 6670 GDDR5 ($85) as your entry point... but you're spending $85 more than you would on a 5800K alone. If people were willing to pay that, then there would be a lot more sales above the $50 mark with discrete GPUs.

As it happens, they just want something that works playably for as cheap as they can get it, and they swallow "QUAD CORE" and "GHz" way more than anything else at this price range. A 4.2 GHz quad core with 7660 graphics looks so much better than some 3.3 GHz i3 "dual core with HT" + 6670 graphics. Given the choice, 95% of people would take the 5800K at the same price, let alone $85 cheaper.
 