Haswell vs Kaveri

128 Watts full system load under FurMark, which is a power virus. There's no way that CPU is drawing anything near 130W.

Edit - whoops, my bad, that's under F1 2011. Still, 128W under full system load means it must be pretty close to its 100W TDP.
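Back-of-the-envelope, here is what 128 W at the plug could leave for the APU package. A minimal sketch, assuming a PSU efficiency of around 85% and roughly 35 W for the rest of the platform (both are assumptions for illustration, not figures from the review):

```python
# Rough estimate of the APU's share of a 128 W wall measurement.
# PSU efficiency and platform power below are assumptions, not measured values.

wall_power_w = 128.0       # measured "at the plug" while playing F1 2011
psu_efficiency = 0.85      # assumed PSU efficiency at this load level
platform_power_w = 35.0    # assumed motherboard + RAM + storage + fans

dc_power_w = wall_power_w * psu_efficiency      # power the PSU actually delivers
apu_power_w = dc_power_w - platform_power_w     # what is left for the APU package

print(f"DC side ~{dc_power_w:.0f} W, APU estimate ~{apu_power_w:.0f} W")
# DC side ~109 W, APU estimate ~74 W -- well under the 100 W TDP
```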
 
A well-thought-out machine is made of parts that have been specifically designed to work together with certain trade-offs. If you take a piece off in order to compare it to another one, you often reach the wrong conclusion. You can't simply take the eDRAM away and pretend the rest was not architected to use it.
That's a good point. But think about it: if one architecture doesn't scale without eDRAM, and the other is good to go without any help, which is the better solution for the transistors spent?
Also ... Intel can't make an eDRAM solution for the LGA1150 package, because there is not enough space for the memory. But they can make a GT3 design without the eDRAM, and they don't do it. Why don't they prove it if their design is faster? Face to face: an A10-6800K against a Core i7 with a GT3 iGPU without the eDRAM.
 
128 Watts full system load under FurMark, which is a power virus. There's no way that CPU is drawing anything near 130W.
It is specified what those APUs are running at the time of measurement. The measurements are made "at the plug" for all the systems.

Anyway, since it apparently is that tough: the highest power consumption in that test is measured not under FurMark (GPU only) but when playing F1 2011, and seeing that doesn't require understanding French.
 
Sorry, I'm wary about AMD power figures; those 100 Watt parts burn close to 130 Watts, so I will wait for reviews. And yes, the next Atom aims at perf per watt high enough to fit in a phone (reviews will tell if Intel succeeded).
Intel and AMD TDPs are definitely not the same. You can read about it here.
The next Atom IGP won't be faster than the Temash IGP. It only has 4 EUs. The first test chip runs at ~300 MHz, so the compute capacity is 19.2 GFLOPS, if we assume that the co-issue can be utilized.
The Radeon HD 8180's compute capacity is 57.6 GFLOPS.
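Those two figures follow from the usual peak-throughput arithmetic (units × FLOPs per unit per clock × clock). A minimal sketch, assuming 16 FLOPs/clock per Gen EU with co-issue and the commonly quoted 128 shaders at 225 MHz for the HD 8180 (the latter breakdown is not stated in this thread):

```python
def peak_gflops(units, flops_per_unit_per_clock, clock_ghz):
    """Theoretical peak: execution units x FLOPs per unit per clock x clock (GHz)."""
    return units * flops_per_unit_per_clock * clock_ghz

# Next-gen Atom IGP as described above: 4 Gen EUs at ~300 MHz,
# 16 FLOPs/clock per EU if the co-issue can be fully utilized.
atom_igp = peak_gflops(units=4, flops_per_unit_per_clock=16, clock_ghz=0.3)

# Radeon HD 8180 (Temash): assumed 128 shaders at 225 MHz, 2 FLOPs/clock (FMA).
hd_8180 = peak_gflops(units=128, flops_per_unit_per_clock=2, clock_ghz=0.225)

print(atom_igp, hd_8180, hd_8180 / atom_igp)   # 19.2 57.6 3.0
```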
 
The HD 4600 pushes 432 GFLOPS, Trinity/Richland about 50% more (~640 GFLOPS); FLOPS only say so much about a GPU. Not to mention that is far from 10x.
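Spelled out with the same peak-throughput arithmetic (the EU/shader counts and clocks below are the commonly cited specs for these parts, assumed here rather than taken from this thread):

```python
# Peak GFLOPS for the two desktop iGPUs quoted above.
# Shader/EU counts and clocks are commonly cited specs, assumed for illustration.

hd_4600  = 20  * 16 * 1.35    # 20 Gen7.5 EUs, 16 FLOPs/clk, 1.35 GHz boost -> 432 GFLOPS
richland = 384 * 2  * 0.844   # A10-6800K: 384 VLIW4 lanes, FMA, 844 MHz    -> ~648 GFLOPS

print(richland / hd_4600)     # ~1.5 -- a ~50% gap, nowhere near 10x
```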

Anyway, now I'm sure that the point of that discussion has nothing to do with architectural detail and where Intel is heading. I don't want to get trapped in a useless discussion about comparing different implementations of an arch without actually caring about the architecture in question, be it GCN or Gen6/7.

I'll wait for something constructive, like sebbbi's answer to my post on whether it made sense or not :LOL:
 
The HD 4600 pushes 432 GFLOPS, Trinity/Richland about 50% more (~640 GFLOPS); FLOPS only say so much about a GPU. Not to mention that is far from 10x.

Anyway, now I'm sure that the point of that discussion has nothing to do with architectural detail and where Intel is heading. I don't want to get trapped in a useless discussion about comparing different implementations of an arch without actually caring about the architecture in question, be it GCN or Gen6/7.

I'll wait for something constructive, like sebbbi's answer to my post on whether it made sense or not :LOL:

I find your point of view intellectually dishonest. Sure, they are comparable products, but you're not comparing comparable technology. If you were to compare the technology, which is what you are discussing, you would be looking at GCN, not VLIW, but that doesn't support your position, so you don't.
 
That's a good point. But think about it: if one architecture doesn't scale without eDRAM, and the other is good to go without any help, which is the better solution for the transistors spent?
You are making the same mistake again: you can't take it away and then say that it doesn't scale without it. You are implicitly assuming that without it the rest of the machine would have been identical, which is not likely to be the case :)
But they can make a GT3 design without the eDRAM, and they don't do it. Why don't they prove it if their design is faster? Face to face: an A10-6800K against a Core i7 with a GT3 iGPU without the eDRAM.
Not sure I follow you. There are multiple (lower power) GT3 SKUs that don't use eDRAM.
Also, talking about performance in isolation from power hardly makes any sense these days.
 
IT'S ALL ABOUT TEH GAMING.

That's the playing field where Kaveri can destroy Haswell (and Nvidia). It's what AMD's Unified Gaming Strategy is all about.

What AMD wants is a handful of the hottest next gen games, HIGHLY optimized for Kaveri and 8xxx GPUs and releasing concurrently with the consoles, that they will make sure are in the hands of Kaveri reviewers.

Kaveri was developed on the same timeline as the PS4 and Xbox One processors, so AMD had every opportunity to tweak Kaveri for maximum compatibility and ease of game porting. All AMD-led: HSA, AMD middleware, AMD toolsets ... if you think about it, it's quite a stunning hat trick.

I would guess the primary focus of the optimization for 'AMD hardware' is on Kaveri and 8xxx GPUs, and I would also guess what AMD is doing with EA/DICE and the Frostbite engine is being done with all the major developers and game engines, all of which appear to already be on board with AMD's Gaming Evolved initiative. For the developers, HSA means all subsequent AMD APUs and GPUs can use the same coding. The end game for AMD is apparently to create the programming tools across the game engines to allow all interested developers (which will be all of them in time) to optimize for AMD hardware in general, and the HSA APUs and GPUs in particular, at very little cost.

It's already pretty obvious Nvidia is in the beginning stages of being shut out, and that will continue and intensify following the release of Kaveri and the 8xxx cards. The game developers and publishers are VERY aware of how this is going to play out, and it's only a matter of timing. For the publisher of Lara Croft that time has already come. They just flat out gave Nvidia the middle finger. For them Nvidia is just a time, manpower and money sucker with little added value. Nvidia is no longer relevant.

The top end Kaveri is rumored to have 512 SPs, roughly equivalent to a 7750. A jump up from existing APU graphics, but not that huge in itself. But paired with Steamroller cores, HSA efficiencies, highly optimized coding and some GDDR5 motherboards - that's getting into PS4 game performance territory on a $400 computer. How much do the developers and publishers love that? A massive jump in gaming performance where the bulk of PC sales happen - that's nothing but good for their bottom line.
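For scale, here is a rough peak-compute comparison of those parts; the Kaveri GPU clock is a pure guess, and raw FLOPS of course say nothing about the bandwidth side that the GDDR5 point addresses:

```python
# Peak-compute sanity check on the "roughly equivalent to a 7750" claim.
# The Kaveri figure uses the rumoured 512 SPs and an assumed 800 MHz GPU clock.

def gcn_gflops(shaders, clock_ghz):
    """GCN peak: shaders x 2 FLOPs/clock (FMA) x clock (GHz)."""
    return shaders * 2 * clock_ghz

kaveri_rumour = gcn_gflops(512, 0.80)    # ~819 GFLOPS (rumoured SPs, assumed clock)
hd_7750       = gcn_gflops(512, 0.80)    # ~819 GFLOPS
ps4           = gcn_gflops(1152, 0.80)   # ~1843 GFLOPS

print(kaveri_rumour, hd_7750, ps4)
# Raw compute lands right on a 7750; the PS4 still has roughly 2.2x as much.
```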

So ... expect Kaveri to rack up some astounding scores on next gen games in particular, and, as Kaveri graphics will (reportedly) be compatible with all 8xxx AIBs and additive to those AIBs, Nvidia will be at a massive disadvantage with Kaveri owners as all those sweet HSA optimizations and extra graphics performance will *poof* disappear with an Nvidia AIB.

If the top end Kaveri pretty much does destroy Intel on gaming benchmarks (highly probable) AND quality of gameplay, and approaches PS4 performance on a $300 motherboard/APU combo, it is absolutely going to become the hottest must-have gaming set-up, even for high-end gamers, who will be able to choose any of the 8xxx cards to upgrade with.

With Kaveri's successor and 9xxx cards with DDR4 - with every game highly optimized for AMD ... I just don't see how Intel or Nvidia can compete.

That's why I expect that, for gaming-centric PC purchasers, AMD's Kaveri and 8xxx AIBs (and their successors) will very rapidly supplant Intel and Nvidia as the processors of choice - as in, by this time next year AMD will own the majority of the gaming CPU and GPU market.

The 'rest of the story' will include AMD processors in gaming-centric tablets and laptops. And as there is every indication the professional graphics market is looking to welcome HSA and open standards with open arms - which means, over time, AMD APU-based professional cards - I suspect that market will follow in the footsteps of the consumer graphics market.

And the pool of HSA/Graphics proficient programmers will explode.

In time Intel will respond with killer graphics propelled by former AMD and Nvidia employees, but by then the landscape will be dominated by HSA (AMD), and at least for now Intel is not part of that alliance.

If you're holding Nvidia stock - might be a good time to sell ... and maybe buy AMD.

As an aside, I consider it likely Project Shield was developed as part of Nvidia's bid for the Steam Box, which is the only place it made sense: Gabe wanted the Steam Box to be able to stream multiple games at the same time, and that is where the Shield fit in. And at CES, Gabe and Valve name-dropped Nvidia more than once.

But considering all the above, I think Gabe ultimately decided on an AMD solution. For sure the developers, whose support Gabe said was 'critical' to the success of the Steam Box, had to be massively behind an AMD HSA APU solution, which they were already optimizing for, and for Valve, purely on a cost/performance basis, it had to be a far superior solution to an Intel/Nvidia one.

All of which is to say that I considered the release of the Shield to be confirmation Nvidia lost out on the Steam Box and was just throwing it out there to try and recoup some of the development costs ... and maybe a pretty desperate attempt to get a foothold in the Android market, with its Tegra line getting no traction. Hence the recent licensing announcement. Nvidia is looking at a future landscape of drying-up revenue streams. JHH is doing what he can.
 
Face to face: an A10-6800K against a Core i7 with a GT3 iGPU without the eDRAM.
Maybe because desktop APUs with big IGP portions are pretty dumb, as the lack-of-market has demonstrated?

Your notion of removing the eDRAM is arbitrary. Is NVIDIA "scared" too because they don't make a DDR3 GeForce Titan?

At best, you're admitting that A10's are unbalanced, bandwidth-starved GPUs that should have spent more transistors on memory hierarchy and fewer in other areas. I don't see why other GPUs should be forced to make the same mistake for "comparison's" sake.
 
Maybe because desktop APUs with big IGP portions are pretty dumb, as the lack-of-market has demonstrated?
Lack of market because of the "brainwashing marketing", which is too strong. But you know what I see: I told many people to try an AMD APU (mostly I recommend Trinity, but it doesn't matter). They switched from Clarkdale/Sandy Bridge/Ivy Bridge systems, and all of them told me that the APU is incredibly good. One of them is very happy because Minecraft is playable now (at 100 fps on the AMD APU, when he got just 15 fps with Sandy Bridge).

At best, you're admitting that A10's are unbalanced, bandwidth-starved GPUs that should have spent more transistors on memory hierarchy and fewer in other areas. I don't see why other GPUs should be forced to make the same mistake for "comparison's" sake.
The A10-6800K IGP is the fastest solution in the socketed CPU market. Make a faster socketed solution, then Intel can criticize it. Also make better drivers for it. Civilization V, Minecraft, Shogun 2, ... are painfully slow on any Intel IGP.
 
I find your point of view intellectually dishonest. Sure, they are comparable products, but you're not comparing comparable technology. If you were to compare the technology, which is what you are discussing, you would be looking at GCN, not VLIW, but that doesn't support your position, so you don't.
Well, it is not dishonesty but a mistake; I simply forgot that AMD has yet to introduce GCN in its APUs, and that is all there is to it. Still, I'm not the one that absolutely wants to compare product A to product B; it is more like some people can't read someone stating: "it looks to me that Intel 1) does great GPUs and in some regards is ahead of the competition". It is plain derailing imo...
Not to mention that VLIW4 is what AMD ships now and what the press is reviewing/comparing.
Another thing: I did not claim, against factual evidence, that "Intel's use of silicon is inefficient, doesn't scale, etc."

Between the people that are still all too willing to write off the Intel iGPU as sucky and those that feel like they have to defend anything AMD to a pathological extent, well, there is obviously not much to discuss.

Anyway, I hope sebbbi reacts to my post so I know if I got it right or if I misunderstood how those caches (within the ROPs) behave.

About Kaveri, has anybody heard anything about GDDR5, or will AMD stick to DDR3 again?
The few slides and news items I've read sadly hint at nothing of that sort.
I really hope AMD quickly finds a solution to let the iGPUs "breathe". I really hope they do. I was thinking about opening a topic on what AMD can do and, more broadly, how large caches (on- or off-chip) could affect GPU performance - especially on-chip cache and how that could trigger architectural changes (say, if texturing latencies are significantly lower, the number of threads the GPU has to deal with could be lower; see the rough sketch below).
Anyway, there were not many facts or data to discuss: Crystalwell is new, Durango is not released, and even if its scratchpad is not a cache it should provide the GPU with a relatively low-latency memory pool. I asked ERP for his pov; so far no news, and actually, having thought more about it, I now think it is simply better to wait & see.
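The latency/thread-count trade-off mentioned above can be made concrete with a Little's-law style estimate; the cycle counts below are made up purely for illustration:

```python
import math

def wavefronts_needed(mem_latency_cycles, alu_cycles_between_requests):
    """Wavefronts a SIMD must keep resident so ALU work covers memory latency."""
    return math.ceil(mem_latency_cycles / alu_cycles_between_requests)

# Texture fetches served from DRAM vs. a hypothetical large on-chip cache,
# with ~40 cycles of ALU work per wavefront between fetches (illustrative numbers).
print(wavefronts_needed(400, 40))   # 10 wavefronts to hide ~400 cycles
print(wavefronts_needed(100, 40))   # 3 wavefronts if the cache cuts latency to ~100
# Fewer resident wavefronts means less register-file and scheduling pressure,
# which is the kind of architectural knock-on effect speculated about above.
```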
 
Between the people that are still all too willing to write off the Intel iGPU as sucky and those that feel like they have to defend anything AMD to a pathological extent, well, there is obviously not much to discuss.
From the perspective of the hardware, Intel is not bad. But if we count the drivers, then Intel is sucky. My wife can't play her favorite games (CIV5 - slow, MineCraft - slow, Gothic 3 - 1 fps, SW: Kotor - doesn't run) on a Sandy Bridge laptop. She is using my Llano notebook for this.

About Kaveri, has anybody heard anything about GDDR5, or will AMD stick to DDR3 again?
In the first round there won't be a GDDR5 version.
 
How many design wins for Iris Pro?
Well, I'm not sure anything has been announced so far, but looking at the high price (really high price) I would be surprised if the part made it into any laptops outside of high-end ultrabooks and Macs.

Intel might have crazy margins on those, as buying a 4950HQ seems more expensive (I haven't checked prices again out of laziness...) than buying a Core i7 plus a discrete part.
Though AMD and Nvidia should be thankful to Intel and its margins, as the thing has the potential to kill anything below a GT 650M.
 
From the perspective of the hardware, Intel is not bad. But if we count the drivers, then Intel is sucky. My wife can't play her favorite games (CIV5 - slow, MineCraft - slow, Gothic 3 - 1 fps, SW: Kotor - doesn't run) on a Sandy Bridge laptop. She is using my Llano notebook for this.


Based on SB. In the meantime IVB and HSW launched. Also a new driver. Move on.
 
Based on SB. In the meantime IVB and HSW launched. Also a new driver. Move on.
No matter if IvyB or Haswell ... CIV5, MineCraft and Gothic 3 are still much slower than on my mobile Llano. The only advantage is that SW: Kotor can be executed with the newest driver on IvyB and Haswell, but not with SandyB.
Also, why "move on"? The problem is the sucky driver, not the hardware. Intel must support their products with much more care, like AMD and NVIDIA do.
 
No matter if IvyB or Haswell ... CIV5, MineCraft and Gothic 3 are still much slower than on my mobile Llano.


Proof? I don't even believe that you tested IVB and HSW, or IVB on 15.31, which is important in particular for the old Gothic engine due to the low clock issue in previous driver stacks. Your main target seems to be bashing Intel out of biased thinking. The mobile HSW HD 4600 is basically on par with an A10 mobile Trinity. You're moaning about the lack of a big socketed Intel GPU. Why don't you moan about AMD's lack of a mobile GPU lineup? The big GPU with full shaders and an acceptable GPU frequency (A10) is basically not available as an iGPU-only notebook. In the EU, have a look at A10 notebooks: http://skinflint.co.uk/eu/?cat=nb&xf=29_AMD+A10-4M~29_AMD+A10-5M#xf_top

Basically nothing is available as an iGPU-only notebook; they all come with crappy Crossfire because AMD itself recommends it. The really meaningful AMD APUs without a dedicated GPU are the A4, A6, A8 models ... all of them much slower than AMD's showcase A10 APU!!!
 