Trinity vs Ivy Bridge

I thought DDR3-1866 was already the sweet spot on Llano, at current prices.
I wonder what happens if you overclock the A6-5400K to hell, and how good the stock cooler is.

That CPU at 4.8 or 5 GHz, with 8 GB of DDR3-1866 and an SSD, a GPU overclock if it can be done, under quiet cooling and with a great <400 W PSU... it would be a fun "premium low end" build. I'm curious how that compares to the i3-3220 and 3225.
 
Nah, the biggest gains for Llano were between 1333 and 1600, somewhat tapering off after that.

[Resident Evil 5 memory scaling chart]


http://www.anandtech.com/show/4476/amd-a83850-review/4

Just a couple of examples, but this was the general finding in the tech press: a pretty large jump from 1333 to 1600, then diminishing returns. Trinity appears to just keep on scaling linearly up to 2400 MHz. I agree that at current prices DDR3-1866 is worthwhile for Llano now, though.

The overclocking benchmarks could be interesting with these Trinity chips. The 3870K was already very good in that regard.
 
Is there any real confirmation that Kaveri actually will come with eDRAM or a third memory channel or something? DDR4 doesn't seem likely, does it?
 
I mostly remember a benchmark where an A8 Llano with 1866 was 10% faster than with 1600; it was of course very game dependent and could behave like the above.
 
Trinity appears to just keep on scaling linearly up to 2400 MHz.
Not according to the numbers: http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics_8.html.
The step from 1600 to 1866 nets you about a 10% increase (for a 16.7% increase in bandwidth). Going from 1866 to 2400 (a 29% bandwidth increase) only gets you roughly another 10%.
Though the lower-end A8-5600K part is doing very well too. The A10-5800K has 50% more shader units, and on top of that a 5% GPU clock advantage - yet it only delivers about 5-20% more gaming performance (using tomshardware and anandtech numbers; OK, there's one exception, Minecraft). Obviously AMD was well aware of that and didn't bother disabling only one SIMD but went straight to disabling two, a third of the total units...
Still, faster memory obviously helps. Too bad it officially still only seems to support DDR3-1866. It would have been nice if AMD had bumped this up to DDR3-2133 (the fastest official JEDEC-approved DDR3). Maybe the memory controller isn't quite as good at higher frequencies (xbitlabs says it needs a 2T command rate above 1866, for instance, which could explain why scaling isn't nearly as good after that point, though you'd think that shouldn't make much of a difference for the GPU).
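A quick back-of-envelope with those numbers (treating the quoted percentages as exact, which of course they aren't) makes the drop-off clearer:

```python
# Back-of-envelope scaling check using the numbers quoted above.
# "Efficiency" = share of the extra bandwidth that shows up as extra fps.
steps = [
    # (from MT/s, to MT/s, approx. observed fps gain)
    (1600, 1866, 0.10),
    (1866, 2400, 0.10),
]
for lo, hi, fps_gain in steps:
    bw_gain = hi / lo - 1  # DDR3 bandwidth scales linearly with transfer rate
    print(f"{lo} -> {hi}: +{bw_gain:.1%} bandwidth, +{fps_gain:.0%} fps, "
          f"efficiency ~{fps_gain / bw_gain:.0%}")
```

That works out to roughly 60% of the extra bandwidth turning into frames below 1866, but only about 35% above it.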
 
Is there any real confirmation that Kaveri actually will come with eDRAM or a third memory channel or something? DDR4 doesn't seem likely, does it?

We know that at least on the desktop it will use the same socket as Trinity, so that rules out triple-channel memory.
It does, however, pack a GPU around 50% faster than Trinity's (33% more 'cores' on an architecture ~15% more efficient per clock), so unless AMD plans to dramatically reduce the clock speed, some sort of sideport memory (or eDRAM, if it is ready) is the logical solution.
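For what it's worth, the ~50% figure is just those two factors compounded, assuming the GPU clock stays roughly the same (an assumption on my part, not a confirmed spec):

```python
# 33% more shader cores compounded with ~15% better per-clock efficiency,
# at an assumed-equal GPU clock.
speedup = 1.33 * 1.15
print(f"~{speedup - 1:.0%} faster")  # -> ~53% faster
```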
 
Is there any real confirmation that Kaveri actually will come with eDRAM or a third memory channel or something? DDR4 doesn't seem likely, does it?
There's no way it's DDR4. It _might_ be available by the end of 2013, but there's no way it would fit into budget systems at that point. For that you're probably looking at the end of 2014 or sometime in 2015 - the successor of Kaveri at the earliest (and I don't even know what that's called...). Plus Kaveri should use the same FM2 socket as Trinity anyway.
Nothing has been confirmed w.r.t. eDRAM, a GDDR5 sideport (which would need to be on-package due to using the same socket) or anything else (and personally I'm not quite convinced it will have anything like that; I'm not even convinced that if Kaveri really has the rumored 4 MB L3 the GPU could use it, but hopefully it could, which might help a bit). The only thing that seems confirmed is 8 GCN CUs.
Though without any "bandwidth-boosting measures" it could indeed have trouble facing HSW GT3 - on the shader side GT3 isn't that far off (320 "ALUs" vs. 512 "ALUs" at probably similar frequencies), while GT3 would obviously outclass such a chip in the bandwidth department thanks to its L4. But since Intel seems to want to mostly ship HSW GT1/GT2, beating those shouldn't be a problem.
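To put a rough number on "isn't that far off" (the clocks below are my guesses, not confirmed specs, and peak FLOPS is obviously not the whole story):

```python
# Theoretical peak throughput: both designs can issue a fused multiply-add,
# i.e. 2 flops per "alu" per clock. The clock speeds are assumptions.
def peak_gflops(alus, ghz, flops_per_clock=2):
    return alus * flops_per_clock * ghz

print("HSW GT3, 320 alus @ ~1.0 GHz assumed:", peak_gflops(320, 1.0), "GFLOPS")
print("8-CU GCN, 512 alus @ ~0.8 GHz assumed:", peak_gflops(512, 0.8), "GFLOPS")
# -> 640 vs ~819 GFLOPS, so GT3 would be within ~30% on raw shader throughput
```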
 
There's a side port on Haswell, but it's for mobile only - where things are soldered in and you can more easily have a custom package.
It could be the same for Kaveri, or it could be stuck with dual-channel DDR3. We have no insight into side memory; it might well come in a future iteration.
 
I suppose the lower-TDP mobile parts won't be nearly as bandwidth limited, assuming the available bandwidth is just as high for them. And this is where Intel will be competing. The on-package memory should still give Haswell a power-efficiency boost, though.

I don't think Intel is playing that seriously in the desktop IGP game; they want to provide some baseline that's "good enough", whatever that means, but they're not really competing head-on with AMD there. It's not hard to see why, when you can outdo an IGP with a net cost increase of maybe $50-75 for a discrete GPU. The situation for mobile devices is totally different, where the IGP buys you better battery life and a smaller/lighter form factor. So I can see why Intel would be more interested in playing in the latter domain than the former.

I'm actually kind of wondering if AMD is that committed to desktop IGPs either. Does anyone know how Llano performed here? I'm curious because it took them such a long time to release desktop Trinity, even though it should have been ready ages ago. There was probably some bug or something they had to fix, but if this was any kind of priority for them, you'd think they could have taken care of it sooner.

I know that in the future, desktop IGPs are supposed to offer some computational benefit over discretes due to the closer coupling of CPU and GPU resources, but AMD has a really hard road ahead of them creating enough killer apps to sell people on this...
 
There's an interesting thing happening on the Linux desktop/laptop front. I know it's about 1% market share, but Intel is committing to drivers there; the open-source kind, which AMD and Nvidia can't really do because of patents and IP.
So AMD should at least do great work on proprietary drivers, but they have a reputation for only supporting their products for two years. This sucks. They don't have a great reputation on the GPGPU front either, in the same way, barring Bitcoin mining.

There are a lot of promises around HSA, but that only really starts with the generation after Trinity. CPU-only products will even disappear, on desktops and servers. So Kaveri and up look serious, but Trinity seems focused on consumer Windows products, with bones thrown to the Linux crowd that aren't quite enough given that a modern computer can last for ten years.
This is the meanest thing I can say against Trinity.
 
I don't think Intel is playing that seriously in the desktop IGP game; they want to provide some baseline that's "good enough", whatever that means, but they're not really competing head-on with AMD there. It's not hard to see why, when you can outdo an IGP with a net cost increase of maybe $50-75 for a discrete GPU. The situation for mobile devices is totally different, where the IGP buys you better battery life and a smaller/lighter form factor. So I can see why Intel would be more interested in playing in the latter domain than the former.

Is there really much of a supply of cheap sub-$70 GPUs anymore? Most sub-$100 GPUs are older designs like the HD 5570 and GT 440 with DDR3 memory that don't offer much over the Trinity IGP.
 
Is there really much of a supply of cheap sub-$70 GPUs anymore? Most sub-$100 GPUs are older designs like the HD 5570 and GT 440 with DDR3 memory that don't offer much over the Trinity IGP.

When I said $50-75, I was also factoring in the premium you pay for a Trinity IGP over something with the same (or better) CPU capability but no IGP. Of course, right now such a thing may not exactly be available, but Vishera will change that. Last I was aware, you also pay a slight premium for the motherboard.

Newegg is turning up a fair number of HD 7750s, 6670s, and 6750s (and one 6770) with GDDR5 that can probably outperform Trinity. And even when those discrete cards are using DDR3, they still have the advantage of a dedicated bus not shared with the CPU.
 
Motherboards for AMD APUs (the current Llano ones) are decently cheap, with only cosmetic differences between the two chipset options. Actually, you can ignore that there are two chipsets, because there are quite full-featured mobos with the lower-end one, as funny as that is.

The only cheaper AM3+ mobos would be the lowest-end ones that take a 95 W FX with no overclocking headroom, but they suck; they're more for the lower-power older CPUs. (You can still build a Sempron system with a GeForce 6100 if you want cheap and want to play Quake 1 under Linux.)
 
I think AMD has to go with eDRAM or something else in order to basically force the performance of entry-level discretes to be much higher than it is now. There would be a couple of pretty big bonuses to that: Nvidia would be further squeezed at the bottom end (I guess anything lower than a 640 is already pointless?), and there would be less chance of a combo of, say, a Pentium and an 8750-type card being a faster gaming machine than the Kaveris at a lower price.
 
I think AMD has to go with eDRAM or something else in order to basically force the performance of entry-level discretes to be much higher than it is now. There would be a couple of pretty big bonuses to that: Nvidia would be further squeezed at the bottom end (I guess anything lower than a 640 is already pointless?), and there would be less chance of a combo of, say, a Pentium and an 8750-type card being a faster gaming machine than the Kaveris at a lower price.

That's probably the plan. Kaveri will have 512 GCN SPs, up from Trinity's 384 VLIW-4 ones, so that alone should be quite a substantial bump. If there's no meaningful improvement on the memory front, it's going to be seriously bottlenecked—or just operate at lower clocks and save some power.

But Kaveri will be built on 28nm, and its successor on 20nm. If the latter is still a quad-core CPU, it will have a lot of silicon and power to "spend" on the GPU, so we could be looking at 1024 SPs, provided the bandwidth problem is solved. Really, if you can put high-speed memory on the APU's package, the sky is the limit in terms of GPU performance. And by sky I mean power. So perhaps 50W on notebooks and 125~150W on desktops. That seems really high, but if it replaces a lower-mainstream graphics card, or whatever you want to call it, it's not unreasonable, just as long as 15~25W notebook and 45~100W desktop SKUs are still available.
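To put the bandwidth problem in numbers, here's the DRAM bandwidth available per GPU FLOP for a few configurations, sticking with dual-channel DDR3. The GPU clocks and memory speeds are assumptions for illustration, not specs:

```python
# Bytes of DRAM bandwidth per GPU FLOP - a crude bottleneck indicator.
# GPU clocks and memory speeds below are illustrative assumptions.
def dual_channel_gb_s(mt_s):
    return 2 * 8 * mt_s * 1e6 / 1e9  # 2 channels x 8 bytes per transfer

configs = [
    ("Trinity, 384 SP", 384, 0.8, 1866),
    ("Kaveri, 512 SP", 512, 0.8, 1866),
    ("20 nm part, 1024 SP", 1024, 0.8, 2133),
]
for name, sps, ghz, mem in configs:
    gflops = sps * 2 * ghz  # 2 flops per SP per clock (FMA)
    bw = dual_channel_gb_s(mem)
    print(f"{name}: {gflops:6.1f} GFLOPS, {bw:4.1f} GB/s, "
          f"{bw / gflops:.3f} bytes/FLOP")
```

The ratio keeps shrinking with every step (roughly 0.049, 0.036, then 0.021 bytes per FLOP), which is why some kind of on-package memory looks unavoidable by the time you reach 1024 SPs.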
 
AMD is bandwidth limited and Intel is shader limited, and how they each fix these issues will decide the "winner" in the end.
Well, we don't know that Intel has a more efficient memory architecture, because their lower framerates at the same total bandwidth mean their controller is seeing less load. They may well run into the same problem when they increase their GPU power (BTW, we don't know for sure that it's all shader related).

However, the monster amounts of cache in Haswell could give Intel a huge advantage when they do run into the problem. I don't know if they're doing any tile-like rendering right now, but they could cover lower resolutions with only a few tiles, and will probably have the CPU resources to spare for command list recording and replaying, provided that their geometry processing is fast enough.
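For a sense of scale (the buffer format and cache sizes below are my assumptions, including the oft-rumored 128 MB L4): a 32-bit color plus 32-bit depth render target is small enough that a big cache covers it in very few tiles:

```python
import math

# How many cache-sized tiles does it take to cover a render target?
# Assumes 32-bit color + 32-bit depth (8 bytes/pixel); cache sizes assumed.
def tiles_needed(width, height, cache_bytes, bytes_per_pixel=8):
    return math.ceil(width * height * bytes_per_pixel / cache_bytes)

for w, h in [(1366, 768), (1920, 1080)]:
    buf_mb = w * h * 8 / 2**20
    print(f"{w}x{h}: {buf_mb:4.1f} MB of buffers -> "
          f"{tiles_needed(w, h, 8 * 2**20)} tile(s) with an 8 MB LLC, "
          f"{tiles_needed(w, h, 128 * 2**20)} with a 128 MB L4")
```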
 
But since Intel seems to want to mostly ship HSW GT1/GT2, beating those shouldn't be a problem.

It's almost guaranteed that even Trinity will beat Haswell's GPU in the desktop space, since there are no plans for a GT3 there. Even on mobile, the point of GT3 seems to be about perf/watt rather than pure performance.
 
[F1 2010 benchmark chart]

No reflection on the difference (it's a different motherboard, but still), and generally quite a few weird and unexplained numbers in that review.
 