Haswell vs Kaveri

In the first round there won't be a GDDR5 version.
I really hope they do one eventually; it's more than time.
Windows, be it 7 or 8, is no longer a memory hog; 4GB of GDDR5 would be fine.
(I think) They could take a lot of market share in emerging markets and in the low-end EU/NA market (core gamers playing casually, MMO/RTS players).
They have to market it properly and price it nicely, but it could be a great entry-level product on a simple micro-ATX board (cutting unnecessary features; the memory and APU would be soldered to the motherboard anyway, since we're not talking about an upgradeable product).

Ultimately I really hope they find a workaround to the severe bandwidth constraints faced by APUs.
I really wonder to what extent they could mimic Intel's Crystalwell. Intel claimed they would have been OK with only 32MB of cache. I wouldn't think that manufacturing 32MB of eDRAM on a 40nm process would be that costly, especially if AMD plans that move around its transition toward a 20nm process.
Looking at the die of Trinity/Richland, they have a roughly 50/50 split between CPU and GPU. They won't push past 4 cores, but they could increase the CU count to, say, 12.
They would end up with a significantly smaller chip, thus cheaper to produce, with higher yields, etc.
Maybe that could be enough to cover the cost of buying the eDRAM chip? 32MB should not be that big at 40nm, even less so if eDRAM becomes available on a 23/32nm process in the meantime.

Then there are the timeline and engineering manpower / how thinly human resources are spread at AMD :(
Anyway, they have to do something; Intel is getting too threatening on the graphics front.
 
Proof? I don't even believe that you tested IVB and HSW, or IVB on 15.31, which is important in particular for the old Gothic engine due to the low-clock issue in previous driver stacks.
Then don't believe it. It's fine by me. But take a look at the Civ5 and Minecraft results at Anandtech.
The turbo now works in Gothic 3, but the game still doesn't reach 10 fps or above, no matter which Intel IGP I use.

Your main target seems to be bashing Intel out of biased thinking.
No, I'm just bashing Intel for their sucky drivers.

Why don't you moan about AMD's lack of a mobile GPU lineup?
I don't care about dedicated GPUs. I've already suffered too much with Optimus and other software switching methods. I just want a working system with an IGP. I don't want Optimus/Enduro/Lucid-style software switching.

Basically nothing is available as an iGPU-only notebook; everything ships with crappy CrossFire because AMD itself recommends it. The really meaningful AMD APUs without a dedicated GPU are the A4, A6, and A8 models... all of them much slower than AMD's showcase A10 APU!
Buy from the USA. There are plenty of webshops.
 
Agreed, if you want to discuss the state of Intel's graphics drivers, it's probably best you start a new thread.
 
Trinity and Richland are both heavily bandwidth bound when the integrated GPU is used. Gaming performance scales up almost linearly when the memory is overclocked. These GPUs could perform better if they either had better/larger caches or access to faster memory.

In comparison, the HD 3000 (Sandy) and HD 4000 (Ivy) are not bandwidth bound; we see only minor gains from memory overclocks. This is explained by two things: first, Intel's HD 4000 GPU is slightly slower than the Trinity/Richland GPUs, and thus processes slightly less data and requires slightly less bandwidth. Second, Intel processors share their L3 cache (3MB-8MB) with the GPU. A large general-purpose read/write cache reduces the GPU's memory bandwidth requirements nicely. HD 4000 seems to be a very well balanced architecture (with regard to bandwidth). Intel did a good job.

GT3e increases the EU count from 16 to 40. That is a 2.5x raw performance boost. In order to actually get close to that much extra performance, Intel needed to improve bandwidth to roughly 2.5x that of Ivy Bridge. They could have added a quad-channel DDR3 memory controller and slightly increased the memory clocks. However, they chose instead to add a 128 MB L4 cache. This wasn't the cheapest choice (more research and manufacturing cost), but it was likely better for performance/watt than doubling the memory bus width.
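A quick back-of-envelope check of that 2.5x figure (a sketch with assumed transfer rates, not numbers from this thread): peak DRAM bandwidth is roughly channels × 8 bytes × transfer rate, which shows why the quad-channel option would only barely have kept up.

```python
# Back-of-envelope DRAM bandwidth; 64-bit channels = 8 bytes per transfer.
# Transfer rates below are illustrative assumptions, not measured figures.

def peak_bandwidth_gbs(channels: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a given channel count and transfer rate in MT/s."""
    return channels * 8 * mt_per_s / 1000.0

ivb_dual_ddr3_1600 = peak_bandwidth_gbs(2, 1600)   # ~25.6 GB/s, Ivy Bridge style
gt3e_target = 2.5 * ivb_dual_ddr3_1600             # ~64 GB/s to feed 2.5x the EUs
quad_ddr3_1866 = peak_bandwidth_gbs(4, 1866)       # ~59.7 GB/s, the rejected option

print(f"Dual-channel DDR3-1600:  {ivb_dual_ddr3_1600:.1f} GB/s")
print(f"2.5x target for GT3e:    {gt3e_target:.1f} GB/s")
print(f"Quad-channel DDR3-1866:  {quad_ddr3_1866:.1f} GB/s")
```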

AMD needs to do something drastic to their memory subsystem when they release Kaveri if they intend to improve GPU performance by 2x or more. GCN has a nice cache hierarchy compared to their old VLIW architectures (still used in Trinity and Richland). It should reduce memory bandwidth usage, but not anywhere near enough to double performance on the current dual-channel DDR3 memory architecture. They need to either improve the caches further (larger L2 and ROP caches) and introduce a triple-channel memory controller (Nehalem had one already, and we all remember the 6GB and 12GB memory configurations), or go directly to a wide (and expensive) quad-channel memory controller (similar to Sandy Bridge-E). I doubt AMD is going to introduce a huge L4 cache like Intel did for GT3e to solve their bandwidth issues. However, AMD cannot just do nothing. If they don't get more bandwidth by some means (or save bandwidth through clever tricks / caches), they cannot improve their integrated GPU performance any further. Their current APUs are already heavily bandwidth bound.
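To illustrate the "save bandwidth by clever tricks / caches" point, here is a toy model (with made-up hit rates, not Kaveri or GT3e data): a cache that absorbs a fraction of the GPU's memory traffic effectively multiplies the usable DRAM bandwidth by 1/(1 - hit rate), so a cache catching half the traffic is roughly as good as doubling the bus.

```python
# Toy model of how a GPU-visible cache stretches a fixed DRAM bandwidth.
# Hit rates are illustrative assumptions, not measurements of any real chip.

dram_bandwidth_gbs = 2 * 8 * 1866 / 1000.0  # dual-channel DDR3-1866, ~29.9 GB/s

for hit_rate in (0.00, 0.25, 0.50, 0.60):
    # Only misses go out to DRAM, so the GPU can generate this much traffic in total.
    effective_gbs = dram_bandwidth_gbs / (1.0 - hit_rate)
    print(f"hit rate {hit_rate:.0%}: effective bandwidth ~{effective_gbs:.1f} GB/s")
```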
 
The 5800K/6800K performs very close to a 6570 DDR3.

The 7750 DDR3 (1600MHz) is some 30% faster than this, but the GDDR5 version is more than 50% faster than the DDR3 version - http://ht4u.net/reviews/2012/msi_radeon_hd_7750_2gb_ddr3_test/index36.php
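For context on why the GDDR5 card pulls so far ahead (assuming the usual reference specs for both HD 7750 variants; board vendors vary them, and the linked review may differ): both use a 128-bit bus, so the gap is almost entirely down to transfer rate.

```python
# Rough peak-bandwidth comparison of the two HD 7750 variants.
# Clocks assume typical reference specs (an assumption, not taken from the linked review).

bus_bytes = 128 // 8  # 128-bit memory bus on both cards

ddr3_gbs  = bus_bytes * 1600 / 1000.0   # DDR3 at 1600 MT/s -> ~25.6 GB/s
gddr5_gbs = bus_bytes * 4500 / 1000.0   # GDDR5 at 4.5 Gbps -> ~72.0 GB/s

print(f"HD 7750 DDR3:  ~{ddr3_gbs:.1f} GB/s")
print(f"HD 7750 GDDR5: ~{gddr5_gbs:.1f} GB/s ({gddr5_gbs / ddr3_gbs:.1f}x)")
```

That the GDDR5 card has nearly 3x the bandwidth yet is only ~50% faster suggests it stops being bandwidth bound well before that point.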

AMD probably thinks a ~30% increase is good enough, and depending on how much the Steamroller core's single-threaded performance improves, it might be a little better than that with, say, 2133MHz DDR3.

In short, I don't see AMD going with GDDR5 or any other exotic memory configuration, I'm afraid. I hope I'm wrong, because the performance impairment from DDR3 is just sad now.

On the other hand, you will be getting Iris Pro-level performance for a third of the price, so it's not a total loss. :p
 
Am I the only person who thinks GDDR5m would work? I don't feel like Kaveri will be the only product using it, either: it'd work well on low-end GPUs as well.

AMD's already labeled it as an "interim solution between DDR3 and HBM."

You can say cost is an issue, but I would argue that there's simply no way to get over the bandwidth speed bump without dishing out the cash to upgrade the suspension. eDRAM raises cost, L3 cache raises cost (and 4MB of L3 likely wouldn't do much), GDDR5 raises cost, faster DDR3 raises cost, DDR4 raises cost, GDDR5m raises cost... the only other option would be to stick with DDR3-1333/1600 and have severely gimped performance.
 
GDDR5 is certainly an option, but 1) it costs more and 2) there's the SODIMM/expansion question. Is GDDR5 an option when these APUs mostly sell in low-price machines? I suppose they just need to support multiple memory technologies.

I'm surprised that they don't already have GDDR5 support. Undoubtedly it was thoroughly considered even for Llano.
 
I'm talking about GDDR5m. It supposedly solves the 8GB problem -- Gispel stated this earlier in this thread.

Is GDDR5 an option when these APUs mostly sell in low-price machines? Etc.
It doesn't have to go in every machine. Outfit the low-end ones with DDR3 and the high-end ones with GDDR5m, and you've got something very competitive in performance with GT3e, the GT 640, and the GT 650M, for a fraction of the former's cost and the latter two's power consumption.
 
For AMD's current APUs, I would also look at the other side of the chip that the GPU hooks into. The system queue and CPU memory hierarchy, which the GPU has thus far tried to avoid as much as possible, are still quite similar to when they were introduced with AMD's Sledgehammer.

Perhaps the protocols, SRQ, and crossbar arrangement need to be reviewed in light of the changes that have occurred in the many years since.
 
The A10-6800K IGP is the fastest solution in the socketed CPU market.
Right, and for the dozen people who care about buying a socketed CPU but absolutely will not buy a cheap discrete GPU for it and get a way better experience, great :) Seriously, I really do not understand who the target market for these chips is.

Then don't believe it. It's fine by me. But take a look at the Civ5 and Minecraft results at Anandtech.
Old driver, old version of Civ. EDIT: Ah, I see willard has already said to take that discussion elsewhere, excellent.

I don't care about dedicated GPUs. I've already suffered too much with Optimus and other software switching methods. I just want a working system with an IGP
There's no need for software switching in a desktop at all... what distinction are you drawing between a dedicated GPU and an IGP in a socketed desktop system in terms of user experience? Your insistence that you need a socketed CPU with an IGP over CPU + GPU seems arbitrary and not justified by either form factor or cost. For the same cost, you still get a better experience with a cheaper CPU and a cheap discrete GPU.

GDDR5 would certainly be better than pure DDR3 (for the GPU at least), but it's less power-efficient than eDRAM. For desktop power budgets, it's probably just fine, but the cache strategy really does scale exceptionally well.
 
Right, and for the dozen people who care about buying a socketed CPU but absolutely will not buy a cheap discrete GPU for it and get a way better experience, great :) Seriously, I really do not understand who the target market for these chips is.

General purpose machines meant for occasional gaming. Not a very big market, perhaps, but it's there.
 
General purpose machines meant for occasional gaming. Not a very big market, perhaps, but it's there.
I still don't get the argument though. Last I checked, you can still build a better machine for cheaper using a CPU+GPU with few - if any - compromises compared to the "big APU" solution. SoCs really make sense when you're space/power-constrained in a given form factor.

Also it seems to me that only gamers and businesses are buying desktops these days anyways. Almost all non-enthusiast consumers have moved to laptops for whatever reason.
 
I still don't get the argument though. Last I checked, you can still build a better machine for cheaper using a CPU+GPU with few - if any - compromises compared to the "big APU" solution. SoCs really make sense when you're space/power-constrained in a given form factor.

Not really. According to my favorite comparison shopping website, I could buy an A10-5700 for €114, which is somewhere between a Core i3-2100 and -2130 in performance. These cost €106 and €120, respectively.

Same price, but the AMD APU provides good graphics on top of it, and there's no way I could get a discrete GPU without completely blowing the budget. Having a single chip also makes maintenance and cooling a little simpler.

Also it seems to me that only gamers and businesses are buying desktops these days anyways. Almost all non-enthusiast consumers have moved to laptops for whatever reason.

That seems to be the trend, though I know non-enthusiasts who still prefer desktops, possibly because they find them more comfortable—which is part of the reason I favor them, too.
 
Choosing an APU requires you to make a pretty huge compromise. Buying an AMD APU forces you to give up single threaded performance, and even multithreaded performance can be disappointing. An Intel "APU" requires you to give up graphics performance, and have a weaker feature offering as well.

Neither of these issues is inherent to APUs, of course; it's just the current situation. AMD happens to be behind in the CPU world, and Intel happens to be behind in the graphics world. Not counting the unobtainable GT3e parts, that is.

Future APUs should look a lot better. Intel will round out their graphics performance, and AMD will round out their CPU performance. Things should look really interesting next year, or two years/generations from now.

From a cost perspective, I'd imagine that APUs should theoretically be better bang for your buck. Today though, except for the aforementioned form factor advantage, there's not a huge reason to pick an APU. I think what Andrew's getting at is that you can go even cheaper than an i3, like a Pentium or Celeron, and still game as well as or often better than on an APU, since games love that single-threaded performance. It's totally application dependent, though.
 
Not really. According to my favorite comparison shopping website, I could buy an A10-5700 for €114, which is somewhere between a Core i3-2100 and -2130 in performance. These cost €106 and €120, respectively.
I don't remember the specifics - and I find EU tends to be a little screwy on prices compared to NA - but I recall people discussing a ~$50+$50ish setup that would outperform the APUs. Just hunt back to the Trinity discussions.

Anyways, SoCs are indisputably going to destroy discrete in laptops and anything smaller, but it's not clear if anyone cares in desktops. If SoCs were to become popular in larger form factors I imagine it's going to be in all-in-ones, not conventional sockets.
 
APUs aren't meant to be the best bang-for-buck
That's not what I said.
They are meant to be the cheapest entry to gaming.
That is definitely not their main purpose.
If SoCs were to become popular in larger form factors I imagine it's going to be in all-in-ones, not conventional sockets.
Unfortunately, the enthusiast community tends to be extremely averse to change, unless those changes directly benefit peak and average performance. Performance is the only measuring stick of progress -- cut power in half while keeping performance flat, and in the eyes of internet hardware hobbyists, you've made no progress at all.

There will always be some complaint of "why did X company put Y hardware on my chip? I don't need Y hardware. We would be better off with more of Z hardware (even though most of the world doesn't need Z hardware)."

There's no doubt that you've heard the tired arguments that the IGP should be removed in favor of more cores or cache ever since the introduction of HD Graphics. The hardware enthusiast community consistently fails to see the big picture -- they are the 1%, yet they hold themselves to be more important than the 99%. If enthusiasts could have their chips built the way they wanted them, they couldn't afford it. The mainstream crowd will always drive what gets prioritized in SoC/CPU design. And even though your favorite company is working in the best interest of mainstream consumers, that is also in your best interest: at the end of the day, mainstream consumers are heavily subsidizing your CPU performance gains.

I'm sure that one day, discrete components will be all but gone from the mainstream market -- for the better. That's quite a few shrinks down the road, though.
 