The AMD/ATI "Fusion" CPU/GPU Discussion

DemoCoder

Veteran
So, they have a CPU ready? :p

Does AMD (the combo CPU/GPU)? And does Nvidia even need to counter an integrated CPU/GPU low-end system? I'm not sure one has to counter everything a competitor does. The worst-case loss for Nvidia, if the entire MCP business unit went away, would be a 17% reduction in revenues. But MCP is already pricier and more difficult to implement than other core logic and integrated chipsets out there, yet Nvidia has increased its market share, so price and ease of implementation alone can't be the determining factors here (no matter how desirable a combo low-end unit looks).

SoC designs look like big wins, but historically they seem to win most in mobile applications, where form factor, density, power, heat, and cost matter more. The more "mobile" the device, the more important these factors become. By contrast, I'm not sure they are all that important for mid-range PC desktops. Fifteen years ago, SoC designs were "coming" for desktops. The hype was that your CPU would have integrated IDE, memory, UARTs, VGA, a memory controller, etc. You'd basically just have to hook up memory and a disk and be done. That utopia never arrived, and I don't really see it arriving for desktops. Frankly, I'd always want a separate CPU with 100% of its transistors used for the CPU in my business desktop or mid-range system, because it's not a gaming platform but a platform for doing work. It needs to run my IDE, spreadsheet, browser, and enterprise apps well.

I think, paradoxically, AMD's combo CPU/GPU is more of a threat to Nvidia's notebook revenues. IMHO.

AMD is making a big bet that integrated on-CPU gfx is going to be a big winning factor in markets Intel completely dominates today, and I just don't think Intel's dominance of the market is based on "better" technology.
 
I think we're definitely seeing more integration even for desktops. We already have integrated memory controllers, and integrated PCI-E controllers are coming as well. And now graphics too. I think the next few years will see this trend speed up.

Soon our mainboards will just have one chip on them :LOL:
 
I believe this time things are a tad different; integration makes more sense as GPUs and CPUs are in some ways converging toward a common point (this does not mean that in the end we will have a single architecture for both, far from it).
 
Yes, but those integrated controllers take up a much smaller piece of the CPU in comparison to an onboard GPU. Power consumption for non-battery-powered non-servers isn't a big deal (and servers need no gfx anyway), and the economies of scale are different for mobile devices vs desktops. Mobile handheld devices, for example, are typically produced at 10x the rate of desktops; the volume of mobile phones far outstrips that of PCs.

Yes, you're seeing integration, but despite this "trend", NVidia is still increasing market share while others are shipping more integrated and cheaper systems.

What happens, however, when you're trying to compete against Intel on CPU speed, yet your GPU is eating up anywhere from 20-50% of your transistor budget and making your chip more costly to engineer properly, as well as harder to test/verify, and probably lowering yields? (Of course, chips with damaged GPUs will be sold as CPU-only parts with the GPU deactivated.) Do enterprises and business customers buying mid-range desktops really care about graphics? Unless they are in DCC/CAD/etc., the answer is: not much (and those who do care would buy workstations).

Tight integration (on chip) makes a hell of a lot more sense in the notebook and handheld markets, where its value proposition is quite clear -- it enables devices to be built that simply could not be built before because of size, heat, power, and cost limits.

I don't doubt that integration will make inroads in the market, since in the end it will allow higher-volume production and lower implementation costs for vendors already on razor-thin industry margins. I just don't think it is the panacea, the knockout blow to Intel, NVidia, and other competitors, that has been pitched. I think, rather, that it will split the core market and find niches where it dominates. One possibility is cheapo "eMachines"-style home PCs/media PCs/Mac mini-type systems with game performance that doesn't suck Intel-style. This is the "console PC" platform approach.
 
I believe this time things are a tad different; integration makes more sense as GPUs and CPUs are in some ways converging toward a common point (this does not mean that in the end we will have a single architecture for both, far from it).

I know what you're getting at, but do you really think AMD's next CPU is going to be based on an architecture which shares functional ALU units with the GPU in the way we've been discussing in the Console forums? I bet it will most likely be a K8/K8L derivative with a GPU bolted on, and a high-speed bus between them, with perhaps a shared memory controller, perhaps even sharing the L2 cache, running the GPU in a different clock domain.

Trying to schedule streaming mega-threaded GPU ops with a multi-gigahertz serial OoOE CPU just seems like way too much for them to take on.

In any case, having so many functional units available to the CPU, along with mega-threading, would point to superior Niagara-style server performance, and not necessarily better CPU performance on desktop apps. After all, today's CPUs already have trouble keeping their functional units 100% busy and fed, and today's desktop software by and large is not mega-threaded. It would be a solution looking for a problem.
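
(To make that "busy and fed" point concrete, here is a minimal C sketch -- purely illustrative, nothing from any real AMD design -- contrasting the dependency-bound code that dominates desktop apps with the kind of streaming loop that lots of extra units and threads actually help:)

```c
#include <stddef.h>

struct node { struct node *next; int payload; };

/* Latency-bound: each load depends on the previous one, so no amount of
 * extra functional units or hardware threads speeds up the chain.      */
int walk_list(const struct node *n)
{
    int sum = 0;
    while (n) {
        sum += n->payload;   /* serialized on the pointer chase */
        n = n->next;
    }
    return sum;
}

/* Throughput-bound: every iteration is independent, so it maps nicely
 * onto many ALUs or many threads -- the kind of work a Niagara-style
 * or GPU-style design is actually built for.                           */
void scale_array(float *dst, const float *src, float k, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        dst[i] = k * src[i];
}
```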
 

Even Intel is looking to implement some co-processors on their chips in the future. I don't see an integrated GPU as just doing graphics work though. If the level of integration is high, you can have your integrated "GPU" do any sort of streaming or number crunching work. So I don't think all those transistors are going to waste. Starts to sound like an improved version of Cell...
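
(A hedged sketch of what that could look like from the software side, purely to illustrate the idea -- `dispatch_stream` and the rest are made-up names, not any announced AMD or DirectX interface:)

```c
/* Hypothetical sketch: the "GPU" half of the die treated as a pool of
 * stream processors that can be handed non-graphics work.  Every name
 * here is invented for illustration.                                  */
#include <stddef.h>

typedef void (*stream_kernel)(float *out, const float *in, size_t i);

/* Toy "dispatcher": on a real integrated part the driver would queue
 * the kernel onto the GPU's thread scheduler; here we just loop.      */
static void dispatch_stream(stream_kernel k, float *out,
                            const float *in, size_t n)
{
    for (size_t i = 0; i < n; ++i)   /* each i is an independent "thread" */
        k(out, in, i);
}

/* Example of non-graphics number crunching: a per-sample transform.   */
static void square_samples(float *out, const float *in, size_t i)
{
    out[i] = in[i] * in[i];
}
```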
 
I don't see an integrated GPU as just doing graphics work though. If the level of integration is high, you can have your integrated "GPU" do any sort of streaming or number crunching work.

What number crunching work? What sort of number crunching work is necessary for a typical office/business desktop application that can't be satisfied by a modern CPU?
 
I know what you're getting at, but do you really think AMD's next CPU is going to be based on an architecture which shares functional ALU units with the GPU in the way we've been discussing in the Console forums?
No, I don't believe that..even though the idea sounds nice on paper.
I bet it will most likely be a K8/K8L derivative with a GPU bolted on, and a high-speed bus between them, with perhaps a shared memory controller, perhaps even sharing the L2 cache, running the GPU in a different clock domain.
Sharing an L2 cache? umh..dunno. I would like to see some ISA extension on the x86 side to control the GPU.
In any case, having so many functional units available to the CPU, along with mega-threading, would point to superior Niagara-style server performance, and not necessarily better CPU performance on desktop apps. After all, today's CPUs already have trouble keeping their functional units 100% busy and fed, and today's desktop software by and large is not mega-threaded. It would be a solution looking for a problem.
At the same time I wonder what kind of approach a company like Nvidia would use to address the same problem (CPU and GPU integration).
For the sake of discussion let's assume all these rumours about them starting to develop an integrated (x86-compatible) part are true.. do you see them implementing an x86 core from scratch? Dunno if it makes much sense.. I believe alternative approaches should exist.. approaches that leverage their next-gen (no, not G80..) architecture.

Marco
 
GPGPU work doesn't require on-core integration. If you're using the CELL model, the "CPU" in CELL (the PPE) is really more of a work scheduler and router. There may be a subclass of algorithms that requires low-latency, high-throughput communication between CPU and GPU, but that would be betting an entire architecture on an unproven niche -- that either (a) developers will specifically target this proprietary GPGPU architecture, or (b) people will buy it to run HPC applications. Seems like an extremely risky prospect.
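
(Toy illustration of that scheduler/router role -- just the shape of the pattern in plain C, not Sony's actual Cell SDK:)

```c
/* The "PPE as scheduler/router" idea: the general-purpose core mostly
 * cuts work into chunks and routes them to throughput engines.        */
#include <stddef.h>

#define NUM_ENGINES 8            /* think SPEs, or GPU shader clusters */

struct chunk { float *data; size_t len; };

/* Stand-in for "send this chunk to engine e and kick it off".  On real
 * hardware this would be a DMA transfer plus a mailbox write; here the
 * "engine" is just a function run inline.                             */
static void run_on_engine(int e, struct chunk c)
{
    (void)e;
    for (size_t i = 0; i < c.len; ++i)
        c.data[i] *= 2.0f;       /* whatever the streaming job is */
}

/* The scheduler part: no heavy math, just slicing and routing. */
static void schedule(float *data, size_t n)
{
    size_t per = (n + NUM_ENGINES - 1) / NUM_ENGINES;
    for (int e = 0; e < NUM_ENGINES; ++e) {
        size_t start = (size_t)e * per;
        if (start >= n)
            break;
        size_t len = (start + per > n) ? n - start : per;
        struct chunk c = { data + start, len };
        run_on_engine(e, c);
    }
}
```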

I don't think the value proposition of AMD's combo chip is going to be based on selling that it can run some rare algorithms faster. I think the value proposition they will attempt to pitch is cost/performance, which is why I think it'll be a CPU with a low-end GPU core bolted on (a DX10 "X1300" or less), a high-speed bus, and some stuff removed. I don't think GPU and CPU pipelines will be shared, nor ALUs.
 
Sharing an L2 cache? umh..dunno. I would like to see some ISA extension on the x86 side to control the GPU.

Think locking L2 to fetch data from L2 into GPU.

At the same time I wonder what kind of approach a company like Nvidia would use to address the same problem (CPU and GPU integration).
For the sake of discussion let's assume all these rumours about them starting to develop an integrated (x86-compatible) part are true.. do you see them implementing an x86 core from scratch?

No, I don't see a from scratch approach. Then again, no one has bought Transmeta yet. :)

If not, then I would look for NVidia to try and license an integrated core to Intel in a partnership, booting poor PowerVR out of Intel's graces. Presumably, Intel would be outperformed by AMD/ATI's onchip GPU and would go shopping just like Sony did, and fall into NVidia's arms. Actually, the deal might be quite favorable for NVidia just like the RSX deal.
 
Think locking L2 to fetch data from L2 into GPU.
Ah ok.. I thought you were suggesting some kind of 'real' sharing (both entities coherently accessing the L2).
No, I don't see a from scratch approach. Then again, no one has bought Transmeta yet. :)
LOL, no..I was thinking about something different, something that works purely on a hw level.
If not, then I would look for NVidia to try and license an integrated core to Intel in a partnership, booting poor PowerVR out of Intel's graces.
But it seems Intel is heavily investing in graphics, hiring lots and lots of engineers..
 
LOL, no..I was thinking about something different, something that works purely on a hw level.
But it seems Intel is heavily investing in graphics, hiring lots and lots of engineers..

Yeah, but unless they hire former ATI/NVidia engineers, it takes time to build expertise and figure out how to solve all of the problems that ATI and NVidia already solved through years of effort. Many companies have tried and failed to enter the DX8/DX9 market over the years, and they always end up way behind ATI/NV. Intel can hire all the new guys they want, but I think they're still going to need time: time that they might not necessarily have if AMD puts pressure on them in 2008.
 
What number crunching work? What sort of number crunching work is necessary for a typical office/business desktop application that can't be satisfied by a modern CPU?

And thus the integrated GPU will be used as a GPU in this case :p

edit: Yeah agree with above, this is getting way off topic
 
Done. Hopefully this makes for an active-enough thread on its own :)
WRT the L2, I assume the chip could intelligently and dynamically reserve part of its L2 for the GPU portion of the die. I'm not convinced that kind of complication is worth the most likely minor gain you'd get from it, but it'd be interesting anyhow!
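
(For what it's worth, a back-of-the-envelope sketch of what "reserving part of the L2" could mean if it were done by way partitioning -- entirely hypothetical, not any real AMD mechanism:)

```c
/* Hypothetical way-partitioning bookkeeping for a shared L2: give the
 * GPU block some ways, leave the rest to the CPU cores.  A real chip
 * would also have to flush/migrate lines when repartitioning.         */
#include <stdint.h>

#define L2_WAYS 16u

struct l2_policy {
    uint32_t cpu_way_mask;   /* ways the CPU cores may allocate into */
    uint32_t gpu_way_mask;   /* ways the GPU block may allocate into */
};

static struct l2_policy partition_l2(unsigned gpu_ways)
{
    if (gpu_ways > L2_WAYS)
        gpu_ways = L2_WAYS;
    uint32_t gpu_mask = (gpu_ways == 0) ? 0u : ((1u << gpu_ways) - 1u);
    struct l2_policy p = {
        .cpu_way_mask = ~gpu_mask & ((1u << L2_WAYS) - 1u),
        .gpu_way_mask = gpu_mask,
    };
    return p;
}
```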

Uttar
 
I think the combined CPU/GPU could be the thing to take over from all but the high end. Right now we have quad-core CPUs arriving where a couple of the cores just don't really have much to do. A lot of jobs are fairly linear, and a lot of people work in a fairly linear way, so why not take a couple of those cores and turn them into graphics processors?

It would be cheaper (no boards to deal with, only transistors), faster (direct memory integration) and would use transistors that would otherwise spend most of their time idle.

Sure, it will never compete with a quad core dedicated to CPU work plus a top-end G90/R700 come 2008, but it will be able to offer a significant portion of that top-end processing power to the mainstream. It's looking to replace probably all of the low-end, mid-range, and maybe even the slower high-end parts, as part of the CPU you buy.
 
A person at the Inquirer wrote this:

http://www.theinquirer.net/default.aspx?article=35342

I know it is too long, and you know I am a good guy, so I'll summarise what he says:

'GPU integration is bad and will bring too many problems; adding FP units is good and will increase performance.'

So, in other words, he says AMD's Fusion processors will be a failure while Intel's Nehalem will rule. Huh, meet me again in two years.

I believe that AMD Fusion will rule, and I don't really exclude the possibility of it becoming high-end. One of the reasons is in this Beyond3D thread I made:
http://www.beyond3d.com/forum/showthread.php?t=35014

while the other reason, which is more to the point, is here:
http://www.dailytech.com/article.aspx?newsid=3471 (look at the diagram on the right)
The diagram states that for graphics-intensive tasks (one of which is gaming), the on-die GPU will be more powerful than the CPU; in other words, this is high-end integration. And don't forget that future physics processing will be done on the GPU, so this is certainly a good idea. Hopefully, by 2008, shaders will become more general-purpose, allowing them to take on more tasks. Long live AMD, and long live Fusion.
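
(As an aside on the physics point: the reason physics maps well onto shader-style hardware is that each particle update is independent, so a general-purpose shader array can run them all in parallel. A rough sketch in plain C, not any real GPU physics API:)

```c
#include <stddef.h>

struct particle { float x, y, z, vx, vy, vz; };

/* One "shader thread" per particle: simple Euler integration under
 * gravity.  Every iteration is independent of the others.           */
static void integrate(struct particle *p, size_t n, float dt, float g)
{
    for (size_t i = 0; i < n; ++i) {
        p[i].vy -= g * dt;
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}
```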
 
As I postulated somewhere else in this forum a while ago, the real issue is getting the software boys to code for multiple cores. The GPU guys are further along here than the CPU guys because of OpenGL and DirectX. One day MacroShaft (...did I spell that right... ;) ) is going to come along with DirectCPU (tm) to isolate the software from the messy and troublesome core-specific stuff, and AMD is just getting ready for that day...
 
I was just referring to the fact that ATI is now AMD, and we have a line of discrete x86 CPUs.
 