The AMD/ATI "Fusion" CPU/GPU Discussion

Yes indeed, I actually agree with most of your points, and the licence issue is most pertinent, but what else can Nvidia do if they foresee a future where all that's left for them in the PC space is the very high end discrete market that they have to fight for against Intel and AMD/ATI?
I don't know what they can do, exactly. A few off-the-cuff ideas and concerns from an armchair quarterback follow.
Anyone is free to debate any of these points. I don't have the in-depth knowledge of each field to be certain of any of my analysis.

They need to contain the arena in which AMD/ATI may have an advantage:

In the very low-power/embedded space:
An AMD/ATI offering would permit lower platform costs from a single-chip configuration.
Nvidia should look into making a CPU for this space, but go for an ARM or MIPS license. x86 hasn't set the embedded world on fire, and thus far AMD is focused on Geode x86 processors (at least I think they're mostly x86).
Nvidia could just snap up one of the many small companies that already make hardware in this realm, or partner with one of the larger manufacturers.

Laptops:
The value end is basically an AMD/ATI playing field.
Even if Nvidia could create a decent x86 chip to pair with a GPU, they couldn't match a company with a high-end fab and the volume to cut costs. They will need a partner, or find some way to make life very hard for AMD.

Nvidia's going to have to find some way to out-innovate and differentiate its products with features and tie-ins that can make it worthwhile to not buy an integrated solution.
Until we see how the new culture and methods of operation form for AMD, we don't know how hard that will be. CPU manufacturers go by different rules, and that may lead to feature stagnation for ATI's future parts.

For everything between low and high-end laptops, Nvidia is going to do everything it can to portray the integrated solution in the most negative light that it can. This means something like TWIMTBP on steroids. Lots of sponsored developers, lots of software that has Nvidia-only bells and whistles, and lots and lots of (probably crooked) benchmarks.

There are some unknowns about the integration of GPU and CPU. The power density is going to be higher, meaning a more robust means of cooling is needed in a very tight space. On the other hand, the removal of a separate GPU might clean up the layout and airflow of the case.
This may have some bearing on the ultra-thin laptop market. Even if it doesn't, Nvidia can still say it does and parade the necessary expert studies to prove it.

The higher end will still favor discrete components, unless AMD goes really high-risk and includes some on-package DRAM, which would require some very significant design changes and a much higher unit cost.

PC space:
The low-price, low-performance segment again is very much an AMD/ATI strong point.
Without partnering with Intel, it is very unlikely Nvidia can hold this field at all without some heavy price cutting, which AMD could probably match.
The minute performance becomes even remotely an issue, however, Nvidia can take the advantage. The mid-end might not be all that profitable for AMD, because matching a discrete card would need a commensurately larger silicon investment, and cost scales quickly with larger die sizes.

Also, if combined CPU/GPUs make it into the higher levels of the PC market, then Nvidia has a possible chance of making money with every AMD/ATI chip sold.
The graphics board may be reduced to little more than a riser card in some low-cost computers, with just the RAMDAC and other associated port chips, but it also means that graphics output goes through a standard PCI-Express slot.
Any number of enhancements could be made to the board, and AMD is unlikely to become more than a reference board manufacturer.
Since AMD and ATI are consolidating a position on the production end of the graphics pipeline, Nvidia needs to build a position on the other end.
Nvidia could get into the business of designing and providing the components to the new interface boards. It could even make its own boards, something unlikely but much more viable than trying to start up a CPU business.
It could become the main supplier of chips that add functionality to the interface boards for all AMD/ATI CPU/GPUs.
Imagine what designers of an AIW board could do with all the real estate freed up by removing the GPU, its RAM, and its power regulation circuitry. (Another biggie: what happens to all those bargain motherboards that need better power regulators?)

AMD would probably tell its ATI engineers to stick to their core competency.
Nvidia can branch out and find a niche it can use as leverage against its competitor; in fact, it has to.
Nvidia could even get into the physics processing unit business, since it will now have all that board real estate AMD has freed up, and the integrated CPU/GPU would largely remove one leg in the triangle of contention between the PPU, CPU, and GPU.

Even better, once Intel gets in the game (assuming it doesn't partner with Nvidia) it would depress prices on AMD's GPU/CPUs.
If Intel steers clear of the board section, the only place with unrestricted margins would be the one that Nvidia carved out.

The high end could still be wide open. Unless AMD suddenly has spare capacity after all the multi-cores, laptop chips, and so on, the high-end ATI parts are likely to be manufactured somewhere else, negating the process advantage (unless they go with Chartered, which has agreed to make some AMD CPUs and has been given process help).
AMD's going to be using a lot of fab space just keeping up with Intel, especially in the server market. It's basically given up on desktop performance leadership for at least a processor generation.
ATI could get a lot more help with circuit design, but that's assuming that AMD doesn't need all the designers it can get for the CPU side.

Could ATI do the same thing? It might, if AMD sees it as worthwhile, but a PPU could hurt the viability of those quad-core CPUs, so cannibalization of sales is a concern.

Servers:
For servers that need little or no graphics, a quad-core AMD chip with one stripped-down GPU somewhere on die would be a good sell.
Look for Intel to quickly put its own product in.
I don't know if Nvidia was serious about this space before, and it's likely it won't be after.

Unless the server really needs serious graphics horsepower, I'm having a hard time thinking of a reason Nvidia can sell anything here.


Intel is the big wildcard Nvidia needs. It needs Intel to keep pressuring the CPU side, so that the drain on resources hurts the graphics side of AMD/ATI.
As long as Intel keeps hurting AMD, ATI will be hurt as well.

Or they could count on the AMD/ATI thing just not working out.
A lot of things could go wrong very easily. Both AMD and ATI have had deadlines slip on them, but now they'd have double the deadlines that could slip.

If AMD/ATI get along amazingly well, look to see who buys Nvidia, or who investors will revolt against if it doesn't happen.

You won't need pins as such; the transistors that make up a graphics chip will be incorporated into the die, just like a second CPU core is now. They didn't need to add more pins when they added a second core. HyperTransport seems to be getting regularly upgraded in speed and bandwidth, so maybe that's how they'll handle the issue, along with faster and wider memory. I'm sure that AMD didn't just announce Fusion without some idea of how they will handle the memory issues that come with it.
The GPU is going to need bandwidth. Four of them will need four times as much. They're so very parallel that they will use every bit of bandwidth available.
Even with newer memory buses, the pin counts will be staggering, not counting the need for more power and ground pins.
Unfortunately, pad density doesn't follow Moore's law, and socket pin density advances even more slowly.
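
To put rough numbers on that pin pressure, here's a quick back-of-envelope sketch in Python. The bus widths, data rates, bandwidth target, and the one-power/ground-pin-per-data-pin rule of thumb are all illustrative assumptions on my part, not real part specs or anything AMD has announced.

[code]
# Back-of-envelope sketch: how many extra pins a given memory bandwidth implies.
# All figures below are illustrative assumptions, not real part specifications.

def data_pins_for_bandwidth(target_gb_s, per_pin_rate_gt_s):
    """Data-bus width (in bits, i.e. pins) needed to reach a bandwidth target,
    ignoring address, command, and clock pins."""
    return int(round(target_gb_s * 8 / per_pin_rate_gt_s))  # GB/s -> Gb/s, then per-pin rate

# Dual-channel DDR2-800 today: 128 data pins * 0.8 GT/s / 8 = 12.8 GB/s
cpu_bandwidth = 128 * 0.8 / 8

# Hypothetical: suppose the on-die GPU wants mid-range-card bandwidth, ~40 GB/s,
# fed from commodity DRAM at DDR2-800 signalling rates.
extra_data_pins = data_pins_for_bandwidth(40.0, 0.8)   # -> roughly 400 extra data pins

# Rough rule of thumb (assumption): each data pin drags along about one extra
# power/ground pin, so the package pad count grows roughly twice as fast.
extra_pads_total = extra_data_pins * 2

print(f"Today's socket:  ~{cpu_bandwidth:.1f} GB/s from 128 data pins")
print(f"Extra data pins: ~{extra_data_pins} for a 40 GB/s on-die GPU")
print(f"Extra pads incl. power/ground: ~{extra_pads_total}")
[/code]

Even with those generous assumptions, a socket that carries a couple hundred data pins today would need several hundred more, which is exactly where pad and socket density become the bottleneck.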

Look at the number of pads on the bottom of a GPU package and look at the number of pins in a motherboard's socket.
Nvidia's discrete GPUs will have a much less constrained pin budget, because they don't need to share pad space with a CPU.

It is also possible that the substrates the CPU/GPUs will be mounted on will be significantly more expensive and complex.
Mounting defects also lead to junked chips, so to maintain manufacturability, AMD may rein in the GPU designers significantly when it comes to pad allocation.

I doubt Nvidia has the time or money to get into a price war with AMD, with whom they still have a lot of lucrative deals. Not only is "hoping the other guy breaks first" a poor strategy, it's just as likely to damage Nvidia.
But if it does not compete with a company that is now also a direct competitor, what is it going to do?
If it takes price cuts to keep up sales in areas where a CPU/GPU makes performance sense, why not do it? It is risky to cede market segments to AMD and let it focus on the areas where it isn't yet as competitive.
Nvidia certainly can't start building fabs and try to compete as an x86 manufacturer.
Via would kill it on the low-power super-cheap end, and the rest is AMD and Intel.

That's been mooted, but it seems that the personalities at the top won't let that happen.
If AMD's gamble works, things will change.
If the execs will not change, there are boards that will change the execs, or competitors that will change the companies.
 
Tim Sweeney on this topic from the latest Q&A at FiringSquad

Tim Sweeney said:
From here on, there is really only one major step remaining in the evolution of graphics hardware, and that's the eventual unification of CPU and GPU architectures into uniform hardware capable of supporting both efficiently. After that, the next 20 years of evolution in computing will just bring additional performance.

Tim Sweeney said:
Talk of "adding physics features to GPUs" and so on misses the larger trend, that the past 12 years of dedicated GPU hardware will end abruptly at some point, and thereafter all interesting features -- graphics, physics, sound, AI -- will be software problems exclusively.

The big thing that CPU and GPU makers should be worrying about is this convergence, and how to go about developing, shipping, marketing, and evolving a single architecture for computing and graphics. This upcoming step is going to change the nature of both computing and graphics in a fundamental way, creating great opportunities for the PC market, console markets, and almost all areas of computing.
 
All I could glean from various sites is that they want something done in 2 years, the GPU on die would be an extension to x86 (aka the real 3DNow!), the video output from the CPU/GPU would be digital, and any analog bits would be on the motherboard or a riser card. It would also use the memory crossbar, of course; I wouldn't be surprised if ATI's ring setup were used. Having said that, until there is product it's all just brainstorming.

Should be fun. :)
 
Greetings,

First of all I would like to ask for a few opinions about the target market for Fusion. Do you think it will be targeted at the high-end market? I do think so, and I have some evidence to back me up. The first, and most compelling, piece of evidence is here:

http://www.dailytech.com/article.aspx?newsid=3471. If you look at the diagram in the upper right corner, you will see that for graphics-intensive apps, one of which is gaming, the GPU will be more powerful than the CPU. The second reason, which is somewhat less compelling, comes from the Terascale project articles. From what Intel has said, they will add specific task engines to their CPUs, one of which is graphics, and they showed an image with three compute cores replaced by three cores for handling textures. If this approach can be applied to make an on-die GPU, it should certainly bring high performance.

If it happens to be targeted at the high-end market, do you think they will be able to add a core for FSAA, like the Xenos daughter die?

Your opinions are most welcome.
 
If it happens to be targeted at the high-end market, do you think they will be able to add a core for FSAA, like the Xenos daughter die?
Maybe embedded in the chip.
With a 16nm process it probably would not cost you more than 20mm2 of die area.

One of the most important aspects of the AMD/ATI "Fusion" is the creation of a larger installed base of graphics-enabled personal computers. This is key to attracting and advancing the development of games, which is "what it's all about, really!" :smile2:
 
I predict you're going to see more licensing deals between Intel and NVidia, probably starting out with an IGP license, and later progressing to a deal for an on-CPU core. There is simply no way Intel can produce a core on par with the cores ATI is going to produce for AMD, no matter how many engineers they put on GMA*. A scaled-up on-core GMA* won't be competitive. And if they go shopping, there is only one company left which can produce the kind of designs they need. I don't think they'll acquire NVidia, any more than they bought Imagination Technologies to use the SGX. They don't really need to. All they need is a license to a core, potentially free for future modification and customization by Intel engineers. This can either be a one-time deal, a per-chip royalty, or an ongoing relationship whereby Nvidia is contracted to develop the chip-integrated GPU portions according to Intel requirements.

I think it's as inevitable as Sony's last-minute RSX contract when their own attempts to build a CELL/GS rasterizer failed.
 
Does anyone think that the rumours (admittedly, mainly from The Inq) about NVidia intending to branch out into the CPU market hold any water? I'm a bit sceptical about it myself but if it was true then would NVidia even wish to license a core to Intel if they intended to take on "Fusion" directly themselves?
 
The dynamics will change once NVIDIA acquires a fab and enters (or keeps!) a technology partnership with IBM. There, I said it. Speculative food for thought... :)

Uttar
 
The dynamics will change once NVIDIA acquires a fab and enters (or keeps!) a technology partnership with IBM. There, I said it. Speculative food for thought... :)

... or gets into a strategic partnership with Intel and gains access to their advanced process and fab capabilities. There, I said it. ;)
 
I predict you're going to see more licensing deals between Intel and NVidia, probably starting out with an IGP license, and later progressing to a deal for an on-CPU core. There is simply no way Intel can produce a core on par with the cores ATI is going to produce for AMD, no matter how many engineers they put on GMA*. A scaled-up on-core GMA* won't be competitive. And if they go shopping, there is only one company left which can produce the kind of designs they need. I don't think they'll acquire NVidia, any more than they bought Imagination Technologies to use the SGX. They don't really need to. All they need is a license to a core, potentially free for future modification and customization by Intel engineers. This can either be a one-time deal, a per-chip royalty, or an ongoing relationship whereby Nvidia is contracted to develop the chip-integrated GPU portions according to Intel requirements.

I think it's as inevitable as Sony's last-minute RSX contract when their own attempts to build a CELL/GS rasterizer failed.

Is Intel really interested in entering the standalone (let alone above-budget) GPU market at all? I'd personally figure they aren't, and for anything else they're at least theoretically already covered by their Eurasia/SGX license, which was signed in spring 2005. That's enough time for integration, and no, the deal is in my mind most certainly not restricted to the PDA/mobile market.

Initial PowerVR SGX parts targeted mobile phone and handheld devices, and further PowerVR SGX IP cores target consumer, automotive and portable/mobile computing, and portable/desktop computing.

http://www.imgtec.com/Investors/IMGAnnualReport2006.pdf

http://www.mercurynews.com/mld/merc...29.htm?template=contentModules/printstory.jsp
 
Intel gets more $/mm2 from their fabs than NVIDIA could ever dream of. Why would Intel do such a thing, except perhaps to fuck with AMD?

Uttar
 
Intel gets more $/mm2 from their fabs than NVIDIA could ever dream of.

As do AMD.

Which is why I think we'll only see Fusion products for markets where the capability edge equates to fatter margins. That means laptops, where the higher performance/power and smaller form factor translates directly into $$$.

Cheers
 
Which is why I think we'll only see Fusion products for markets where the capability edge equates to fatter margins. That means laptops, where the higher performance/power and smaller form factor translates directly into $$$.
IMO, AMD presented it as a way to gain marketshare, not improve margins! So basically, gain marketshare in the low-end/mid-end CPU by having a lower total cost for the OEM than that of Intel, while keeping appropriate margins. That's purely my understanding based on what AMD said in an old CC at the time the merger was announced, though.

Uttar
 
IMO, AMD presented it as a way to gain marketshare, not improve margins! So basically, gain marketshare in the low-end/mid-end CPU by having a lower total cost for the OEM than that of Intel, while keeping appropriate margins.

That's really arguing the same thing, no? Improved margins are a prerequisite for lowering prices (if you like to stay profitable, that is).

You either have:
1. Better product with same cost/margins -> improve marketshare by offering a better product at the same price.
2. Same product with lower cost/better margins -> improve marketshare by lowering prices.

I disagree with your categorizing it as low-/mid-end CPUs; that is a purely desktop-centric (or rather deskside-centric) view. As soon as you factor in physical size and performance/power, Fusion products should have a clear advantage over discrete solutions.

Cheers
 
I disagree with your categorizing it as low-/mid-end CPUs; that is a purely desktop-centric (or rather deskside-centric) view. As soon as you factor in physical size and performance/power, Fusion products should have a clear advantage over discrete solutions.

That's going to depend on the target for the device. For desktops that need high performance, a discrete card will have dedicated RAM offering several times the bandwidth that the DIMMs on a motherboard can provide.
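
A rough comparison with typical 2006-era parts (the specific bus widths and data rates below are illustrative figures I've picked, not anything tied to Fusion), sketched in Python:

[code]
# Shared system memory vs. dedicated graphics memory, illustrative 2006-era figures.

def bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    """Peak bandwidth in GB/s for a given bus width and per-pin transfer rate."""
    return bus_width_bits * data_rate_gt_s / 8

shared = bandwidth_gb_s(128, 0.8)     # dual-channel DDR2-800 DIMMs on the motherboard
dedicated = bandwidth_gb_s(256, 1.6)  # 256-bit GDDR3 at 1.6 GT/s on a high-end card

print(f"Shared DIMMs:   ~{shared:.1f} GB/s (and the CPU competes for it)")
print(f"Dedicated VRAM: ~{dedicated:.1f} GB/s, roughly {dedicated / shared:.0f}x more")
[/code]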

There's likely a market for high-end CPUs with a basic GPU in areas where high CPU load is paired with little need for graphics, but in other areas the problem becomes more complex.

A high-performing CPU tends to have a larger die than a lower-performing part, and the same holds for a GPU. The trouble is, the manufacturing cost of a chip doesn't scale linearly with die size.
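
As a minimal sketch of that non-linearity, here's a simple Poisson yield model in Python; the wafer cost, wafer size, and defect density are made-up illustrative numbers, and edge loss is ignored for brevity:

[code]
# Why cost per good die grows faster than die area: a simple Poisson yield model.
# Wafer cost, wafer diameter, and defect density are illustrative assumptions.
import math

def cost_per_good_die(die_area_mm2, wafer_cost=5000.0, wafer_diam_mm=300.0,
                      defects_per_cm2=0.5):
    wafer_area_mm2 = math.pi * (wafer_diam_mm / 2) ** 2
    gross_dies = wafer_area_mm2 / die_area_mm2                          # ignores edge loss
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)  # Poisson yield
    return wafer_cost / (gross_dies * yield_fraction)

for area_mm2 in (100, 200, 300, 400):
    print(f"{area_mm2:3d} mm^2 die: ~${cost_per_good_die(area_mm2):7.0f} per good die")
[/code]

With these assumed numbers, quadrupling the die area raises the cost per good die by well over an order of magnitude, which is why fusing a large GPU onto an already large CPU die costs far more than building two medium-sized chips.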

There's also a limit to how huge a chip can get. The rumored G80 die size is already large. Some chips out there are nearing the physical limits imposed by the size of the reticle used for photolithography.

Past a certain point, it will be significantly more expensive to produce a chip with both a GPU and CPU on board than it is to make them separately, and past a certain point, bandwidth and interface concerns will make discrete solutions more viable.

The tipping point may vary over time, depending on design, process, and experience with both, but it could be that the threshold is somewhere in the low-mid range for desktops.
 