New Fusion information, but now in English!

Hello,

Anandtech and TG Daily have put up interesting articles about AMD's Fusion.

Anandtech Article
TGdaily Article

What are your thoughts on the three levels of integration described?

Feel free to comment.

See ya

The only thing I feel like commenting on at the moment is why both Intel and AMD feel that exposing an ISA for graphics is a good thing. Especially in light of the current G80 and R600 architecture discussion: if one particular design won out and became popular, you're stuck with it even if the workload changes to favor the other.

If you hide that behind an API, you still need to be concerned with the architecture when coding for performance in the present, but the architecture can be changed in the future when the workload dictates.
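
To make that concrete: with something like OpenGL you ship shader *source*, and the driver's compiler targets whatever native ISA is underneath, so G80-style scalar and R600-style VLIW hardware can both run the same program. A minimal sketch in C (the error handling and setup are my own illustration; it assumes a GL 2.0+ context and entry points are already available, e.g. via an extension loader):

Code:

/* Sketch: shipping shader *source* through an API keeps the GPU ISA hidden.
 * The driver's compiler targets whatever is underneath -- G80-style scalar,
 * R600-style VLIW, or something else entirely. */
#include <GL/gl.h>
#include <stdio.h>

GLuint compile_fragment_shader(const char *src)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &src, NULL); /* hand the driver portable source  */
    glCompileShader(sh);               /* native ISA is emitted right here */

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(sh, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return sh;
}

Ship precompiled native ISA binaries instead, and that freedom to change the hardware underneath is gone.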
 
I see AMD hasn't mentioned how they're going to handle the memory problem.

If we want more than mediocre graphics performance, the GPU portion of Fusion is going to need maybe 40-100 GB/sec. It might need more, depending on what happens by 2010.

A single high-performance CPU core at that point may likely get away with something between 10 and 20 GB/sec.

A GPU could likely tolerate a few hundred nanoseconds of memory access latency, while CPUs right now like an average best case of less than fifty nanoseconds. For better performance, future CPUs may be fighting to get the latency even lower.

A chip that pairs a CPU and GPU does not magically need only 50 GB/sec of bandwidth and tolerate 150 ns memory latency.

Rather, it's going to need an order of magnitude more bandwidth and record-low latency.
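
To put rough numbers on the bandwidth claim (the multipliers below are my own back-of-envelope assumptions, not figures from any of the articles):

Code:

/* Back-of-envelope framebuffer bandwidth -- illustrative numbers only. */
#include <stdio.h>

int main(void)
{
    const double w = 1280, h = 1024;  /* display resolution               */
    const double bpp = 4;             /* bytes per pixel (RGBA8)          */
    const double fps = 60;            /* target frame rate                */
    const double overdraw = 3;        /* avg. times each pixel is written */
    const double rw = 2;              /* read + write traffic factor      */
    const double tex = 2;             /* crude texture-fetch multiplier   */

    double bytes_per_sec = w * h * bpp * fps * overdraw * rw * tex;
    printf("~%.1f GB/s of raw memory traffic\n", bytes_per_sec / 1e9);
    return 0;
}

That already prints ~3.8 GB/s for a modest workload; bump the resolution and add multisampling and Z-buffer traffic and you climb toward the tens of GB/sec very quickly, while a CPU core's working set mostly stays in cache.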
 
I see AMD hasn't mentioned how they're going to handle the memory problem.

If we want more than mediocre graphics performance, the GPU portion of Fusion is going to need maybe 40-100 GB/sec. It might need more, depending on what happens by 2010.

[..]

A GPU could likely tolerate a few hundred nanoseconds of memory access latency, while CPUs right now like an average best case of less than fifty nanoseconds. For better performance, future CPUs may be fighting to get the latency even lower.

[..]

Rather, it's going to need an order of magnitude more bandwidth and record-low latency.


In papers presented at the International Solid State Circuits Conference, IBM revealed a first-of-its-kind, on-chip memory technology that features the fastest access times ever recorded in eDRAM (Embedded Dynamic Random Access Memory). IBM's new microchip technology will more than triple the amount of memory stored on chips and double the performance of computer processors. It will be available in 2008.
http://www.physorg.com/news90661936.html



IBM today announced a breakthrough chip-stacking technology in a manufacturing environment that paves the way for three-dimensional chips that will extend Moore's Law beyond its expected limits. The technology – called "through-silicon vias" – allows different chip components to be packaged much closer together for faster, smaller, and lower-power systems.
http://www.physorg.com/news95575580.html



Earlier this year AMD and IBM signed an agreement about collaboration in development of 65 and 45nm technologies to be implemented on 300mm silicon wafers (see this news-story). Under the terms of the agreement AMD and IBM will be able to use the jointly-developed technologies to manufacture products in their own chip fabrication facilities and in conjunction with selected manufacturing partners. Both IBM and AMD will produce chips using advanced SOI technology. In January we were told that IBM may also manufacture CPUs for AMD. Moreover, according to its plans, AMD intends to begin 65nm manufacturing on 300mm wafers in the second half of 2005, and the company was looking for a partner in the manufacturing facility that will produce 300mm wafers (see this news-story). Well, maybe AMD has already found this partner.
http://www.xbitlabs.com/news/cpu/display/20030512040641.html



AMD and IBM Detail Early Results Using Immersion and Ultra Low-K in 45nm Chips
http://www.physorg.com/news85247225.html



The vacuum technique, called Airgap, will be introduced to IBM's manufacturing process with the 32-nanometer generation of microprocessors, said IBM fellow Dan Edelstein, who led the project. These chips will start rolling out in 2009. IBM's semiconductor partners--including Advanced Micro Devices, Toshiba, Sony "and soon to be others"-- will be able to adopt the technology for their own chips, he added.
http://news.com.com/2100-1008_3-6180994.html



Agreement Now Includes Research and Development of Submicron Process Technologies through 2011, Adds Early-Stage Research on Critical Emerging Technologies Targeted at 32 and 22 Nanometer Generations

AMD today announced it has broadened the scope of its technology alliance with IBM. The expanded alliance now includes early exploratory research of new transistor, interconnect, lithography, and die-to-package connection technologies through 2011.
http://www.physorg.com/news7810.html


Will AMD benefit? I think so. Because without IBM, AMD is toast against Intel's R&D group. TOAST!
 
The airgap technology may or may not be used in the end.
It's too far out to tell, and it's not directly related to chip stacking.

The airgap chips may also face mechanical fragility issues, and there may be thermal concerns with 3D integration.

It's also one process node beyond IBM and AMD's tech agreement.

As for chip stacking, I'm taking a wait and see approach.
Thermal and mechanical concerns have not been worked out for mass production yet.

We'll have to see how much can be crammed onto a chip, as it definitely will not be as large as main memory, though by 2010 it may be enough to be useful as a tile framebuffer or an L4 cache.
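
For a sense of scale (my own arithmetic, with assumed resolution and formats): a full framebuffer is surprisingly small, and tiling shrinks the on-die requirement much further.

Code:

/* How much eDRAM a framebuffer (or one tile of it) needs -- illustrative. */
#include <stdio.h>

int main(void)
{
    const long w = 1280, h = 1024;
    const long color = 4, depth = 4;         /* bytes/pixel: RGBA8 + Z24S8 */
    long full = w * h * (color + depth);     /* whole framebuffer on die   */
    long tile = 256 * 256 * (color + depth); /* or render in tiles         */

    printf("full framebuffer: %.1f MB\n", full / (1024.0 * 1024.0));
    printf("one 256x256 tile: %.1f KB\n", tile / 1024.0);
    return 0;
}

That works out to about 10 MB for the full framebuffer (the same ballpark as Xenos's 10MB of eDRAM) and 512 KB per tile, so the tile-framebuffer use looks plausible long before an L4-sized array does.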
 
[Attached slides: fusionamdnewdata01.jpg, fusionamdnewdata02.jpg]

http://xtreview.com/addcomment-id-2534-view-New-motherboard-for-fusion-processor.html

100mm² GPU in 45nm? :oops:
 
I'm starting to understand Fusion a bit more now. It seems that AMD is branching off from the current dual-core processors into two families.

First family:

Single core ---> dual core ---> quad core ---> octa core ---> 16 core ---> 32 core


Second family:

dual CPU/one GPU ---> quad CPU/dual GPU ---> octa CPU/quad GPU


Just guessing...


You know, it would be really fancy if developers offloaded physics onto Fusion because of its graphics functionality. It seems that Fusion would be really great at that kind of stuff.
 
Except why would a developer do that specifically for a low-end platform? Fusion will be a great way of saving money in the low end, but the high end will be just as the diagram shows: people with octa-core processors, and the low end with dual core and graphics on die.

I really don't see why Fusion gets all the attention it does. It just doesn't seem like something that is radical. It's extremely natural for this progression to be occurring in the low end. It's always been about integrating as many features as possible in order to reduce costs; Fusion is just a step in that direction...
 
Except why would a developer do that specifically for a low-end platform? Fusion will be a great way of saving money in the low end, but the high end will be just as the diagram shows: people with octa-core processors, and the low end with dual core and graphics on die.

I really don't see why Fusion gets all the attention it does. It just doesn't seem like something that is radical. It's extremely natural for this progression to be occurring in the low end. It's always been about integrating as many features as possible in order to reduce costs; Fusion is just a step in that direction...

But isn't Fusion the last step in that direction? Or the last series of steps?

I also don't see why Fusion just has to be low end. I'm not saying that Fusion will be better than discrete boards, but what is to stop AMD from integrating a mid-range GPU into the CPU, minus some features?
 
But isn't Fusion the last step in that direction? Or the last series of steps?

I also don't see why Fusion just has to be low end. I'm not saying that Fusion will be better than discrete boards, but what is to stop AMD from integrating a mid-range GPU into the CPU, minus some features?

I don't give out cookies just because you did what was the natural next step.

Nothing would stop AMD from making a mid-range solution with Fusion except whether or not it makes sense to do so. First, you're going to be talking about a rather massive chip here. A mid-range CPU and GPU, far larger than low end, will skyrocket chip size, and therefore the potential for cost savings greatly decreases as well (and cost savings are what Fusion makes sense for in the first place). Why would a consumer buy such a chip unless it offered a distinct advantage over traditional discrete products? Also, what about other issues related to memory? They would grow considerably as well when going from low end to mid-range. Frankly, until shown otherwise I don't see Fusion as some "magical" technology; it's little more than the next evolutionary step.
 
... will skyrocket chip size, and therefore the potential for cost savings greatly decreases as well

Total silicon area would be constant: size of Fusion chip = size of CPU + size of GPU. That *may* lower yields, unless some redundancy measures are implemented in the GPU part (redundant quads).
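
On the yield point: the usual back-of-envelope is a Poisson-style model, where yield falls off exponentially with die area, so on a fused die a single defect in either part scraps both. A sketch with made-up numbers (the defect density and die sizes are illustrative, not real process data):

Code:

/* Poisson yield model: Y = exp(-D * A), D = defects/cm^2, A = die area.
 * D and the areas are assumed values, purely for illustration. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double D = 0.5;          /* defects per cm^2 (assumed)  */
    double cpu = 1.4, gpu = 1.0;   /* die areas in cm^2 (assumed) */

    double y_cpu   = exp(-D * cpu);          /* ~49.7% of CPU dies good   */
    double y_gpu   = exp(-D * gpu);          /* ~60.7% of GPU dies good   */
    double y_fused = exp(-D * (cpu + gpu));  /* ~30.1%: a defect in either
                                                part kills the whole chip */
    printf("CPU die alone: %.1f%%\n", 100 * y_cpu);
    printf("GPU die alone: %.1f%%\n", 100 * y_gpu);
    printf("fused die:     %.1f%%\n", 100 * y_fused);
    return 0;
}

Which is exactly why the redundancy measures mentioned above (redundant quads) matter: they let a defective GPU block be fused off instead of scrapping the whole chip.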

The problem is cost. Using a premium process (crazy-many-metal-layer SOI) for GPU production would increase cost. I can think of three reasons why AMD thinks it's worth it.
1.) Recent expansion of fab capacity means they need to use the fabs for something, preferably something high margin.
2.) The overall power savings from integrating everything (saving power on I/Os) outweigh the increased cost. This *will* matter in the higher-margin laptop market.
3.) By integrating the GPU on the CPU die, you always know it is there, and you can start to take advantage of it for other tasks like de/encryption and stuff like that.

Cheers
 
Doesn't Fusion seem like the direct opposite of the multi-chip strategy R700 is alleged to pursue? On one hand, we have the splintering of a large die to decrease complexity and die size. Fusion, on the other hand, takes what are now multiple dies and crams them together. In the high-to-mid range, "break up large dies" and "put as much stuff on one die as possible" can't both be right, can they?
 
Except if you think of R700 as multi-core with some sort of interconnect, then it's just a natural progression to add one R700 core to a CPU and somehow get them to communicate.

Not saying it's easy, effective, or efficient. Just that it doesn't take much to imagine a logical progression from a multi-core/multi-chip R700 to one GPU core + CPU.

Regards,
SB
 
He's speculating that it will offer the performance of TODAY's mid-range to high-end solutions in late 2008 or early 2009. And that's a pretty big distinction. Though it does at least suggest that they are aiming at higher-than-traditional-IGP performance, which is of course welcome.
 
So is Fusion AMD's version of the Cyrix MediaGX chip?

"The Pentium class MediaGX processor integrates memory and PCI controllers, graphics and audio into the CPU and carries a keen price that will create new opportunities in the sub-£1,000 PC market, according to Cyrix. MediaGX chips come in two flavours, 120MHz ($79 per 1,000 units) and 133MHz ($99 per 1,000 units). The devices include the Cx5510 companion chip which has video drivers and BIOS extensions."
 
So is Fusion AMD's version of the Cyrix MediaGX chip?

"The Pentium class MediaGX processor integrates memory and PCI controllers, graphics and audio into the CPU and carries a keen price that will create new opportunities in the sub-£1,000 PC market, according to Cyrix. MediaGX chips come in two flavours, 120MHz ($79 per 1,000 units) and 133MHz ($99 per 1,000 units). The devices include the Cx5510 companion chip which has video drivers and BIOS extensions."

AMD is getting closer to a true SoC (more so than the MediaGX ever was), but AFAIK the initial Fusion parts will not feature integrated I/O (beyond HT, of course) or audio.
 
AMD is getting closer to a true SoC (more so than the MediaGX ever was), but AFAIK the initial Fusion parts will not feature integrated I/O (beyond HT, of course) or audio.

Actually, I'm quite positive that it will feature audio capabilities similar to those of current Radeons, meaning digital output only, via HDMI.
 
Fusion and embedded DRAM

It should make a lot of sense to produce a Fusion processor on an advanced (and more expensive) SOI process in one of AMD's own fabs:

By that timeframe, AMD could implement their large L3 cache not in SRAM anymore but in embedded DRAM cells from IBM that use SOI to build the capacitor.

You have to remember that the Xbox 360 GPU already uses eDRAM for a very fast framebuffer, among other things. The larger L3 cache (3x more memory than SRAM in the same area) could be made configurable, so that the CPU and GPU can access the same data without going out to external memory. That would really allow the GPU and the CPU to collaborate efficiently.

Compared to the 6MB in Shanghai, the Fusion processor could feature 18MB of L3 cache.

The integrated memory controller could do DMA into this L3 cache to completely hide latency for larger GPU-type workloads.

This would indeed be more than a simple combination of CPU and GPU: it would really create a fusion of the two, in the sense that each part can work on the workloads it is best at.
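
A crude sketch of how that latency hiding would work, as I read it: double-buffer the workload, streaming block N+1 into the shared L3 while block N is being processed. The dma_start/dma_wait/process helpers below are hypothetical, invented purely for illustration; this is the concept, not any real AMD API.

Code:

/* Double-buffered streaming into a shared on-die memory (conceptual). */
#include <stddef.h>

/* Hypothetical helpers -- not a real API: */
void dma_start(int ch, void *dst, const void *src, size_t n); /* async copy */
void dma_wait(int ch);                                        /* completion */
void process(const void *data, size_t n);            /* the GPU-type work  */

enum { BLOCK = 256 * 1024 };  /* block size assumed to fit in the shared L3 */

void stream_workload(const char *src, size_t nblocks, char buf[2][BLOCK])
{
    dma_start(0, buf[0], src, BLOCK);         /* prime the first block      */
    for (size_t i = 0; i < nblocks; i++) {
        dma_wait(i & 1);                      /* block i is now resident    */
        if (i + 1 < nblocks)                  /* start streaming block i+1  */
            dma_start((i + 1) & 1, buf[(i + 1) & 1],
                      src + (i + 1) * BLOCK, BLOCK);
        process(buf[i & 1], BLOCK);           /* ...while block i is worked */
    }
}

With the DMA overlapping the compute, the external memory's latency only shows up once, on the very first block.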
 