Some new 'Fusion' details - but in German

I was always under the impression that Fusion will start out as two separate dies under one package. I even remember seeing slides from back in November showing this. So really, I don't see what is new? But thanks for the translation though! ;)


I was referring to high-end graphics.


From an enthusiast standpoint, I can agree. But on another note, I don't see what there is to lose from integrating a high-performance VPU in the CPU from any standpoint other than an enthusiast's.
 
I hope AMD can put out some kind of integrated Fusion chips before too long. Perhaps this is a version of Fusion-lite that could be used to lay some preliminary groundwork for fully built-in designs.

Fusion on-die was quaint; going for an MCM is just plain boring.

It already looked unlikely AMD's initial Fusion designs would do much more than paste a GPU in the place of a CPU core, but at least there was a chance of some interesting design tweaks or innovations due to close proximity.
Over time, perhaps specialized instructions or hardware would be used.

Further separating the chips would delay that, and it would also make the ATI buyout even more dubious if all they can manage for a product is an engineering hack.

Also disconcerting is AMD falling back on its package engineers to bail it out. AMD has also lagged in package technology, in recent times perhaps more so than it has in process tech.

The ceramic-to-organic substrate transition came later for AMD, and going pinless is another thing Intel did first. As for MCMs, Intel has been doing that since the later Netburst Xeons.

I know being exciting isn't what necessarily sells chips, but man I hope AMD can do better than this.
 
First of all, thank you, neliz, for translating.

Secondly, if AMD is going for an MCM, then they had better put out Fusion in Q1 2008.

Thirdly, this article and the Intel GPU article at B3D (a really great one, thanks to the B3D ninjas :devilish: ) have made me rethink what Fusion must look like to live up to my expectations.

If AMD decides to make use of ATI's expertise in FPU (shader) design and combine it with its own, i.e. somehow make a magic mixture of their CPU FPUs and ATI's shaders (some kind of CPU/GPU hybrid FPU that can be used for gfx or GPGPU work, which shouldn't be that hard given that GPU shaders are somewhat general-purpose today), and integrate it into every CPU they have, it would be very appealing IMO. If they want a Fusion CPU, they add the fixed-function units like ROPs on die; if they want a normal one without an IGP, they remove those fixed-function units.

Now that isn't really easy to do, but I just wanted to give my opinion on the topic.
 
I have a strong feeling that K10 is the last true CPU architecture from AMD and that everything afterwards will be based on Fusion, and in my opinion, I don't see a single thing wrong with that.
Which K10 are you referring to? The generation with Barcelona?

If so, then current talk and forward-looking statements indicate it will not be the last CPU architecture from AMD (assuming it doesn't go broke).

Fusion as it has been discussed is also rather agnostic to the CPU architecture, since it allows for a kind of mix and match of different cores.
 
From an enthusiast standpoint, I can agree. But on another note, I don't see what there is to lose from integrating a high-performance VPU in the CPU from any standpoint other than an enthusiast's.

...Money designing a product that will have trouble competing with non-integrated solutions?
 

At the high end, the premium is on performance. An 800-million-transistor package with a 400M-transistor CPU and a 400M-transistor GPU will be outperformed by an 800M-transistor CPU plus an 800M-transistor GPU. Plus, the inability to upgrade one without replacing the other, scalability questions, etc.
 
The latest analysis of GPU sales over the last week on a variety of sites seems to indicate the low end is disappearing and being replaced by integrated GPUs. It seems to me that AMD is anticipating this migration by creating the Fusion product.

It seems to me that having a much more robust 'low' end benefits all gamers, in that the low end presents a much higher target to aim for when designing games for the largest audience. I see Fusion as a new low-end system with potentially better graphics. I think Intel also sees this the same way.
 
At the high end, the premium is on performance. An 800-million-transistor package with a 400M-transistor CPU and a 400M-transistor GPU will be outperformed by an 800M-transistor CPU plus an 800M-transistor GPU. Plus, the inability to upgrade one without replacing the other, scalability questions, etc.

I see your point. However, what would stop someone from just dropping in a GPU board when Fusion starts to strain in apps?

I do love the idea of integration when talking low end/IGP.
 
But in the low-end to mid-range market segments, a quadcore hybrid CPU/VPU that can allocate 1, 2 or 3 cores for a dedicated gfx rendering would/could be extremely efficient. However, I am sure that internal cache on such devices would be a bottleneck for enthusiast-level performance, but more than satisfactory in comparison to integrated solutions at present.
 
Efficient at what?

At the low and mid end, the bandwidth constraints of a CPU socket would kill any performance. Those VPU cores would be burning power waiting to suck data through a straw.
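To put rough numbers on the "straw" point, here is a minimal back-of-the-envelope sketch in Python. The figures (dual-channel DDR2-800 on the CPU socket versus a 256-bit GDDR3 board) are assumptions chosen to be representative of the era, not exact specs of any particular product.

# Back-of-the-envelope comparison of peak memory bandwidth available to a
# CPU-socket part versus a discrete GPU with its own memory. The figures
# (dual-channel DDR2-800, a 256-bit GDDR3 board at 1400 MT/s) are assumptions
# for illustration, not exact specs of any shipping product.

def peak_bandwidth_gb_s(bus_width_bits, transfers_per_second):
    # peak bandwidth = bus width in bytes * transfer rate
    return bus_width_bits / 8 * transfers_per_second / 1e9

# CPU socket: two 64-bit DDR2 channels at 800 MT/s, shared with the CPU cores.
socket_bw = peak_bandwidth_gb_s(2 * 64, 800e6)    # ~12.8 GB/s

# Discrete card: 256-bit GDDR3 at 1400 MT/s, dedicated to the GPU.
discrete_bw = peak_bandwidth_gb_s(256, 1400e6)    # ~44.8 GB/s

print(f"Socket (shared with CPU): {socket_bw:.1f} GB/s")
print(f"Discrete (dedicated):     {discrete_bw:.1f} GB/s")

Under those assumed figures the socket-fed part has roughly a quarter to a third of the discrete card's peak bandwidth, and it has to share that with the CPU cores.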

The die area of two hybrid cores would likely be larger than two specialized ones, so it's not a cost win.

I'm also not clear on when we'll be seeing low-end quadcores in the segment Fusion will be targeting initially.
 
Well... ok, on a cost basis maybe the hybrid idea wouldn't work too well with a current-architecture part, but why would power consumption be an issue? A low-end laptop single-core CPU uses about 5-6W under load, so if a slightly more aggressive gfx core was added alongside a refined CPU, I am sure it wouldn't pull anything too dissimilar to a current AMD dual-core mobile part at around 25-30W.

In terms of internal bandwidth for low-end rendering, it wouldn't be mad to suggest that the internal bandwidth would be improved upon current-generation mobile parts from AMD that offer between 6-10GHz. Isn't that enough?
 
Well... ok, on a cost basis maybe the hybrid idea wouldn't work too well with a current-architecture part, but why would power consumption be an issue?
I'm trying to find what kind of efficiency you are talking about.
Performance per watt?
Cost per device?

A low-end laptop single-core CPU uses about 5-6W under load, so if a slightly more aggressive gfx core was added alongside a refined CPU, I am sure it wouldn't pull anything too dissimilar to a current AMD dual-core mobile part at around 25-30W.

I'm unclear as to your meaning from before.
"a quadcore hybrid CPU/VPU that can allocate 1, 2 or 3 cores for a dedicated gfx rendering would/could be extremely efficient"

The variable number of cores being allocated either means that various designs can have 1, 2, or 3 gfx cores or there are 4 cores, each of which can alternate between being a GPU or CPU.

There is poorer efficiency with the latter route, and the other route would quickly become CPU-limited.

In terms of internal bandwidth for low-end rendering, it wouldn't be mad to suggest that the internal bandwidth would be improved upon current-generation mobile parts from AMD that offer between 6-10GHz. Isn't that enough?

6-10 GHz of bandwidth?
Can you clarify?
 
The variable number of cores being allocated either means that various designs can have 1, 2, or 3 gfx cores or there are 4 cores, each of which can alternate between being a GPU or CPU.

Speaking of hybrid cores, Intel has said more about Larrabee at IDF:


"Still keeping with the enthusiast hardware theme, Intel moved on to its "Larrabee project," which is widely expected to be a discrete graphics processor for games. The firm was surprisingly quiet, though, seemingly taking care not to mention games. Instead, Intel stated that Larrabee-based hardware "will include enhancements to accelerate applications such as scientific computing, recognition, mining, synthesis, virtualization, financial analytics and health applications." Nonetheless, the company did mention that Larrabee was a highly parallel, Intel Architecture-based programmable architecture, that it would be easily programmable using existing tools, and that it was designed to scale to trillions of floating point operations per second (teraFLOPS)."

http://techreport.com/ja.zz?comments=12272
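As a side note on the "trillions of floating point operations per second" claim in that quote, here is a minimal sketch of the usual peak-FLOPS arithmetic for a many-core, wide-SIMD design. Every parameter (core count, SIMD width, flops per lane, clock) is an assumption for illustration only; Intel had not disclosed Larrabee's actual configuration.

# Sketch of the standard peak-throughput arithmetic for a many-core, wide-SIMD
# part. All parameters below are assumptions for illustration; they are not
# disclosed Larrabee specifications.

def peak_gflops(cores, simd_lanes, flops_per_lane_per_clock, clock_ghz):
    return cores * simd_lanes * flops_per_lane_per_clock * clock_ghz

# e.g. 16 cores, 16-wide single-precision SIMD, multiply-add (2 flops/clock), 2 GHz:
print(peak_gflops(16, 16, 2, 2.0), "GFLOPS")   # 1024.0 GFLOPS, i.e. about a teraFLOP

The point is simply that "teraFLOPS" falls out of core count times SIMD width times clock; the quote does not say which combination Intel has in mind.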
 
Firstly, I'd just like to point out that I am a lowly electronic engineer and DO NOT consider myself an expert in anything written about so far in this thread. I appreciate the feedback/discussion so far.

I'm trying to find what kind of efficiency you are talking about.
Performance per watt?
Cost per device?

Generally efficient, as in everything.:oops:

I'm unclear as to your meaning from before.
"a quadcore hybrid CPU/VPU that can allocate 1, 2 or 3 cores for a dedicated gfx rendering would/could be extremely efficient"

The variable number of cores being allocated either means that various designs can have 1, 2, or 3 gfx cores or there are 4 cores, each of which can alternate between being a GPU or CPU.

There is poorer efficiency with the latter route, and the other route would quickly become CPU-limited.

Ahh, I see what you mean: the more gfx it calculates, the less time for the CPU. Thanks for clearing that up, but it did sound like a good idea in my head at the time. :p

6-10 GHz of bandwidth?
Can you clarify?

Sure, I was suggesting that an AMD Turion has 6-10GHz of internal bandwidth, whereas a current high-end mobile GPU part like the 7950GT has 30GHz, so for the low end, assuming improvements are made, is 10GHz plus the expected improvements not enough? Or am I mixing up my bandwidths?
 
Sure, I was suggesting that an AMD Turion has 6-10GHz of internal bandwidth, whereas a current high-end mobile GPU part like the 7950GT has 30GHz, so for the low end, assuming improvements are made, is 10GHz plus the expected improvements not enough? Or am I mixing up my bandwidths?

Well, we usually measure memory throughput in bytes per second.
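To illustrate the units point: a clock or transfer rate in "GHz" only becomes a bandwidth once you multiply by the interface width, so the same rate can mean very different byte throughput. A minimal sketch, with purely illustrative numbers:

# Why "GHz of bandwidth" is ambiguous: the same transfer rate gives very
# different throughput in bytes per second depending on interface width.
# The widths and rate below are illustrative assumptions only.

def gb_per_s(bus_width_bits, transfers_per_second):
    return bus_width_bits / 8 * transfers_per_second / 1e9

rate = 1.0e9  # 1 GT/s on both interfaces
print(gb_per_s(64, rate), "GB/s on a 64-bit bus")    # 8.0 GB/s
print(gb_per_s(256, rate), "GB/s on a 256-bit bus")  # 32.0 GB/s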
 