The Intel GPU Rumour Mill

I'm not sure that going fully independent will save that many transistors over what is present now.
Saving transistors is not the aim; better utilization, and even opening up entirely new problem domains, is. Needing coherent branching tightly constrains what you can attempt on a GPU at the moment.
 
The full multicore approach is unlikely to be the only way to handle branch-coherence issues. For the narrower discipline of graphics rendering, it may devote far too much hardware to a problem for which other solutions exist.

I think the overall lack of focus at this early stage indicates that nobody is entirely sure what Larrabee will be good at, though it is likely not rasterization. It seems to expend a lot of resources for the sake of possible uses that "someone might think will be a good idea, maybe".

I'm open to being converted, but the initial fluff isn't very comforting.
 
To get rid of the need for branch coherence, you would need each individual shader to take care of its own flow control. Do you start calling it multicore if each "shader" has its own flow control? That's just semantics, and not very interesting.
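
As a rough illustration of why incoherent branching is so costly on current lockstep designs (my own sketch in plain C with made-up names, not tied to any real GPU or to Larrabee): a SIMD group evaluates the branch for all lanes, then runs each taken path under an execution mask, so divergent lanes simply idle while the other side executes.

```c
#include <stdio.h>
#include <stdint.h>

#define LANES 8  /* width of one hypothetical SIMD group */

/* Lockstep model: every lane evaluates the branch condition, then each
 * taken path is executed serially under a mask. A path is skipped only
 * if no lane wants it, so a fully coherent branch costs one pass while
 * a divergent one costs two, with masked-off lanes doing nothing. */
static void simd_group(const int *input, int *output)
{
    const uint8_t all = (uint8_t)((1u << LANES) - 1);
    uint8_t take_then = 0;

    for (int lane = 0; lane < LANES; ++lane)
        if (input[lane] > 0)
            take_then |= (uint8_t)(1u << lane);

    int passes = 0, useful = 0;

    if (take_then != 0) {            /* at least one lane takes the "then" side */
        ++passes;
        for (int lane = 0; lane < LANES; ++lane)
            if (take_then & (1u << lane)) { output[lane] = input[lane] * 2; ++useful; }
    }
    if (take_then != all) {          /* at least one lane takes the "else" side */
        ++passes;
        for (int lane = 0; lane < LANES; ++lane)
            if (!(take_then & (1u << lane))) { output[lane] = -input[lane]; ++useful; }
    }

    printf("useful lane-slots: %d of %d issued\n", useful, passes * LANES);
}

int main(void)
{
    int coherent[LANES]   = { 1, 2, 3, 4, 5, 6, 7, 8 };     /* every lane > 0 */
    int incoherent[LANES] = { 1, -2, 3, -4, 5, -6, 7, -8 }; /* half and half  */
    int out[LANES];

    simd_group(coherent, out);    /* 8 of 8: one pass, no waste      */
    simd_group(incoherent, out);  /* 8 of 16: two passes, 50% wasted */
    return 0;
}
```

Giving every "shader" its own program counter removes the two-pass cost entirely, which is the full-independence end of the spectrum being argued about here.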
 
Full independence won't be an overwhelming victory over successively better approximations.

A multithreaded array that can pick between two thread buffers won't give full independence, but it would offer increasingly better utilization, much as adding associativity to a cache brings the average miss rate closer to that of a fully associative cache.
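
To make the diminishing-returns point concrete, here is a toy model (my own illustrative sketch, with arbitrary numbers and no connection to any real scheduler): a per-cycle scheduler that may pick from k candidate thread buffers, each independently ready with probability p, achieves roughly 1 - (1 - p)^k utilization, so each extra buffer helps less than the last while the curve approaches full utilization.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define CYCLES 100000

/* Toy model: each cycle the scheduler inspects `buffers` candidate thread
 * groups, each independently ready with probability p (the rest stalled on
 * divergence or memory), and issues work if any candidate is ready. */
static double utilization(int buffers, double p)
{
    int issued = 0;
    for (int cycle = 0; cycle < CYCLES; ++cycle)
        for (int k = 0; k < buffers; ++k)
            if ((double)rand() / RAND_MAX < p) { ++issued; break; }
    return (double)issued / CYCLES;
}

int main(void)
{
    const double p = 0.6;   /* arbitrary readiness probability */
    srand(42);
    for (int buffers = 1; buffers <= 4; ++buffers)
        printf("buffers=%d  measured=%.2f  analytic=%.2f\n",
               buffers, utilization(buffers, p), 1.0 - pow(1.0 - p, buffers));
    return 0;
}
```

The same curve shape is why extra cache associativity quickly stops paying off, which is the analogy above.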

Larrabee's big advantage apparently depends on ignoring the fact that there is usually a high degree of branch coherence. If there weren't, GPUs would have turned out differently.

Various measures have been taken to lessen the impact of diverging code. By the time Larrabee comes out, what's to say they won't be able to get within ~X% of its utilization?

If GPUs can manage a >X% advantage in peak resources, then Larrabee is a solution looking for a problem that will have passed it by.
 
Assuming this turns into a real product, I think it's clear that Intel isn't solely targeting graphics with this solution. If it were, they'd be better off with something that has a higher execution-unit-to-control-logic ratio, as you state.

They are probably aiming for a superset of GPU and general vector functionality with an overhead low enough that Larrabee won't completely suck compared to fully specialized GPUs and vector machines.

To help them, they have a massive manufacturing advantage; high capacity and low cost should help offset whatever overhead Larrabee has.

Cheers
 
I think to some degree both Fusion and Larrabee are "solutions looking for a problem", in the sense that clearly everyone at pretty much all levels of the industry would have been much happier staying single-core with ever-ramping clock speeds. It seems to me that all this activity stems directly from the fact that that model hit a wall.

That doesn't mean, however, that the solution that was forced on us all hasn't found a problem that it can address...
 
If Intel is really aiming for a superset of GPU and general vector functionality rather than at graphics specifically, then why are so many people touting it as an answer to GPUs?

If it's not as good at graphics as more specialized and established designs, why pay through the nose for a chip that has no compelling performance advantage and no widespread support?

As for the "solution looking for a problem" point: Fusion, as it is targeted, looks to me more like AMD's attempt to find or make a market segment where it won't get swamped by Intel, because commodity desktop CPU sales are a prime target for Intel's manufacturing advantage and deep price cuts.

An established graphics brand is something Intel can't churn out of its fabs by the millions.

Larrabee strikes me as serial execution extended to be moderately parallel, with the closest problem Intel could find to fit it being a deeply parallel one.
 
I haven't been keeping up with Larrabee, but this piqued my interest: TR reports that Intel is reserving GDDR5 for a discrete GPU to sample in 2H08 for release by 1Q09. (The 0.5/1GB quantities would seem to point to a more-than-average-speed card, even in that timeframe, but what do I know.) This info is Inq-sourced (Theo Valich).
 