Larrabee at Siggraph

nAo

Nutella Nutellae
Veteran
Intel will present a paper about Larrabee at Siggraph this summer:

Larrabee: A Many-Core x86 Architecture for Visual Computing

This paper introduces the Larrabee many-core visual computing architecture (a new software rendering pipeline implementation), a many-core programming model, and performance analysis for several applications. Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit, as well as fixed-function co-processors. This provides dramatically higher performance per watt and per unit of area than out-of-order CPUs on highly parallel workloads and greatly increases the flexibility and programmability of the architecture as compared to standard GPUs.

I'm sure this post will put a smile on Geo's face :)
 
Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit, as well as fixed-function co-processors.
I hope there are some nifty disclosures on just how much is under that umbrella.
 
Nice find, nAo.

I hope there are some nifty disclosures on just how much is under that umbrella.

Definitely looking forward to opening that can of worms.

DK over @ RWT has a discussion thread on the subject (credit nAo again).

speaking of can of worms.... interesting supposition by Doug Siebert (long-time RWT poster):
Sure, there may still be some discrete Larrabee parts if there is demand for them in the HPC world or Intel really does plan on pursuing the discrete GPU market. I'm skeptical of that, but I guess once Nvidia is dead in a few years Intel won't want to let AMD have that market to itself, even though it will be much smaller than it is today!
 
From the abstract it seems this talk/paper is more software-oriented than hardware-oriented. I wouldn't be surprised if we don't learn any new technical details about Larrabee's hardware architecture.
On the other hand, I can't wait to also learn more about its software architecture, to get a glimpse of how Intel will likely expose the hardware to software engineers. (I'm not exactly a CUDA fanboy.)

Regarding fixed-function units, we are going to see some TMUs and probably not much more than that. Adieu, rasterizer...
 
From the abstract it seems this talk/paper is more software-oriented than hardware-oriented. I wouldn't be surprised if we don't learn any new technical details about Larrabee's hardware architecture.
On the other hand, I can't wait to also learn more about its software architecture, to get a glimpse of how Intel will likely expose the hardware to software engineers. (I'm not exactly a CUDA fanboy.)

Regarding fixed-function units, we are going to see some TMUs and probably not much more than that. Adieu, rasterizer...

Even if they don't give us any more details on Larrabee's micro-architecture, I'm sure they'll at least provide relevant instruction-throughput numbers. Rather useful in your line of work, I'd think ;)
 
From the abstract it seems this talk/paper is more software-oriented than hardware-oriented. I wouldn't be surprised if we don't learn any new technical details about Larrabee's hardware architecture.
On the other hand, I can't wait to also learn more about its software architecture, to get a glimpse of how Intel will likely expose the hardware to software engineers. (I'm not exactly a CUDA fanboy.)

Regarding fixed-function units, we are going to see some TMUs and probably not much more than that. Adieu, rasterizer...
Even knowing the outlines of how TMU functionality is accessed by the x86 threads will be informative, unless the paper turns out to be entirely made of fluff.
 
There'll be some scalar processors in there too of course, but it's interesting that they will be depending on a wide vector unit to do a lot of the work.
 
There'll be some scalar processors in there too of course, but it's interesting that they will be depending on a wide vector unit to do a lot of the work.

Isn't it more likely that the "ALUs" will simply support both vector and scalar instructions?
 
There'll be some scalar processors in there too of course, but it's interesting that they will be depending on a wide vector unit to do a lot of the work.

The scalar units exist alongside a wide SIMD unit in each core.
Larrabee's descriptions don't hint at any scalar-only cores.
 
Ouch, they also have a DX9 implementation; I wasn't aware of that :)
I wonder if Abrash & Co. had the opportunity to ask for specific hardware optimizations that would speed up software rasterization.
 
Ouch, they also have a DX9 implementation; I wasn't aware of that :)
I wonder if Abrash & Co. had the opportunity to ask for specific hardware optimizations that would speed up software rasterization.

Would the new Radix 16 Divider and Super Shuffle Engine in Penryn family CPUs qualify?
 
Division is helped by the Radix 16 divider and variable-latency divides, though how is that specific to graphics?

Super Shuffle isn't really a graphics-only optimization either; it mostly brings Intel's shuffle latencies into the same range as, or better than, what AMD has had for years.
Conroe had pretty long latencies, and Netburst's were downright brutal.

There was that instruction for blending, which might seem closer to a graphics optimization.
 
Well, nothing can be said to be graphics-specific, since it's all useful for other purposes as well, but there are a few candidates where graphics might have been one of the main motivators. Super Shuffle would count among those, as would the blending instructions, and most certainly the DPPS instruction, which can implement DOT2/DOT3/DOT4 operations. The fast divider is a bit too generic to call graphics-related, although it'll help with a number of important tasks.

The dot product instruction is particularly interesting to note, especially given that Intel never really seemed to show much interest in graphics in previous SSE instruction sets.
 
Damnit, I'm going to have to buy a Larrabee viddy card... just because. A) To show what an iconoclast I am. B) As a collector's item from when Intel began their conquest of GPUs. C) As a collector's item of Intel's folly in thinking they could conquer GPUs. D) Because I enjoy bitching about IHVs not providing robust software support and compatibility, and this is almost certain to be "a target-rich environment" with Larrabee.

Or some combination of the above.

:p:cool:;)

Has anybody heard a productization name rumour yet? "Bitchin' Fast 3D" or somesuch?

Oh yes, and y'all feel free to leak that paper to me/B3D in advance. :) I understand Dougie sees B3D ninjas behind every potted plant. :p
 
Ahahaha... ahh man Geo, you're too harsh ;)

Man, anytime Intel wants to step up to our interview, it's been out there for them to do so. :LOL:

The god's honest truth is I want them to succeed because I love high-end graphics and the more serious deep-pocket players there are the happier I am.

And I've been saying for over a year now that the vibe is that Intel is genuinely alarmed this time, and that means they won't take one swing and give up. My only point is they need to be psychologically prepared to get their nose bloodied in round one, because they probably will, and it will likely be on the software side no matter how sweet their hardware is on the theoreticals.
 