Larrabee at Siggraph

Does anyone know if the vector units can operate on integers? Data format conversions aside...

Yes.
Native support for int32, float, double.
Load/store conversions to/from unorm8, uint8/int8, int16 (uint16?) and half.
Instructions for conversion to/from more exotic formats (guessing fp10/int10, 5:5:5:1, 5:6:5, etc.).
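For anyone wondering what those load/store conversions buy you, here's a scalar sketch of the unorm8 <-> fp32 semantics. This is just the math the hardware performs on load/store, not actual Larrabee intrinsics (those haven't been published in full), so treat the function names as illustrative:

    #include <cstdint>

    // unorm8 -> fp32 on load: map 0..255 onto 0.0..1.0.
    float unorm8_to_float(uint8_t u)
    {
        return u * (1.0f / 255.0f);
    }

    // fp32 -> unorm8 on store: clamp to [0,1], scale, round to nearest.
    uint8_t float_to_unorm8(float f)
    {
        f = f < 0.0f ? 0.0f : (f > 1.0f ? 1.0f : f);
        return (uint8_t)(f * 255.0f + 0.5f);
    }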
 
I highly doubt that Larrabee will be seen in any consoles. Firstly, console manufacturers are so paranoid about users doing anything with their consoles beyond what's intended, and another x86 console would attract the 'homebrew' community like nothing else. Secondly, as someone mentioned, MS wants to 'license' the processors others design for them. MS owns the Xenos, not ATI. Thirdly, Intel has been leaning so much towards open platforms and open source that I doubt they'd dabble in consoles with this. I can more easily see them creating a Centrino-level platform with it for multimedia and games to rival consoles.
 
Only if they're transferred as fp32: 32 pixels * 4 components/pixel * 4 bytes/component = 512 bytes. The ability to convert 8-bit unorm to fp32 when reading from L1 means that for 8-bit textures they can stay 8-bit over the ringbus: only 128 bytes for 32 filtered pixels.
The ring bus was disclosed as being 1024 bits wide.
128 bytes per cycle should be enough to fill an entire ring segment, and a few consecutive cycles can fill the entire bus.
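Spelling out the arithmetic behind those byte counts (my own restatement of the figures above, assuming RGBA and the disclosed 1024-bit ring width):

    #include <cstdio>

    int main()
    {
        const int pixels     = 32;        // one batch of filtered pixels
        const int components = 4;         // RGBA
        const int ring_bytes = 1024 / 8;  // 1024-bit ring = 128 bytes wide

        const int fp32_bytes   = pixels * components * 4;  // 512 bytes
        const int unorm8_bytes = pixels * components * 1;  // 128 bytes

        printf("fp32:   %d bytes = %d full-width ring transfers\n",
               fp32_bytes, fp32_bytes / ring_bytes);
        printf("unorm8: %d bytes = %d full-width ring transfer\n",
               unorm8_bytes, unorm8_bytes / ring_bytes);
        return 0;
    }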

I'm really curious whether they do fixed-function filtering for fp16 and fp32. The two reasons they gave for having FF filtering hardware were (1) texture decompression (DXT formats), and (2) area-efficient 8-bit filtering. The first only applies to 8-bit formats, the second is much less compelling for fp16 and fp32.

There was also the argument that fetching unaligned quads required specialized pipeline logic, and that filtering required excessive register file bandwidth for the VPU.

Multiple fp16 or fp32 samples in flight over the bus would be bandwidth-intensive.
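To make the register-bandwidth point concrete, here's plain scalar bilinear filtering of an fp32 RGBA texture (nothing Larrabee-specific, purely illustrative): every output pixel pulls in four texels, i.e. 16 floats or 64 bytes of texel data, before the weights are even applied. Done on the VPU instead of in fixed-function hardware, all of that traffic has to flow through the register file.

    struct RGBA { float r, g, b, a; };

    static RGBA lerp(const RGBA& x, const RGBA& y, float t)
    {
        return { x.r + t * (y.r - x.r), x.g + t * (y.g - x.g),
                 x.b + t * (y.b - x.b), x.a + t * (y.a - x.a) };
    }

    // Four texel fetches and three lerps per output pixel; texture
    // coordinates are assumed to stay inside the texture for brevity.
    RGBA bilinear(const RGBA* tex, int w, int h, float u, float v)
    {
        const float x = u * w - 0.5f, y = v * h - 0.5f;
        const int   x0 = (int)x,      y0 = (int)y;
        const float fx = x - x0,      fy = y - y0;

        const RGBA t00 = tex[ y0      * w + x0    ];
        const RGBA t10 = tex[ y0      * w + x0 + 1];
        const RGBA t01 = tex[(y0 + 1) * w + x0    ];
        const RGBA t11 = tex[(y0 + 1) * w + x0 + 1];

        return lerp(lerp(t00, t10, fx), lerp(t01, t11, fx), fy);
    }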
 
From the SIGGRAPH paper in section 3.2: "Larrabee's scalar pipeline is derived from the dual-issue Pentium processor."

I understand that Intel has used this analogy several times now, but that doesn't make it any more correct. Seriously, Larrabee has as much relation to P54c as Conroe does to Katmai.
 
MS wants to 'license' the processors others design for them. MS owns the Xenos, not ATI.
I don't know the exact terms of the deal, but this is false. MS has the right to do with Xenos as they will, but so does ATI, as can be seen by its being the basis of R600 and the "mini-Xenos" mobile core.

Also, MS is in the business of trying to make money, so I don't assume past decisions cannot be reversed.
 
They didn't sort them exactly, but they probably did render (and shade) all opaque surfaces first. There is only so much the developer can do to trip up deferred shading though ... Z-kill, changing the Z-test ... what more? When it detects chicanery it can always fall back to normal tiled rasterization (it has to be able to do that anyway for transparent surfaces).
Mintmaster got exactly what my idea is, and I'm not entirely sure I get your idea here, but what are you advocating? The term deferred rendering/shading has so many different meanings and interpretations that I can't make any sense of what you are writing :)

To re-state my argument one last time: IMHO it doesn't make sense for LRB's sw rasterizer to try to determine the closest opaque fragment before shading it, as most modern games already do that anyway (when working on a LRB-based game console, one would probably try to lay down the Z-buffer first in the sw renderer itself in order to remove a whole rendering pass -> a big win for a sort-middle architecture).
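To be explicit about what I mean by laying down the Z-buffer first, the usual two-pass structure looks something like the sketch below. This is purely illustrative and not LRB's actual renderer; the Mesh/ZBuffer/Framebuffer types and the rasterize_* helpers are hypothetical stand-ins.

    #include <vector>

    struct Mesh {};
    struct ZBuffer {};
    struct Framebuffer {};

    // Hypothetical helpers standing in for the renderer's rasterization paths.
    void rasterize_depth_only(const Mesh&, ZBuffer&);
    void rasterize_and_shade(const Mesh&, const ZBuffer&, Framebuffer&);
    void rasterize_and_shade_blended(const Mesh&, const ZBuffer&, Framebuffer&);

    void render_frame(const std::vector<Mesh>& opaque,
                      const std::vector<Mesh>& transparent_back_to_front,
                      ZBuffer& z, Framebuffer& fb)
    {
        // Pass 1: depth only. No shading work, just populate the Z-buffer.
        for (const Mesh& m : opaque)
            rasterize_depth_only(m, z);

        // Pass 2: shade with the Z-test set so only the closest fragment
        // per pixel ever runs its shader.
        for (const Mesh& m : opaque)
            rasterize_and_shade(m, z, fb);

        // Transparent geometry goes last, already sorted back to front,
        // and gets no benefit from the prepass.
        for (const Mesh& m : transparent_back_to_front)
            rasterize_and_shade_blended(m, z, fb);
    }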
 
I think the way to look at it is that MS owns the graphics chip, but they license the IP that goes into it from ATI. They can manufacture it and control that side, but they pay royalties on the IP for every unit sold.
 
Also, MS is in the business of trying to make money, so I don't assume past decisions cannot be reversed.
Sure. Also, we must keep in mind that everything Intel designs is probably so tied to their specific fabs/process/technology that their most cutting-edge IP can't easily be 'ported' over to other fabs employing a different process.
 
On the console front, I heard a couple of months ago that Intel had approached MS with regard to Larrabee usage in the next XBox - now what that means I don't know since I didn't speak to a primary source on the matter. But IMO there would be definite benefits for both parties involved. I don't think it's a matter of MS wanting to own or not own the IP so much as simply having a contract in place that doesn't constrain them. To get Larrabee into the box and have Microsoft themselves working on development tools and DirectX-for-Larrabee I think would be a huge win for Intel, to say nothing of the captive developer market they would then have. Enough of a win that I could see Intel offering Larrabee to Microsoft on some extremely favorable terms, and with their node process advantage, potentially at a price lower than what MS would be paying out to TSMC to begin with for an IP they own.

Now all of that said, if ArmchairArchitect says MS has already decided to continue with POWER, there's no reason to doubt that. One would think that the next consoles would already be shaping up after all, and this whole thing would be a huge gamble for MS. But I just wanted to point out that the traditional reasons why we think MS wouldn't go back to Intel probably don't apply as such here; Intel is the one that would need MS, not the other way around, and a lot has changed in the industry between the launch of XBox 1 and the ramp-up to XBox [720].
 
I understand that Intel has used this analogy several times now, but that doesn't make it any more correct. Seriously, Larrabee has as much relation to P54c as Conroe does to Katmai.
"Derived" implies a little bit more than an analogy...
dictionary.com said:
Adj. 1. derived - formed or developed from something else; not original
 
"Derived" implies a little bit more than an analogy...

I fully understand what the term "derived" (and its derivatives ;) :p) means.

I'm saying they are wrong. Raise your hand if you buy the statement that Larrabee is a P54c derivative. x86-64, SMT, vastly improved branch prediction, etc. are not things you just slap onto an ANCIENT x86 core. This is why I insist Larrabee has nothing to do with P54c. This *analogy* is just a way to relate the fact that it is a dual-issue, short-pipeline architecture.
 
On the console front, I heard a couple of months ago that Intel had approached MS with regard to Larrabee usage in the next XBox - now what that means I don't know since I didn't speak to a primary source on the matter. But IMO there would be definite benefits for both parties involved. I don't think it's a matter of MS wanting to own or not own the IP so much as simply having a contract in place that doesn't constrain them. To get Larrabee into the box and have Microsoft themselves working on development tools and DirectX-for-Larrabee I think would be a huge win for Intel, to say nothing of the captive developer market they would then have. Enough of a win that I could see Intel offering Larrabee to Microsoft on some extremely favorable terms, and with their node process advantage, potentially at a price lower than what MS would be paying out to TSMC to begin with for an IP they own.

Now all of that said, if ArmchairArchitect says MS has already decided to continue with POWER, there's no reason to doubt that. One would think that the next consoles would already be shaping up after all, and this whole thing would be a huge gamble for MS. But I just wanted to point out that the traditional reasons why we think MS wouldn't go back to Intel probably don't apply as such here; Intel is the one that would need MS, not the other way around, and a lot has changed in the industry between the launch of XBox 1 and the ramp-up to XBox [720].

So they wouldn't just have a good GPU, they'd have incredible influence in this new, exciting CGPU programming model... That would be the final nail in the coffin for PC gaming (and Linux gaming, obviously). Intel is really good at Linux and open-source driver support, they're good at platforms, and they could really revitalize PC gaming with this thing. If they pawn it off to MS on a silver platter I will be very sad.
 
I fully understand what the term "derived" (and its derivatives ;) :p) means.

I'm saying they are wrong. Raise your hand if you buy the statement that Larrabee is a P54c derivative. x86-64, SMT, vastly improved branch prediction, etc. are not things you just slap onto an ANCIENT x86 core. This is why I insist Larrabee has nothing to do with P54c. This *analogy* is just a way to relate the fact that it is a dual-issue, short-pipeline architecture.

I'm curious why you would keep the U and V pipes, however; surely their size must be minimal now?
 
I don't follow you. I get the reference, but I don't understand your point. Sorry if I'm just being dense here.

If it's not derived from the P54c, then why wouldn't you design the new core with two symmetrical pipelines rather than keeping the complex and simple pipelines?
 
I'm saying they are wrong.
Why would they lie? There's absolutely no motivation for such a statement if it wasn't the truth.

x86-64, SMT, vastly improved branch prediction, etc. are not things you just slap onto an ANCIENT x86 core.
I'm no hardware guy, but it doesn't seem that unreasonable to me. Sure it's an "ANCIENT" core, but who cares if all you're really using/keeping is the stuff that hasn't changed a lot. The key here is that P54c had a really short pipeline which is ideal for the intended application.

Why is it so offensive to you that they may have worked from P54c as a base anyways? Why would they bother to reinvent the wheel as it were when they already had a perfectly suitable basis from which to work?
 
Why would they lie? There's absolutely no motivation for such a statement if it wasn't the truth.

Who said it's a lie? Not me. It's an analogy.

I'm no hardware guy, but it doesn't seem that unreasonable to me. Sure it's an "ANCIENT" core, but who cares if all you're really using/keeping is the stuff that hasn't changed a lot. The key here is that P54c had a really short pipeline which is ideal for the intended application.

Why is it so offensive to you that they may have worked from P54c as a base anyways? Why would they bother to reinvent the wheel as it were when they already had a perfectly suitable basis from which to work?

My point is that P54c is so ancient that any "derivative" featuring x86-64, SMT, advanced (modern) branch prediction, and a whole host of other features would be so far from the original that it could hardly be considered a derivative.

It is an analogy meant to convey design decisions, not a literal etymology.
 
On the console front, I heard a couple of months ago that Intel had approached MS with regard to Larrabee usage in the next XBox - now what that means I don't know since I didn't speak to a primary source on the matter. But IMO there would be definite benefits for both parties involved.

On second thought, if the PS4 guess is a Cell 32iv, a 1 Tflop machine (4 PPEs + 32 eSPEs), paired with a somewhat next-gen NVidia GPU (1-2 Tflop), then an XBox powered by Larrabee alone will need something on the order of a 2-3 GHz clock and 32-48 cores to match the flops. The POWER7 roadmap is a pair of 8-core CPUs at 4 GHz, which seems underpowered flops-wise compared to the Cell 32iv and Larrabee. Toss in the crazy idea of a pair of Larrabees, or Larrabee paired with an ATI GPU, and then you have something really interesting (but perhaps too expensive).
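Rough peak-flops math behind those core counts, assuming the disclosed 16-wide VPU and counting a multiply-add as two flops (so 32 flops per core per clock):

    #include <cstdio>

    int main()
    {
        const int lanes          = 16;  // 16-wide VPU per core
        const int flops_per_lane = 2;   // multiply-add counted as 2 flops

        for (int cores = 32; cores <= 48; cores += 16)
            for (int ghz = 2; ghz <= 3; ++ghz)
                printf("%2d cores @ %d GHz -> %.1f Tflop peak\n",
                       cores, ghz,
                       cores * lanes * flops_per_lane * ghz / 1000.0);
        return 0;
    }

32 cores at 2 GHz already lands at about 2 Tflop peak, which is in the same ballpark as the Cell 32iv plus GPU guess; 48 cores at 3 GHz would overshoot it comfortably.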
 