Intel Larrabee set for release in 2010

More seriously though, why do companies divulge their roadmaps three years ahead? That never made sense to me, unless you're in a situation where you have nothing else to lose. AMD, I can understand. Intel doesn't seem to be in that position though.

Easy answer and one that bears constant repetition. The point of such statements has nothing to do with what the state of things will be like in three years; rather, it has to do with how the company is perceived by investors *today*. High-tech companies battle as much for mind share, never forget, as they do for market share.

As you correctly surmise, talking in glowing, hyped-up terms about technology you are not even close to having finished, let alone close to being able to ship, has zero value for today's consumer. Nor is it especially informative of the actual state of things to come, as many, many such "leaked" product development cycles are discarded before they are "scheduled" to arrive, in deference to the "something else" that actually ships at the appointed time.

The point of all such predictive chatter is to hype the company today, and whether said technology actually ships is often, if not always, completely beside the point. While most high-tech companies do this sort of thing from time to time, Intel seems to me to be one of the worst offenders in this regard. Basically, these days when I hear chatter that "Intel says this is what it will be doing in three years" I just let it go in one ear and right out of the other...;)
 
Easy answer and one that bears constant repetition. The point of such statements has nothing to do with what the state of things will be like in three years; rather, it has to do with how the company is perceived by investors *today*.

While I don't disagree with this, I think it isn't always about investors. If you need major support elsewhere in the chain you need to rally/interest/educate those other sectors too, and that also takes time. Whether it be ISVs, OS API, tools makers, whatever.
 
I'll respond to Arun's points about JIT later, but I wanted to throw this related point out there.

It seems to me that Arun and others are correct to surmise that Larrabee won't really be competitive with whatever NVIDIA has out at launch when it comes to traditional raster rendering. In fact, I sort of take that as a given, simply because in spite of whatever generalization trajectory NVIDIA is now on, their leading-edge part will always be more specialized than Larrabee.

The promise of Larrabee as a GPU, it seems to me, is in the real-time ray tracing stuff that Intel is developing, and in how that combines with raster and other techniques to yield the possibility of new types of eye candy (or physics, AI, etc.). At least, Intel spent enough time hyping RTRT at their Research@Intel day that this is clearly the direction they're headed with Larrabee.

So what I envision as the post-Larrabee horserace won't so much be an Intel Larrabee GPU vs. an NVIDIA GXX GPU duking it out for FPS scores on the exact same game engine, as it will be Larrabee (which is "GPU" only in scare quotes) plus whatever "software" engine someone builds with it vs. a bona fide, raster-rendering GPU from NVIDIA plus the state-of-the-art hardware-accelerated raster engine at the time.

My ultimate point is this: I'm betting that when people look at the kinds of games you can build with Larrabee vs. the kinds of games you can build with the NVIDIA GXX, they may actually like the Larrabee option.
 
My ultimate point is this: I'm betting that when people look at the kinds of games you can build with Larrabee vs. the kinds of games you can build with the NVIDIA GXX, they may actually like the Larrabee option.
I think you might be putting some misplaced faith in Larrabee--it has to win meaningful market share to get developers to write for it (since I'm guessing it's not just going to be Direct3D), but there have to be games for it in order for anyone to buy it as a graphics board.

Also, it's not completely impractical for a GPU to be used as a raytracer, and that will probably be a lot more true by the timeframe we're talking about for Larrabee (which I've put at absolutely no earlier than Q1 09, more likely Q3 or Q4).

I'm not trying to pooh-pooh Larrabee, but it's going to have to be so much better than any rasterizer out there to have a chance of catching on at this point.
 
Regarding raytracing, I'm really not convinced Larrabee will have any advantage against G100/R800/R900. Last I heard, most of its processing power is still SIMD, and has all the related disadvantages. How is that any different from G80/R600? And worse, why would anyone believe that this will be superior to true next-gen GPUs?

I don't think Intel's focus on raytracing is because they are confident they will be incredibly good at it. Quite on the contrary, it is because they predict NVIDIA and AMD will be bad at it. But why would anyone believe that? Unless Larrabee has fixed-function units focused on raytracing, that is completely ridiculous.

There is nothing magical about Larrabee that makes it superior for raytracing to other SIMD processors out there, besides the fact that they *might* have some fine-grained control mechanisms in addition to the SIMD processing power. But what's to prevent NVIDIA and AMD from adding that to their GPUs in due time? I'm not even convinced it matters so much, anyway.
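To make the divergence point concrete, here's a rough scalar sketch (plain C, purely illustrative; the trace_step() below is a made-up stand-in for real BVH traversal, not anyone's actual code) of why wide SIMD and incoherent rays don't mix well:

/* Illustrative only: scalar C emulating how an 8-wide SIMD unit traces a
   packet of rays.  trace_step() just terminates rays at random depths so
   the divergence is visible. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define WIDTH       8    /* SIMD width of the hypothetical unit */
#define MAX_BOUNCES 16

/* Placeholder: returns false once this ray has terminated. */
static bool trace_step(int lane)
{
    (void)lane;
    return (rand() % 4) != 0;   /* ~25% chance of terminating per bounce */
}

int main(void)
{
    bool active[WIDTH];
    for (int i = 0; i < WIDTH; i++) active[i] = true;

    for (int bounce = 0; bounce < MAX_BOUNCES; bounce++) {
        int live = 0;
        for (int lane = 0; lane < WIDTH; lane++) {   /* one SIMD "issue" */
            if (!active[lane]) continue;             /* masked-off lane  */
            active[lane] = trace_step(lane);
            if (active[lane]) live++;
        }
        /* Real SIMD hardware keeps issuing the whole packet until the
           slowest ray finishes, so the masked-off lanes are wasted work --
           the same problem whether the SIMD unit sits in Larrabee, G80 or
           R600. */
        printf("bounce %2d: %d of %d lanes doing useful work\n",
               bounce, live, WIDTH);
        if (live == 0) break;
    }
    return 0;
}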

Then think about something else: raytracing doesn't shade your scene. It replaces one specific and small part of the pipeline, which is rasterization. And that's it. Even if it was MUCH faster at raytracing for some kind of mystical reason, it would still be MUCH slower at shading. So unless your definition of great graphics is chrome (read: full reflection) everywhere, and I'm sure it is for some people, your super-fast raytraced scene is just going to look like shit. Good luck getting momentum with both developers and gamers in that case, except for niche games that nobody will ever play anyway.
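Put another way, as a toy sketch (hypothetical function names throughout, nothing from a real renderer): swapping the rasterizer for a ray caster only changes how the nearest surface per pixel is found, while the shade() call, where most of the per-pixel cost lives, still runs either way.

/* Toy per-pixel loop.  Only the visibility step differs between the two
   paths; shade() -- lighting, texturing, materials -- runs regardless of
   which path produced the hit.  The stubs stand in for real code. */
#include <stdio.h>

typedef struct { int valid; float depth; int material; } hit_t;

/* Classic raster visibility: nearest surface per pixel (stub). */
static hit_t rasterize_pixel(int x, int y) { hit_t h = {1, 1.0f, x ^ y}; return h; }

/* Ray-traced visibility: same job, different algorithm (stub). */
static hit_t raycast_pixel(int x, int y)   { hit_t h = {1, 1.0f, x ^ y}; return h; }

/* All the expensive per-pixel work lives here (stub). */
static float shade(hit_t h) { return 0.5f * h.depth + 0.01f * (float)h.material; }

static void render(int w, int h, int use_raytracer, float *out)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            hit_t hit = use_raytracer ? raycast_pixel(x, y)
                                      : rasterize_pixel(x, y);
            out[y * w + x] = hit.valid ? shade(hit) : 0.0f;
        }
}

int main(void)
{
    static float fb[64 * 64];
    render(64, 64, 1, fb);   /* "ray traced" image */
    render(64, 64, 0, fb);   /* rasterized image   */
    printf("both paths shaded %d pixels\n", 64 * 64);
    return 0;
}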

So either that's not Intel's strategy, or they should give up on their strategy immediately, before it is too late and the entire world mocks them for the next 5000 years. Well, I guess they're used to that with Itanium already, but at least they had contractual obligations as an excuse there... Note that I'm only talking about graphics here; HPC is another question entirely and the prospects there are obviously a lot more interesting, but they depend a lot on Intel's execution, as well as NVIDIA's and AMD's.

If Intel does integrate efficient fixed-function units for graphics, things might be very different, but we'll see about that. The initial diagrams did indicate that, after all, although it wasn't clear what they were all about... So while I'm being quite harsh here, do keep in mind that it is all with a big 'IF', and based on the correctness of your assumptions.
 
The promise of Larrabee as a GPU, it seems to me, is in the real-time ray tracing stuff that Intel is developing, and in how that combines with raster and other techniques to yield the possibility of new types of eye candy (or physics, AI, etc.). At least, Intel spent enough time hyping RTRT at their Research@Intel day that this is clearly the direction they're headed with Larrabee.
They spent a lot of time hyping RT but they really didn't tell us why we should get hyped about it. I don't think gamers will switch to Larrabee just because they can get local reflection here and a bit of dynamic ambient occlusion there, given that rasterization can fake both right now.
Moreover I wouldn't be surprised if, in 2-3 years, NVIDIA and AMD GPUs allow us to implement an efficient ray tracer on them, even though I'd like to see some non-uniform rasterization support from them.

My ultimate point is this: I'm betting that when people look at the kinds of games you can build with Larrabee vs. the kinds of games you can build with the NVIDIA GXX, they may actually like the Larrabee option.
I think you need to give some practical examples here, because I don't see what RT will enable us to do that we can't do right now (or in 2 years).

Marco
 
So what I envision as the post-Larrabee horserace won't so much be an Intel Larrabee GPU vs. an NVIDIA GXX GPU duking it out for FPS scores on the exact same game engine, as it will be Larrabee (which is "GPU" only in scare quotes) plus whatever "software" engine someone builds with it vs. a bona fide, raster-rendering GPU from NVIDIA plus the state-of-the-art hardware-accelerated raster engine at the time.

My ultimate point is this: I'm betting that when people look at the kinds of games you can build with Larrabee vs. the kinds of games you can build with the NVIDIA GXX, they may actually like the Larrabee option.

I wonder what Microsoft's view on this battle might be? Given their vested interest in owning and controlling The Platform through DirectX I can't see them being happy to sit and watch Intel and NVIDIA duke it out, then siding with the winner. What you seem to be describing is a world in which NVIDIA provides hardware acceleration for MS's API, whilst Intel promotes an entirely different way of programming game engines (potentially bypassing MS's platform entirely).
 
OpenMP is a mixed bag in my experience. My personal feeling is that one of its major failings is that it's too easy to use (paradoxical as that may seem). It has some serious shortcomings in certain areas (e.g. memory placement). It's a great way to scale from 1 to 10 threads; it's not so hot going from 10 to 100. IMO it hides too much from the programmer for it to be viable for extreme scalability on a NUMA architecture.
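For anyone who hasn't used it, a minimal sketch of what "too easy" means in practice: one pragma and you're parallel, but nothing in the code says anything about where the data ends up on a NUMA box (that's left to the OS, usually via first-touch placement).

/* Minimal OpenMP example (C, compile with -fopenmp or equivalent).
   The single pragma spreads the loop over however many threads the
   runtime decides to use; memory placement across NUMA nodes is left
   entirely to the OS, which is the hidden detail that starts to hurt
   past a handful of sockets. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1 << 20 };
    static double a[N], b[N];   /* static, so zero-initialized */

    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i];

    printf("ran with up to %d threads\n", omp_get_max_threads());
    return 0;
}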

Hi nutball,

This is OT - I'm just starting to play with OpenMP a bit. There's one area that's really confusing me atm: how are OpenMP runtime errors communicated to a user program? For instance, say the runtime fails to create the desired number of threads, or a mutex etc... are there any standard ways to hook in an error handler (I'm not seeing any), or does the program just fall over and die?

So I guess the whole error handling aspect of OpenMP and actually also OpenMPI seems a bit MIA to me... is this just newbness on my part?
 
This is OT - I'm just starting to play with OpenMP a bit. There's one area that's really confusing me atm: how are OpenMP runtime errors communicated to a user program? For instance, say the runtime fails to create the desired number of threads, or a mutex etc... are there any standard ways to hook in an error handler (I'm not seeing any), or does the program just fall over and die?

Basically the OpenMP standard presumes that the compiler will map the OpenMP directives onto an underlying threading infrastructure, and broadly speaking hands off all the exception-handling issues to that infrastructure (without really giving you, the application programmer, the chance to intercept exceptions in any standard-compliant way).

In the environments I've worked in, the OpenMP run-time presumes that it's inconceivable that one would run out of resources such as threads or mutexes (which is a 99% correct assumption), and dies horribly if it happens that you do.
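Which is why, in practice, about the only portable "error handling" you get is checking after the fact how many threads the runtime actually gave you; something along these lines (a minimal sketch, using nothing beyond what the spec guarantees):

/* The OpenMP spec allows the runtime to hand you fewer threads than
   requested and defines no error callback, so the best you can do
   portably is ask afterwards how many you actually got. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    int requested = 64, got = 0;

    omp_set_num_threads(requested);

    #pragma omp parallel
    {
        #pragma omp single
        got = omp_get_num_threads();
    }

    if (got < requested)
        fprintf(stderr, "asked for %d threads, got %d -- no other "
                        "notification is available\n", requested, got);
    return 0;
}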
 
The promise of Larrabee as a GPU, it seems to me, is in the real-time ray tracing stuff that Intel is developing, and in how that combines with raster and other techniques to yield the possibility of new types of eye candy (or physics, AI, etc.). At least, Intel spent enough time hyping RTRT at their Research@Intel day that this is clearly the direction they're headed with Larrabee.

So what I envision as the post-Larrabee horserace won't so much be an Intel Larrabee GPU vs. an NVIDIA GXX GPU duking it out for FPS scores on the exact same game engine, as it will be Larrabee (which is "GPU" only in scare quotes) plus whatever "software" engine someone builds with it vs. a bona fide, raster-rendering GPU from NVIDIA plus the state-of-the-art hardware-accelerated raster engine at the time.

My ultimate point is this: I'm betting that when people look at the kinds of games you can build with Larrabee vs. the kinds of games you can build with the NVIDIA GXX, they may actually like the Larrabee option.

I still want to know what they are going to do about API support. For all that the "as soon as x86 gets competitive they win because of their huge tail of infrastructure" argument works for them on GPGPU-type apps, it works against them on gaming/graphics apps if they aren't using the DX infrastructure. Have they started talking to MS about integrating what they need?
 
Dave said:
Why presume that it wouldn't / couldn't use DX?
The argument used above is that Larrabee will make NVIDIA and AMD obsolete because it's x86 and it can do ray tracing. As soon as you go DX, you lose both of these "advantages". However, to be compelling for gaming, Larrabee must support DX-next.
 
But that is an either/or situation. You can't foresee a scenario where it could offer great HPC capabilities, good RayTracing capabilities and passable/acceptable DX rendering?
 
I'm not presuming it can't use DX at all. I asked the question because I don't see any reason to presume it *can*, or that if it does that the DX support will be the competitive advantage they are relying on. Most pointedly, I want to know how they intend ISVs to integrate the ray-tracing stuff on the software side. . . thru bog standard x86, thru DX, thru some proprietary api/sdk/etc that leverages x86, or thru some proprietary software stuff that leverages something proprietary on the hardware side? (Did I miss an option? :smile: )
 
I'm not presuming it can't use DX at all. I asked the question because I don't see any reason to presume it *can*, or that if it does that the DX support will be the competitive advantage they are relying on. Most pointedly, I want to know how they intend ISVs to integrate the ray-tracing stuff on the software side. . . thru bog standard x86, thru DX, thru some proprietary api/sdk/etc that leverages x86, or thru some proprietary software stuff that leverages something proprietary on the hardware side? (Did I miss an option? :smile: )
This is entirely key to any headway this architecture makes in the consumer graphics market. It's also pretty clear that we just don't know enough about the architecture to even guess at its performance at this point. A huge amount of detail needs to be fleshed out before we can find a fit for it in the graphics space.
 
Maybe they'll sell first gen as a combo CPU and ray-tracing accelerator. Sort of an integrated PhysX. . .and you still use your Radeon/GeForce for the DX stuff.

When I first started to write that, I was 1/2 playing around. The more I think about it tho, the more it seems to me it might actually be the best transition strategy open to them as they build ISV support. It leverages the enthusiast market's penchant for dropping large sums on niche types of advantages (at first anyway), while not having to compete head-to-head for DX performance with the best NV and AMD can provide in the timeframe. It also allows (encourages even) ISVs to build in such support as an add-on/switch-on for the enthusiasts, which is likely to be a necessary strategy for them as well as they build their market penetration.
 
Maybe they'll sell first gen as a combo CPU and ray-tracing accelerator. Sort of an integrated PhysX. . .and you still use your Radeon/GeForce for the DX stuff.
Do you mean using Larrabee as a gamer's CPU?
Larrabee would run right into Gesher with the current time frame.
Intel's own product lines would bump heads.

It's not a new situation, but one that would more likely threaten Larrabee (niche, maybe a performance leader in certain workloads) than Gesher (broad-market, broadly consistent workload performance on new and old apps).
 
While I don't disagree with this, I think it isn't always about investors. If you need major support elsewhere in the chain you need to rally/interest/educate those other sectors too, and that also takes time. Whether it be ISVs, OS API, tools makers, whatever.

Right--it isn't "only" about investors, although this kind of claptrap has proven itself very effective "analyst bait" many, many times in the past. Usually, when you see an analyst talking in glowing terms about some future technology that isn't close to being finished or close to production, you know right away that this kind of PR is the source of his so-called "inside information." What's in it for the analyst, of course, is that when people read this sort of stuff, and believe it--which many of them do--then they go out and buy stock based on the "inside information" furnished by the analyst. This has the effect of personally and professionally enriching the analyst's reputation, which is exactly why this sort of thing is done regularly. Whether or not the projected product ever ships at the appointed time becomes entirely secondary and amounts to little or nothing in the end as few will remember who said what about a product three years before it was *unofficially* scheduled to see the light of day.

I can't say I agree with the "education" idea, though, as I think it's difficult to educate people on something which does not, and may never, exist--although, it's certainly true that there's a lot of "educational info" out there on things like UFOs, etc., for the people inclined to think they are real...;)
 