Larrabee delayed to 2011?

rpg.314

Veteran
http://www.brightsideofnews.com/new...er-left-intel-because-of-larrabee-fiasco.aspx
After we saw roadmaps for introduction of Larrabee pushed back all the way to 2011,

and according to documents we saw, it won't reach the market in the next 12 months.

WTH is going on with LRB and Intel? Are they gonna make their H1 2010 deadline or not? If they slip to H2, they'll quite likely have to put up with 32 nm shrinks of R8xx and prolly a 32 nm shrink of GT300 too. Fighting them with a 45 nm chip would be hard, but maybe the delay is because they will go straight to 32 nm, in which case they'll have an advantage if they are able to launch in, say, Q1.

EDIT:

A bit later I found this:

http://www.semiaccurate.com/2009/09/16/new-larrabee-silicon-taped-out-weeks-ago/

:oops:

If they have indeed quashed a number of bugs, I suppose they could launch in early 2010.
 
Most probably Intel wants a smaller die footprint for the monster, to be more competitive on the market, and consequently this will give them some extra time to polish the software graphics "pipeline" and the driver model.
After all, there are so many APIs to validate for. :D
 
I have high hopes for this project but realistically can't see them outperforming nVidia or ATI in graphics-only applications.

Would be interesting if we could have Larrabee and ATI/nVidia + some nice AMD or Intel CPU next year. Masochists like me can have fun programming 3 processors XD
 
I think we'll prolly see 32 nm "dumb shrinks". Mixed into the same SKUs as the 40 nm parts if need be.

What for? I mean, there's little doubt that AMD will go for 28 nm at Globalfoundries. Is the hassle of changing to 32nm@TSMC libraries worth it, even for a dumb shrink?
 
Larrabee is chasing a fast-moving target, and the chip seems more and more like an albatross. By the time it comes out, AMD might be coming to market with their Fusion GPGPU, and by then Nvidia will have who knows what.

Intel should stick to what they are good at, bribing OEMs um I mean making CPUs.
 
I sure hope they ditch the abominable x86 ISA. Why is this needed if the chip will be incompatible with their CPUs anyway?
 
By the time it arrives, they probably will have ditched the pathetic x86 ISA.

To me LRB demonstrates convincingly that you'll have to pry the x86 ISA from Intel's cold dead fingers. They do have an ARM license, though. Bribing ARM to let them slap a VPU onto a simple in-order ARM core might be the way to go, after all these promises of programmability.
 
@MODS:

Can you please change the thread title to "Larrabee delayed to 2011?". After all, it is not a fact yet, just Theo's piece.
 
I have high hopes for this project but realistically can't see them outperforming nVidia or ATI in graphics-only applications.
Raytracing Quake Wars in real-time really looks mighty impressive to me compared to the spinning cubes I've seen so far from other vendors. At least it shows they have some magic sauce that makes their architecture more efficient. The Radeon HD 4890 versus GeForce GTX 285 also proves that it's not all about the TFLOPS.

Whether the things that give it a massive advantage at raytracing will also help it with classic rasterization is a different question though. And while ATI is doing nothing to combat Amdahl's Law, the rumours surrounding GT300 being a total redesign suggest it still has a chance at stealing Intel's thunder...
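
For anyone who wants the refresher, Amdahl's Law is why the serial part of a frame matters so much once you scale to dozens of cores; the sketch below uses made-up illustrative numbers (95% parallel work, 32 cores), not actual Larrabee figures.

```latex
% Amdahl's Law: speedup S(n) on n cores when a fraction p of the work parallelizes.
\[
  S(n) = \frac{1}{(1 - p) + \dfrac{p}{n}}
\]
% Illustrative numbers only (not Larrabee data): p = 0.95, n = 32.
\[
  S(32) = \frac{1}{0.05 + \dfrac{0.95}{32}} \approx 12.5,
  \qquad
  \lim_{n \to \infty} S(n) = \frac{1}{1 - p} = 20
\]
```

In other words, even a 5% serial fraction caps a 32-wide chip at roughly a 12.5x speedup, which is why a redesign that attacks the serial/setup work could matter as much as raw ALU count.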
 
Raytracing Quake Wars in real-time really looks mighty impressive to me compared to the spinning cubes I've seen so far from other vendors. At least it shows they have some magic sauce that makes their architecture more efficient. The Radeon HD 4890 versus GeForce GTX 285 also proves that it's not all about the TFLOPS.

I just looked for a representative video and found this:
http://www.youtube.com/watch?v=mtHDSG2wNho

It doesn't look any better than Crysis IMHO.
Apart from the useless reflective floating spheres it would be hard to tell that this is raytracing.
 
It doesn't look any better than Crysis IMHO.
That's completely irrelevant. Other chips simply can't raytrace such a scene in real-time.

It indicates that Larrabee is vastly better at adapting to tasks other than the classic rasterization pipeline. But since the rasterization pipeline has also become highly programmable I expect them to also have certain advantages when rendering Crysis. The only question is whether that translates into an advantage in absolute numbers or not.
 
That's completely irrelevant. Other chips simply can't raytrace such a scene in real-time.
I'd say it's completely irrelevant whether other chips can't raytrace such a scene in real-time if Larrabee's raytracing doesn't look one iota better (or run faster) than other chips' classic rasterizing - and looking at this video it DOESN'T.

Apart from the surprisingly fluid framerate, this realtime rendering completely underwhelms. The lighting is extremely flat, surfaces look very flat and matte even up close, and the water looks incredibly sluggish and unrealistic (kind of what I'd imagine a sea of transparent mercury would look like).

It indicates that Larrabee is vastly better at adapting to tasks other than the classic rasterization pipeline.
Perhaps. But if it gives no real-world visual improvements (other than more accurate reflections) then it's a pointless advantage. A decidedly ho-hum-IQ raytraced scene (at not incredibly high framerates) simply doesn't push any buttons for me.

Geeking out on realtime raytracing is all good and well for some computing nerds. It might even find a decent niche in some professional market segment. But it won't last in the consumer market if it can't compete with traditional rasterizers in traditional rasterizing titles at the same or better cost/framerate as its competitors. Larrabee has yet to demonstrate it can do that, and considering how long ATI and NV have had to develop the traditional rasterizer (and game devs have had to develop software for them), I'd say Intel has its work cut out for it...

It's very interesting tech, but the longer Larrabee goes without delivering anything substantial, the more you doubt the viability of the entire concept.
 
I don't give a rat's ass about programmability if the chip can't deliver better IQ @ 60 fps than another chip that runs D3Dx.y, whichever way you render it (raytracing, photon mapping, radiosity, rasterization, funny rasterization algorithms...).

LRB/GPUs/SwiftShader etc. have to win on better IQ/$/Watt @ 60 fps, PERIOD. Nobody cares about features/programmability if they aren't in D3Dx.y. Unless it delivers on the above metric, it won't sell in enough volume to sustain R&D, even if you are Intel. Good luck sustaining it on the HPC market alone.
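
To make that last metric a bit more concrete, here's a minimal sketch of the kind of figure of merit being argued for. Everything in it is hypothetical: the function name, the cards, and all of the fps/price/power numbers are placeholders rather than real benchmarks, and image quality is simply assumed equal across the cards being compared.

```python
# Minimal sketch of an "IQ/$/Watt @ 60 fps" style figure of merit.
# All numbers below are hypothetical placeholders, not real benchmarks,
# prices, or board power figures; IQ is assumed equal and not modelled.

def figure_of_merit(avg_fps: float, price_usd: float, board_power_w: float,
                    target_fps: float = 60.0) -> float:
    """Score a card by frames delivered per dollar and per watt,
    but only if it clears the 60 fps baseline at all."""
    if avg_fps < target_fps:
        return 0.0  # misses the baseline, so features/programmability don't matter
    return avg_fps / (price_usd * board_power_w)

# Hypothetical cards, for illustration only.
cards = {
    "card_a": {"avg_fps": 72.0, "price_usd": 300.0, "board_power_w": 190.0},
    "card_b": {"avg_fps": 65.0, "price_usd": 250.0, "board_power_w": 150.0},
    "card_c": {"avg_fps": 55.0, "price_usd": 200.0, "board_power_w": 120.0},
}

for name, c in cards.items():
    print(name, round(figure_of_merit(**c), 6))
```

Whether you divide by price and power or weight them some other way is a design choice; the point of the sketch is just that anything below the 60 fps baseline scores zero, no matter how programmable it is.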
 