Intel Larrabee set for release in 2010

I can't say I agree with the "education" idea, though, as I think it's difficult to educate people on something which does not, and may never, exist--although, it's certainly true that there's a lot of "educational info" out there on things like UFO's, etc., for the people inclined to think they are real...;)
I think the difference here is that we're all certain Larrabee will exist, and thus the education point becomes entirely relevant.
 
I can't say I agree with the "education" idea, though, as I think it's difficult to educate people on something which does not, and may never, exist--although, it's certainly true that there's a lot of "educational info" out there on things like UFO's, etc., for the people inclined to think they are real...;)

I was thinking more devrel kind of "education", given the lead times typically involved in such things. In my (early, assumption-laden) opinion, they are going to be facing the largest devrel challenge in graphics/gaming since MS launched DX. Whether they will step up to that challenge with appropriate resources to the scale of the mountain they aim to climb is, in my book, one of the more interesting questions here.
 
I hate to spoil the fun, but I think there's a possibility Larrabee won't be the $300 graphics chip the rumours say is coming in 1-2 years. Rather, it could be a (much) scaled-up version of the G965. Their G35 presentation says it is scalable to 128 shaders (up from 8 on the G965). With proper drivers, architecture optimization, dedicated fast memory, and more shader firepower, it may be possible.
 
http://www.pcinlife.com/ours/edison/larrabee_newest.png

The Larrabee HPC version
 
My thoughts:

I think Larrabee will be ready in late 2008 with a GDDR5 interface:
http://www.theinquirer.net/?article=38011

Will have a DX10 driver + custom parallel programming using Ct:
http://download.intel.com/pressroom/kits/research/Flexible_Parallel_Programming_Ct.pdf
or perhaps OpenMP directly.
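Something like this plain OpenMP loop is what I mean by "directly", purely as an illustration (nothing Intel has shown; the per-pixel work is a stand-in):

```cpp
// Illustrative only: a data-parallel "shading" loop farmed out over many
// x86 cores with a single OpenMP pragma. The work per pixel is a stand-in.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int numPixels = 1 << 20;
    std::vector<float> shade(numPixels);

    #pragma omp parallel for schedule(static)   // one chunk of pixels per core
    for (int i = 0; i < numPixels; ++i) {
        float x = i * (1.0f / numPixels);
        shade[i] = 0.5f + 0.5f * std::sin(40.0f * x);   // fake shading work
    }

    std::printf("shade[0] = %f\n", shade[0]);
    return 0;
}
```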

Intel is going to patch the DX10 shader system in a way that lets us fire a "trace/gatherPoints/computeFlux" HLSL/GLSL instruction from the shaders. They will release (or officially support) Daniel Pohl's realtime raytracing framework as a separate API (unless Microsoft is starting to design a pure raytracing API or is incorporating it into DX11).

My doubt is whether Larrabee will be a "discrete" graphics card or a CPU/GPU hybrid with general-purpose cores (I suspect it's the latter).

On the other hand, we should see what Ageia PhysX2 could do in the future (they are working on HW-accelerated ray batches), the new SaarCor FPGA prototype (http://graphics.cs.uni-sb.de/SaarCOR/DynRT/DynRT.html, that's the old one running at 60/90MHz) and the ARTVPS AR500 processors (http://www.artvps.com/page/109/raybox.htm)
 
Will have a DX10 driver + custom parallel programming using Ct:
http://download.intel.com/pressroom/kits/research/Flexible_Parallel_Programming_Ct.pdf
or perhaps OpenMP directly.
That's very interesting; it's the first time I've seen this document.
I can certainly say that I'm more excited about the direction Intel is taking with their GPGPU language than about what NVIDIA is doing with CUDA, even though there's nothing in CUDA that prevents NVIDIA from supporting the kind of stuff the Ct compiler and runtime can do (on paper...)
 
That's very interesting; it's the first time I've seen this document.
Ct is good because it simplifies the OpenMP model a lot... but I would like to use standard things to manage Larrabee, not learn yet another parallel language/model :devilish:

I can certainly say that I'm more excited about the direction Intel is taking with their GPGPU language than about what NVIDIA is doing with CUDA, even though there's nothing in CUDA that prevents NVIDIA from supporting the kind of stuff the Ct compiler and runtime can do (on paper...)
CUDA lacks a function call stack (all functions are inlined). Without one you need to do very nasty things to get spatial structures working easily (although there are some stackless-traversal papers on the Internet). I hope the final Larrabee language will let us use a call stack (and the full x86 feature set) to program it more easily. And if we could use OpenMP, even better, because it's a standard.
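To make that concrete, here's roughly the kind of traversal I mean (my own toy node layout; real leaf handling is stubbed out). With ordinary function calls and a per-thread stack it's trivial; without them you end up with the stackless/restart tricks those papers describe:

```cpp
// Toy BVH walk with a small explicit stack and a helper function call.
#include <algorithm>
#include <cstdio>

struct Ray { float org[3], dir[3]; };
struct BVHNode {
    float bmin[3], bmax[3];
    int   left, right;   // child indices; -1 marks a leaf
};

// Standard slab test: does the ray hit the node's bounding box?
static bool hitBox(const BVHNode& n, const Ray& r) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.dir[a];
        float t0 = (n.bmin[a] - r.org[a]) * inv;
        float t1 = (n.bmax[a] - r.org[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Depth-first traversal: push children, pop, test, repeat.
static bool traverse(const BVHNode* nodes, const Ray& ray) {
    int stack[64];
    int top = 0;
    stack[top++] = 0;                    // push root
    while (top > 0) {
        const BVHNode& n = nodes[stack[--top]];
        if (!hitBox(n, ray)) continue;
        if (n.left < 0)                  // leaf: real code would test triangles here
            return true;
        stack[top++] = n.left;
        stack[top++] = n.right;
    }
    return false;
}

int main() {
    // Tiny two-level tree: root with two leaf children.
    BVHNode nodes[3] = {
        {{-1,-1,-1}, {1,1,1},  1,  2},   // root
        {{-1,-1,-1}, {0,1,1}, -1, -1},   // left leaf
        {{ 0,-1,-1}, {1,1,1}, -1, -1},   // right leaf
    };
    Ray r{{-2, 0, 0}, {1, 0, 0}};
    std::printf("hit = %d\n", traverse(nodes, r));
    return 0;
}
```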

Personally I would like Larrabee to be a 48-generic-core CPU... but I bet it will be something like 2-4 generic cores + a lot of specialized SIMD units, like Cell.

Also, I would like a free Intel raytracing API with trace/gatherPoints/computeFlux functions. The trace just tests triangle hits (any hit/closest hit, retrieving the barycentric coordinates of the hit and the distance). The gatherPoints is like RenderMan's (a hemisphere ray test). The computeFlux averages photon-map samples over a surface disc for realtime global illumination. This API would be ready to manage lots of polygons (millions, like the PS3 car in http://www.cro-xp.org/tuning-video/video/8B2-9rNBvIg/Lamborghini-raytraceado) and fully dynamic scenes. Daniel, are you there to explain a bit more? :p
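Purely as a sketch of what I'm imagining, the interface could be as simple as this (all types and names are invented by me, of course, just the shape of the three entry points):

```cpp
// Hypothetical raytracing API surface: declarations only, to make the
// trace / gatherPoints / computeFlux idea concrete.
struct RayHit {
    int   triId;        // -1 if nothing was hit
    float t;            // distance along the ray
    float baryU, baryV; // barycentric coordinates of the hit point
};

struct GatherSample {   // one hemisphere sample returned by gatherPoints()
    float position[3];
    float normal[3];
    float radiance[3];
};

// Closest-hit (or any-hit, via the flag) query against the scene's triangles.
RayHit trace(const float origin[3], const float direction[3], bool anyHit);

// RenderMan-style hemisphere gather around a surface point: shoot 'count'
// rays over the hemisphere defined by 'normal' and return what they see.
int gatherPoints(const float point[3], const float normal[3],
                 int count, GatherSample* outSamples);

// Average photon-map samples over a disc of radius 'r' around the point,
// returning an irradiance estimate for realtime global illumination.
void computeFlux(const float point[3], const float normal[3],
                 float r, float outIrradiance[3]);
```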
 
According to the graphic, Larrabee's top model will be a 32-core, 4-thread, in-order, 16-wide-SIMD design.
http://translate.google.com/translate?u=http%3A%2F%2Fpc.watch.impress.co.jp%2Fdocs%2F2007%2F0611%2Fkaigai364.htm&langpair=ja%7Cen&hl=en&safe=off&ie=UTF-8&oe=UTF-8&prev=%2Flanguage_tools

48 in 2010 :eek:
So I bet it will really be 8 in 2010 :p

Btw, I think the 16-element SIMD is there to pack 4x4 coherent rays at once.
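I.e., something like a structure-of-arrays packet where each of the 16 SIMD lanes carries one ray (just my guess at the layout):

```cpp
// Illustrative ray-packet layout: one ray per SIMD lane, 16 lanes total.
struct RayPacket16 {
    float orgX[16], orgY[16], orgZ[16];   // ray origins, one per SIMD lane
    float dirX[16], dirY[16], dirZ[16];   // ray directions
    float tMax[16];                       // current closest-hit distance
    // A box or triangle test then runs the same arithmetic on all 16 lanes
    // at once, with a 16-bit mask tracking which rays are still active.
};
```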

What scares me is the "the operational demonstration of Larrabee is released in 2008" sentence.
 
Thanks for the Ct link. Glancing over it the language looks more like Peakstream than CUDA. You have just normal C/C++ code with some vector stuff scattered in it, and the compiler pulls out the vector portions. I'm not sure which method I like better, having well defined kernels (CUDA) or depending on the compiler to figure out what the kernels are (Ct and what was Peakstream).
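Roughly, the difference between the two styles looks like this (a toy sketch, neither real CUDA nor real Ct syntax, just the idea):

```cpp
// Toy contrast of the two programming models.
#include <cmath>
#include <cstddef>
#include <vector>

// "Well-defined kernel" style: the per-element function is explicit, and a
// launcher (a plain loop here, standing in for a parallel launch) applies it.
float gammaKernel(float v) { return std::pow(v, 1.0f / 2.2f); }

void launch(std::size_t n, const float* in, float* out) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = gammaKernel(in[i]);
}

// "Compiler finds the kernel" style: you just write the loop (or a vector
// expression) and rely on the compiler/runtime to lift out the parallel part.
void gammaImplicit(std::size_t n, const float* in, float* out) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = std::pow(in[i], 1.0f / 2.2f);
}

int main() {
    std::vector<float> src(256, 0.5f), dst(256);
    launch(src.size(), src.data(), dst.data());
    gammaImplicit(src.size(), src.data(), dst.data());
    return 0;
}
```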
 
Speaking of Peakstream... has there been any news concerning the future of their commercial product?
 
The Ct whitepaper is very interesting. They do hint at giving the programmer more low-level access, which leads me to believe that Ct might allow the programmer to express a program in vector intrinsics as well as to write custom kernels.
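For comparison, this is what expressing that kind of loop with today's x86 vector intrinsics looks like (plain 4-wide SSE; presumably a 16-wide Larrabee vector ISA would get an analogous intrinsic set):

```cpp
// Hand-written SSE version of y[i] = a * x[i] + y[i], four floats per step.
#include <cstdio>
#include <vector>
#include <xmmintrin.h>

// n must be a multiple of 4 for this simple version.
void saxpy_sse(int n, float a, const float* x, float* y) {
    __m128 va = _mm_set1_ps(a);
    for (int i = 0; i < n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
}

int main() {
    std::vector<float> x(64, 1.0f), y(64, 2.0f);
    saxpy_sse(64, 3.0f, x.data(), y.data());
    std::printf("y[0] = %f\n", y[0]);   // 3*1 + 2 = 5
    return 0;
}
```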
 
It seems Intel said at IDF 2007 that they are going to launch Larrabee as a "workstation/server chip" by the middle of 2008:

http://www.fudzilla.com/index.php?option=com_content&task=view&id=3091&Itemid=1

Btw, I think they bought Havok to improve their upcoming physics and raytracing API.
http://arstechnica.com/news.ars/pos...ysics-engine-for-forthcoming-gpu-product.html

You will *not* quote Fudzilla as a viable news source. Can someone add this to the rules please?
 