Larrabee delayed to 2011?

;) Let's look at it from another angle: if a Larrabee derivative is integrated into every CPU starting from 2012 (providing DX11 support), then all game devs suddenly have new "minimum system requirements". It doesn't matter that the games don't look as great as with add-in boards, because the installed base is suddenly huge.
 
;) Let's look at it from another angle: if a Larrabee derivative is integrated into every CPU starting from 2012 (providing DX11 support), then all game devs suddenly have new "minimum system requirements". It doesn't matter that the games don't look as great as with add-in boards, because the installed base is suddenly huge.

It probably won't matter either that CPUs will grow by orders of magnitude in die area and power consumption :oops:
 
Looking at the Larrabee presentation, I wonder how complicated it would be for nVidia/ATI to rasterize the rest and use ray-tracing only on the water. Both have presented hybrid approaches at SIGGRAPH (with selective ray-tracing), and I would be surprised if those didn't provide better performance than what we see here. It's not like Larrabee is magic compared to current GPUs. True, it can run non-graphics workloads (read: stuff which AMD/nVidia cannot show at all), but ray-tracing seems to work on GPUs pretty well already (see OptiX, OTOY stuff), so if they present it as a differentiator, I would expect either rock-solid performance or some really complicated stuff (like AO, to showcase the gather performance, instead of coherent reflections ...)
 
I'd say it's completely irrelevant whether other chips can't raytrace such a scene in real-time if Larrabee's raytracing doesn't look one iota better...
It's a tech demo. The first demos of pixel shaders also didn't look impressive at all, but nowadays it's unthinkable not to have support for them. You really have to look at what has been technically achieved here.

I'm not saying we need raytracing support per se, but we do need the kind of flexibility that allows raytracing. Mainstream consumers don't care about rendering Crysis at 200 FPS at 3840x2400. Their monitors (and eyes) only do 60 FPS at 1920x1200. For a long time achieving higher framerates and higher resolutions was the main goal, but this 'fillrate-race' of GPUs is about to end. You can no longer double the number of resources and expect the framerate to double. The individual tasks are getting smaller and the number of dependencies increases.
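A crude way to picture that wall (my own back-of-the-envelope framing, not something from the demo or from Intel): plug a small non-parallel fraction of the frame into Amdahl's law and see what doubling the hardware actually buys you.

Code:
#include <cstdio>

// Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N)
// p = fraction of the frame that scales with extra units (assumed),
// N = factor by which the resources are multiplied.
static double speedup(double p, double N) { return 1.0 / ((1.0 - p) + p / N); }

int main() {
    const double p = 0.95;  // assume 95% of the frame parallelizes perfectly
    const double factors[] = {2, 4, 8, 16};
    for (double n : factors)
        std::printf("%2.0fx resources -> %.2fx faster\n", n, speedup(p, n));
    // Prints ~1.90x, ~3.48x, ~5.93x, ~9.14x: going from 8x to 16x the
    // hardware only adds another ~1.5x, so doubling no longer doubles fps.
    return 0;
}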

An architecture like Larrabee is far better at coping with this, and no matter how you feel about the actual graphics that's proven by their raytracing demo. So even if raytracing itself is not the future, Larrabee allows the developers to go in any direction they like, unrestricted by rasterization APIs.

Obviously these things don't happen overnight, so other vendors still have plenty of time to follow the same route once it actually starts to matter. But Intel certainly has a head start.
 
...but ray-tracing seems to work on GPUs pretty well already (see OptiX)...
Please. Ten pool balls don't really equal a Quake Wars scene. Writing a shader that checks for intersection with all primitives in the scene is really not that hard, but you need a really low number of primitives to make it run in real-time. By the way, ray-sphere intersection is much simpler than ray-triangle intersection.
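For concreteness, here's a rough CPU-side sketch of both tests (my own code, not taken from OptiX or any of the demos; brute force, no acceleration structure, which is exactly why this approach only scales to a handful of primitives). The sphere test boils down to a single quadratic, while the triangle test (the well-known Moller-Trumbore form) needs two cross products, a determinant and two barycentric rejections per primitive.

Code:
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray-sphere: solve |o + t*d - c|^2 = r^2, a single quadratic in t.
bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r, float& t) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d), cc = dot(oc, oc) - r * r;
    float disc = b * b - dot(d, d) * cc;
    if (disc < 0.0f) return false;
    t = (-b - std::sqrt(disc)) / dot(d, d);
    return t > 0.0f;
}

// Ray-triangle (Moller-Trumbore): determinant plus two barycentric tests.
bool hitTriangle(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 s = sub(o, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;
}

int main() {
    Vec3 origin{0, 0, 0}, dir{0, 0, 1};
    float t;
    bool s   = hitSphere(origin, dir, Vec3{0, 0, 5}, 1.0f, t);                              // hits at t = 4
    bool tri = hitTriangle(origin, dir, Vec3{-1, -1, 3}, Vec3{1, -1, 3}, Vec3{0, 1, 3}, t); // hits at t = 3
    std::printf("sphere hit: %d, triangle hit: %d\n", (int)s, (int)tri);
    return 0;
}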
 
That's completely irrelevant. Other chips simply can't raytrace such a scene in real-time.
How do you know?

It indicates that Larrabee is vastly better at adapting to tasks other than the classic rasterization pipeline.
It could also indicate that others don't have any interest in spending effort to port a game with 10-year-old visuals. Seriously, what's the upside?

"Look! Not only can we do what we did 10 years ago with the same quality, we're now also validating the arguments of Intel by moving the battle to their turf!"

By the way, ray-sphere intersection is much simpler than ray-triangle intersection.
Maybe that's why the 'interesting' visual effects in that video are reflecting spheres?
 
Obviously these things don't happen overnight, so other vendors still have plenty of time to follow the same route once it actually starts to matter. But Intel certainly has a head start.

As long as Intel doesn't have anything competitive to sell, I would not grant them a head start.
 
It was a tech demo... I would expect that anyone looking at it is interested in the technology, not the art. Remember this conference/demo was targeted at developers, not consumers.
 
Edit: Apparently this was at 10% of the real chip's performance. If that's true, then it's definitely more impressive http://www.semiaccurate.com/2009/09/22/larrabee-breaks-cover-last/

No, the chip had 10% of its hoped-for perf.
Oh wait, one GPU running at sub-10% of hoped-for performance beating 16 Xeon cores is a huge step forward.

But this was A6 silicon, not B0. So maybe B0 will be able to do real water simulation. ;)

But honestly, the visuals sucked big time in that trailer. I don't care what you use (hw/algorithm/architecture) to render stuff. I want better IQ @ 60 fps, not better FUD.
 
It's reported on every tech webpage in existence, so it's not just developers who see this...and they knew that.
That doesn't change the fact that you need to realize that it was targeted at developers, regardless of who decided to report it. It's still IDF, and thus it's the technology that is interesting, not the dated art... i.e. the point of the water moving was to show that it worked well with dynamic geometry, not to revolutionize fluid simulation, and so on.
 
I may have missed the particulars of the demo and how it compared to the 16x CPU baseline.

The earlier raytrace demo was about 20 FPS.

If the latest run is comparable and Larrabee is in the 10% range, it would mean Intel hopes to get Quake Wars running at 200-300 fps at 720p resolution.

There were GeForce 285 numbers at a somewhat higher resolution for bog-standard Quake Wars of about 160 FPS with 4x AA and 16x AF.

Perhaps that is the desired range for Larrabee?
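Making that explicit (the ~20 FPS and 10% figures are the ones quoted in this thread; linear scaling to full performance is purely my assumption):

Code:
#include <cstdio>

int main() {
    // Rough figures quoted in this thread, not official numbers.
    const double demo_fps  = 20.0;   // earlier raytraced Quake Wars demo, ~720p
    const double perf_frac = 0.10;   // "10% of hoped-for performance"
    // Assuming (naively) linear scaling up to full performance:
    std::printf("Implied target: ~%.0f fps\n", demo_fps / perf_frac);  // ~200 fps
    return 0;
}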
 
I don't find purely fixed-function GPUs interesting anymore. I hope you see my point. It has to outdo the present to be interesting, to me at least.
... I'm not sure I understand what you're saying. You don't find ray tracing interesting moving forward into a world of programmable pipelines at all? You seem to contradict that in your own statement though so I guess I don't see your point...
 
You raise some very good and interesting points, Nick! Just wanted to say this right at the start, so you don't think I'm being overly negative towards you. :)

It's a tech demo. The first demos of pixel shaders also didn't look impressive at all
Hm, I remember the Doom3 tech demo running on the GF3 looked quite impressive to me, certainly far more impressive to me then than this does now... ;)

You really have to look at what has been technically achieved here.
I don't see why I should worship this tech demo just because it's a technical achievement; it's a technical achievement that doesn't have the faintest hope to even come close to the best rasterization IQ we have today.

Yes, pixel shading was pretty primitive 6ish years ago, but the computing power of today's GPUs is so vastly higher than it was then... The GF3 that ran the Doom3 tech demo had some 60 million-ish transistors as I recall, damn impressive for its time. The 5870 has well over 2.1 billion... That's a factor of ~35x. Of course pixel shading pretty much has to look much better today with such a vast increase. :)

What I'm trying to say here (perhaps not obvious, since I tend to be so damn verbose when I write) is that Larrabee raytraces a tech demo with a framerate somewhere in the 20s and not particularly impressive visuals even if you ignore that it's a brown-textured id Software game - apart from the fact it's raytracing in realtime, that is... So if you're gonna bump up the quality of the raytracing to, say, Crysis at enthusiast-level settings, antialiasing, motion blur, depth of field, tone mapping, the whole enchilada... what's the framerate gonna be THEN? Obviously not in the 20s anymore, but more like... 5. ...If that much even.

Admittedly I'm pulling numbers out of thin air here, so I could be completely wrong of course. But it doesn't seem out of the realm of possibilities, considering what a strain Crysis is on traditional rasterizers that can render much faster than a raytracing setup can.

For a long time achieving higher framerates and higher resolutions was the main goal, but this 'fillrate-race' of GPUs is about to end.
IMO that race has already ended; it ended around when the G80 was released and real focus was placed on shading power rather than rasterizing power. It's true that rasterizing increased then as well and continues to increase even now (that's really a necessity to drive increased performance), but it's not the main focus at all, I think most would agree.

You need more rasterization power to increase framerates with high levels of anti-aliasing. We're not yet at the point where Crysis runs at 200 fps or anything even close to it, especially now that ATI is dabbling in supersampling AA again; that's the first time a major GPU vendor has done that in more than half a decade or so...

An architecture like Larrabee is far better at coping with this, and no matter how you feel about the actual graphics that's proven by their raytracing demo. So even if raytracing itself is not the future, Larrabee allows the developers to go in any direction they like, unrestricted by rasterization APIs.
I don't see what actual NEED Larrabee fulfils with its raytracing demo. Are any devs at all asking to be liberated from traditional rasterization, truly?

I seriously doubt it. Pretty much every devtool in existence is geared towards the polygon-based rendering method. Every major console and PC API is exclusively rasterization-based. I truly doubt, with all the pressing concerns regarding development time and costs, that major new methods of rendering are going to be implemented independently by developers ANY TIME SOON. If ever, really.

Especially not on a platform that's proprietarily owned and controlled by one single manufacturer, namely Intel, which has no real prior experience in the high-end consumer 3D graphics and dev-relations/support field, has NO market share, and won't have any market share to speak of for the next half-decade after Larrabee releases, even if they stick a Larrabee co-processor into every single CPU or chipset they sell (unlikely).

Unless both AMD and Nvidia go out of business entirely, which developer would sink millions into inventing their own completely custom Larrabee-based rendering setup that won't run on any other graphics processor out there?

But Intel certainly has a head start.
They certainly have a head start with something, but whether it's a head start that's actually going to lead them to victory, rather than towards a gigantic left turn into a bog, is a different matter entirely. Lots of question marks regarding Larrabee's actual viability (as opposed to theoretical potential) remain, and Intel's continued bumbling on the matter and pushing back of deadlines isn't helping.
 
I wouldn't give too much importance to the fact that what was shown used RT. According to last year's SIGGRAPH paper, there's nothing in the LRB architecture that was designed specifically for ray tracing (no fixed-function RT unit).
Performing well at RT is always good news, as one would expect more regular workloads (rasterization?) to perform even better.
 
Obviously these things don't happen overnight, so other vendors still have plenty of time to follow the same route once it actually starts to matter. But Intel certainly has a head start.
I'd say that STI has a 5-year head start, at least. Those companies already predicted this bottleneck back in 2000, when Intel was still expecting its NetBurst architecture to reach 10 GHz for several more years.

Sure, they might've been too early with Cell, but expecting Intel to get everything right on its first attempt is a big assumption. Still, using x86 for all its cores smells like a horrible compromise.
 