Intel Yorkfield (45nm) Realtime RayTraces @ 90fps!

Well, I could hardly believe it when I first read the article, but it seems Intel's work with Daniel Pohl is really paying off. I first looked at his work thanks to an article here @ b3d, but I can't remember what it was exactly. Good on the bloke though, because I think he has really made a name for himself so soon after leaving Uni.

The article is from the inquirer, but I think on this occasion it can be trusted. ;)

Here is a screen shot:


Article is here (courtesy of theinquirer.net):
http://www.theinquirer.net/default.aspx?article=41858

Quick breakdown: they were able to perform real-time ray tracing at 768x768 resolution @ 90fps on a single sub-3GHz Intel quad-core chip, compared to just 4fps on a 50-CPU Xeon setup!
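
For a rough sense of scale, here is a back-of-envelope sketch (my own arithmetic, not from the article) of what those frame rates imply in primary rays per second, assuming one ray per pixel and ignoring shadow and secondary rays:

```python
# Ray throughput implied by the quoted numbers, assuming one primary ray per
# pixel and no secondary/shadow rays (my assumption, not stated in the article).
width, height = 768, 768

yorkfield_rays = width * height * 90   # ~53.1 million primary rays/s
xeon_rays      = width * height * 4    # ~2.4 million primary rays/s (different scene!)

print(f"Yorkfield quad-core: {yorkfield_rays / 1e6:.1f} M rays/s")
print(f"50-Xeon setup:       {xeon_rays / 1e6:.1f} M rays/s")
```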

I really can't wait to see what Intel comes up with in 2009. I think information like this is going to intimidate Nvidia and AMD. Anyone think they have the technology/know-how to compete?
 
I don't know, but isn't it a little strange that above the 50 Xeons there is a picture of costly vegetation, while above the 90fps Yorkfield result there is a much simpler room environment? :???:

And NetBurst shouldn't be that bad at raytracing, should it?
 
Yes, you can assume straight away that there are some inconsistencies between the benchmark results; however, AFAIK 90fps at that resolution in the 2nd scene is still impressive. Does anyone have any links to some captured footage?
 
If they are not comparing the same scene, then the test means nothing! One is an outdoor, sunlit scene and the other is an indoor scene. The lighting in the outdoor scene is possibly many times more complex than in the indoor one. This comparison is trash.
 
It doesn't just look like Quake, it is Quake:
Q3RT and Q4RT

This guy (Daniel Pohl) and his work are being hyped by Intel because he's supposed to help them sell their multi-core chips in the future. ;)
 
!! They're spouting the same uninformed nonsense about raytracing vs rasterization again, and quite honestly this is inexcusable coming from Intel. If they keep this up, and continue to refuse to answer our quite-reasonable questions (B3D had a great interview list) I'm gonna lose a lot of respect for them...

Sure they make good chips, but seriously give me some confidence that you have the intelligence to make Larrabee good...

[Edit] Plus they just lie... their claims of 10x faster than G80/Cell for raytracing are just wrong. Clearly they missed SIGGRAPH this year, and last year, and several other occasions on which real-time raytracers doing much more complex stuff have been demoed running on GPU's/Cell. Sigh. I should stop reading anything from Intel related to raytracing.
 
What makes me really wonder is whether Intel thinks that Pixar are honks, as they always rasterize first and ray-trace later because it gives better performance ... if ray-tracing were *so* much faster than pure rasterizing, why would they even bother rasterizing at all? Same goes for mental ray, which is also (usually) faster if the first ray is not raytraced ...
 
Pure raytracing fares shockingly badly when compared to other rendering techniques run on the same computer; see e.g. realtime global illumination demos.

I'd really like to include some recent raytracing work, but I haven't found any publicly available demo. For obvious reasons, videos are not good for comparison.
 
Sure they make good chips, but seriously give me some confidence that you have the intelligence to make Larrabee good...
I wouldn't be surprised if the marketing guys (and even some parts of engineering) still only cared about making sure management was completely sold on the idea. Who cares about the truth in those kinds of circumstances? You cannot afford management starting to doubt your initial claims.
 
[Edit] Plus they just lie... their claims of 10x faster than G80/Cell for raytracing are just wrong. Clearly they missed SIGGRAPH this year, and last year, and several other occasions on which real-time raytracers doing much more complex stuff have been demoed running on GPU's/Cell. Sigh. I should stop reading anything from Intel related to raytracing.

Lol, true or not it's about time Intel started talking some smack about Cell after all the crap IBM gives its processors in the media :LOL:
 
I wouldn't be surprised if the marketing guys (and even some parts of engineering) still only cared about making sure management was completely sold on the idea. Who cares about the truth in those kinds of circumstances? You cannot afford management starting to doubt your initial claims.
I know, it's all marketing... but I hate it when people eat it up so much :( Realistically I know the Larrabee guys know better, but if I were working there I'd feel bad seeing (and would try to prevent) that level of misinformation being spread by my company!

Anyway, I'm probably over-reacting, but after they weren't even willing to answer B3D's excellent interview questions, I wish people would be smart enough to pretty much ignore anything Intel says about raytracing.
 
Well, while the comparison is not fair, it's still correct. Realtime raytracing has come a long way these last years and has evolved rapidly, to the point that today it can handle certain scenes well enough for gaming ("good enough", I can imagine the flames and re-interpretations).

What impresses me is that during the next year Intel wants to push raytracing to developers, which will be a world's first.
 
What impresses me is that during the next year Intel wants to push raytracing to developers, which will be a world's first.

What they're really "pushing" of course are their processors...;) When you think about why the concept of "real-time ray-traced 3d games" has yet to ignite the interest of a single solitary developer enough to actually commit its resources towards creating such a game for commercial sale, it isn't hard to understand why such an effort would amount to the "first time" it's ever been done. It's just not something developers want to do. It is, however, a concept that Intel enjoys pushing to the masses.

What isn't well understood at all, apparently, is that today's environment of 3d-accelerators and APIs came *after* lots of people had been doing cpu-based ray tracing for a long time. 3d-accelerators, better known as gpus, were an advancement over cpu-based ray tracing because they allowed rendering speeds far, far in excess of ray-tracing speeds even on the fastest cpus--to the degree that no direct comparison was possible. Despite these glowing tidbits of information, I think the situation is still very much as it has always been in that regard.

Ray tracing and 3d gpus are simply apples and oranges. Ray tracing is really nice for rendering frames to video, imo, because a consistent 30-60 fps is just not required. For 3d games the gpu is obviously the way to go. People sometimes become blinded by the one-sided publicity to the extent that they remember only that cpus are advancing rapidly. For some reason they easily forget that the same is true for gpus, as well.
 
Plus they just lie... their claims of 10x faster than G80/Cell for raytracing are just wrong. Clearly they missed SIGGRAPH this year, and last year, and several other occasions on which real-time raytracers doing much more complex stuff have been demoed running on GPU's/Cell.

Perhaps Cell can do faster ray tracing than Penryn but I for one haven't seen any particularly fast GPU ray tracers.

The fastest GPU ray tracer I know about is this. A 2M-triangle scene with simple shading runs at ~5.7FPS on a G80.

One of the fastest CPU ray tracers I know of is described here. It is the older version of the ray tracer Intel used on that 45nm quad-core. There, on a 3.2GHz P4 with HT, they traced the same scene with similar lighting at around 24FPS. Now consider that Yorkfield has a massive IPC lead over the P4 and twice-as-wide SSE. I've personally seen a >2x per-clock speed increase with SSE ray tracers when comparing K8 vs Core 2. So I wouldn't think it would be wrong to say a quad-core Yorkfield would be around 6-8x faster than that P4 (4x more cores, 2x wider SIMD, scaled back a bit because the P4 already benefited from HT) when tracing that scene. It could very well be even faster considering how much time has passed since those MLRTA results were publicised.
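
Here is a rough reconstruction of that scaling estimate in Python; every factor is the rounded estimate from the paragraph above, not a measurement:

```python
# Rough reconstruction of the 6-8x estimate; all factors are estimates from the
# text, not benchmarks.
p4_fps      = 24    # 3.2GHz P4 with HT on the MLRTA scene, as quoted above
core_factor = 4.0   # four cores vs one
simd_factor = 2.0   # twice-as-wide SSE per clock
ht_discount = 0.8   # scale back a bit, since the P4 figure already used HT

speedup = core_factor * simd_factor * ht_discount     # ~6.4x, within the 6-8x range
print(f"~{speedup:.1f}x faster -> ~{p4_fps * speedup:.0f} fps on that scene")
```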

So what exactly did Intel miss when comparing against GPU ray tracers? I surely hope there is some research paper somewhere describing a much faster GPU ray tracer :)
WaltC said:
What isn't well understood at all apparently is that today's environment of 3d-accelerators and APIs came *after* lots of people had been doing cpu-based ray tracing for a long time.
Perhaps you meant GPUs came after people had been using software rasterizing for a long time? Or perhaps ray casting as in Doom, Duke Nukem 3D and original Wolfenstein? Ray casting is very different from ray tracing.
WaltC said:
For 3d games the gpu is obviously the way to go.
For now, yes. Not because ray tracing is more resource hungry, but because there is no good HW accelerator for it as there is for rasterizing.

I don't think ray tracing on regular CPUs will ever become viable for game developers. I do think that Larrabee can be fast enough to do it, though. Especially considering that it'll likely have texture filtering HW added to it.

Also, once scene size reaches a certain (moving) target in the number of primitives, ray tracing becomes a faster solution than rasterizing.
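
A toy cost model shows why such a crossover exists (the unit costs below are made up purely for illustration; only the linear-vs-logarithmic shape matters): rasterization has to touch every primitive, while a ray tracer with an acceleration structure does roughly logarithmic work per ray.

```python
import math

# Toy crossover model: rasterization cost grows linearly with triangle count,
# ray tracing with a BVH/kd-tree grows roughly logarithmically per ray.
# The unit costs are arbitrary illustration values, not measurements.
pixels            = 1024 * 768
cost_per_triangle = 1.0     # rasterizer work per triangle (arbitrary units)
cost_per_step     = 20.0    # ray tracer work per traversal step (arbitrary units)

for tris in (10**5, 10**6, 10**7, 10**8, 10**9):
    raster_cost = tris * cost_per_triangle
    rt_cost     = pixels * cost_per_step * math.log2(tris)
    winner = "raytrace" if rt_cost < raster_cost else "raster"
    print(f"{tris:>13,} triangles: raster {raster_cost:.2e}, raytrace {rt_cost:.2e} -> {winner}")
```
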
WaltC said:
People sometimes become blinded by the one-sided publicity to the extent that they remember only that cpus are advancing rapidly. For some reason they easily forget that the same is true for gpus, as well.
True, though ray tracing algorithms have seen massive improvements during the last few years, and the pace of improvement doesn't seem to be slowing down in the near future. I don't think one could say the same about GPU rasterizing, where performance has increased almost entirely because of faster GPUs.

I do admit that there are still things left that are difficult to do with ray tracing. The first that comes to mind is rendering fur (lots of thin triangles in one place), though it can be faked quite decently. Tracing dynamic scenes seems to be solved, at least for the case where the total number of triangles doesn't vary from frame to frame.

One thing is for sure, we are living in interesting times and things seem to become more interesting almost every day :)
 
For now, yes. Not because ray tracing is more resource hungry, but because there is no good HW accelerator for it as there is for rasterizing.

I don't think ray tracing on regular CPUs will ever become viable for game developers. I do think that Larrabee can be fast enough to do it, though. Especially considering that it'll likely have texture filtering HW added to it.

Also, once scene size reaches a certain (moving) target in the number of primitives, ray tracing becomes a faster solution than rasterizing.
Lots of good points there.

Ray-tracing also does certain things quite well that rasterisation has difficulty with: for example, indirect lighting and authentic soft-edged shadows are an absolute pain to do on a GPU, but they fall out of the ray-tracing calculation by default. The biggest obstacles to graphical realism in current games (IMO) are geometric complexity and light/shadow processing. I am drooling at the possibility of Larrabee allowing real-time ray-tracing in games. :yes:
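
To illustrate the "falls out by default" point, here is a minimal self-contained Python sketch (a toy single-sphere scene of my own, not from any demo discussed here): it simply averages shadow rays aimed at an area light, and points near the edge of the occluder's shadow automatically get fractional values, i.e. a penumbra.

```python
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere test; 'direction' is assumed normalized (so a == 1).
    # A full shadow test would also reject hits beyond the light's distance.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-4

def soft_shadow(point, light_center, light_radius,
                occluder_center, occluder_radius, samples=64):
    # Average visibility over many shadow rays aimed at random points on a
    # disc-shaped area light: the result is a fractional shadow term.
    visible = 0
    for _ in range(samples):
        ang = random.uniform(0.0, 2.0 * math.pi)
        r = light_radius * math.sqrt(random.random())   # uniform point on the disc
        target = (light_center[0] + r * math.cos(ang),
                  light_center[1] + r * math.sin(ang),
                  light_center[2])
        d = [t - p for t, p in zip(target, point)]
        length = math.sqrt(sum(x * x for x in d))
        d = [x / length for x in d]
        if not ray_hits_sphere(point, d, occluder_center, occluder_radius):
            visible += 1
    return visible / samples   # 0.0 = fully shadowed, 1.0 = fully lit

# A point near the edge of the sphere's shadow gets a value between 0 and 1.
print(soft_shadow(point=(0.5, 0.0, 0.0),
                  light_center=(0.0, 0.0, 10.0), light_radius=2.0,
                  occluder_center=(0.0, 0.0, 5.0), occluder_radius=1.0))
```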
 
That seems to be an interesting demo, too bad I don't have D3D10 HW and Windows :)

In the PDF they talk about doing 40 rendering passes every frame. Can anyone more knowledgeable say whether those are full passes over the entire framebuffer, or over some smaller offscreen bitmaps? Also, if I understood correctly, that algorithm wouldn't scale too well to more complex and bigger scenes.
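
If they really are full-resolution passes (purely my assumption for the sake of arithmetic, along with the resolution and target format below), a quick back-of-envelope shows why the question matters:

```python
# Back-of-envelope framebuffer traffic IF the 40 passes are full-resolution
# (an assumption, not something the PDF necessarily says) and each pass reads
# and writes one RGBA16F render target. Resolution and fps are also assumed.
width, height, fps, passes = 1024, 768, 30, 40
bytes_per_pixel = 8                                          # RGBA16F
per_frame = width * height * bytes_per_pixel * 2 * passes    # read + write
print(f"~{per_frame * fps / 2**30:.1f} GiB/s of render-target traffic at {fps} fps")
```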

Of course, with RT, GI isn't all that simple either. I don't know of any newer GI research, but Wald did some work on it around 4 years ago. IIRC he achieved around 5FPS with a cluster of 20 2P P4 machines when rendering four power plants with 50M triangles in total. I wonder what kind of speed could be achieved on, say, a 2P 3.16GHz Core2Quad box with the fastest algorithms currently known ...

One thing I'd like to know is what the memory bandwidth requirements for the GPU and Wald's CPU GI implementations would be. As can be seen with Larrabee, FP power is rather cheap; it is memory bandwidth that will start limiting performance in the (near) future. That means unless something magical happens and memory bandwidth starts to climb as fast as CPU performance, people will have to start choosing algorithms that can save on bandwidth, possibly by doing more complex calculations. Ray tracing would certainly fit, as it doesn't need multiple passes and most scene access is relatively coherent and can be cached.
 