Plus they just lie... their claims of 10x faster than G80/Cell for raytracing are just wrong. Clearly they missed SIGGRAPH this year, and last year, and several other occasions on which real-time raytracers doing much more complex stuff have been demoed running on GPUs/Cell.
Perhaps Cell can do faster ray tracing than Penryn, but I for one haven't seen any particularly fast GPU ray tracers.
The fastest GPU ray tracer I know about is
this. A 2M-triangle scene with simple shading runs at ~5.7 FPS on G80.
One of the fastest CPU ray tracers I know of is described
here. It is the older version of the ray tracer Intel used on that 45nm quad-core. There, on a 3.2GHz P4 with HT, they traced the same scene with similar lighting at around 24 FPS. Now consider that Yorkfield has a massive IPC lead over the P4 and twice-as-wide SSE. I've personally seen a >2x per-clock speedup with SSE ray tracers when comparing K8 to Core 2. So I don't think it would be wrong to say a quad-core Yorkfield would be around 6-8x faster than that P4 when tracing that scene (4x more cores, 2x wider SIMD, scaled back a bit because the P4 result already benefits from HT). It could very well be even faster considering how much time has passed since those MLRTA results were publicised.
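To make that back-of-the-envelope estimate concrete, here is a minimal sketch of the scaling arithmetic. Every factor is my own rough assumption for illustration, not a measurement:

```cpp
// Back-of-the-envelope scaling of the 24 FPS P4 result to a quad-core Yorkfield.
// Every factor here is a rough assumption, not a measurement.
#include <cstdio>

int main() {
    const double p4_fps      = 24.0;  // MLRTA-era result quoted above
    const double core_factor = 4.0;   // 1 core (P4) -> 4 cores (Yorkfield)
    const double simd_factor = 2.0;   // the "2x wider SSE" advantage
    const double ht_discount = 0.75;  // the P4 number already benefits from HT (~25% guess)

    // Deliberately ignores Core 2 / Penryn's per-clock IPC advantage over the P4,
    // so treat this as a conservative lower bound rather than a prediction.
    const double speedup = core_factor * simd_factor * ht_discount;
    std::printf("Estimated speedup: %.1fx (~%.0f FPS)\n", speedup, p4_fps * speedup);
    return 0;
}
```

With those guesses the estimate lands at about 6x (~144 FPS), which is where the lower end of the 6-8x figure above comes from.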
So what exactly did Intel miss when comparing against GPU ray tracers? I certainly hope there is some research paper somewhere describing a much faster GPU ray tracer.
WaltC said:
What isn't well understood at all apparently is that today's environment of 3d-accelerators and APIs came *after* lots of people had been doing cpu-based ray tracing for a long time.
Perhaps you meant that GPUs came after people had been doing software rasterization for a long time? Or perhaps ray casting, as in Doom, Duke Nukem 3D and the original Wolfenstein? Ray casting is very different from ray tracing.
WaltC said:
For 3d games the gpu is obviously the way to go.
For now, yes. Not because ray tracing is more resource hungry, but because there is no good HW accelerator for it as there is for rasterization.
I don't think ray tracing on regular CPUs will ever become viable for game developers. I do think Larrabee can be fast enough to do it, though, especially considering that it will likely have texture-filtering HW added to it.
Also, once scene size passes a certain (moving) threshold in the number of primitives, ray tracing becomes a faster solution than rasterization.
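As a rough illustration of why such a crossover exists (a toy cost model of my own, with arbitrary constants, not anything measured): a rasterizer touches every primitive each frame, while a ray tracer with an acceleration structure does roughly logarithmic work in the primitive count per ray.

```cpp
// Toy cost model for the rasterization vs. ray tracing crossover.
// The constants are arbitrary assumptions; only the shapes of the curves matter:
// rasterization ~ O(N) primitives per frame, ray tracing ~ O(rays * log N).
#include <cmath>
#include <cstdio>

int main() {
    const double rays          = 1920.0 * 1080.0; // one primary ray per pixel (assumption)
    const double cost_per_tri  = 1.0;             // arbitrary unit cost to rasterize a triangle
    const double cost_per_step = 10.0;            // arbitrary unit cost per traversal step

    for (double n = 1e4; n <= 1e9; n *= 10.0) {
        const double raster = n * cost_per_tri;
        const double trace  = rays * std::log2(n) * cost_per_step;
        std::printf("%12.0f triangles: raster %.2e, trace %.2e -> %s\n",
                    n, raster, trace, raster < trace ? "raster cheaper" : "trace cheaper");
    }
    return 0;
}
```

With these made-up constants the crossover happens somewhere near a billion triangles; the real threshold obviously moves with hardware and implementation quality, which is why I call it a moving one.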
WaltC said:
People sometimes become blinded by the one-sided publicity to the extent that they remember only that cpus are advancing rapidly. For some reason they easily forget that the same is true for gpus, as well.
True, though ray tracing
algorithms have seen massive improvements during the last few years, and the pace of improvement doesn't seem likely to slow down in the near future. I don't think one could say the same about GPU rasterization, where performance has increased almost entirely because of faster GPUs.
I do admit that there are still things that are difficult to do with ray tracing. The first that comes to my mind is rendering fur (lots of thin triangles in one place), though it can be faked quite decently. Tracing dynamic scenes seems to be solved, at least for the case where the total number of triangles doesn't vary from frame to frame.
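I'm not pointing at any specific paper here, but one common approach under that "fixed triangle count" assumption is to keep the acceleration structure's topology and just refit its bounding boxes each frame instead of rebuilding it. A minimal sketch, assuming a simple array-based BVH:

```cpp
// Minimal sketch of handling dynamic scenes when the triangle count (and hence
// the tree topology) stays fixed: reuse the BVH and refit its bounds bottom-up.
// This is a generic illustration, not any particular published method.
#include <algorithm>
#include <vector>

struct AABB {
    float min[3], max[3];
    void expand(const AABB& o) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], o.min[i]);
            max[i] = std::max(max[i], o.max[i]);
        }
    }
};

struct BVHNode {
    AABB bounds;
    int  left  = -1;                    // child node indices, -1 marks a leaf
    int  right = -1;
    int  firstTri = 0, triCount = 0;    // triangle range for leaves
};

// Recompute node bounds after triangles have moved; tree structure is reused.
void refit(std::vector<BVHNode>& nodes, const std::vector<AABB>& triBounds, int nodeIdx) {
    BVHNode& n = nodes[nodeIdx];
    if (n.left < 0) {                   // leaf: union of its triangles' boxes
        n.bounds = triBounds[n.firstTri];
        for (int i = 1; i < n.triCount; ++i)
            n.bounds.expand(triBounds[n.firstTri + i]);
        return;
    }
    refit(nodes, triBounds, n.left);    // inner node: union of the children
    refit(nodes, triBounds, n.right);
    n.bounds = nodes[n.left].bounds;
    n.bounds.expand(nodes[n.right].bounds);
}
```

Refitting is linear in the node count and leaves the traversal code untouched; the trade-off is that tree quality degrades if the geometry deforms a lot, at which point a rebuild is needed.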
One thing is for sure: we are living in interesting times, and things seem to become more interesting almost every day.