Actually, raytracing (AFAIK) is more efficient for high-poly scenes. It scales better in system demands. It also allows non-tessellated surfaces, so you get perfectly smooth-edged objects from a very low vertex count using NURBS/SDS.
Well, yes, you can apply all manner of spatial subdivision schemes to minimize the number of scene elements you actually test per ray, and the scaling with complexity becomes logarithmic. Hell, we use that all the time to speed up raycasts for line-of-sight tests in console-land even now.
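To make the culling concrete, here's a minimal sketch of the kind of bounding-volume test those schemes lean on — the classic slab method against an axis-aligned box (names and layout are mine, not from any particular engine). If the ray misses a group's box, every element inside it is skipped without further tests.

```c
#include <math.h>

/* Slab test: does a ray hit an axis-aligned bounding box?
   inv_d holds 1/direction per axis, precomputed once per ray.
   Illustrative sketch only. */
int ray_aabb(const double orig[3], const double inv_d[3],
             const double bmin[3], const double bmax[3])
{
    double tmin = 0.0, tmax = INFINITY;
    for (int axis = 0; axis < 3; ++axis) {
        double t0 = (bmin[axis] - orig[axis]) * inv_d[axis];
        double t1 = (bmax[axis] - orig[axis]) * inv_d[axis];
        if (t0 > t1) { double tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmin > tmax) return 0;   /* slab intervals don't overlap: miss */
    }
    return 1;                        /* ray enters the box: recurse inside */
}
```

A hierarchy of these boxes is what turns the per-ray cost from linear in scene elements to roughly logarithmic.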
However, consider the idea of raytests against a sphere defined as a center and radius... pretty fast, obviously. Now try to imagine raytests against a sphere defined as a polygon mesh -- now you've got several hundred or thousand potential scene elements making up a single object to test against. Using some sort of spatial subdivision does help by culling individual elements, but the end result will still be slower than having a single scene element that defines the whole object.
The same thing could be said of raytracing NURBS. If you evaluate the surface implicitly at any point, it's a lot faster than, say, tessellating to a certain depth in polygons and then raytracing against the polygon mesh. This is also because a model built in NURBS will have a comparatively low-complexity control point mesh. Same could be said of Catmull-Clark or Doo-Sabin subdivision surfaces, which converge to B-spline surfaces in the limit of infinite iterations (bicubic and biquadratic respectively, away from extraordinary vertices).
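A 1-D analogue of what "evaluate the surface directly from the control points" means — de Casteljau evaluation of a cubic Bezier curve. Four control values stand in for a patch's control mesh (this is just an illustrative sketch); no tessellated intermediate is ever materialized:

```c
/* de Casteljau: evaluate a cubic Bezier from its 4 control values
   by repeated linear interpolation. Exact point on the smooth curve,
   no tessellation step involved. */
double bezier3(const double p[4], double t)
{
    double a  = p[0] + (p[1] - p[0]) * t;   /* first round of lerps */
    double b  = p[1] + (p[2] - p[1]) * t;
    double c  = p[2] + (p[3] - p[2]) * t;
    double ab = a + (b - a) * t;            /* second round */
    double bc = b + (c - b) * t;
    return ab + (bc - ab) * t;              /* point on the curve */
}
```

The surface case is the same idea run over a small grid of control points in two parameters, which is why the per-hit cost stays tied to the control mesh, not to some tessellation depth.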
The problem ATM is that it's too processor-intensive. Realtime raytracers (I tried one a couple of years back) are slow and pixelated -- 15 fps of blocky graphics... no thanks!
They have improved over the years. I can get a good 24 fps at 800x600 on current PC CPUs. However, every scene in these cases was still constructed of spheres, cylinders, boxes, and CSG combinations thereof. And they were simple enough scenes that spatial subdivision hierarchies would actually have slowed you down. Doing the same in a game level is a far cry from that.
The math works well in SPE-land, though, since you do have quite a few nice built-in instructions, and each individual calculation is pretty compact in terms of complexity. Plus the nice thing about raytracing is that every ray is totally independent, so you can basically multithread infinitely (until you run out of pixels, anyway).
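That per-ray independence looks roughly like this — a minimal sketch where the framebuffer splits into scanline slices sharing no mutable state. worker() is called sequentially here for simplicity, but each call could be handed to its own thread or SPE unchanged; trace_pixel() is a made-up stand-in for a real per-ray shade function:

```c
/* Every pixel's ray is independent, so the framebuffer splits into
   slices that never touch each other's data. No locks, no shared
   mutable state beyond each slice's own rows. */
enum { W = 64, H = 64, NSLICES = 4 };

static float fb[W * H];

static float trace_pixel(int x, int y)
{
    /* placeholder shade: any pure function of the ray works here */
    return (float)(x * 31 + y * 7) / (float)(W * H);
}

static void worker(int y0, int y1)          /* owns scanlines [y0, y1) */
{
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < W; ++x)
            fb[y * W + x] = trace_pixel(x, y);   /* no locking needed */
}

static void render(void)
{
    int rows = H / NSLICES;
    for (int i = 0; i < NSLICES; ++i)
        worker(i * rows, (i == NSLICES - 1) ? H : (i + 1) * rows);
}
```

Swapping the sequential calls for one thread (or SPE job) per slice is the whole parallelization story — which is exactly the "multithread until you run out of pixels" property.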
So I basically have to say... yeah, CELL might be able to do something basic in realtime, but at what cost and for what gain? I doubt it would be enough to justify the trouble. I mean, the landscape demo was cool and all, but if you look back at Outcast, which did the same thing in-game back then, scaling up to that demo's level isn't really that out of this world.
Where raytracing would really show a clear benefit is if you had some kind of stochastic GI system; even just stochastic area light sampling would show SOME improvement. And full-on MCPT (Monte Carlo path tracing) of a dynamic scene every frame would easily require something 1000x CELL.
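A sketch of what stochastic area-light sampling boils down to — the geometry here is invented purely for illustration (square light at z=2, smaller square occluder at z=1, shading point at the origin): fire random shadow rays at the light and count how many get through.

```c
#include <stdlib.h>
#include <math.h>

static double frand(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* Monte Carlo estimate of how much of a square area light (x,y in
   [-1,1] at z=2) is visible from the origin, with a square occluder
   (x,y in [-0.25,0.25]) at z=1. Invented geometry for illustration. */
double estimate_visibility(int nsamples)
{
    int visible = 0;
    for (int i = 0; i < nsamples; ++i) {
        double lx = -1.0 + 2.0 * frand();   /* random point on the light */
        double ly = -1.0 + 2.0 * frand();
        /* the shadow ray origin->(lx,ly,2) crosses the occluder plane
           z=1 at (lx/2, ly/2); blocked if that lands on the occluder */
        int blocked = fabs(lx * 0.5) <= 0.25 && fabs(ly * 0.5) <= 0.25;
        if (!blocked) ++visible;
    }
    return (double)visible / (double)nsamples;
}
```

With this setup the occluder shadows a quarter of the light, so the estimate converges to 0.75 — and every one of those shadow rays is yet another independent raytest, which is exactly where per-ray cost piles up.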