(As context, I think that general competition between rasterization and ray tracing is great--I'm a big fan of both approaches (I'm actually a fan of anything that makes a CG picture in the end), and it's very healthy to have people pushing hard from both sides--competition is what leads to progress!)
And he still completely ignores the fundamental problem that if you have that many polygons per camera pixel, you have a big mess of noise/aliasing on your hands and you need LOD anyway. We don't need to render any more polygons IMHO (GPUs can already do enough); we just need to find better ways to distribute them dynamically around the scene. True, continuous LOD is the hard problem here, and ray tracing does nothing to solve it... it merely turns the scene into a big mess (check out the sunflower scene :S).
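(To make the LOD point a bit more concrete, here's a minimal C++ sketch of picking a discrete LOD level from projected screen-space size. The error metric, the doubling-per-level assumption, and the selectLod name are all my own illustrative choices, not anything from the article.)

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Illustrative sketch only: choose a LOD level so triangles stay near pixel
// size. Assumes a pinhole camera and that each finer level roughly doubles
// triangle density; both are assumptions made up for this example.
int selectLod(double objectRadius, double distance, double fovY,
              int screenHeight, int numLevels) {
    // Approximate projected radius of the object in pixels.
    double projectedPixels =
        (objectRadius / (distance * std::tan(fovY * 0.5))) * screenHeight;
    // Coarsest level whose triangles still land near one pixel.
    int level = (int)std::floor(std::log2(std::max(projectedPixels, 1.0)));
    return std::min(std::max(level, 0), numLevels - 1);
}

int main() {
    const double distances[] = {1.0, 10.0, 100.0};
    for (double d : distances)
        std::printf("distance %6.1f -> LOD level %d\n",
                    d, selectLod(1.0, d, 1.0, 1080, 10));
    return 0;
}
```

The point is just that the tessellation budget should follow the projection, not the raw scene complexity.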
That point about LOD is a really important one to keep in mind. Coupled with good culling structures, the complexity of a good rasterization-based engine really is something along the lines of num pixels * depth complexity. And the nice thing about rasterization is that you don't need to put as much work into building good acceleration/culling structures as you do for ray tracing--you can be a bit sloppier and still get good performance.
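(A quick back-of-envelope version of that cost model, with made-up resolution and overdraw numbers purely for illustration:)

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions, not measurements.
    const long   width  = 1920, height = 1080;  // render target
    const double avgDepthComplexity = 3.5;      // overdraw left after culling
    const double fragments = width * height * avgDepthComplexity;
    std::printf("Approx. fragments shaded per frame: %.0f\n", fragments);
    // ~7.3M shader invocations: the work scales with pixels * depth
    // complexity, largely independent of total scene polygon count once
    // culling and LOD keep triangles near pixel size.
    return 0;
}
```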
Another thing rasterization has going for it is the efficiency gain to be had from MSAA. The key point is that you generally run the pixel shader only once per pixel, while taking a larger number of point samples against the geometry. As shaders get more and more complex, this is an increasingly big win--the MSAA samples become relatively cheaper, or conversely, the win from not shading once per sample gets bigger and bigger. I am not aware of a ray tracing architecture that has been able to exploit this well; maybe there is a clever way to do it. But that ends up being another case where rasterization keeps running ahead, widening the gap that ray tracing needs to close to be competitive.
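(Here's a tiny sketch of that shading-rate argument; the per-operation costs are arbitrary numbers I picked to show the trend, nothing more:)

```cpp
#include <cstdio>

int main() {
    const int    samplesPerPixel = 4;      // e.g. 4x MSAA
    const double shadeCost  = 200.0;       // assumed cost of one pixel-shader run
    const double sampleCost = 5.0;         // assumed cost of one coverage/depth sample

    // MSAA: shade once per pixel, test coverage/depth at every sample.
    const double msaa = shadeCost + samplesPerPixel * sampleCost;
    // Naive alternative: shade every sample (roughly what per-sample
    // primary rays buy you).
    const double perSample = samplesPerPixel * (shadeCost + sampleCost);

    std::printf("MSAA (shade once):  %.0f ops/pixel\n", msaa);
    std::printf("Shade every sample: %.0f ops/pixel\n", perSample);
    // As shadeCost grows with shader complexity, the gap only widens.
    return 0;
}
```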
His argument about not needing hybrid renderers is along the same lines as our discussion in this thread: once you need tons of rays per pixel, the cost of the primary ray gets amortized anyway. However, note that a few pages earlier he used the argument that we don't have very many extra rays per pixel to justify why ray tracing isn't much slower than rasterization... marketing propaganda to the max, it seems! (Furthermore, he uses terrible examples of why we'd want lots of rays per pixel.)
The way I like to think about this problem is that at the end of the day, it's a competition for what gives the most visual bang for the buck for the FLOPS and bandwidth used. Ray tracing offers a certain trade-off there, and in return gives some great visual effects. On the other hand, lots of smart people out there are doing great research on other ways to use those FLOPS to make amazing images--people like Ravi Ramamoorthi, Peter-Pike Sloan, etc., have done all sorts of great work using clever math to develop computationally efficient ways to generate really nice imagery. The bar that ray tracing needs to clear is that the FLOPS it requires must give a better visual result (however you quantify that) than other ways you could use those FLOPS in rendering. That is a higher bar than "can it generate an effect interactively in the first place"--for the FLOPS burned, ray tracing also has to show that they were the best possible use of those FLOPS.
Anyway, like I said, I think this competition to maximize image quality per FLOP is what makes all this stuff so much fun.
-matt
(The above are my personal opinions only and are not the positions of Intel.)
(Please forgive the following advertisement.)
PS: We are hiring! Send me a note at "matt.pharr" at the domain of intel.com if you've got expertise with advanced graphics algorithms, compilers, or low-level high-performance programming and are interested in working on some really interesting projects on really interesting architectures--there are some really fun opportunities to change the world in a big way.