It can scale logarithmically, if you're clever
There is little point in not being clever
but it's not inherently logarithmic.
RT scales logarithmically with scene complexity; there is no question about that. If you think otherwise, please explain. I would also be interested to hear what scales better with rasterization than with RT.
If you meant photon mapping and GI, then depending on the implementation those might not scale as well as regular ray tracing, but generally they should still scale logarithmically, though probably with bigger constant costs.
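To show where the log comes from, here is a toy Python sketch (all names invented for illustration, nothing like production tracer code): with an acceleration structure a ray descends a tree of bounding boxes, and only the triangles in the few leaves it actually reaches ever get tested, so for a reasonably balanced tree over N triangles that is roughly O(log N) box tests instead of N triangle tests.

# Toy sketch: traversal of a bounding-box tree. Only the leaves whose boxes
# the ray actually crosses get their triangles tested, so a balanced tree
# over N triangles costs roughly O(log N) box tests per ray instead of N
# triangle tests. Names and structure are invented for illustration.
import math

class Node:
    def __init__(self, bounds, left=None, right=None, tris=None):
        self.bounds = bounds          # ((xmin, ymin, zmin), (xmax, ymax, zmax))
        self.left, self.right = left, right
        self.tris = tris or []        # triangles live only in leaves

def ray_hits_box(orig, direction, bounds):
    """Standard slab test: does the ray intersect this axis-aligned box?"""
    tmin, tmax = 0.0, math.inf
    for axis in range(3):
        if abs(direction[axis]) < 1e-12:   # ray parallel to this slab
            if not bounds[0][axis] <= orig[axis] <= bounds[1][axis]:
                return False
            continue
        t1 = (bounds[0][axis] - orig[axis]) / direction[axis]
        t2 = (bounds[1][axis] - orig[axis]) / direction[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def candidate_triangles(node, orig, direction):
    """Collect triangles only from the few leaves the ray can possibly hit."""
    if node is None or not ray_hits_box(orig, direction, node.bounds):
        return []
    if node.left is None and node.right is None:       # leaf
        return node.tris
    return (candidate_triangles(node.left, orig, direction) +
            candidate_triangles(node.right, orig, direction))

Of course a degenerate tree can still fall back to linear behaviour, which is where the "if you're clever" part comes in.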
I also think that photon mapping and GI are not very viable rendering methods yet. They need an order of magnitude more computing power than plain RT, though of course they also give much better image quality.
Btw, could anyone have dreamt in '96 that real-time dynamic shadows would be doable, or that ten years later we would be rendering 5M+ triangles at interactive framerates instead of only a thousand or so?
I would not have.
The mathematical definition of complexity is based on a limit, ergo pay attention to constant costs: it might be the case that your log-complexity algorithm gets beaten by a linear or superlinear complexity algorithm with vastly smaller constant costs for any reasonable N.
That is mostly correct in today's world. Ray tracing is not very competitive at low scene complexity, but things get a lot more interesting once scenes become complicated.
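To put some (completely made-up) numbers on that: say the clever algorithm costs 1000*log2(N) units of work per ray and the brute-force one 5*N. A quick Python check shows the brute-force version winning up to a few thousand primitives and losing badly after that, which is pretty much the pattern I mean:

# Invented constants, purely to show where a big-constant O(log N) algorithm
# overtakes a small-constant O(N) one.
import math

def cost_log(n):
    return 1000 * math.log2(n)   # clever algorithm, heavy per-step cost

def cost_linear(n):
    return 5 * n                 # brute force, very cheap per step

for n in (100, 1000, 10000, 1000000):
    print(n, round(cost_log(n)), cost_linear(n))
# With these numbers the linear method wins until a few thousand primitives,
# after which the logarithmic one pulls away very quickly.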
Just imagine what it would be like to render a plane made of 350M independent triangles (~8 GiB of triangle data) on a single PC. A 1.8 GHz dual-socket, single-core Opteron can render it at 1-3 FPS with simple shading and shadows at 640x480.
http://openrt.de/Applications/boeing777.php
It would take several seconds just to pump that data through a PCIe link (8 GiB at the ~4 GB/s theoretical peak of a x16 gen1 slot is already about two seconds, and sustained rates are lower), not to mention the nightmare it would be to apply any kind of good space partitioning to it to make it rasterizable. With RT there are algorithms that can build a decent BKD tree automatically from almost any source data, though with such immense amounts of data it would take a while.
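As a rough sketch of what such automatic building looks like (this is only a naive median split in Python for illustration; serious builders use the surface area heuristic and out-of-core techniques for data sets this big):

# Naive top-down kd-tree-style build: split the triangle list at the median
# along the longest axis until leaves are small. Illustration only; real
# builders for models this size use the surface area heuristic (SAH) and
# stream the data from disk instead of holding it all in memory.
def centroid(tri):
    a, b, c = tri                                    # tri = three (x, y, z) points
    return [(a[i] + b[i] + c[i]) / 3.0 for i in range(3)]

def bounds_of(tris):
    pts = [v for tri in tris for v in tri]
    return ([min(p[i] for p in pts) for i in range(3)],
            [max(p[i] for p in pts) for i in range(3)])

def build(tris, max_leaf=8):
    node = {"bounds": bounds_of(tris), "tris": None, "left": None, "right": None}
    if len(tris) <= max_leaf:
        node["tris"] = tris                          # small enough, make a leaf
        return node
    lo, hi = node["bounds"]
    axis = max(range(3), key=lambda i: hi[i] - lo[i])    # longest axis
    ordered = sorted(tris, key=lambda t: centroid(t)[axis])
    mid = len(ordered) // 2                          # median split
    node["left"] = build(ordered[:mid], max_leaf)
    node["right"] = build(ordered[mid:], max_leaf)
    return node

On 350M triangles even this trivial version would obviously grind for a long time, hence "it would take a while".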
Of course that was an extreme example, and most games today don't have >1M triangles in the view frustum, so it is rather difficult to compare rasterization and ray tracing performance in games. The fact that every single game is designed and optimized with only rasterization in mind doesn't help either. There have been a couple of attempts (I know of three) at RT games, but most of them use rather old techniques. One such is Oasen:
http://graphics.cs.uni-sb.de/~morfiel/oasen/
Just compare the detail of its huge landscape with the detail in Oblivion. The latter uses so much LOD that it isn't funny, and it still chokes high-end GPUs.
that's wishful thinking, unfortunately that's not the case.
Of course you can construct special cases where things blow up, but you can do the exact same thing with anything, including rasterization.
One huge problem in comparing RT vs rasterization is that people mostly compare high-end GPUs against software implementations. It would be a bit fairer to compare software vs software. E.g., UT2004 has a software renderer, and my previous 2.8 GHz P4 choked on it when rendering 320x240 upscaled to 640x480 with extremely low detail and massive LOD.
http://www.radgametools.com/pixofeat.htm
You can read about their inhuman optimization efforts here:
http://www.ddj.com/dept/global/184405765
http://www.ddj.com/184405807
http://www.ddj.com/184405848
A regular x86 CPU is the second-worst thing to run RT on, after current GPUs. Even Cell is not much better; it just has more power per die but is just as inefficient. Unfortunately there are not many HW products to use for comparison. There is ART's PURE series of ray tracing HW, but that is not meant for real-time rendering. There are several versions of the RPU, but so far they haven't gone much beyond research. Perhaps at next year's Siggraph we will hear something interesting from them.
RT has become interesting only during the last 5-8 years or so. Rasterization has been used in high-end markets for ~25 years. During the last decade, huge amounts of cash have been pumped into researching and developing rasterization techniques and HW. If the same happened with RT, things would get much more interesting. In my opinion, RT is much more future-proof, mostly thanks to logarithmic scaling, global access to the scene at pixel level, and smaller memory bandwidth requirements.