Unfortunately, so it is. The thing is that current and near-future GPUs are quite inept at RT.
Basically, there are two possibilities:
1) GPUs get flexible and powerful enough to be able to ray trace with decent efficiency.
2) GPUs get add-on circuitry to perform ray tracing.
The first one is quite probable within a few GPU generations. The second will most likely not happen, since RT needs to know the location of every polygon and current GPUs don't work that way. Unless 3D APIs change a lot, it won't happen.
A third possibility might be that some console developer decides to use ray tracing. One of the older plans for the PS3 was to use two or three Cells: one for logic and the other(s) for graphics. Two Cells are roughly comparable to current console GPUs in terms of speed and image quality, especially if you add some special instructions that the Cell currently lacks.
If Cell works out as an architecture and goes through a few generations during the next few years, I wouldn't be surprised if the PS4 used RT for its graphics. On the other hand, it's usually Nintendo that does the revolutionizing.
Assuming that the triangles are rather small compared to the volume they are scattered around, yes, it scales logarithmically. Why shouldn't it? Of course it might not be as efficient as tracing nice convex objects. Throw a polygon soup at a ray tracer and tell me if its complexity is still logarithmic.
Something like this is traced rather easily and it does scale logarithmically with increased polygon count.
http://www.acm.org/tog/resources/SPD/rings.png
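To make the "why shouldn't it" part a bit more concrete, here is a rough toy sketch of a BVH traversal (my own throwaway C++ code, not from any real tracer; building the tree itself is left out). The point is that a single box test culls a whole subtree, so with small, well-separated triangles a ray only visits on the order of log2(N) nodes:

// Toy BVH traversal: one AABB test can cull a whole subtree, which is where
// the roughly O(log N) per-ray cost comes from when triangles are small and
// well separated. The helpers are the standard slab test and Moller-Trumbore.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Vec3 {
    float x, y, z;
    float operator[](int i) const { return i == 0 ? x : (i == 1 ? y : z); }
};
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray      { Vec3 orig, dir; };
struct AABB     { Vec3 min, max; };
struct Triangle { Vec3 a, b, c; };

// Slab test: does the ray hit the box at all?
static bool hitAABB(const Ray& r, const AABB& b)
{
    float tmin = 0.0f, tmax = std::numeric_limits<float>::infinity();
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.dir[i];
        float t0 = (b.min[i] - r.orig[i]) * inv;
        float t1 = (b.max[i] - r.orig[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;
    }
    return true;
}

// Moller-Trumbore ray/triangle test; on a hit, t is the distance along the ray.
static bool hitTriangle(const Ray& r, const Triangle& tri, float& t)
{
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p  = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(r.orig, tri.a);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;
}

struct BVHNode {
    AABB bounds;
    BVHNode* left  = nullptr;       // interior nodes have two children,
    BVHNode* right = nullptr;
    std::vector<Triangle> tris;     // leaves hold a handful of triangles
};

// Distance to the closest hit in this subtree, or +inf on a miss.
float trace(const Ray& ray, const BVHNode* node)
{
    float closest = std::numeric_limits<float>::infinity();
    if (!node || !hitAABB(ray, node->bounds))
        return closest;                            // whole subtree culled in O(1)

    if (!node->left && !node->right) {             // leaf: test its few triangles
        for (const Triangle& tri : node->tris) {
            float d;
            if (hitTriangle(ray, tri, d))
                closest = std::min(closest, d);
        }
        return closest;
    }
    // Usually only one child's box is in front of the ray, so the descent
    // stays close to the tree depth, i.e. logarithmic in triangle count.
    return std::min(trace(ray, node->left), trace(ray, node->right));
}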
The BART museum scene is a bit more difficult, but nothing too bad. Rebuilding the tree is not that much more expensive than recalculating the triangle coordinates, and the tracing again scales logarithmically.
http://www.ce.chalmers.se/old/BART/
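For animated scenes like BART, one common approach is to keep the tree topology and just refit the bounding boxes bottom-up after the triangles have moved, which is a single O(N) pass, about the same order of work as transforming the vertices in the first place. A rough sketch of the idea, reusing the Vec3/AABB/Triangle/BVHNode types from the snippet above:

// Grow a box to contain a point / merge two boxes (helpers for the refit).
static AABB expand(const AABB& b, const Vec3& p)
{
    return { { std::min(b.min.x, p.x), std::min(b.min.y, p.y), std::min(b.min.z, p.z) },
             { std::max(b.max.x, p.x), std::max(b.max.y, p.y), std::max(b.max.z, p.z) } };
}
static AABB merge(const AABB& a, const AABB& b)
{
    return expand(expand(a, b.min), b.max);
}

// Bottom-up refit after animation: the tree topology stays fixed, only the
// boxes are updated. One O(N) pass over the nodes, comparable in cost to
// recomputing the transformed triangle coordinates themselves.
void refit(BVHNode* node)
{
    if (!node) return;
    if (!node->left && !node->right) {                    // leaf
        const float inf = std::numeric_limits<float>::infinity();
        node->bounds = { { inf, inf, inf }, { -inf, -inf, -inf } };
        for (const Triangle& t : node->tris) {
            const Vec3 corners[3] = { t.a, t.b, t.c };
            for (const Vec3& p : corners)
                node->bounds = expand(node->bounds, p);
        }
        return;
    }
    refit(node->left);                                    // interior: children first,
    refit(node->right);
    node->bounds = merge(node->left->bounds, node->right->bounds);  // then their union
}

A full rebuild every so often is of course also possible when things have moved around too much; that's more work, but it's amortized over all the rays of the frame.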
The worst-case scenario would be lots of triangles intersecting each other: there you can't partition them, so you end up testing most of them per ray.
In real-world situations you don't see a lot of random animated polygon soups or lots of intersecting triangles crammed into small spaces.
Sorry about that; from the title I assumed we were comparing the viability of different rendering algorithms. It's completely irrelevant, as I didn't write that algorithm A is better than algorithm B.
I should have described my thoughts a bit better there. What I meant was that when I have two algorithms that are roughly equal in speed in the average case, I would take the logarithmically scaling one, because its worst case is usually not as bad as the linearly scaling one's. I wrote that your picking up RT in any case instead of rasterization just because the former scales as O(log N) is wrong, and this is not about computer graphics, it just follows from the definition of complexity.
Also, I didn't say that I would take RT in any case; I said I would take the logarithmically scaling algorithm. Though if we had RT hardware with comparable transistor counts and clock frequencies, I would most likely take RT for most rendering tasks.
I agree that there are exceptions where logarithmic algorithms behave worse than linear ones, but mostly these are just that: exceptions. One such case is searching: finding an integer in an array of ~50 elements* using binary search is usually not faster than using a linear search. They talk about that in the Pixomatic articles I linked before.
*) Might be fewer for CPUs with short pipelines.
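If anyone wants to check that on their own machine, here's a throwaway benchmark of the kind I'd use (toy code; the numbers and the exact crossover point obviously depend on the CPU and compiler):

// Toy comparison of linear vs binary search on a small sorted array.
// On short arrays the predictable, branch-friendly linear scan often wins
// even though it is O(N) versus O(log N); the crossover depends on the CPU
// (pipeline length, branch predictor), so measure on your own box.
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

int linearSearch(const std::vector<int>& v, int key)
{
    for (size_t i = 0; i < v.size(); ++i)
        if (v[i] == key) return static_cast<int>(i);
    return -1;
}

int binarySearch(const std::vector<int>& v, int key)
{
    int lo = 0, hi = static_cast<int>(v.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (v[mid] == key) return mid;
        if (v[mid] < key) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}

int main()
{
    const int N = 50, LOOKUPS = 2000000;
    std::vector<int> v(N);
    for (int i = 0; i < N; ++i) v[i] = i * 2;          // sorted keys

    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, 2 * N);

    long long sink = 0;
    auto bench = [&](int (*search)(const std::vector<int>&, int)) {
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < LOOKUPS; ++i) sink += search(v, pick(rng));
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    };

    std::printf("linear: %.1f ms\n", bench(linearSearch));
    std::printf("binary: %.1f ms\n", bench(binarySearch));
    std::printf("(ignore: %lld)\n", sink);             // keep the compiler honest
}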
I also haven't said that RT is the one ultimate solution to every rendering problem out there. I said it will pay off with complex scenes that have lots of effects which use RT's recursive properties.
If you throw your average game at RT it usually won't show its strengths that well, since the game is designed to make minimal use of things that are not very efficient with rasterizing. The funny thing is that those effects are mostly trivial and rather efficient to do with RT.
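Just to illustrate what I mean by "mostly trivial": in a toy ray tracer (spheres only, completely made up, nothing to do with any real renderer) a hard shadow is literally one extra intersection test per light and a mirror reflection is one recursive call, with no shadow maps, stencil volumes or render-to-texture passes needed:

// Tiny sphere-only ray tracer: the point is only to show that shadows and
// reflections fall out of the same trace() call used for primary rays --
// one extra ray per light, one recursive call per bounce. Everything here
// (scene, materials, camera) is a made-up toy.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec { double x, y, z; };
Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec norm(Vec a) { return a * (1.0 / std::sqrt(dot(a, a))); }

struct Sphere { Vec center; double radius; double gray; double reflect; };

std::vector<Sphere> scene = {
    { {0, -1000, 0}, 999,  0.8, 0.0 },   // "floor"
    { {0,  0,   -3}, 1.0,  0.6, 0.5 },   // shiny ball
    { {2,  0,   -4}, 1.0,  0.9, 0.0 },   // matte ball
};
Vec lightPos = {5, 5, 0};

// Closest sphere hit along o + t*d with t in (eps, tMax); returns index or -1.
int hitScene(Vec o, Vec d, double tMax, double& tHit)
{
    int best = -1; tHit = tMax;
    for (size_t i = 0; i < scene.size(); ++i) {
        Vec oc = o - scene[i].center;
        double b = dot(oc, d), c = dot(oc, oc) - scene[i].radius * scene[i].radius;
        double disc = b * b - c;
        if (disc < 0) continue;
        double t = -b - std::sqrt(disc);
        if (t > 1e-4 && t < tHit) { tHit = t; best = (int)i; }
    }
    return best;
}

// Shadow: one extra hitScene() call per light. Reflection: one recursion.
double trace(Vec o, Vec d, int depth)
{
    double t;
    int i = hitScene(o, d, 1e30, t);
    if (i < 0) return 0.1;                                   // background
    Vec p = o + d * t;
    Vec n = norm(p - scene[i].center);
    Vec toLight = lightPos - p;
    double lightDist = std::sqrt(dot(toLight, toLight));
    Vec l = toLight * (1.0 / lightDist);

    double ts;
    bool shadowed = hitScene(p, l, lightDist, ts) >= 0;      // the "shadow ray"
    double color = 0.1 + (shadowed ? 0.0 : scene[i].gray * std::max(0.0, dot(n, l)));

    if (scene[i].reflect > 0 && depth < 4) {                 // the "reflection ray"
        Vec r = d - n * (2.0 * dot(d, n));
        color += scene[i].reflect * trace(p, norm(r), depth + 1);
    }
    return color;
}

int main()   // writes a small grayscale PGM image to stdout
{
    const int W = 200, H = 150;
    std::printf("P2\n%d %d\n255\n", W, H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            Vec dir = norm({ (x - W / 2) / double(H), -(y - H / 2) / double(H), -1.0 });
            int v = (int)(std::min(1.0, trace({0, 0, 2}, dir, 0)) * 255);
            std::printf("%d%c", v, x == W - 1 ? '\n' : ' ');
        }
}

Pipe the output into a .pgm file and you get the shadow and the mirror reflection for free, just from firing more rays.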
So can ray tracing. Most tricks you can use with rasterizing you can use with RT as well. BTW, rasterization can be sublinear as well if you're clever enough.