Real-time raytracing to go mainstream?

The thing is that current and future GPUs are quite inept at RT.
Unfortunately, that's true.
Basically there are two possibilities:
1) GPUs get flexible and powerful enough to ray trace with decent efficiency.
2) GPUs get add-on circuitry to perform ray tracing.

The first is quite probable within a few GPU generations. The second will most likely not happen, since RT needs to know the location of every polygon and current GPUs don't work like that. Unless 3D APIs change a lot, it won't happen.

A third possibility might be that some console developer decides to use ray tracing. One of the older plans for the PS3 was to use two or three Cells: one for logic and the other(s) for graphics. Two Cells are roughly comparable to current console GPUs in terms of speed and image quality, especially once you add some special instructions that it lacks.

If Cell as an architecture works out and goes through a few generations over the next few years, I wouldn't be surprised if the PS4 used RT for its graphics. On the other hand, it's usually Nintendo that does the revolutionizing.
Throw a polygon soup at a ray tracer and tell me if its complexity is still logarithmic.
Assuming the triangles are rather small compared to the volume they are scattered around, yes, it scales logarithmically. Why shouldn't it? Of course it might not be as efficient as tracing nice convex objects.

Something like this is traced rather easily and it does scale logarithmically with increased polygon count.
http://www.acm.org/tog/resources/SPD/rings.png
The BART museum is a bit more difficult, but nothing too bad. Rebuilding the tree is not much more expensive than recalculating the triangle coordinates, and again it scales logarithmically.
http://www.ce.chalmers.se/old/BART/

The worst-case scenario would be lots of triangles intersecting each other: you can't partition them, so you end up intersecting most of them.
In real-world situations, though, you don't see many random animated polygon soups or lots of intersecting triangles crammed into small spaces.
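To make the "logarithmic per ray" claim a bit more concrete, here's a rough sketch (just an illustration, all names are mine) of how a ray walks an axis-aligned BVH. At each node a whole subtree can be rejected with a single box test, which is where the roughly O(log N) cost per ray comes from once the tree is built:
[code]
// Minimal BVH traversal (illustration only). Each interior node carries a
// bounding box; a ray that misses the box skips the whole subtree, which is
// where the roughly O(log N) per-ray cost of a well-built tree comes from.
#include <vector>
#include <algorithm>

struct AABB { float lo[3], hi[3]; };
struct Ray  { float o[3], invDir[3]; };       // invDir[a] = 1 / direction[a]

// Standard slab test: does the ray hit the box anywhere in [0, tMax]?
bool hitBox(const AABB& b, const Ray& r, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float tNear = (b.lo[a] - r.o[a]) * r.invDir[a];
        float tFar  = (b.hi[a] - r.o[a]) * r.invDir[a];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;            // slabs don't overlap -> miss
    }
    return true;
}

struct BVHNode {
    AABB box;
    int left = -1, right = -1;                // child indices, -1 for a leaf
    std::vector<int> tris;                    // triangle indices (leaves only)
};

// Recursive traversal: entire subtrees are culled with a single box test.
void traverse(const std::vector<BVHNode>& nodes, int idx, const Ray& r,
              float tMax, std::vector<int>& candidates) {
    const BVHNode& n = nodes[idx];
    if (!hitBox(n.box, r, tMax)) return;      // reject this subtree outright
    if (n.left < 0) {                         // leaf: hand over its few triangles
        candidates.insert(candidates.end(), n.tris.begin(), n.tris.end());
        return;
    }
    traverse(nodes, n.left,  r, tMax, candidates);
    traverse(nodes, n.right, r, tMax, candidates);
}
[/code]
The actual ray/triangle test is left out; the point is just that each ray touches a handful of nodes along one or two paths down the tree instead of all N triangles.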
It's completely irrelevant, as I didn't write that algorithm A is better than algorithm B.
Sorry about that; from the title I assumed we were comparing the viability of different rendering algorithms.
I wrote that your picking RT in any case over rasterization just because the former scales as O(log N) is wrong, and this is not about computer graphics, it simply follows from the definition of complexity.
I should have described my thoughts a bit better there. What I meant was that when I have two algorithms that are roughly equal in speed in the average case, I would take the logarithmically scaling one, because its worst case is usually not as bad as that of the linearly scaling one.
Also, I didn't say I would take RT in any case; I said I would take the logarithmically scaling algorithm. Though if we had RT hardware with comparable transistor counts and clock frequencies, I would most likely take RT for most rendering tasks.

I agree that there are exceptions where logarithmic algorithms behave worse than linear ones, but mostly these are just that: exceptions. One such case might be searching. Finding an integer in an array of ~50 elements* using binary search is usually not faster than using linear search. They talk about that in the Pixomatic articles I linked before.
*) Might be fewer for CPUs with short pipelines.
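On that search tangent, here's a minimal sketch of the two competitors (my own code; the ~50-element threshold above is the rough figure from those articles, not something measured here). The linear scan wins at small sizes mainly because its branch is trivially predictable and its memory access is sequential:
[code]
#include <cstddef>

// O(N) scan: trivially predictable branch, perfectly sequential memory access.
int linearSearch(const int* a, std::size_t n, int key) {
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] == key) return static_cast<int>(i);
    return -1;
}

// O(log N) binary search over a sorted array: fewer comparisons, but each one
// is a hard-to-predict branch and a scattered access, which hurts on deeply
// pipelined CPUs when the array is small.
int binarySearch(const int* a, std::size_t n, int key) {
    std::size_t lo = 0, hi = n;               // search the sorted range [lo, hi)
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)      lo = mid + 1;
        else if (a[mid] > key) hi = mid;
        else                   return static_cast<int>(mid);
    }
    return -1;
}
[/code]
Below roughly that element count the linear loop usually wins on real CPUs; above it, the logarithmic curve takes over.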

I also haven't said that RT is the one ultimate solution to every rendering problem out there. I said it will pay off with complex scenes and lots of effects that exploit RT's recursive properties.
If you throw your average game at RT it usually won't show its strengths that well, since the game is designed to make minimal use of things that rasterization doesn't handle efficiently. The funny thing is that those effects are mostly trivial and rather efficient to do with RT.
BTW rasterization can be sublinear as well if you're clever enough.
So can ray tracing. Most tricks you can use with rasterization you can use with RT as well.
 
Throw a polygon soup at a ray tracer and tell me if its complexity is still logarithmic.
Agreed.
Assuming the triangles are rather small compared to the volume they are scattered around, yes, it scales logarithmically. Why shouldn't it? Of course it might not be as efficient as tracing nice convex objects.
How does one get the polygons IN to the renderer in sub-linear time? :rolleyes: ... or are you only moving a camera around a static scene and so amortizing the setup costs?
 
Raytracing only "scales logarithmically" when an efficient spatial data structure is used (ex. kd-tree). Indeed without such an accelerator, raytracing is rediculously slow and scales terribly.

The same "logarithmic scaling" can be attained with rasterization using a similar structure combined with an LOD sceme (of where there are many). Moreover one can use a much looser structure to get the same benefit while still maintaining some ability to handle dynamic scenes (something which a strict kd-tree - and hence most raytracers - does not address well).

Anyway, raytracing is certainly awesome, but to be honest, I'm only really interested in tracing secondary rays (and hence tricks like exploiting ray coherency aren't relevant). Rasterization is simply the best way to do coherent primary (and shadow) rays.

As has been mentioned on this forum before, what we really need is a system that can do both rasterization and raytracing efficiently, with a high-speed connection between the (probably disjoint) processors for each task. The PS3 looks pretty good for that: rasterize (and shade) the primary rays on the GPU, do the secondary rays on the SPEs. Depending on the interface latency, it may even be a win to shade secondary rays on the GPU, doing only ray intersections on the Cell.
 
Assuming the triangles are rather small compared to the volume they are scattered around, yes, it scales logarithmically. Why shouldn't it? Of course it might not be as efficient as tracing nice convex objects.
I think you missed the points here. One, as Andy mentioned, the logarithmic scaling is a result of some sort of data structure used to limit the objects you test per ray, but that is not an *inherent* feature of a raytracer. Without it, it's just plain linear as you test every object in the scene for every ray. And either way, it's still linear with respect to resolution.

The other point is that you have additional costs: your structure needs updating in dynamic scenery, and a lot of data needs to be moved, which are constant costs. So even if it scales up really well in comparison to rasterization, it could be that it takes rather massive complexity to get to that crossover point. For certain types of interactions, we're probably already there, but I don't know if that's true for everything.
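To put a rough shape on that update cost (my own sketch, not something from the post): if the tree topology is kept and only the bounding boxes are refitted bottom-up after the vertices move, the per-frame update is one O(N) pass over the nodes, independent of how many rays get shot afterwards. Where exactly that constant cost pays for itself is the crossover point in question:
[code]
#include <vector>
#include <algorithm>

struct AABB {
    float lo[3], hi[3];
    void grow(const AABB& o) {
        for (int a = 0; a < 3; ++a) {
            lo[a] = std::min(lo[a], o.lo[a]);
            hi[a] = std::max(hi[a], o.hi[a]);
        }
    }
};

struct Node {
    AABB box;
    int left = -1, right = -1;       // -1, -1 marks a leaf
    int firstTri = 0, triCount = 0;  // range into the triangle list (leaves)
};

// Recompute a leaf's box from its (already animated) triangles. 'triBoxes'
// holds one bounding box per triangle for the current frame.
AABB leafBounds(const Node& n, const std::vector<AABB>& triBoxes) {
    AABB b = triBoxes[n.firstTri];
    for (int i = 1; i < n.triCount; ++i) b.grow(triBoxes[n.firstTri + i]);
    return b;
}

// Bottom-up refit: topology stays fixed, only the boxes are updated.
// One visit per node -> a constant O(N) cost per frame for a deforming mesh.
AABB refit(std::vector<Node>& nodes, int idx, const std::vector<AABB>& triBoxes) {
    Node& n = nodes[idx];
    if (n.left < 0) {
        n.box = leafBounds(n, triBoxes);
    } else {
        n.box = refit(nodes, n.left, triBoxes);
        n.box.grow(refit(nodes, n.right, triBoxes));
    }
    return n.box;
}
[/code]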

My position is mainly that it's an ultimately necessary transition because scene complexity is growing rapidly, as is complexity in the stringing together of dozens and dozens of cheap trick shaders, and that ultimately makes things fragile. Raytracing, otoh, is fundamentally simple.

Rasterization is simply the best way to do coherent primary (and shadow) rays.
Agree on the primary rays part, but I have mixed feelings about the shadow rays part. Maybe for filling in an irradiance cache of sorts, but in general.... eeeeh.
 
Agree on the primary rays part, but I have mixed feelings about the shadow rays part. Maybe for filling in an irradiance cache of sorts, but in general.... eeeeh.
You're right, it may end up being better to do shadow rays with a raytracer, but the fact that they're pretty coherent leads me to believe that ultimately the best solution will use rasterization (and even standard shadow maps work decently well - there are many extensions to make them a lot better). That said, I don't have a strong feeling on which way this one will go... you could be totally right :)
 
One thing I think has merit has been sort of hinted at here:
use Cell to do the shading of the scene, 100% greyscale, i.e. no color info, with a quasi-radiosity approach limited to say 2-3 bounces. I believe this is achievable with quite complicated schemes in realtime
(OK, it's not gonna be 100% accurate since you don't access the scene's textures, but I believe it will look jaw-droppingly brilliant).
Afterwards you just use the GPU to texture map (color in) the scene.
 
Forget real/true realtime raytracing for a minute.

What about some rendering technique that completely fakes raytracing
(be it using shaders or something else), giving reasonable "raytracing-like" results and providing a large leap in the quality of current realtime graphics?
 
There is work being done in that direction, yes, and SM3.0 helped a bit with things, but I don't think we'll be away from precomputed stuff (for gaming) in the RT area for quite a while.
 
Intel thinks it's possible ;) - here's an article from Intel Technology Journal on the topic....
I'm a little unhappy with how they misrepresent some of the advantages and disadvantages of raytracing vs. rasterization in that paper, not to mention that they needlessly imply a strict dividing line between the two techniques (hybrids are often the most efficient way to do it, as discussed earlier in the thread).

Then again, the article is not entirely surprising coming from Intel, as they have every reason to motivate the exclusive use of traditional CPUs for rendering. I'd personally be a bit happier if they perhaps noticed the proven benefits of rasterization and maybe considered slapping a rasterizer into their "terascale" architecture.
 
they needlessly imply a strict dividing line between the two techniques (hybrids are often the most efficient way to do it, as discussed earlier in the thread).
Which is why I can sort of accept ATI's stand on the "raytracing vs. rasterization" question where they said that some rendering paradigm will come along that will subsume them both. nVidia's position, which I don't accept, was one of "You f***ing moron! It's so obvious that our existing technology will scale up ad infinitum and is therefore inherently superior to God."

A simple hybrid I'd be almost happy with would be a rasterizer that has access to the full scene data so you can shoot sample rays out into the world. Though I think by the time that is even considered, we'll already have a need to raycast for the rasterization part because mesh density will be that high.
 
A simple hybrid I'd be almost happy with would be a rasterizer that has access to the full scene data so you can shoot sample rays out into the world.
I don't think that's too far off. Shooting rays from shaders is fairly straightforward as long as you restrict them to operate outside of control flow. In this restricted case, one can simply break the shading portion into "before" and "after" the ray cast.
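Here's a little sketch of what that "before"/"after" split could look like on the host side (entirely my own naming and structure, not any real API): the front half of the shader produces a ray request plus whatever state the back half needs, the ray gets traced elsewhere in a batch, and the back half resumes with the result:
[code]
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 reflect(const Vec3& v, const Vec3& n) {     // reflect v about unit normal n
    float d = 2.0f * dot(v, n);
    return Vec3{ v.x - d*n.x, v.y - d*n.y, v.z - d*n.z };
}

// State the "before" half produces and the "after" half consumes.
struct ShadePartial {
    Vec3  baseColor;      // result of all shading that doesn't need the ray
    Vec3  rayOrigin;      // the ray request itself
    Vec3  rayDir;
    float reflectivity;
};

// "Before" the cast: everything up to the point where a ray is needed.
ShadePartial shadeBefore(const Vec3& position, const Vec3& normal, const Vec3& viewDir) {
    ShadePartial p;
    p.baseColor    = /* diffuse, texture lookups, etc. */ Vec3{0.5f, 0.5f, 0.5f};
    p.rayOrigin    = position;
    p.rayDir       = reflect(viewDir, normal);   // e.g. a mirror reflection ray
    p.reflectivity = 0.25f;
    return p;
}

// The ray cast itself happens elsewhere (on the raytracing side of the hybrid),
// outside the shader's control flow, possibly batched over many pixels.

// "After" the cast: resume shading with the ray result.
Vec3 shadeAfter(const ShadePartial& p, const Vec3& rayResultColor) {
    return Vec3{ p.baseColor.x + p.reflectivity * rayResultColor.x,
                 p.baseColor.y + p.reflectivity * rayResultColor.y,
                 p.baseColor.z + p.reflectivity * rayResultColor.z };
}
[/code]
The key property is that the ray cast sits outside the shader's control flow, so the two halves can be compiled and scheduled as ordinary shading work.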

With rasterization hardware available, I still can't see a good reason to raytrace coherent (primary) rays - now or ever. The papers on bundling primary rays seem to be awkwardly trying to re-invent rasterization. However, raytracing secondary rays is certainly a good idea, and I'd be surprised if it doesn't show up in applications in the near future.

As noted, many per-pixel displacement mapping techniques borrow from raytracing, and I suspect that generalized raycasting (potentially hitting the whole scene, or a fair sized subset of it) isn't far off.

[Edit] Check out http://www.rtt.ag/cms/en/
 
Though my obvious preference is to suit the hardware to the task rather than to suit the shader and the data structures to the hardware. And I don't think that's much to ask, because there's no reason for the GPU to become more generic than simply handling graphics really well. The whole GPGPU concept makes me feel as if people are running out of ideas graphically.

With rasterization hardware available, I still can't see a good reason to raytrace coherent (primary) rays - now or ever.
REYES rasterization comes to mind. The only real reason I can see is that you can't rasterize the geometry at all in the first place, i.e. you need to sample it because it's at sub-pixel scale. RenderMan still qualifies as rasterization, though, because things are done incrementally at the geometry scale (as opposed to, say, BMRT, which is an explicitly raytracing version). It's just that you still have to sample the cut-up micropolygon mesh either way.
 
The whole GPGPU concept makes me feel as if people are running out of ideas graphically.
I don't think that's the case at all: "graphics" and GPGPU are not really two disjoint spaces. Would you consider raytracing on a GPU to be GPGPU? How about performing a convolution on an array? Sorting an array? Sorting objects for blending?

My point is that a lot of "GPGPU" algorithms come up in "graphics" applications. There's also the fact that - for example - sorting 10x faster for your $ is interesting to a lot of people. Sure the Cell will consume some of the latter space, but we can do graphics pretty well on the Cell too...

Regarding other rasterization schemes and derivatives, it still seems to me that using some form of iterative operation with a (potentially composite) occlusion buffer, combined with a proper geometry LOD scheme, is pretty efficient. I can't see raytracing beating that sort of scheme, even with a good acceleration structure and coherent-ray tricks.
 
In a sense, those hybrids are already here.
E.g. relief mapping is raycasting (of heightmaps) integrated into rasterization.
I disagree; the fact that it still works per primitive puts it in a different camp.

BTW, I think once we have the D3D10 geometry shader, this research into hacks designed to sidestep the fact that you can't carry information over between vertices/pixels will become useless. Raycasting using vertical ray coherence is more accurate, more general and needs fewer samples per pixel.
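For anyone who hasn't looked at these techniques: the "raycasting of heightmaps" in the quote boils down to marching a ray through a heightfield in texture space until it dips below the surface. A minimal CPU-side sketch of that inner loop (my own code; real implementations run it in the pixel shader and refine the hit with a short binary search):
[code]
#include <cmath>

// Sample the heightfield at texture coordinate (u, v); returns a height in [0, 1].
// In a shader this would be a texture fetch; here it's a procedural stand-in.
float sampleHeight(float u, float v) {
    return 0.5f + 0.5f * std::sin(10.0f * u) * std::cos(10.0f * v);
}

// March a view ray through the heightfield in texture space and report where it
// first dips below the surface. (enterU, enterV) is where the ray enters the
// volume at height 1; (du, dv, dh) is the per-step motion, with dh < 0.
bool raycastHeightfield(float enterU, float enterV,
                        float du, float dv, float dh,
                        int maxSteps, float& hitU, float& hitV) {
    float u = enterU, v = enterV, h = 1.0f;
    for (int i = 0; i < maxSteps; ++i) {
        u += du; v += dv; h += dh;            // take one linear step along the ray
        if (h <= sampleHeight(u, v)) {        // ray went below the surface
            hitU = u; hitV = v;               // (a real shader refines this hit
            return true;                      //  with a short binary search)
        }
        if (h <= 0.0f) break;                 // fell out of the bottom: no hit
    }
    return false;
}
[/code]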
 
I disagree; the fact that it still works per primitive puts it in a different camp.
Well, feel free to make your own definition of 'hybrid'! :) *

Raycasting using vertical ray coherence is more accurate, more general and needs fewer samples per pixel.
And it's less than trivial to implement with 6 DOF, even in pure SW. I don't expect to see one running on a GPU in the near future. :)

EDIT: * After some meditation, I seem to understand the root of the misunderstanding. I didn't mean to say that relief mapping is a hybrid [post=813711]"that has access to the full scene data"[/post]. I just wanted to say that it's a hybrid in the general sense of the word. I guess I shouldn't have said "those hybrids", just "hybrids". :)
 
Not to derail this thread, but it just occurred to me: we don't really need realtime raytracing right now. Most prerendered/offline CGI does not make use of it; raytracing is used very sparingly even in high-end, high-budget CGI for films.

What we need is higher-precision graphics that look more like prerendered CGI, and we don't need raytracing to do that. We need more AA sampling, much more geometry, more post-processing capability in realtime, more complex lighting, more light sources, and above all, high and stable framerates. Everything should be, in general, 60fps by default.

PC GPUs, and the games that use them, should not waste processing power getting 200+ fps (something that JC has touched on several times in recent years). Two modes should be there: 120fps and 60fps for first-person shooters, and just 60fps for everything else.

Console games can all be 60fps.

I think before realtime raytracing is tackled, the entire industry should focus on far more complex graphics, combined with higher image quality and with solving the framerate problem. Maybe that's a difficult thing (framerates) for some to wrap their heads around, but it has been done before, in the arcades in the 1990s. Why we don't have 60fps as a standard at home now, with (by far) more powerful technology, is beyond me.

My main point is this: today's best realtime graphics still look like the 3D polygon graphics of the mid 1990s, only with 100x more of everything. Things have not fundamentally changed, with the exception of shaders, and shaders have not really changed things in terms of quality - the quality that prerendered CGI without raytracing has.

Sorry for the ramble.
 