Real-Time Ray Tracing: Holy Grail or Fools' Errand? *Partial Reconstruction*

If you have a low poly approximation (say for LOD) you can render a cubemap and do your GI from that. No need for classic ray tracing.
I think the ATI ping pong demo does something like this

http://www.firingsquad.com/media/article_image.asp/2268/09

filling the scene with a 3D array of cubemaps to get the required density of "GI" sampling.

Now combine this with something similar to what Jawed posted, smart re-projection of the previous frame's results (so you don't have to redo the entire ray trace each frame) and bang, you will have interactive GI while rendering.
The paper I linked actually gave this specific example :smile: - the dataset for GI is slowly changing as a scene/camera animates and the eye finds it hard to see artefacts in GI.
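
Roughly, the reuse works like the sketch below (minimal C++ with made-up names and a made-up disocclusion threshold; not the paper's actual implementation): reproject the shaded world position into last frame's screen space, and if the cached depth agrees, keep the cached GI value instead of re-tracing it.

```cpp
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row-major

Vec4 mul(const Mat4& M, const Vec4& v) {
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
             M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

// Returns true if last frame holds a reusable GI sample for this world-space
// position (w must be 1), and outputs the texel it lives at. Ignores y-flip.
bool reprojectToLastFrame(const Mat4& prevViewProj, Vec4 worldPos,
                          const float* prevDepth, int width, int height,
                          int* outX, int* outY)
{
    Vec4 clip = mul(prevViewProj, worldPos);
    if (clip.w <= 0.0f) return false;                         // behind the old camera
    float nx = clip.x / clip.w, ny = clip.y / clip.w, nz = clip.z / clip.w;
    if (nx < -1 || nx > 1 || ny < -1 || ny > 1) return false; // was off-screen
    int px = (int)((nx * 0.5f + 0.5f) * (width  - 1));
    int py = (int)((ny * 0.5f + 0.5f) * (height - 1));
    // Disocclusion test: the depth cached last frame must match what we expect.
    if (std::fabs(prevDepth[py * width + px] - nz) > 0.01f) return false;
    *outX = px; *outY = py;                                   // reuse the cached GI here
    return true;
}
```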

Jawed
 
If you have a low poly approximation (say for LOD) you can render a cubemap and do your GI from that.
Cube maps are wholly inadequate for anything covering a decent portion of the scene. You can't even chop up large objects because you get discontinuities. Moreover, if you have 1000 objects, are you going to render 1000 cube maps, each with 1000 objects in them?

Now combine this with something similar to what Jawed posted, smart re-projection of the previous frame's results (so you don't have to redo the entire ray trace each frame) and bang, you will have interactive GI while rendering.
Like I was saying, reprojection only fixes the easy cases like static scenes with diffuse surfaces. GI has a domino effect. You move one thing and it changes the lighting everywhere. Framerate will be limited by things that reprojection can't address.
 
Little gaps during motion can be easily filled with motion blur.
You can certainly hope so, but you can't know that (if you knew exactly how wrong you were, you'd know exactly how large the bounding volume should be). I just don't see the point; doing it right is only a little more work.
 
I think the ATI ping pong demo does something like this

http://www.firingsquad.com/media/article_image.asp/2268/09

filling the scene with a 3D array of cubemaps to get the required density of "GI" sampling.

The paper I linked actually gave this specific example :smile: - the dataset for GI is slowly changing as a scene/camera animates and the eye finds it hard to see artefacts in GI.

Jawed

Jawed,

Slightly OT, but have you tried the methods in "Accelerating Real-Time Shading with Reverse Reprojection Caching" to amortize the cost of doing relief mapping refinement over multiple frames?

Also,

What I was referring to is different: it works in very fast-moving scenes, and you only render one cubemap at the same world position as the camera (instead of multiple cubemaps sampling illumination from various areas of the scene). The concept without a layered cubemap will not work for first person (because large portions of the cube would just be rendering the person), but it will work well enough for a third-person game (though it is obviously limited to eye-facing reflections).

BTW, chapter 17 of GPU Gems 3, "Robust Multiple Specular Reflections and Refractions", goes over a technique of ray marching in a cubemap. They use a brute-force method, redoing the entire ray march each frame, which limits either frame rate or quality. If you had a method to spread the ray march and refinement across multiple frames, then you are starting to get an idea of what I am talking about. Obviously there is a lot more involved in getting it to work well without visual problems...
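
To make that concrete, here is a rough C++ sketch of marching a ray against a distance-storing cubemap while spreading the work over frames. The names are made up, storedDistance is a stand-in for the texture lookup, and this is not the GPU Gems 3 shader code (which also refines the hit once the crossing is found).

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float len(Vec3 v)                 { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec3  at(Vec3 o, Vec3 d, float t) { return { o.x + d.x*t, o.y + d.y*t, o.z + d.z*t }; }

// Per-pixel state persisted between frames: how far we have marched, and
// whether the ray has crossed the surface yet.
struct MarchState { float t = 0.0f; bool hit = false; };

// Advance the march by a fixed budget of steps this frame, then stop.
// Positions are in the cubemap's local space (the cube centre is the origin).
// 'storedDistance' is the hypothetical lookup: the distance from the cube
// centre to the scene along a given direction, as baked into the cubemap.
MarchState marchSome(Vec3 origin, Vec3 dir, MarchState prev,
                     int stepsThisFrame, float stepSize,
                     float (*storedDistance)(Vec3 direction))
{
    MarchState s = prev;
    for (int i = 0; i < stepsThisFrame && !s.hit; ++i) {
        s.t += stepSize;
        Vec3 p = at(origin, dir, s.t);
        // The ray has passed the surface once the sample point is farther from
        // the cube centre than the scene is in that direction (cubemap lookups
        // only care about p's direction, so p need not be normalized).
        if (storedDistance(p) <= len(p))
            s.hit = true;
    }
    return s;   // if !hit, carry s.t into the next frame and keep marching
}
```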

Mintmaster, frame rate is not limited by the number of pixels needing re-projection. Instead, re-projected pixels yield only an approximate solution, which is refined over the next few frames. Rather than thinking of re-projection as "caching" fully finished shader computations, in the context I was describing it is used to gather information from the previous frame and continue converging toward the correct result.
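
In code, the distinction looks something like the blend below (a toy C++ sketch, not anything from a shipping engine): the reprojected value is just a starting guess that keeps converging, and a disocclusion simply restarts the convergence. The frame rate stays pinned by the per-frame budget; what varies with scene motion is how converged the image is at any given instant.

```cpp
struct Color { float r, g, b; };

// Fold this frame's (possibly cheap/partial) estimate into the reprojected
// history. 'alpha' trades convergence speed against temporal stability;
// after roughly 1/alpha frames the value has effectively converged.
Color refine(Color history, Color estimate, bool historyValid, float alpha)
{
    if (!historyValid)           // disocclusion: no usable history, start over
        return estimate;
    return { history.r + alpha * (estimate.r - history.r),
             history.g + alpha * (estimate.g - history.g),
             history.b + alpha * (estimate.b - history.b) };
}
```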

Keep in mind this is rendering and the "correct solution" is something which looks good enough visually and meets the frame rate requirement, instead of 100% "correct" ray tracing.
 
Jawed,

Slightly OT, but have you tried the methods in "Accelerating Real-Time Shading with Reverse Reprojection Caching" to amortize the cost of doing relief mapping refinement over multiple frames?
:oops: I'm just an armchair (well wicker chair) enthusiast who last did rasterised 3D graphics programming back in the early 80s and nowadays just reads stuff to get the high level concepts. Actually, theoretically, I'm a ray-tracing fanboy, having been bitten by the bug way back then...

Jawed
 
:oops: I'm just an armchair (well wicker chair) enthusiast who last did rasterised 3D graphics programming back in the early 80s and nowadays just reads stuff to get the high level concepts. Actually, theoretically, I'm a ray-tracing fanboy, having been bitten by the bug way back then...

Ahh, the 80's ... I'll never forget "The Last Starfighter" ;)
 
You can certainly hope so, but you can't know that (if you knew exactly how wrong you were, you'd know exactly how large the bounding volume should be). I just don't see the point; doing it right is only a little more work.

Sometimes it is faster to forgo exact occlusion computations and just ensure that you have enough overlap to cover the screen (and handle coverage over the predicted positions of objects in the next frame), and let the Z buffer do its work. This kind of relates back to the issue where ray tracing has to be exact in its search (expensive), whereas with rendering you simply throw enough fragments at a pixel to ensure it is covered at least once (easy).
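
A trivial C++ sketch of that idea (names and numbers purely illustrative): inflate the bound by the worst-case motion until the next frame, render whatever that covers, and let the depth test throw away the excess.

```cpp
struct Sphere { float cx, cy, cz, radius; };

// Conservative bound: pad the radius by the farthest the object could move
// before the next frame (plus a safety margin), instead of computing exact
// occlusion. Anything rasterized against this bound is guaranteed to cover
// every pixel the object might touch; the Z buffer resolves the rest.
Sphere conservativeBound(Sphere current, float maxSpeed, float frameTime,
                         float margin = 1.1f)
{
    current.radius += maxSpeed * frameTime * margin;
    return current;
}
```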
 
Another approach would be to store all (main) beams in a tree, including the objects/surfaces hit at each intersection, and only recalculate beams whose object/surface has been transformed since the last frame (dirty flag). But you really need the whole scene, with objects consisting of curved surfaces, up front on the GPU for it to be practical.
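
Something like the toy C++ sketch below (illustrative names only; a real system would also have to handle beams whose *miss* changes when something new moves into their path):

```cpp
#include <cstdint>
#include <vector>

// One cached beam: which surface it hit, which secondary beams it spawned
// there (reflection/refraction), and whether it needs re-tracing this frame.
struct BeamNode {
    uint32_t surfaceId = 0;
    std::vector<int> children;
    bool dirty = false;
};

// Mark a beam dirty if the surface it hit was transformed, or if any ancestor
// beam is dirty (a moved parent surface changes everything reached through it).
void markDirty(std::vector<BeamNode>& beams, int node,
               const std::vector<bool>& surfaceMoved, bool parentDirty = false)
{
    BeamNode& b = beams[node];
    b.dirty = parentDirty || surfaceMoved[b.surfaceId];
    for (int child : b.children)
        markDirty(beams, child, surfaceMoved, b.dirty);
}
// After marking, only beams with dirty == true are re-traced; the rest keep
// last frame's intersection results.
```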

Although Cell sounds good for this as well, as long as you can partition your structures in a good way.
 
Ryan on the other hand just doesn't get it though ...
NVIDIA's stance is that rasterization is not inherently worse for gaming than ray tracing, if only because of all the years of work and research that have gone into it up to today.
That's not what David Kirk said ... not in the fucking remotest sense. Getting to interview David Kirk and then spinning what he said into supporting your own bias is just disrespectful; your own bias being so hopelessly wrong makes it even worse :)
 
The most ironic thing that could happen after all this is if NV60 or R800 (DX11) was/were more viable for real-time raytracing than Larrabee...
 
Ryan on the other hand just doesn't get it though ...

That's not what David Kirk said ... not in the fucking remotest sense. Getting to interview David Kirk and then spinning what he said into supporting your own bias is just disrespectful; your own bias being so hopelessly wrong makes it even worse :)

Are you serious? MY bias? If anything, all of my previous articles on the topic have leaned towards ray tracing being the superior long term option.

Regardless, my point is perfectly valid and correct. See these quotes from Dr. Kirk:

Second, most rendering engines in games and professional applications that use rasterization also use hierarchy and culling to avoid visiting and drawing invisible triangles. Backface culling has long been used to avoid drawing triangles that are facing away from the viewer (the backsides of objects, hidden behind the front sides), and hierarchical culling can be used to avoid drawing entire chunks of the scene.

In order to do a good job of rendering these effects, you would have to shoot tens or hundreds of rays per pixel. This is far from real time. As a side note, these effects are "soft" and very well-approximated through rasterization and texturing techniques in real-time.

Virtually all games and professional applications make use of the modern APIs for graphics: OpenGL(tm) and DirectX(tm). These APIs use rasterization, not ray tracing. So, the present environment is almost entirely rasterization-based. We would be foolish not to build hardware that runs current applications well.

One important benefit of rasterization is that the power consumption is much lower for fixed function hardware than programmable hardware. So, rasterization has a significant advantage for mobile platforms where power is a concern.

This all points to the work that hardware and software developers have done over the years to overcome the inherent drawbacks that rasterization has for some rendering. (To quote Dr. Kirk again: "Rasterization is blisteringly fast, but not well-suited to all visual effects.")
 
Welcome to the board, Ryan! Anyway, I personally don't think what you said is really biased, but neither is it fair/accurate:

First of all, I'd like to point out that raytracing also had plenty of research thrown its way in the last 10 years, and it'd also be much less viable if not for that research. The only real difference is that it was done not by game developers, but by academia. EDIT: What I'm implying here is that while there are 'clever tricks' for rasterization, such as complex visibility determination schemes, there also are 'clever tricks' for raytracing; what else are today's acceleration structures, after all? They're hardly obvious, and they certainly didn't exist a few decades ago!
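
For the record, a toy C++ sketch of what one of those acceleration structures (a BVH) buys you; the exact triangle test is deliberately left out, and everything here is illustrative rather than any particular implementation:

```cpp
#include <algorithm>
#include <vector>

struct AABB { float lo[3], hi[3]; };

struct BVHNode {
    AABB bounds;
    int left = -1, right = -1;     // child node indices; -1 means this is a leaf
    std::vector<int> triangles;    // triangle indices stored at leaves
};

// Standard slab test (invDir = 1/dir per axis; assumes no zero components).
bool rayHitsBox(const AABB& b, const float orig[3], const float invDir[3])
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (b.lo[a] - orig[a]) * invDir[a];
        float t1 = (b.hi[a] - orig[a]) * invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// How many triangles a ray actually has to test after BVH pruning: whole
// subtrees whose boxes the ray misses are skipped, which is the entire point.
int candidateTriangles(const std::vector<BVHNode>& nodes, int idx,
                       const float orig[3], const float invDir[3])
{
    const BVHNode& n = nodes[idx];
    if (!rayHitsBox(n.bounds, orig, invDir)) return 0;
    if (n.left < 0) return (int)n.triangles.size();
    return candidateTriangles(nodes, n.left,  orig, invDir) +
           candidateTriangles(nodes, n.right, orig, invDir);
}
```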

Secondly, there never was a single point in the history of computer graphics where raytracing made more sense than rasterization outside of specific effects and when performance didn't matter much. Smart techniques didn't make rasterization viable; they merely kept it better than the alternatives.

As a side note, I'm honestly tired of Intel's claims about using raytracing for shadow rays. Unless you want hard shadows ala Doom3, their described approach doesn't really work - and the industry isn't moving towards that, but rather away from it (and very rapidly indeed). The only good reason (for shadowing) not to use fixed-function rasterization hardware is if you want to use logarithmic shadowmap algorithms, IMO, and you might be better off doing rasterization than raytracing even then...
 
Are you serious? MY bias? If anything, all of my previous articles on the topic have leaned towards ray tracing being the superior long term option.

Regardless, my point is perfectly valid and correct. See these quotes from Dr. Kirk:

This all points to the work that hardware and software developers have done over the years to overcome the inherent drawbacks that rasterization has for some rendering. (To quote Dr. Kirk again: "Rasterization is blisteringly fast, but not well-suited to all visual effects.")

Point 1 is that rasterization uses hierarchical structures to cull and avoid visiting invisible triangles, but that is as much of a workaround as the ray-tracer's reliance on its own form of acceleration structure. Dr. Kirk pointed out that neither method is truly ahead in this regard.

Point 2 is an admission that rasterization can approximate various effects pretty well. All graphics is approximation, so it's not really boosting either method over the other.

Point 3 is a sign of the work done for rasterization, but it is also a recognition of the market. It doesn't necessarily reflect an inherent valuation of one method as being superior or inferior.

Point 4 is a special case of specialized hardware vs. generalized. One possible interpretation is that rasterization at present is very amenable to running on more power-efficient hardware.

His overall argument seems to be that ray tracing is useful, but it doesn't work miracles. It has a number of advantages, but it has costs that can seriously impact its usefulness in dynamic scenes at real-time frame rates.

If anything, I think the tenor of your statements insinuates a level of bias I didn't see in Kirk's statements.
You go out of your way to say how Nvidia's monetary interest is in rasterization, both before and after your discussion with Kirk.

What are we supposed to assume from that? That you aren't implying Kirk is a corporate shill and so his points must be suspect?

What did he say that's so unheard of or unreasonable?
Ray tracing for much of the rendering workloads we have is slower.
Both methods use hierarchical structures for culling or acceleration.
Both scale the depth of their approximations to achieve speed.
Both wind up using a lot of common hardware.
Both have different strengths and weaknesses.
Both will likely be used in the future.
GPU hardware is capable of ray tracing, and there are methods for increasing the applicability of raytracing using the same hardware as most of the GPU.
 
rshrout, I think you're fundamentally misunderstanding how CUDA would work. From the article:
NVIDIA's stance is that rasterization is not inherently worse for gaming than ray tracing, if only because of all the years of work and research that have gone into it up to today. They seem willing to adopt ray tracing support on their cards in terms of programmability with CUDA and let the developers decide which option will be right for the industry.
CUDA kernels can access D3D or OGL objects--at least textures, and maybe more. I assume this is how a hybrid rasterizer/raytracer (as Kirk is proposing) would be implemented.
 
As a side note, I'm honestly tired of Intel's claims about using raytracing for shadow rays. Unless you want hard shadows ala Doom3, their described approach doesn't really work - and the industry isn't moving towards that, but rather away from it (and very rapidly indeed). The only good reason (for shadowing) not to use fixed-function rasterization hardware is if you want to use logarithmic shadowmap algorithms, IMO, and you might be better off doing rasterization than raytracing even then...

Arun, your point on shadows is spot on.

I personally think Intel is using all this ray tracing stuff to divert attention from its real interest: pushing Larrabee as a more pliable renderer than NVidia's and AMD's offerings.

As for ray tracing, it already has a future place in being used in a renderer's fragment shader for relief mapping or correct (i.e. non-infinite) environment mapping ... just waiting until the hardware gets a little faster (perhaps next console generation) before this becomes commonplace in games.
 
3dilettante: Nice point-by-point reply! :)

I personally think Intel is using all this ray tracing stuff to divert attention from its real interest: pushing Larrabee as a more pliable renderer than NVidia's and AMD's offerings.
I'm still not sure why I should believe that will be the case, though. I agree it's likely, just not certain. Larrabee, because it's x86, has significant ISA overhead which can only be bypassed by creating a new vector-based API. Other architectures can be much more creative in terms of ISA to achieve other, potentially more appealing trade-offs.

That implies that (in theory) a from-the-ground-up architecture can be (slightly?) more flexible than Larrabee for a given level of performance, not less. So the correct question to ask is not whether GPUs can be more flexible; they obviously can. The question is whether they want to be. Certainly if I was starting the design of a next-generation GPU today with the threat of Larrabee in mind, flexibility would be pretty high on my to-do list... Same for one year ago. Two years ago? That becomes a much more difficult (and interesting) question.
 