DirectX Ray-Tracing [DXR]

AFAIK path-tracing can't simulate that.
It was a tongue-in-cheek comment.

Having said that, about 25+ years ago I read a paper (I recall neither the title nor the author(s)) that proposed rendering by propagating wavefronts. Unfortunately, for rendering-cost reasons, IIRC the wavelengths were not much smaller than the objects being modelled, and so the images were full of diffraction artefacts.
 
It was a tongue-in-cheek comment.

Having said that, about 25+ years ago I read a paper (I recall neither the title nor the author(s)) that proposed rendering by propagating wavefronts. Unfortunately, for rendering-cost reasons, IIRC the wavelengths were not much smaller than the objects being modelled, and so the images were full of diffraction artefacts.
Isn't something like that also applicable to sound? If I recall correctly, the devs of Gears of War 4 used wave simulations for sound propagation. Precomputed, of course.
 
Isn't something like that also applicable to sound?
It'd surely be far easier for sound, though I have a feeling I've read that some schemes were still doing ray tracing for sound as well; take that with a large grain of NaCl.
 
It'd surely be far easier for sound, though I have a feeling I've read that some schemes were still doing ray tracing for sound as well; take that with a large grain of NaCl.
Yeah, I think Steam Audio and Google's Resonance use ray tracing.
 
Limit Octane to the uses and quality of the RTX demos and it could be running at 60fps on current hardware.
Respectfully, you don't even have anything *close* to the amount of data you'd need to draw such a conclusion. This has always been the claim of offline raytracing stuff for the past 10 years, but for some crazy reason you never see someone take the hour or two to modify some settings and get magical real-time path tracing without massive amounts of noise. Wonder why that is? :)

Again, this isn't to knock offline or even "interactive CAD" renderers, but they are entirely a different beast with different goals and constraints at this point. Ultimately, if what you said were true, they would have done it years ago, as that would have made a hell of an impressive demo, and games would absolutely have been happy to use it. But that's the thing: it just isn't correct. As I stated, there are still two orders of magnitude of difference in performance here, and nothing about pointing out the relative differences between the compromises you have to make at those different performance levels supports your argument that one is a super-set of the other.

I mean, I could conversely argue that the RTX demos could just increase their sample counts and recursion depths and poof, way better than Octane 4! But that's just as nonsensical as the claim in the other direction, and I hope that deep down you know it :)
 
Respectfully, you don't even have anything *close* to the amount of data you'd need to draw such a conclusion. This has always been the claim of offline raytracing stuff for the past 10 years, but for some crazy reason you never see someone take the hour or two to modify some settings and get magical real-time path tracing without massive amounts of noise. Wonder why that is? :)

Again, this isn't to knock offline or even "interactive CAD" renderers, but they are entirely a different beast with different goals and constraints at this point. Ultimately, if what you said were true, they would have done it years ago, as that would have made a hell of an impressive demo, and games would absolutely have been happy to use it. But that's the thing: it just isn't correct. As I stated, there are still two orders of magnitude of difference in performance here, and nothing about pointing out the relative differences between the compromises you have to make at those different performance levels supports your argument that one is a super-set of the other.

I mean, I could conversely argue that the RTX demos could just increase their sample counts and recursion depths and poof, way better than Octane 4! But that's just as nonsensical as the claim in the other direction, and I hope that deep down you know it :)
1. Because denoising algorithms weren't anywhere near as good as they are now.

2. Indeed, Octane was used for offline rendering while Brigade was being worked on in parallel for real-time use. Now, however, they've merged both projects, which has resulted in massive speed gains for Octane.

3. Not really, since in the case of Octane you'd only need to lower the quality, while in the case of the RTX demos the technology would need to be severely modified, since it doesn't support the same feature set as Octane. On the overall point, I think it's pretty obvious that greater performance can be achieved by greatly reducing the workload, which is exactly what I proposed.
 
3. Not really, since in the case of Octane you'd only need to lower the quality, while in the case of the RTX demos the technology would need to be severely modified, since it doesn't support the same feature set as Octane. On the overall point, I think it's pretty obvious that greater performance can be achieved by greatly reducing the workload, which is exactly what I proposed.
I think you're over-estimating how difficult it is to write a path tracer. Indeed, that's a lot of the appeal in the first place: it's actually very simple to produce high-quality images with path tracers... individuals can and routinely do it in days or weeks as undergrads, after all. As DeanoC noted earlier, that's also pretty much the first thing people put into their DXR test-beds as a reference implementation. Almost all of the interesting technology is ultimately around performance.

This is why I'm confused by how strong you seem to think your case is here without any actual data. "But we'll just make this slow thing fast" is not a particularly compelling argument for anyone who has worked in rendering (offline or real-time) :) Indeed, the way you make things fast is by doing the sorts of things that the DXR demos are doing... there's no magic that is going to allow you to shoot 50x as many rays at the same level of performance on the same hardware, or even 10x, or even 2x. Of course, algorithms and hardware will continue to improve, and hopefully at some point shooting dozens of rays per pixel on low-end hardware @ 60fps will be possible.

But ultimately, your claim that there's some special sauce in Octane such that someone could just change a setting or two and make it run at way better performance and quality than any of the DXR demos on the same hardware is extremely questionable. If that were the case, even if they somehow didn't know about DXR in advance, someone would have run out of GDC, tweaked a few settings, and had a great bombshell of an article the next day, let alone several weeks later. If there's some huge advance possible here I think we'd all be super happy to see it, but until someone demos it actually running on the same hardware/assets/etc. it's just smoke and mirrors (pun intended).

To reiterate: the interesting part of raytracing *is* the performance work and the compromises needed to make it fast. Thus, making comparisons across vastly different performance targets is simply not interesting.
 
The art of making the right compromises and utilizing hardware well is not easy. A layman might see only black and white while missing all the shades in between.

On the internet, unfortunately, the blind are often the most vocal and the most sure of themselves.
 
I think you're over-estimating how difficult it is to write a path tracer. Indeed, that's a lot of the appeal in the first place: it's actually very simple to produce high-quality images with path tracers... individuals can and routinely do it in days or weeks as undergrads, after all. As DeanoC noted earlier, that's also pretty much the first thing people put into their DXR test-beds as a reference implementation. Almost all of the interesting technology is ultimately around performance.

This is why I'm confused by how strong you seem to think your case is here without any actual data. "But we'll just make this slow thing fast" is not a particularly compelling argument for anyone who has worked in rendering (offline or real-time) :) Indeed, the way you make things fast is by doing the sorts of things that the DXR demos are doing... there's no magic that is going to allow you to shoot 50x as many rays at the same level of performance on the same hardware, or even 10x, or even 2x. Of course, algorithms and hardware will continue to improve, and hopefully at some point shooting dozens of rays per pixel on low-end hardware @ 60fps will be possible.

But ultimately, your claim that there's some special sauce in Octane such that someone could just change a setting or two and make it run at way better performance and quality than any of the DXR demos on the same hardware is extremely questionable. If that were the case, even if they somehow didn't know about DXR in advance, someone would have run out of GDC, tweaked a few settings, and had a great bombshell of an article the next day, let alone several weeks later. If there's some huge advance possible here I think we'd all be super happy to see it, but until someone demos it actually running on the same hardware/assets/etc. it's just smoke and mirrors (pun intended).

To reiterate: the interesting part of raytracing *is* the performance work and the compromises needed to make it fast. Thus, making comparisons across vastly different performance targets is simply not interesting.
Yeah, I don't know if I'm expressing myself poorly or you're misreading what I'm writing.

I'm not saying that OTOY has a magic wand with which they're going to make Octane run at 60fps by the end of the year at the exact same quality they have now. What I did propose, as restated in the third paragraph of my previous post, is to massively lower the quality of the simulation to achieve much faster performance (fewer spp, lower resolution, fewer bounces).

Even at a much reduced quality, IMO, that is still more impressive than the current RTX demos because it's fully path traced and the GI feature set is greater.

As for empirical proof, take for example Blender and render a scene with Cycles, which is a path tracer. Changing the render settings, you can see an almost linear relationship between spp, resolution, and the time it takes to render the image.
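
To make that concrete, here's a back-of-the-envelope sketch of that linear cost model in Python. The baseline numbers are hypothetical, made up for illustration, not measurements from Cycles or Octane:

```python
# Naive cost model: render time proportional to pixels * spp.
# Baseline settings and timing below are made up for illustration.
def predicted_seconds(base_seconds, base_pixels, base_spp, pixels, spp):
    """Scale a measured render time linearly in pixel count and spp."""
    return base_seconds * (pixels / base_pixels) * (spp / base_spp)

base_pixels = 1920 * 1080   # hypothetical offline-quality baseline...
base_spp = 1000
base_seconds = 60.0         # ...rendering in one minute

# Drop to 720p at 2 spp, roughly the budget the RTX demos work with:
t = predicted_seconds(base_seconds, base_pixels, base_spp, 1280 * 720, 2)
print(f"predicted frame time: {t * 1000:.1f} ms ({1.0 / t:.0f} fps)")
# ~53 ms, i.e. ~19 fps under this naive model; fixed per-frame costs and
# lost ray coherence at low sample counts mean reality can be far worse.
```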
 
It was a tongue-in-cheek comment.

Having said that, about 25+ years ago I read a paper (I recall neither the title nor the author(s)) that proposed rendering by propagating wavefronts. Unfortunately, for rendering-cost reasons, IIRC the wavelengths were not much smaller than the objects being modelled, and so the images were full of diffraction artefacts.

I remember when I was researching the emergence of geometric light optics (Torrance etc.), I found a handful of papers from the 60s which dealt with modelling reflectance (that was basically all earth albedo/reflectance or military airplane reflectance research) with wave-optics models. Sadly, I can't recall the titles anymore either. But one of Torrance's motivations for gearing towards geometric optics was that the wave-optics-based models were extremely difficult, impractical on computers, and under-determined; they couldn't produce a lot of the observable phenomena. I think he mentions some of it in the introduction of the microfacet paper.

What's more problematic is that neither models light as it behaves in reality, because light shows both wave and particle properties. If we don't want to make a quantum probability simulator to unify the two effects, we would have to have at least a simultaneous wave/ray tracer to produce some real effects.

Anyway, here's a modern paper on wave tracing with a volumetric grid. If we agree that space itself is quantized in reality (and in effect creates gravity), maybe it's the right approach, just a bit too coarse. ;)
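
For reference, grid-based wave propagation of that sort usually starts from a finite-difference time-domain (FDTD) discretization of the scalar wave equation. A generic, purely illustrative sketch, not taken from any particular paper:

```python
import numpy as np

def fdtd_step(u_prev, u_curr, c=0.5):
    """One leapfrog step of the 2D scalar wave equation on a periodic grid.
    c is the Courant number; c <= 1/sqrt(2) keeps the 2D scheme stable."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr)
    return 2.0 * u_curr - u_prev + (c * c) * lap

# Point source in the middle of a 128x128 grid; the wavefront expands outwards.
n = 128
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0
for _ in range(100):
    u_prev, u_curr = u_curr, fdtd_step(u_prev, u_curr)
print("field energy after 100 steps:", float((u_curr ** 2).sum()))
```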
 
What I did propose, as restated in the third paragraph of my previous post, is to massively lower the quality of the simulation to achieve much faster performance (fewer spp, lower resolution, fewer bounces).
Sure, but that won't get it to the performance of the DXR demos. It's necessary, but not sufficient.

Even at a much reduced quality, IMO, that is still more impressive than the current RTX demos because it's fully path traced and the GI feature set is greater.
Right, but you seem to be missing that this stuff *costs a lot of performance* or indeed introduces more noise (which is ultimately the same thing). Again, implementing that feature set is trivial (as noted, our DXR test bed can do full path tracing...); the only interesting bit is how fast you can get it. And as far as I can tell, you're not explaining how they would make that faster than the DXR demos but at higher quality.

As for empirical proof, take for example Blender and render a scene with Cycles, which is a path tracer. Changing the render settings you can see an almost linear relationship betweeen spp, resolution, and the time it takes to render the image.
Cool, so you can get Blender down to <15ms/frame and still be at higher quality than the DXR demos then? Because that's what "empirical proof" is :) You can't just get out your ruler, draw your line, and expect that to hold across two orders of magnitude of performance, particularly in the direction of faster performance.

Anyways, I think it's great if people continue to work on getting more and more performance out of higher and higher quality and sample counts, but I still take issue with your claim that there's some sort of magical technology in the offline renderers that, when applied to DXR, is going to get us full path-tracing quality and faster performance. Path tracing is trivial to implement; the reason the DXR demos are generally not doing it is that there are better ways to spend your rays and samples when you only have a few of them per pixel :)
 
I'm still not quite clear on the difference between ray tracing and path tracing.
I've not implemented a path tracer like others here have, but my understanding is that path tracing is a performance optimization for ray tracing, and it's credited with making ray tracing practical for visual effects in offline rendering. Essentially, each time there's a bounce you regroup rays so that you trace rays going in the same direction. This reduces divergence on a SIMD architecture and reduces the number of times the bounding volume hierarchy, representing the objects in the scene, needs to be traversed.
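
Strictly speaking, that regrouping describes the wavefront-style GPU implementations; the more basic distinction is that classic (Whitted) ray tracing branches into several deterministic rays at every hit, while a path tracer follows a single randomly sampled bounce per path and averages many paths per pixel, so the cost per sample stays bounded. A minimal toy sketch of that loop (one hard-coded diffuse sphere, uniform hemisphere sampling, proper BRDF/pdf weighting omitted for brevity):

```python
import math, random

def intersect(origin, direction):
    """Ray vs. one diffuse unit sphere at (0, 0, -3): returns (point, normal)
    or None. A stand-in for a real BVH traversal over the whole scene."""
    ox, oy, oz = origin[0], origin[1], origin[2] + 3.0   # origin - center
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - 1.0
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    if t <= 1e-4:                                        # behind or too close
        return None
    p = tuple(origin[k] + t * direction[k] for k in range(3))
    n = (p[0], p[1], p[2] + 3.0)                         # unit sphere: p - center
    return p, n

def random_hemisphere_direction(normal):
    """Uniform random bounce direction in the hemisphere around 'normal'."""
    while True:
        d = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        n2 = sum(x * x for x in d)
        if 1e-8 < n2 <= 1.0:
            break
    inv = 1.0 / math.sqrt(n2)
    d = tuple(x * inv for x in d)
    if sum(d[k] * normal[k] for k in range(3)) < 0.0:    # flip into hemisphere
        d = tuple(-x for x in d)
    return d

def trace_path(origin, direction, max_bounces=4):
    """Follow ONE path: a single random bounce per hit, no branching."""
    throughput, albedo, sky = 1.0, 0.7, 1.0
    for _ in range(max_bounces):
        hit = intersect(origin, direction)
        if hit is None:
            return throughput * sky        # escaped: lit by a constant 'sky'
        p, n = hit
        throughput *= albedo               # surface absorbs some energy
        origin, direction = p, random_hemisphere_direction(n)
    return 0.0                             # terminated before reaching light

# One pixel: average many independent paths; noise falls as 1/sqrt(spp).
spp = 64
value = sum(trace_path((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)) for _ in range(spp)) / spp
print(f"estimated pixel value at {spp} spp: {value:.3f}")
```

That 1/sqrt(spp) noise falloff is why offline renderers can throw hundreds or thousands of samples at each pixel while the real-time demos have to make do with a handful plus denoising.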
 
I will be honest, I am more interested in DirectML for the near future, especially if it will work decently well on iGPUs in systems that already have a discrete card.
 
Sure, but that won't get it to the performance of the DXR demos. It's necessary, but not sufficient.

Right, but you seem to be missing that this stuff *costs a lot of performance* or indeed introduces more noise (which is ultimately the same thing). Again, implementing that feature set is trivial (as noted, our DXR test bed can do full path tracing...); the only interesting bit is how fast you can get it. And as far as I can tell, you're not explaining how they would make that faster than the DXR demos but at higher quality.

Cool, so you can get Blender down to <15ms/frame and still be at higher quality than the DXR demos then? Because that's what "empirical proof" is :) You can't just get out your ruler, draw your line, and expect that to hold across two orders of magnitude of performance, particularly in the direction of faster performance.

Anyways, I think it's great if people continue to work on getting more and more performance out of higher and higher quality and sample counts, but I still take issue with your claim that there's some sort of magical technology in the offline renderers that, when applied to DXR, is going to get us full path-tracing quality and faster performance. Path tracing is trivial to implement; the reason the DXR demos are generally not doing it is that there are better ways to spend your rays and samples when you only have a few of them per pixel :)
I understand you're upset because I called Octane superior to your work, but you could at least read what I've posted:

"Limit Octane to the uses and quality of the RTX demos and it could be running at 60fps in current hardware."

As I've stated before, the Octane demo is doing far more than the RTX demos, hence the lower performance of 1 frame per second at 1080p. I used Cycles as proof of the almost linear relationship between spp, resolution, and rendering time. By substantially lowering the quality of the simulation you can get much better performance. Now, granted, perhaps I exaggerated by assuming a performance of 60fps. Maybe 30fps is more like it :)

As for the noise, we've already seen denoisers that can do a pretty good job with as little as 1spp.

 
I understand you're upset because I called Octane superior to your work, but you could at least read what I've posted:
"Limit Octane to the uses and quality of the RTX demos and it could be running at 60fps in current hardware."
The issue is not ego or that I didn't read what you posted - the issue is what you posted doesn't make much sense. Your argument basically boils down to:
a) Octane is higher quality than the DXR demos, but slower. (No shit.)
b) If you made Octane lower quality, it would be faster. (No shit.)
c) If you made Octane the same quality as the DXR demos, it would run faster than them. Maybe 60fps... or 30fps... or something, but it's better! (... uhh...)

I hope you can see how c) does not follow from - or really even relate to - a) and b). Repeating a) and b) thus doesn't make your argument any stronger.

I've been giving you the benefit of the doubt and allowing you to try and justify your assertions with further explanation or - you know - actual data, but you just keep repeating the same fallacies. At this point I'm not sure whether you're just trolling or you don't really understand how this stuff works at a technical level. I'll give you the most generous interpretation and assume it's the latter, but I'm not sure we're going to make much progress here if you're not willing to listen to why your arguments are not compelling.

As for the noise, we've already seen denoisers that can do a pretty good job with as little as 1spp
Incidentally, I implemented SVGF recently, so I'm pretty familiar with it :) You may have missed it, but every DXR demo heavily used denoising (often 3 or more separate, tuned denoisers for different terms!), so that's hardly a silver bullet for path tracers.
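
For anyone unfamiliar, the spatial half of SVGF is an edge-aware à-trous wavelet filter driven by guide buffers (depth, normals, and a variance estimate). Here's a heavily simplified toy sketch that uses only a depth buffer as the edge-stopping guide; all the parameters below are made up for illustration:

```python
import numpy as np

def atrous_filter(color, depth, iterations=3, sigma_depth=0.1):
    """Blur 'color' (H x W) with a growing footprint, suppressing mixing
    across depth discontinuities so geometric edges stay sharp."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline taps
    out = color.astype(np.float64).copy()
    for it in range(iterations):
        step = 1 << it                      # tap stride: 1, 2, 4, ...
        accum = np.zeros_like(out)
        weight_sum = np.zeros_like(out)
        for i, ki in enumerate(kernel):
            for j, kj in enumerate(kernel):
                dy, dx = (i - 2) * step, (j - 2) * step
                shifted = np.roll(np.roll(out, dy, axis=0), dx, axis=1)
                shifted_depth = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
                # Edge-stopping: taps at a very different depth contribute little.
                w_edge = np.exp(-((shifted_depth - depth) ** 2)
                                / (2.0 * sigma_depth ** 2))
                w_tap = ki * kj * w_edge
                accum += w_tap * shifted
                weight_sum += w_tap
        out = accum / weight_sum            # center tap keeps weight_sum > 0
    return out

# Toy usage: a noisy two-tone image with a depth edge down the middle.
rng = np.random.default_rng(0)
depth = np.where(np.arange(64)[None, :] < 32, 1.0, 2.0) * np.ones((64, 64))
clean = np.where(depth < 1.5, 0.2, 0.8)
noisy = clean + rng.normal(0.0, 0.3, clean.shape)  # crude stand-in for 1spp noise
denoised = atrous_filter(noisy, depth)
print("error std before:", float((noisy - clean).std()))
print("error std after: ", float((denoised - clean).std()))
```

Real SVGF adds temporal accumulation across frames and variance-guided weights on top of this spatial step, which is a big part of why tuning it per-term is where the work goes.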
 
The issue is not ego or that I didn't read what you posted - the issue is what you posted doesn't make much sense. Your argument basically boils down to:
a) Octane is higher quality than the DXR demos, but slower. (No shit.)
b) If you made Octane lower quality, it would be faster. (No shit.)
c) If you made Octane the same quality as the DXR demos, it would run faster than them. Maybe 60fps... or 30fps... or something, but it's better! (... uhh...)

I hope you can see how c) does not follow from - or really even relate to - a) and b). Repeating a) and b) thus doesn't make your argument any stronger.
Actually, it does. The reason it doesn't seem so to you is that you just cannot fathom the idea of another group making a substantially faster algorithm than yours. So in the end, it is ego.

I've been giving you the benefit of the doubt and allowing you to try and justify your assertions with further explanation or - you know - actual data, but you just keep repeating the same fallacies. At this point I'm not sure whether you're just trolling or you don't really understand how this stuff works at a technical level. I'll give you the most generous interpretation and assume it's the latter, but I'm not sure we're going to make much progress here if you're not willing to listen to why your arguments are not compelling.
Of course they're not compelling to somebody unwilling to consider them.

If you want details of their algorithm, you could always ask the guys at OTOY. Who knows, maybe you'll learn a thing or two.

Incidentally, I implemented SVGF recently, so I'm pretty familiar with it :) You may have missed it, but every DXR demo heavily used denoising (often 3 or more separate, tuned denoisers for different terms!), so that's hardly a silver bullet for path tracers.
You're the one complaining about noise. Even at 1spp the quality far exceeds rasterization, at least for soft shadows and diffuse GI. I'll take it.
 