DirectX Ray-Tracing [DXR]

That demo is quite terrible, IMHO. Beyond the poor showcasing of real-time ray-traced lighting, the physics (the reaction to wind) is totally wrong. So many little things just made this demo off-putting.
Not really sure what they ray trace in that demo; certainly not reflections.
Are these demos running at 4K? Can we assume the performance requirements would be lower at 1080p or 720p?

What about the AI denoising? What's the word on its performance gains and the performance it requires? Can we assume it benefits from half precision, as I've heard half precision is sufficient for some parts of DNN software? What about quarter precision?
Most demos seemed to be 1080p; not really a surprise if one-sample ambient occlusion on a single Volta is ~5 ms.
For games, I wouldn't be surprised if tracing resolution were halved for most effects.
 
Not really sure what they ray trace in that demo; certainly not reflections.
Just lighting.

And I agree with @Shortbread. The demo doesn't properly show what it's supposed to showcase; it's not impressive. The interesting part is that it's a new Metro game and that it seems we'll have more and more ray-tracing methods in games.

@mods: couldn't we retitle this thread so that it's a ray-tracing thread, not just a DirectX one?
 
Really need a before-and-after comparison to see the difference; not a good candidate for an RTX showcase, honestly. Games with tons of reflections and wetness would surely be more impressive. But then again, RT is probably not worth it altogether, considering the unhealthy compute requirement, which would otherwise buy you tons of GPU particles, fluid dynamics, and proper hair and cloth simulation, all of which are much more visually prominent to players.
 
Really need a before-and-after comparison to see the difference; not a good candidate for an RTX showcase, honestly. Games with tons of reflections and wetness would surely be more impressive. But then again, RT is probably not worth it altogether, considering the unhealthy compute requirement, which would otherwise buy you tons of GPU particles, fluid dynamics, and proper hair and cloth simulation, all of which are much more visually prominent to players.
I agree, as I said in my previous post.
 
And I agree with @Shortbread. The demo doesn't properly show what it's supposed to showcase; it's not impressive. The interesting part is that it's a new Metro game and that it seems we'll have more and more ray-tracing methods in games.
It's just that static environments are not a good candidate to showcase GI; precalculated GI achieves the same thing, or even better, actually. They should have selected a more dynamic environment with moving spotlights, reflective surfaces, and shadow interaction.
 
So when can I expect an actual demo I can run on my own hardware?

An official MS (presumably vendor-neutral) API is good, and mixed raster/ray tracing is cool to the extent it can provide various effects noticeably better/cheaper than current raster tricks/approximations.
But I find the fullscreen raytrace demo vids pretty underwhelming :-|

We've seen real-time ray tracing claimed to be the future and available to consumers Real Soon Now™ year after year, as shown by the various demo videos posted, yet we're still seeing demos that require $150,000 of hardware and look pretty janky (at least in parts), especially if they're only running at 1080p.
 
For games, I wouldn't be surprised if tracing resolution were halved for most effects.
Yeah, I was thinking that doing the actual ray tracing at lower resolutions would yield a very significant speedup. For specular you could run it at half res, and for diffuse you could probably get away with running it at quarter res, or even less for bounce lighting.
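To put rough numbers on that (just a back-of-the-envelope sketch, not figures from any of the demos): halving the tracing resolution per axis cuts the ray count 4x, quartering it cuts it 16x.

```cpp
#include <cstdio>

// Back-of-the-envelope: rays per frame for a 1080p target when a ray-traced
// effect runs at full, half, or quarter resolution (per axis), at 1 ray per pixel.
int main() {
    const long long fullW = 1920, fullH = 1080;
    const int raysPerPixel = 1;          // assumed budget per effect
    const int divisors[] = {1, 2, 4};    // full, half, quarter res per axis
    for (int div : divisors) {
        long long rays = (fullW / div) * (fullH / div) * raysPerPixel;
        std::printf("1/%d res per axis: %lld rays per effect per frame\n", div, rays);
    }
    return 0;
}
```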
 
Yeah, I was thinking that doing the actual ray tracing at lower resolutions would yield a very significant speedup. For specular you could run it at half res, and for diffuse you could probably get away with running it at quarter res, or even less for bounce lighting.
A full-screen ray trace with a meagre one ray per pixel, when properly filtered, is going to look at least as good as a half-res one with four rays per sample (each of which accounts for 4 final screen pixels). So in that light, full res with fewer rays and more filtering is always best. It gives a broader spatial distribution of starting points for the rays.
 
A full-screen ray trace with a meagre one ray per pixel, when properly filtered, is going to look at least as good as a half-res one with four rays per sample (each of which accounts for 4 final screen pixels). So in that light, full res with fewer rays and more filtering is always best. It gives a broader spatial distribution of starting points for the rays.
You could also use 1 spp and filtering for the lower-resolution buffers. Sure, there would be some quality loss, but how noticeable could it be, especially for diffuse bounced lighting? It would still be far superior to current game GI techniques.
 
It's just that static environments are not a good candidate to showcase GI; precalculated GI achieves the same thing, or even better, actually. They should have selected a more dynamic environment with moving spotlights, reflective surfaces, and shadow interaction.

This is the thing that kind of worries me about real-time ray tracing not being implemented correctly: that shadowing (especially multilayered) will not react appropriately between multiple light sources. I still haven't witnessed any proper real-time penumbra and antumbra shadowing in games, just some odd techniques or approximations of how it should look. I would like to see characters, NPCs, cars, buildings, and so on with actual proper shadowing down the road.
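For what it's worth, the penumbra/antumbra behaviour falls out naturally once shadows are ray traced against an actual area light: visibility is just the fraction of jittered shadow rays that reach the light. Here's a tiny standalone toy (my own illustration, with a single analytic sphere standing in for a real scene, and every name made up) that sweeps a shading point under an occluder and prints the visibility gradient:

```cpp
#include <cstdio>
#include <cmath>
#include <random>

// Toy sketch: visibility of a square area light from a point, estimated by
// casting jittered shadow rays at it. 1.0 = fully lit, 0.0 = umbra/antumbra,
// values in between = penumbra. A single analytic sphere stands in for the scene.
struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does the segment from p to lightSample hit the occluding sphere (centre c, radius r)?
static bool occluded(Vec3 p, Vec3 lightSample, Vec3 c, double r) {
    Vec3 d = sub(lightSample, p);
    double a = dot(d, d);
    Vec3 oc = sub(p, c);
    double b = dot(oc, d), cc = dot(oc, oc) - r * r;
    double disc = b * b - a * cc;
    if (disc < 0.0) return false;
    double t = (-b - std::sqrt(disc)) / a;   // nearest hit along the segment
    return t > 0.0 && t < 1.0;
}

int main() {
    std::mt19937 rng(1234);
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    const Vec3 sphereC{0.0, 2.0, 0.0}; const double sphereR = 0.6;   // occluder
    const Vec3 lightC{0.0, 5.0, 0.0};  const double lightHalf = 1.0; // square area light at y = 5
    const int raysPerPoint = 16;
    // Sweep a shading point along x under the occluder: visibility goes 1 -> 0 -> 1,
    // with the fractional values in between being the penumbra.
    for (double x = -2.0; x <= 2.0; x += 0.5) {
        Vec3 p{x, 0.0, 0.0};
        int lit = 0;
        for (int i = 0; i < raysPerPoint; ++i) {
            Vec3 s{lightC.x + u(rng) * lightHalf, lightC.y, lightC.z + u(rng) * lightHalf};
            if (!occluded(p, s, sphereC, sphereR)) ++lit;
        }
        std::printf("x = %+.1f  visibility = %.2f\n", x, double(lit) / raysPerPoint);
    }
    return 0;
}
```

In a real renderer those shadow rays would go through the acceleration structure and the noisy visibility would then be filtered/denoised, but the umbra-penumbra falloff itself comes for free.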
 
A full-screen ray trace with a meagre one ray per pixel, when properly filtered, is going to look at least as good as a half-res one with four rays per sample.
Conceptually, quarter res with four rays stochastically sampled and jittered each frame should provide more information for reconstruction. Well, I guess jittering the single rays per frame at native res would accomplish the same sort of thing. Rasterising an ID buffer would provide decent object edges. The concern is material blurring across borders, I guess, but with a surface ID stored per sample you could reject samples from the wrong surface. Probably.
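Something like this is what I have in mind for the ID rejection (purely a hypothetical sketch, names made up): upsample a half-width/half-height ray-traced buffer and only average the low-res samples whose rasterised object ID matches the full-res pixel's ID.

```cpp
#include <vector>

// Hypothetical sketch: upsample a half-width/half-height ray-traced buffer to
// full res, averaging only the low-res samples whose rasterised object ID
// matches the full-res pixel's ID, so lighting doesn't blur across object borders.
struct LowResSample { float radiance; int objectId; };

float upsamplePixel(int x, int y,                              // full-res pixel
                    const std::vector<LowResSample>& low,      // low-res ray-traced samples
                    int lowW, int lowH,
                    const std::vector<int>& fullResId,         // full-res rasterised ID buffer
                    int fullW) {
    const int cx = x / 2, cy = y / 2;            // matching low-res cell
    const int myId = fullResId[y * fullW + x];
    float sum = 0.0f; int count = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = cx + dx, sy = cy + dy;
            if (sx < 0 || sy < 0 || sx >= lowW || sy >= lowH) continue;
            const LowResSample& s = low[sy * lowW + sx];
            if (s.objectId != myId) continue;    // reject samples from the wrong surface
            sum += s.radiance; ++count;
        }
    // Everything rejected: fall back to the nearest sample rather than leaving a hole.
    return count > 0 ? sum / count : low[cy * lowW + cx].radiance;
}
```

A real version would presumably also weight by depth and normal, but the ID test alone already kills the worst cross-object bleeding.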

Let's be honest - there are far smarter people tackling this than me!
 
? Please explain your metrics, bearing in mind that that price is not the real price.

Read the comment a few posts above. Four Tesla V100s wouldn't cost $150,000; in the price comparison it was listed at $10,000. That would still be significantly cheaper than the $150,000.

Anyway, I was right that the $150,000 warning was way too much.
 
Which demo is that? Someone was saying that the original SEED demo was run on IBM Power7s, according to 'developers at the show talking on Discord.'
 
Looking at the PICA PICA slides, is there any point in doing AO when you're already doing raytraced diffuse GI?
 
Conceptually, quarter res with four rays stochastically sampled and jittered each frame should provide more information for reconstruction. Well, I guess jittering the single rays per frame at native res would accomplish the same sort of thing.

That's exactly my point. 4 rpp at quarter res vs. 1 rpp at full res still provides the same number of rays for every screen pixel. The only difference is that in the second case your rays are more evenly distributed spatially, which is more info for your filtering (it will just need a sampling area twice as wide as the quarter-res buffer's filter to provide similar results).
Spatial and temporal jittering go without saying, of course.
 
Technically there's nothing stopping you from offsetting a ray's start. You could have a 4x4-pixel buffer and sample at each pixel's centre. You could get the same sampling from a 2x2-pixel buffer with four rays per pixel, each separated by a 'half pixel': (-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25).
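To make that concrete, here's a tiny standalone check (my own illustration, nothing from any actual demo) that prints the ray origins both ways; the two sets of positions coincide exactly:

```cpp
#include <cstdio>

// Sketch: the two schemes above produce the same set of ray origins.
// Scheme A: a 4x4 buffer, one ray through each pixel centre.
// Scheme B: a 2x2 buffer, four rays per pixel at (+-0.25, +-0.25) pixel offsets.
int main() {
    std::puts("Scheme A (4x4, pixel centres):");
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            std::printf("  (%.4f, %.4f)\n", (x + 0.5) / 4.0, (y + 0.5) / 4.0);

    const double off[4][2] = {{-0.25, -0.25}, {0.25, -0.25}, {-0.25, 0.25}, {0.25, 0.25}};
    std::puts("Scheme B (2x2, four offset rays per pixel):");
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x)
            for (int i = 0; i < 4; ++i)
                std::printf("  (%.4f, %.4f)\n",
                            (x + 0.5 + off[i][0]) / 2.0,
                            (y + 0.5 + off[i][1]) / 2.0);
    return 0;
}
```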

Of course, that ends up being the same number of rays, so it doesn't make any difference to the ray tracing effort! It'd just save some RAM storing the buffer, but you'd lose fidelity too. Makes zero sense - just render 'native pixels' and do whatever you want with the raw per-pixel resolution data.

I think the real take-home is that the concept of pixels is even more meaningless in ray tracing. You have a set of rays cast into the scene and mapped onto a 2D grid of 'pixels', but each sample can originate from anywhere and point in any direction (VR Picasso, here we come!). A super funky sampling might cast 1 million uniformly-random-location rays per frame to produce some sampling data and transform that into a 1080p or 4K 2D display buffer using ML'd reconstruction. Before the reconstruction AI turns on the user and kills them.

It would enable things like true fish-eye, though. It should be better for VR projections too, as it's truer to the optics.
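As a throwaway illustration of the fish-eye point (a toy sketch, assuming an equidistant fisheye mapping and a square image; none of this comes from DXR itself), ray generation is free to use whatever projection you like:

```cpp
#include <cstdio>
#include <cmath>

// Toy sketch: per-pixel ray directions for an equidistant ("true") fisheye
// projection instead of a pinhole frustum. Square image assumed for simplicity.
struct Dir { double x, y, z; };

// Map pixel (px, py) in a w x h image to a fisheye ray direction.
// fovDeg is the full field of view of the fisheye circle (e.g. 180).
static bool fisheyeRay(int px, int py, int w, int h, double fovDeg, Dir& out) {
    const double kPi = 3.14159265358979323846;
    double nx = 2.0 * (px + 0.5) / w - 1.0;          // normalised [-1, 1]
    double ny = 1.0 - 2.0 * (py + 0.5) / h;
    double r = std::sqrt(nx * nx + ny * ny);
    if (r > 1.0) return false;                       // outside the fisheye circle
    double theta = r * 0.5 * fovDeg * kPi / 180.0;   // equidistant: angle proportional to radius
    double phi = std::atan2(ny, nx);
    out = { std::sin(theta) * std::cos(phi),
            std::sin(theta) * std::sin(phi),
            std::cos(theta) };                       // +z is the view axis
    return true;
}

int main() {
    Dir d;
    if (fisheyeRay(360, 360, 720, 720, 180.0, d))    // image centre: straight ahead
        std::printf("centre ray: (%.3f, %.3f, %.3f)\n", d.x, d.y, d.z);
    if (fisheyeRay(719, 360, 720, 720, 180.0, d))    // right edge: ~90 degrees off-axis
        std::printf("edge ray:   (%.3f, %.3f, %.3f)\n", d.x, d.y, d.z);
    return 0;
}
```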
 