Next gen lighting technologies - voxelised, traced, and everything else *spawn*

https://auzaiffe.wordpress.com/2019...acing-worth-it/amp/?__twitter_impression=true

Point of view of a game developer; he works at Unity and worked on the BMW raytracing demo.

EDIT: some of his concerns are the same as JoeJ's: the API is too high level and there isn't enough low-level access.

On the other hand, the current raytracing API is a high-level one: very little is exposed to developers (BVH structures are opaque, no control over recursive TraceRay coherence and dispatch, cheating is required to support skinned meshes and particles, transparents are a nightmare, material LOD along recursion is hard, etc.).
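To illustrate that complaint (my own sketch in C++ with entirely hypothetical types, not DXR or driver code): below is roughly the kind of explicit BVH traversal loop that sits behind TraceRay. Having it open like this is what would let a developer batch rays for coherence, stop at coarser nodes for material LOD, or refit leaves for skinned meshes; under the current API all of it is opaque.

```cpp
// Illustration only: an explicit BVH traversal loop of the kind TraceRay() hides.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    float operator[](int i) const { return i == 0 ? x : (i == 1 ? y : z); }
};
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec3 o, d; float tMax; };
struct Tri { Vec3 v0, v1, v2; };
struct BVHNode {
    Vec3 bmin, bmax;
    int left = -1, right = -1;      // internal node: child indices
    int firstTri = 0, triCount = 0; // leaf (left < 0): triangle range
};

// Standard slab test against the node's bounding box.
static bool hitAABB(const Ray& r, const BVHNode& n) {
    float t0 = 0.0f, t1 = r.tMax;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.d[i];
        float tn = (n.bmin[i] - r.o[i]) * inv;
        float tf = (n.bmax[i] - r.o[i]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// Möller-Trumbore ray/triangle intersection.
static bool hitTri(const Ray& r, const Tri& tri, float& tOut) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p = cross(r.d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false; // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(r.o, tri.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    if (t <= eps || t >= r.tMax) return false;
    tOut = t;
    return true;
}

// Iterative closest-hit traversal; returns hit distance, or the original
// tMax on a miss. This open loop is where coherence sorting, per-level
// material LOD, or custom leaf refitting for skinning would slot in.
float traceClosest(const std::vector<BVHNode>& nodes,
                   const std::vector<Tri>& tris, Ray ray) {
    int stack[64], top = 0;
    stack[top++] = 0; // root
    while (top > 0) {
        const BVHNode& n = nodes[stack[--top]];
        if (!hitAABB(ray, n)) continue;
        if (n.left < 0) { // leaf: test its triangles
            for (int i = 0; i < n.triCount; ++i) {
                float t;
                if (hitTri(ray, tris[n.firstTri + i], t))
                    ray.tMax = t; // shrink the ray so farther boxes get culled
            }
        } else {
            stack[top++] = n.left;
            stack[top++] = n.right;
        }
    }
    return ray.tMax;
}
```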

Edit2: He finds it interesting, but it will not solve all of realtime rendering; like AI, he thinks these are good tools that are currently overhyped.
 
From my layman's perspective, raytracing feels like trying to solve rendering with only vertex colours and no approximate footprint sampling in texture space. It might make sense in the future, but not now.

LOD-accelerated approaches to sampling are just a headache of hacks upon hacks upon hacks, which makes plain point sampling and denoising feel elegant in comparison ... but I don't think the elegance will translate into image quality in the near future compared to the mountain of hacks.
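To make the contrast concrete, here is a rough sketch (all names hypothetical, not from any shipping renderer) of what the prefiltered alternative means: carry a cone along the ray and pick a texture mip from the cone's footprint at the hit, instead of point-sampling the finest mip and denoising afterwards.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical ray-cone footprint: the cone widens with distance, and its
// width at the hit point tells us how much texture area the ray "covers".
struct RayCone {
    float width;       // cone width at the ray origin
    float spreadAngle; // full spread angle in radians
};

// Convert the cone's footprint at the hit into a mip level (0 = finest).
// texelsPerUnit: texel density of the hit surface at mip 0.
float mipFromFootprint(const RayCone& cone, float hitDistance, float texelsPerUnit) {
    // Cone width grows linearly with the distance travelled.
    float width = cone.width + 2.0f * std::tan(cone.spreadAngle * 0.5f) * hitDistance;
    // Footprint in texels; log2 of that is the mip index to sample.
    float texels = std::max(width * texelsPerUnit, 1.0f);
    return std::log2(texels);
}
```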
 
In practice, they are announcing that they will be supporting the fallback layer that they dropped in November 2018 (given that there is no dedicated BVH traversal hardware on those).
That's simply wrong: the fallback layer was a universal, unoptimized wrapper which provided basic compatibility without any perf optimizations, hence it was dropped. The driver must provide much more robust and optimized support.
 
What I find most surprising about raytracing in games is that it still looks like games. Raytracing in offline rendering tends very much towards photorealism, but RT'd games aren't anything like that and instead look like current games with some improvements. I wonder what it'll take to change so that games start to look like RT'd CGI in terms of photographic quality?

This isn't a remotely fair comparison. In offline rendering they can afford to throw a supercomputer (or render farm, as they call it) at it and allow minutes or hours of render time per frame. Also, you can assume that said supercomputer has all the benefits of GPGPU and RTX just as much as any gaming computer.

Also, if you look back at movies where they had computing power more in line with a modern high-end desktop, you'll find that the CGI doesn't look anywhere near as realistic as you remember. Have you watched the Star Wars prequels lately? Even something more recent like Lord of the Rings has fairly noticeable CGI by today's standards.
 
This isn't a remotely fair comparison. In offline rendering they can afford to throw a supercomputer (or render farm, as they call it) at it and allow minutes or hours of render time per frame. [...]
Which would be a great argument if we didn't already have demos pushing realtime CGI visuals. ;) Heck, the original Star Wars demo was a clear step forwards.

I think the real differentiator will be area lights first, and then GI; the former definitely seems to be in the realm of doable on RTX.
 
Tech details about the Troll demo:


Also, near the end they show a bit of the other photorealistic demo but with ray tracing on.

What I find most surprising about raytracing in games is that it still looks like games. Raytracing in offline rendering tends very much towards photorealism, but RT'd games aren't anything like that and instead look like current games with some improvements. I wonder what it'll take to change so that games start to look like RT'd CGI in terms of photographic quality?
Artists need some time to adapt. They're too used to doing things based on rasterization constraints. Look at SFM movies: they have the CGI look, and all it took was area lights/shadows and lots of motion blur.

https://auzaiffe.wordpress.com/2019...acing-worth-it/amp/?__twitter_impression=true

Point of view of a game developer; he works at Unity and worked on the BMW raytracing demo. [...]
The best part:

"I may have sounded hopeless, but that is far from being my current state of mind. I think this is just the beginning and the exciting part is ahead of us. Papers that take advantage of this new API in a smart way are starting to pop, and I am pretty excited with the all the possibilities that this offers."
 
Ray tracing is definitely going to be the next thing in video games. Whether it's with or without hardware acceleration, with or without DXR, it's going to happen.

I think that context needs to be appreciated. We aren’t moving towards RT for shits and giggles. If rasterization demos were doable in game, developers wouldn’t waste time doing RT.

There is clearly a boundary, a grey area where the two might meet performance-wise, but there are clear use cases for both.

I'd rather this thread keep on trucking about all topics on lighting and shadows than loop back to opinions from some devs who don't like RT.

It’s not perfect. But.
It’s clear developers want it.
It’s clear the industry will support it.

As it stands, in the near future, when vertical slices of games release with RT, I’m more likely to believe that will be the actual final product, and not expect sweeping “downgrades” for final release.
 
Artists need some time to adapt.
Adapt what? You could take any game scene, chuck it in Arnold, and get something that, at worst, looked like a photo of miniatures. It'd still look like a photo. There'll be a degree of material complexity that can't be matched, especially in organics, where games will still look gamey, but for metals, plastics, and stonework, games should be able to match the look (not necessarily complexity or fidelity) of CGI using raytracing.

As I said though, I think it's just the lighting. Once we get area lights, we reach the next step of realism. This was very true in offline raytracing. The first package I know of with raytraced area lights was Realsoft3D, and it had a whole level of impressive over the other renderers. That and proper specular (reflected objects) thanks to an HDR-capable pipeline instead of just Phong. Realsoft3D was overtaken in the realism stakes when other renderers went physically based, and of course games have PBR materials, so they're all ready for improved lighting to look amazeballs.
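To ground that (a minimal sketch with hypothetical names, uniform sampling only, no importance sampling): the standard raytraced approach is to pick random points on a rectangular light, trace shadow rays to each, and average. The penumbra falls out of the averaging, and the noise is exactly what the RT denoisers are built to clean up.

```cpp
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Rectangular area light: one corner plus the two edge vectors spanning it.
struct RectLight { Vec3 corner, edgeU, edgeV; };

// Fraction of the light visible from shading point p -- the soft-shadow term.
// 'visible' is whatever occlusion query the renderer provides (a shadow ray).
float areaShadow(Vec3 p, const RectLight& light, int samples, std::mt19937& rng,
                 const std::function<bool(Vec3, Vec3)>& visible) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    int unoccluded = 0;
    for (int i = 0; i < samples; ++i) {
        // Uniform random point on the light's surface.
        Vec3 q = add(light.corner,
                     add(mul(light.edgeU, uni(rng)), mul(light.edgeV, uni(rng))));
        if (visible(p, q)) ++unoccluded;
    }
    return float(unoccluded) / float(samples); // 0 = fully shadowed, 1 = fully lit
}
```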
 
I feel like a lot of the work that was put into perfecting denoising to make raytracing viable may end up looping back into denoising less accurate traditional shader-based (incorrectly called rasterisation-based) techniques; for example, highly stochastic shadowmap-based soft shadows, which would end up even cheaper for perceptually similar results. Ironically, the denoising would then make raytracing comparatively less viable again.
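To make that example concrete (a hypothetical sketch, not code from any engine): jitter the shadow-map taps per pixel across a wide penumbra disc, accept the heavy noise, and hand the result to the same spatiotemporal denoiser used for the RT path.

```cpp
#include <cmath>
#include <functional>
#include <random>

// Stochastic shadow-map filter: a few random taps across a wide penumbra
// disc give a noisy-but-cheap soft-shadow estimate for a denoiser to clean.
// shadowDepth(u, v) is assumed to return the depth stored in the shadow map.
float stochasticShadow(float u, float v, float receiverDepth, float radius,
                       int taps, std::mt19937& rng,
                       const std::function<float(float, float)>& shadowDepth) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float lit = 0.0f;
    for (int i = 0; i < taps; ++i) {
        // Uniform random offset within the penumbra disc.
        float r = radius * std::sqrt(uni(rng));
        float a = 6.2831853f * uni(rng);
        float su = u + r * std::cos(a);
        float sv = v + r * std::sin(a);
        // Lit if the occluder stored in the map is farther than the receiver.
        if (shadowDepth(su, sv) >= receiverDepth) lit += 1.0f;
    }
    return lit / float(taps); // noisy by design; denoised afterwards
}
```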
 
Yeah. Lighting overall needn't be sampled 1:1. There are going to be a lot of possibilities to explore.

Well, the industry has flirted with the idea for more than a decade now: inferred lighting, lower-res buffers for AO and post-processing, jittering to hide low sampling, TXAA, etc...
But it took ray tracing to motivate a lot of serious, in-depth research into denoising sparse and jittery lighting data robustly, in ways game devs never managed to find the time for.
I bet UE4's new SSGI is an example of just that. After all the work was put into denoising RT GI samples, someone in there asked, "hey, what if we just use this same denoising system with raymarched screen-space rays?", and it turns out it doesn't look half bad either.
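A rough sketch of what that screen-space variant boils down to (hypothetical, written as plain C++ over a depth array rather than a shader): step the ray in screen space against the depth buffer until it dips behind a surface, take that pixel's lighting as the GI sample, and feed the noisy result to the same denoiser.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

struct Hit { int x, y; }; // screen pixel whose lighting we reuse as a GI sample

// Minimal depth-buffer ray march, the core of an SSGI-style tracer.
// depth[y * width + x] holds linearised depth per pixel.
std::optional<Hit> marchScreenRay(const std::vector<float>& depth,
                                  int width, int height,
                                  float x, float y, float z,    // start (pixels, depth)
                                  float dx, float dy, float dz, // per-step deltas
                                  int maxSteps) {
    for (int i = 0; i < maxSteps; ++i) {
        x += dx; y += dy; z += dz;
        int px = int(x), py = int(y);
        if (px < 0 || py < 0 || px >= width || py >= height)
            return std::nullopt; // ray left the screen: no information
        float sceneZ = depth[std::size_t(py) * width + px];
        if (z > sceneZ)          // ray passed behind the visible surface
            return Hit{px, py};
    }
    return std::nullopt;
}
```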
 
Does it ever get sharp? Would it be possible to update hardware texture scaling with something better than bi/trilinear too, and maybe add some gaussian element? Using a super-low-res buffer still ends up with obvious texel squares. Presently we need to sample in the shaders, but as it's such a frequent feature, and probably more so going forwards, is there value in updating the texture sampling process to get better results at the atomic level?
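Something like the gaussian element suggested here might look as follows (a hypothetical sketch): weight a 4x4 neighbourhood of low-res texels by a gaussian falloff instead of the hardware's 2x2 tent weights, which rounds off the texel squares at the cost of a little blur.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Gaussian-weighted upsample of a low-res buffer (e.g. quarter-res lighting).
// Hardware bilinear is a fixed 2x2 tent; this takes a 4x4 neighbourhood with
// gaussian falloff, trading a touch of blur for far fewer visible texel squares.
float sampleGaussian(const std::vector<float>& buf, int w, int h,
                     float u, float v, float sigma = 0.6f) {
    float x = u * w - 0.5f, y = v * h - 0.5f; // continuous texel coordinates
    int cx = int(std::floor(x)), cy = int(std::floor(y));
    float sum = 0.0f, wsum = 0.0f;
    for (int j = -1; j <= 2; ++j) {
        for (int i = -1; i <= 2; ++i) {
            int sx = std::clamp(cx + i, 0, w - 1); // clamp to buffer edges
            int sy = std::clamp(cy + j, 0, h - 1);
            float dx = (cx + i) - x, dy = (cy + j) - y;
            float wgt = std::exp(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
            sum += wgt * buf[std::size_t(sy) * w + sx];
            wsum += wgt;
        }
    }
    return sum / wsum;
}
```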
 
If rasterization demos were doable in game, developers wouldn’t waste time doing RT.

NVIDIA devrel sweetens the pot just a tiny bit ... if you don't want this discussion, don't reply to those factual statements and let sleeping dogs lie. You bring the discussion you want to avoid upon yourself by stating relatively questionable opinions as facts.
 
NVIDIA devrel sweetens the pot just a tiny bit ... if you don't want this discussion, don't reply to those factual statements and let sleeping dogs lie. You bring the discussion you want to avoid upon yourself by stating relatively questionable opinions as facts.
This is a thread about lighting techniques. Let me know how a developer thinking DXR sucks has any effect on the discussion of lighting techniques in games.

If you have to keep bringing anti-vendor crap into every thread, I think you're the one off topic.

And yes, please show me a video game that plays like the Infiltrator demo from 2013. Cutscenes aren't gameplay.
 
And yes, please show me a video game that plays like the Infiltrator demo from 2013. Cutscenes aren't gameplay.

But this has nothing to do with RT vs rasterization? Or am I missing something here? Or are you implying that RT will make it easier/faster to get from demo quality to game quality? (Which is not going to be the case at all.)
 
Let me know how a developer thinking DXR sucks has any effect on the discussion of lighting techniques in games.

For the most part we are guessing where the industry is going based on their statements and research ... yes, of course a game developer's opinion on the subject is relevant, especially for the console realm where there's no hardware support. To me it seems approaches with a little more pre-filtering rather than brute force point sampling and post filtering have a lot of legs yet, especially on consoles.

I'd be careful trying to bait mods, it's easy to become collateral damage ... it's also just not very nice.
 
But this has nothing to do with RT vs rasterization? Or am I missing something here? Or are you implying that RT will make it easier/faster to get from demo quality to game quality? (Which is not going to be the case at all.)
I'm implying that with traditional T&L we've still yet to see games get there; we're using better technology with even more power and we still aren't there. And it's not because the techniques aren't there yet, or that we lack power: we just can't build these demos even 6 years later with nearly 4x the power. There are fundamental reasons why games can't look like that during gameplay, and I don't think it has to do with a lack of heart or talent. I do believe RT (or RT-like) techniques will get us a lot closer than we've ever been.
 
For the most part we are guessing where the industry is going based on their statements and research ... yes, of course a game developer's opinion on the subject is relevant, especially for the console realm where there's no hardware support. To me it seems approaches with a little more pre-filtering rather than brute force point sampling and post filtering have a lot of legs yet, especially on consoles.

I'd be careful trying to bait mods, it's easy to become collateral damage ... it's also just not very nice.
This thread has already undergone a revamp, if you scroll up. I'm not baiting mods, but I don't want to go back to the original thread that was named RTX lighting or whatever it was called. We've had a whole thread on a developer who was purely against the implementation of DXR. It's all over this thread prior to the name change. It wasn't progressing.

The discussion of next-gen lighting techniques helped move everyone forward; I don't want to go back.
Ray-traced games can look realistic if they want to look realistic. Compute and T&L can look realistic if the artists want to make them look realistic. Realism isn't bound to the technology; games look and are styled the way the art directors want them to look. How else do I protest against that type of statement? It's not fair to say RT-based games still look like video games; no one ever said RT-based games should look like real life.

No one ever said DXR is a perfect implementation, but if it's going to help games have hybrid RT, then let's talk about what hybrid RT titles look like and what they are bringing to the table.
 