Next gen lighting technologies - voxelised, traced, and everything else *spawn*

I think that both sides of the coin are right and wrong at the same time. =)

Indeed, there is misleading marketing and exaggeration about how truly powerful the current cards and their implementations are.

But then on the other side, it is unfair to compare against small demos confined to tiny scenes, hand-optimized to insane levels and using specialized hacks tuned for those specific scenes.

Looking at it from the optimistic side, both are awesome. Specialized demos show a lot of skill from their authors, but DXR/RTX show great progress in the direction of generic ray-tracing on scenes of any kind.

Maybe in the near future some of those demos' techniques could boost DXR in more general cases, who knows.
 
but DXR/RTX show great progress in the direction of generic ray-tracing on scenes of any kind.
I think it's the exact opposite: Because DXR is restricted to the classical model of single-threaded rays against static triangle models, it prevents us from enabling scenes of any kind.

Think of a nature scene with many trees and bushes. In BFV, even approximating bushes with alpha textures (requiring any-hit shaders) turned out to be a bottleneck.
Modeling branches with proper geometry likely becomes impossible, because the classical model becomes very inefficient with diffuse geometry, and FF HW can't help with it.

This is why we would need more flexibility, so such problems can be addressed with LOD and approximation, but when following the FF path it could take much more time to get there. If RT progresses like rasterization did, we will never get there.

But then on the other side, it is unfair to compare against small demos confined to tiny scenes
Any research begins with small demos and tiny scenes. But we can design algorithms so they scale better to large worlds and complex scenes, which is something DXR/RTX has not addressed at all.
The question is: Would those alternatives progress faster than FF? Depending on adoption we might never know.
Also, I think it's more unfair to assume small one-man research projects could compete with million-dollar productions and heavily pushed technology.

There are far too many assumptions on both sides, and because everything here is so new, nobody can avoid this entirely.
Most arguments against one side hold against the other just as well.
 
Have you played the game?
The game features some of the best baked GI lighting ever conceived, better than most of the best games out there; it simply looks gorgeous without RTX.

However, what RTX does is correct the limitations of any baked GI solution, specifically in relation to dynamic moving objects. That's where most of the difference comes from, and that's the biggest difference captured in any comparison screenshot. If you are having a hard time detecting RTX, it's because you are mostly looking at static objects; once stuff moves, the difference becomes stark.

RT also irons out the inevitable locations left untouched by the baked GI, of which there are several in any open-world kind of game.
The non-RTX GI is not baked, and yes, it gives nice bounce light to many scenes.
 
I think it's the exact opposite: Because DXR is restricted to the classical model of single-threaded rays against static triangle models, it prevents us from enabling scenes of any kind.

Think of a nature scene with many trees and bushes. In BFV, even approximating bushes with alpha textures (requiring any-hit shaders) turned out to be a bottleneck.
Modeling branches with proper geometry likely becomes impossible, because the classical model becomes very inefficient with diffuse geometry, and FF HW can't help with it.
This is a demo scene with maaany trees:

Running on a Quadro RTX, though. :rolleyes:

I have a question regarding what you said about triangles, since I've read that in other posts, but I don't remember the full explanation. So, if geometry were made of, say, SDFs instead of polygons, would RTRT be faster but we couldn't use DXR? Do you think that DXR will evolve in the near future to accept things other than polygons?
 
This is a demo scene with maaany trees:
Yes, I know it, and it may prove I worry too much. Not sure - it's slow and smears due to stochastic updates, so they can't do one ray per frame and pixel. (To be fair, it's a path tracer AFAIK.)

So, if geometry were made of, say, SDFs instead of polygons, would RTRT be faster but we couldn't use DXR? Do you think that DXR will evolve in the near future to accept things other than polygons?
SDFs would already work with custom intersection shaders, and voxel volumes too.
The problem is you cannot easily generate this data in realtime, so you end up using instances of static trees, for example.
The consequence is you have more dynamic triangles, which are then hard to blend into a static lower-LOD representation.
The triangles are static too, if we look at a single frame.
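
For readers unfamiliar with the idea: a custom intersection shader for an SDF primitive essentially runs a sphere-tracing loop inside the primitive's bounds. Below is a minimal, illustrative C++ sketch of that loop (the SDF, constants and names are made up, not taken from any DXR sample):

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sphere-tracing loop, the kind of test a custom intersection
// shader would run for an SDF primitive. All names and constants are made up.
struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Example SDF: a sphere of radius 1 at the origin.
static float sdf(Vec3 p) { return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f; }

// Returns the hit distance along the ray, or a negative value on miss.
float sphereTrace(Vec3 origin, Vec3 dir, float tMin, float tMax)
{
    float t = tMin;
    for (int i = 0; i < 128 && t < tMax; ++i)    // fixed step budget
    {
        float d = sdf(add(origin, mul(dir, t))); // distance to the surface
        if (d < 1e-4f)
            return t;                            // close enough: report a hit
        t += d;                                  // safe step: sphere tracing
    }
    return -1.0f;                                // no hit within the budget
}

int main()
{
    // Ray from z = -3 towards the origin should hit the unit sphere at t = 2.
    float t = sphereTrace({0.f, 0.f, -3.f}, {0.f, 0.f, 1.f}, 0.f, 100.f);
    std::printf("hit at t = %.3f\n", t);
    return 0;
}
```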

What I wanted is the option for geometry that is dynamic not only in motion but also in LOD. Both are equally important. Point hierarchies are one example, progressive meshes with geomorphing another (see the small sketch below). (The quadrangulation stuff I'm working on can be a generalization of both.)
But DXR will never evolve to this, because such a data structure already represents geometry, LOD and the acceleration hierarchy, and so completely replaces the ancient BVH idea RTX is built upon.
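
As a side note on the geomorphing mentioned above, here is a tiny illustrative sketch (names and numbers are my own, not from any engine): during refinement each vertex is blended from its coarse-LOD position towards its fine-LOD position, so the mesh changes continuously instead of popping.

```cpp
#include <cstdio>

// Geomorphing sketch: a vertex is interpolated between the position it had in
// the coarse LOD and its true fine-LOD position as the morph factor increases.
struct Vec3 { float x, y, z; };

Vec3 geomorph(Vec3 coarsePos, Vec3 finePos, float morph /* 0 = coarse, 1 = fine */)
{
    return { coarsePos.x + (finePos.x - coarsePos.x) * morph,
             coarsePos.y + (finePos.y - coarsePos.y) * morph,
             coarsePos.z + (finePos.z - coarsePos.z) * morph };
}

int main()
{
    Vec3 coarse = {0.f, 0.f, 0.f};   // vertex position in the coarse mesh
    Vec3 fine   = {0.2f, 1.0f, 0.f}; // position after the refinement split
    for (float m = 0.f; m <= 1.001f; m += 0.25f)
    {
        Vec3 p = geomorph(coarse, fine, m);
        std::printf("morph %.2f -> (%.2f, %.2f, %.2f)\n", m, p.x, p.y, p.z);
    }
    return 0;
}
```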

So the question 'What should DXR support?' is already wrong if we think ahead. The better question would have been 'What do you need so you can implement fast RT yourself?'
Ofc for NV the FF path is more beneficial - it just works, is easy to use, and with a little help people will adopt it.
 
I make mention of what I am decrying right in the sentence I wrote - the implication of "actually trying" in the video title card, which I find adds nothing to the discussion around RT and is overly antagonistic. That was what I was referencing.
I do not think the sentence has its origin on this forum. You hear this sentence a million times if you pay attention to disappointed gamers. This disappointment is not about RTX; it is the general problem of boredom, repetition, cut scenes and quick-time events, never-ending franchises instead of something new.
It is also about microtransactions, downgrades, alternative stores, lazy devs and greedy publishers, etc. Tech is the least of our problems and not the origin of the disrespect toward the industry. It's just that tech discussion in public often slips down to the same low and toxic level, so RTX is affected like everything else.

I did not watch the video - if I read 'faking RTX GI without RTX', then be sure I'm more pissed than you are, and I have to stop immediately :(
 
Gonna double post because I was editing my post and time ran out!!!

But the content of that video was not my original point - rather my main point was about the antagonistic attitude regarding the current implementation of RT on the GPU and the posturing from enthusiasts and tech journalism on the web surrounding it. I have been following graphics tech for 20-odd years at this point and I have never seen such a loud and frankly immature pushback against an implementation like this.
I would actually say it's normal behaviour, and usually a shift like this will cause this type of pushback. We're talking about a completely different way of rendering for video games that we've never really invested in because of performance issues. And a lot of people will be willing to push back on it for a variety of reasons; I've seen this everywhere, in every industry. A simple change to an order-entry process can generate the same amount of backlash. Electric cars are every bit as polarizing.

There are obviously some massive cons with a lot of these 'alternatives', and developers are currently willing to make trade-offs to deal with them.
I'm okay with that as long as part of the discussion acknowledges the homogeneity of it.

There are several eventualities that need to be acknowledged.
Firstly, RT and DXR will overtake traditional lighting and compute-based lighting; this is a matter of time.
Eventually RT will move beyond FF hardware into something more flexible; this is also a matter of time, but I'm not sure how long.

There are several discussion points here. The first is that some are more interested in compute-based lighting being sufficient, or at least in its trade-offs being acceptable enough that we don't need hardware RT.
The second is that the FF implementation of RT is not good enough, DXR is not good enough, and some want to jump straight to the end state.

Both of these arguments do tend to ignore the elephant in the room that is time to deploy, and cost to develop. But they are still alternatives, and it's interesting to see/read about how they will compete against DXR.

We already have a great many companies struggling to get titles out as well as they want without all this added graphical complexity - just look at Bioware, for instance, or all the cancellations EA has had. Having to do extra world building to support all these faked features, or having to wait until consoles are strong enough for a flexible solution, are not realistic requirements, and that needs to be part of the discussion.
 
For whatever reason the discussion around RTX is very emotional. People have hard-set opinions about the direction the industry will take. I generally prefer the wait-and-see approach. If compute works out to be more functional and useful than DXR, then devs will just ignore DXR and release games using compute-driven ray tracing. There's nothing stopping them from doing so. If DXR is a good way to go, then devs will release games using DXR.

It's likely that DXR will have more features and more flexibility added over time, just like it was with programmable shaders. Devs want flexibility. Nvidia and AMD generally try to give devs what they want, and Microsoft wants to give both devs and Nvidia/AMD what they want. APIs are never perfect, but I think the fear of DXR being set in stone for eternity is unfounded. The idea that DXR is the optimal or perfect way of doing things is equally ridiculous. V1 is never anywhere close to perfect. I'm sure DXR has a ton of limitations that will need correcting. Nvidia already has a lot of RTX intrinsics for DXR, and I'm sure AMD will have their own. DXR could die on the vine if some better general computing solution comes along.

In the end, people can speculate about the right direction, but the proof is in the results of released consumer products. If there's a better way, then produce it and put it in the hands of consumers.
 
For the sake of this thread and its separation from the Nvidia one, I hope we stay focused on hybrid ray tracing in games and/or alternatives, and not specifically on the hardware, vendors or APIs that support it.
 
In the end, people can speculate about the right direction, but the proof is in the results of released consumer products. If there's a better way, then produce it and put it in the hands of consumers.
Exactly, why is this hard to understand?

No one is stopping anyone from releasing compute RT games or pushing for them. And I am sure that if some developer had found a way to do RT on compute with solid results, they would have shown it by now. One-man demos don't apply here: how much shader power is dedicated to running these compute RT demos? If you are using 100% of your ALU time to do compute RT, then you are really not doing compute RT any favors.

Comparing full hybrid DXR games with massive scene variety, complex geometry, a myriad of shaders and materials, animation and physics, to private demos hand-tuned to the brim, with lackluster scene complexity and often unimpressive performance, is not really a good way to showcase compute RT.
 
If there's a better way, then produce it and put it in the hands of consumers.
Part of the discussion is about exploring avenues to invest in, in particular what FF RT acceleration does and whether that's necessary in a future compute design. I don't think the discussion is helped at all by trying to identify 'the best/right way' and prove it with real-world examples. It should instead just talk about what techniques exist, how well they work, and what the pros and cons are. I think since RTX launched, about three posts on this board have been about what changes could be applied to compute, if any, to achieve the same sort of results, and I think only one guy with actual experience trying to develop a next-gen lighting solution had any ideas, which centred on the compute APIs and making them more versatile and efficient. ;)

At the end of the day, there shouldn't be a champion or victor; the best solution may well be a combination of FF acceleration and compute, raytracing volumes and whatnot.
 
Part of the discussion is about exploring avenues to invest in, in particular what FF RT acceleration does and whether that's necessary in a future compute design. I don't think the discussion is helped at all by trying to identify 'the best/right way' and prove it with real-world examples. It should instead just talk about what techniques exist, how well they work, and what the pros and cons are. I think since RTX launched, about three posts on this board have been about what changes could be applied to compute, if any, to achieve the same sort of results, and I think only one guy with actual experience trying to develop a next-gen lighting solution had any ideas, which centred on the compute APIs and making them more versatile and efficient. ;)

At the end of the day, there shouldn't be a champion or victor; the best solution may well be a combination of FF acceleration and compute, raytracing volumes and whatnot.

I don't think there will be a champion or a victor either. Developers are free to do what they want. They'll pick their solutions based on the advantages and disadvantages. There is some information about the performance of DXR. Remedy has frame-time costs in their presentation, even though it's old. We'll see more and more info come out.

Many of the demo vids just show fps benchmarks, but what slice of the frame time is what? The Sponza test scene is popular partially because if you run the same scene for all of your tests, it's easy to see the cost of an implementation. If your performance goes from 60fps to 90fps in the test scene with a new implementation, then your new implementation costs roughly 5.6ms less. If I'm comparing 1440p Metro with RT on High and a whole bunch of graphics options on Ultra, what is the cost of RT? I could compare RT on vs RT off and try to get an estimate of how much RT costs in particular scenes. But then I try to make a comparison to a demo scene using some different general compute-based RT solution, and that comparison becomes very hard. Say I figure out Metro 1440p with RT on High costs me around 5ms and the frame rate is stable at 60fps (these are made-up numbers), and then I compare that to a demo scene that's 60fps with a compute-based approach. Well, maybe that demo scene is spending 11ms on similar RT features, even though they're both 60fps. You don't know unless they tell you what the cost is. Maybe one is faster and less accurate. The cost doesn't mean anything unless you know what you're comparing.

Ultimately ideas have to be proved out in consumer products. That's how you know they're ready for prime time, when they actually get used. Right now DXR is being used to varying degrees. That Kingdom game has some voxel GI, and I'd be interested in a real comparison of the pros and cons of their GI solutions with an accurate representation of the cost. I haven't seen that. There's not much else to go on. Minecraft vids and small demo scenes are interesting, but it's not an equal comparison to a full game like Metro, and I'm not seeing any in-depth analysis of performance other than fps (useless).
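
To make the arithmetic above explicit, here is a tiny sketch (the numbers are the made-up examples from the post): fps differences only become comparable costs once converted to frame time.

```cpp
#include <cstdio>

// Frame-rate differences only translate into a cost once converted to frame time.
int main()
{
    auto frameTimeMs = [](double fps) { return 1000.0 / fps; };

    // 60 fps -> 90 fps in the same test scene:
    double saved = frameTimeMs(60.0) - frameTimeMs(90.0);   // ~5.6 ms
    std::printf("60 -> 90 fps saves %.1f ms per frame\n", saved);

    // Two different scenes both running at 60 fps tell you nothing about how
    // many of those ~16.7 ms each spends on RT - that split must be measured.
    std::printf("Frame budget at 60 fps: %.1f ms\n", frameTimeMs(60.0));
    return 0;
}
```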
 
Part of the discussion is about exploring avenues to invest in, in particular what FF RT acceleration does and whether that's necessary in a future compute design.
One more idea:
Similar to rasterizing a triangle, it would make sense to rasterize a line into a 3D (hashed) grid. The grid then allows fetching all rays that cross a certain box in space. To get rid of finite grid bounds, hashing would be a necessary option.
This would still allow a custom BVH for the scene - the grid is just for the rays.
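
A minimal sketch of what that could look like, assuming an Amanatides & Woo style DDA walk and a simple spatial hash (the names, hash constants and cell handling are my own choices for illustration, not from the post):

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Sketch: 'rasterize' a ray into a hashed 3D grid with a DDA walk, so that all
// rays crossing a given cell can later be fetched from that cell.

struct Vec3 { float x, y, z; };

using CellKey = uint64_t;

static CellKey hashCell(int x, int y, int z)
{
    // Simple spatial hash; casting handles negative coordinates.
    return (uint64_t(uint32_t(x)) * 73856093u) ^
           (uint64_t(uint32_t(y)) * 19349663u) ^
           (uint64_t(uint32_t(z)) * 83492791u);
}

// Maps a cell to the indices of all rays passing through it.
using RayGrid = std::unordered_map<CellKey, std::vector<int>>;

void insertRay(RayGrid& grid, int rayIndex, Vec3 o, Vec3 d, float tMax, float cellSize)
{
    // Current cell of the ray origin.
    int cx = int(std::floor(o.x / cellSize));
    int cy = int(std::floor(o.y / cellSize));
    int cz = int(std::floor(o.z / cellSize));

    // Per-axis stepping setup (Amanatides & Woo style).
    auto setup = [&](float oA, float dA, int c, int& step, float& tNext, float& tDelta) {
        step   = (dA > 0.f) ? 1 : -1;
        float boundary = (c + (dA > 0.f ? 1 : 0)) * cellSize;
        tNext  = (dA != 0.f) ? (boundary - oA) / dA : INFINITY;
        tDelta = (dA != 0.f) ? cellSize / std::fabs(dA) : INFINITY;
    };

    int sx, sy, sz; float tx, ty, tz, dx, dy, dz;
    setup(o.x, d.x, cx, sx, tx, dx);
    setup(o.y, d.y, cy, sy, ty, dy);
    setup(o.z, d.z, cz, sz, tz, dz);

    float t = 0.f;
    while (t <= tMax)
    {
        grid[hashCell(cx, cy, cz)].push_back(rayIndex); // register ray in this cell
        if (tx <= ty && tx <= tz) { t = tx; tx += dx; cx += sx; }
        else if (ty <= tz)        { t = ty; ty += dy; cy += sy; }
        else                      { t = tz; tz += dz; cz += sz; }
    }
}
```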

I need to think about whether emulation with triangles could make sense here - it would require conservative rasterization support and might beat a compute shader... good you asked again :D
 
I'm not seeing any in-depth analysis of performance other than fps (useless).
Yep. The Remedy video is the best we have. I tried to figure out the cost of Quake2 denoising, together with a guy who has some experience with it, here: https://www.gamedev.net/forums/topi...g-on-a-2014-gpu/?tab=comments#comment-5401600
He said that, according to the source code, they have a full implementation of the paper, and the cost there was 10ms (!)
I guess it's faster than that, but I take back my earlier assumption that Q2 could run with 5 bounces just because it can run with 10 mirror reflections. (1 bounce = 1 geometry ray plus 1 ray towards a light at the hit.)
Likely the denoiser is turned off for mirror mode because it is not needed. And likely the single bounce already caps performance at what we see.

...guessing. Unfortunately I won't work on RTX anytime soon, and the guys who would know better can't talk because it may fall under NDAs.
 
There's not much else to go on. Minecraft vids and small demo scenes are interesting, but it's not an equal comparison to a full game like Metro, and I'm not seeing any in-depth analysis of performance other than fps (useless).
A fault here is that this thread has evolved into a next-gen lighting thread rather than the original metrics thread, and some are talking about general ideas where we haven't got data, and linking to data-free demos of general concepts. I'll change the title and see if it steers things into a more general discussion.
 
Yep. The Remedy video is the best we have. I tried to figure out the cost of Quake2 denoising, together with a guy who has some experience with it, here: https://www.gamedev.net/forums/topi...g-on-a-2014-gpu/?tab=comments#comment-5401600
He said that, according to the source code, they have a full implementation of the paper, and the cost there was 10ms (!)
I guess it's faster than that, but I take back my earlier assumption that Q2 could run with 5 bounces just because it can run with 10 mirror reflections. (1 bounce = 1 geometry ray plus 1 ray towards a light at the hit.)
Likely the denoiser is turned off for mirror mode because it is not needed. And likely the single bounce already caps performance at what we see.

...guessing. Unfortunately I won't work on RTX anytime soon, and the guys who would know better can't talk because it may fall under NDAs.

I'm hoping GDC has a bunch of real solid info about DXR implementations and alternatives. Because it's industry info, they do usually share some performance metrics along with the details of solutions.
 
Ultimately ideas have to be proved out in consumer products. That's how you know they're ready for prime time, when they actually get used. Right now DXR is being used to varying degrees.

Not when there's so much money involved in pushing solutions for competitive advantage.
 
More UE4 DXR goodness. Shadows and GI (1 and 2 bounces) on a large-scale terrain:

I love the look of the lighting in all such demos. The clean white focusses attention entirely on the beauty of the light and shadow in terms of form. A few games have gone with this style, and they'd be well served by a next-gen lighting solution.

One thing I find curious about RTX examples is the temporal element. Here, it's quite apparent in the lag in the shadows. The volumetric solutions suffer from lighting lag as a result of the algorithm, but I don't understand it in the RT examples. Is it a result of the denoising accumulating temporal samples?
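
For illustration of that suspicion, here is a minimal sketch of temporal accumulation as a plain exponential moving average (the blend factor is a made-up value, not tied to any actual RTX denoiser): the accumulated result trails a sudden lighting change by many frames, which would show up as exactly this kind of shadow lag unless the history is rejected.

```cpp
#include <cstdio>

// Temporal accumulation sketch: blend the newest noisy sample into a running
// history. A small blend factor means less noise but more frames of lag.
int main()
{
    const float blend = 0.1f;   // weight of the newest sample (made-up value)
    float history = 1.0f;       // pixel starts converged to the lit value (1 = lit, 0 = shadowed)

    // A shadow edge sweeps over the pixel: the true value drops from 1 to 0 at frame 5.
    for (int frame = 0; frame < 20; ++frame)
    {
        float current = (frame < 5) ? 1.0f : 0.0f;             // ground truth this frame
        history = history * (1.0f - blend) + current * blend;  // temporal accumulation
        std::printf("frame %2d: truth %.0f, accumulated %.2f\n", frame, current, history);
    }
    // The accumulated value trails the ground truth for many frames,
    // i.e. the shadow appears to lag behind the moving light/object.
    return 0;
}
```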
 