My belief is that the step change in per-pixel quality and consistency that ray-tracing-based techniques can bring will split devs into two camps: those who will continue as if nothing happened and those who are willing to go back to the drawing board. I expect that traditional fixed-function hardware will underlie this new perspective, but it will be plugged in to support ray-traced rendering. There'll be a lot of navel-gazing focused on what that hardware can really do when used properly.
I think we're exiting the "ray tracing is tacked-on" mode of graphics development. I'm hopeful that in a couple of years we'll reap the fruits of this.
Ha, ok. I thought your assumption was that RT would motivate devs to 'get optimization or performance right', which would not make much sense.
Regarding camps, I see these main topics:
* Question of cost / benefit ratio, which surely gets too much attention and heat.
* Learning the basics: Trying to model lighting correctly is not new to us, but things like importance sampling maybe are, and understanding the related math is harder than the things we did before in realtime gfx. This makes RT less accessible to hobby / indie development, for example. Some will not invest the time to learn it in all details for this reason.
* New optimization ideas related to games: Here we'll likely just adopt from the decades of work already done on the subject. The only really new field is aggressive denoising (which is what enables using RT in realtime at all). The other option to help performance, optimized sampling strategies, is well researched by the offline guys. Still ongoing research ofc., but that's not really a topic related to HW or specifically about differences between realtime and offline.
* Improvement of APIs: A big one for me personally, since I cannot use RT at all, although I want to.
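To make the importance sampling point concrete: it's just weighting samples by a pdf that matches the integrand, but getting the math right is the new part. Here's a toy sketch (the integrand and pdf are made up for illustration, not from any renderer):

```python
import random

random.seed(42)
N = 100_000

# Toy integrand: f(x) = 5x^4 on [0, 1]; the exact integral is 1.
f = lambda x: 5.0 * x**4

# Uniform sampling: average f(x) with x ~ U(0, 1).
uni = [f(random.random()) for _ in range(N)]

# Importance sampling with pdf p(x) = 4x^3, sampled via the inverse
# CDF x = u^(1/4). The pdf roughly matches the shape of f, so the
# estimator f(x)/p(x) = 1.25*x has far lower variance.
imp = []
for _ in range(N):
    x = random.random() ** 0.25
    imp.append(f(x) / (4.0 * x**3))

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

print(mean(uni), var(uni))  # mean ≈ 1.0, variance ≈ 1.8
print(mean(imp), var(imp))  # mean ≈ 1.0, variance ≈ 0.04
```

Both estimators converge to the same answer, but the importance-sampled one needs far fewer rays for the same noise level, which is the whole game in realtime RT.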
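On the denoising point: the simplest building block there is temporal accumulation, blending each frame's noisy result into a history buffer. A minimal single-pixel sketch (the constants are arbitrary, chosen just to show convergence):

```python
import random

random.seed(1)

TRUE_RADIANCE = 0.5   # ground-truth value the pixel should converge to
ALPHA = 0.1           # blend factor: lower = more history, more smoothing

def noisy_sample():
    # Stand-in for a 1-sample-per-pixel ray traced result.
    return TRUE_RADIANCE + random.uniform(-0.4, 0.4)

history = noisy_sample()
for _ in range(200):
    # Exponential moving average: new = lerp(history, sample, alpha).
    history = (1.0 - ALPHA) * history + ALPHA * noisy_sample()

# The accumulated value sits well inside the ±0.4 single-sample noise range.
print(abs(history - TRUE_RADIANCE))
```

Real denoisers add reprojection, spatial filtering and variance clamping on top, but the core trade-off (history vs. responsiveness via alpha) is already visible here.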
What I don't see is potential progress from 'using the HW properly' and experience, because there is not much to research or try out, thanks to fixed-function HW handling the costly parts.
It's different on consoles, because AMD's RT leaves traversal to the shader cores, so it's programmable. But it seems NV's way is just better overall, and those options will likely disappear in the future.
The related discussion of traversal shaders will likely bring this back on future HW at some point, but for efficiency we want something like a callback only on certain BVH nodes. We don't want to handle the full traversal on our own.
And as this raises the question of which BVH nodes should trigger the callback, we likely want to open up the BVH as well, at the same time or before that.
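To sketch what I mean by a callback only on flagged nodes: in the toy (entirely hypothetical, CPU-side, 1D) traversal below, unflagged nodes traverse "fixed-function" style with no callback cost, and the callback fires only where the app opted in, e.g. to pick an LOD subtree. None of this is an existing DXR or Vulkan API:

```python
from dataclasses import dataclass, field
from typing import Optional

# Toy 1D BVH; names and the callback mechanism are hypothetical.
@dataclass
class Node:
    lo: float
    hi: float
    flagged: bool = False               # only these nodes invoke the callback
    children: list = field(default_factory=list)
    prim: Optional[str] = None          # leaf payload

def hits(node, x):
    return node.lo <= x <= node.hi

def traverse(node, x, callback, out):
    if not hits(node, x):
        return
    if node.flagged and not callback(node):
        return                           # callback chose to skip this subtree
    if node.prim is not None:
        out.append(node.prim)
        return
    for c in node.children:
        traverse(c, x, callback, out)    # unflagged nodes: no callback cost

# Two LOD subtrees under flagged nodes; the callback picks one,
# standing in for a streaming or LOD decision.
hi_lod = Node(0, 10, prim="leaf_hi_lod")
lo_lod = Node(0, 10, prim="leaf_lo_lod")
root = Node(0, 10, children=[
    Node(0, 10, flagged=True, children=[hi_lod]),
    Node(0, 10, flagged=True, children=[lo_lod]),
])

use_hi_lod = True
pick = lambda n: (n.children[0] is hi_lod) == use_hi_lod

result = []
traverse(root, 5.0, pick, result)
print(result)  # ['leaf_hi_lod']
```

The efficiency argument is visible even in the toy: the HW keeps doing the bulk of the traversal, and we only pay for programmability at the handful of nodes where we actually need a decision.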
So that's all future and potential stuff. At the moment our problem is not how to use the HW properly - it's just a TraceRay function. The function is ofc. expensive, but we can't do much to reduce its cost, other than bundling similar rays to minimize divergence, which is nothing new.
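For completeness, the bundling idea is just sorting or binning rays by a coherence key so neighbors in a batch traverse the BVH in similar order. A minimal sketch using the direction octant (sign bits) as the key; the ray data is made up:

```python
# Bundle rays by direction octant (sign bits of x, y, z) so rays that
# traverse the BVH in a similar order end up adjacent in the batch.
def octant(d):
    x, y, z = d
    return (x < 0) << 2 | (y < 0) << 1 | (z < 0)

rays = [
    ( 1.0,  0.5,  0.2),
    (-1.0,  0.3,  0.1),
    ( 0.7, -0.2,  0.9),
    (-0.4, -0.8,  0.3),
    ( 0.2,  0.9, -0.5),
]

bundled = sorted(rays, key=octant)
keys = [octant(d) for d in bundled]
print(keys)  # non-decreasing: rays with equal octants are now adjacent
```

Production schemes use finer keys (origin cell + quantized direction, Morton codes), but the principle is the same and, as said, it predates RT hardware.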
But some people do have a problem with the important data structures (BVH) being inaccessible. This could be solved on the software side, in the form of API changes and vendors providing specifications of their data structures. It's about problems related to LOD, streaming the BVH instead of calculating it, or issues like animated foliage. So that's where I expect the most progress to come from, if they get it right, or done at all.
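One example of what an open BVH would let us do ourselves is refitting for animation: keep the tree topology and just recompute bounds bottom-up after vertices move, instead of a full rebuild every frame. A toy 1D sketch (the layout is invented for illustration, not any vendor's format):

```python
# Toy 1D BVH refit: recompute bounds bottom-up, topology unchanged.
class Node:
    def __init__(self, children=None, lo=0.0, hi=0.0):
        self.children = children or []
        self.lo, self.hi = lo, hi

def refit(node):
    if not node.children:           # leaf: bounds already updated from geometry
        return node.lo, node.hi
    bounds = [refit(c) for c in node.children]
    node.lo = min(b[0] for b in bounds)
    node.hi = max(b[1] for b in bounds)
    return node.lo, node.hi

leaf_a = Node(lo=0.0, hi=1.0)
leaf_b = Node(lo=2.0, hi=3.0)
root = Node(children=[leaf_a, leaf_b])
refit(root)
print(root.lo, root.hi)             # 0.0 3.0

# Animate one leaf (think wind-blown foliage) and refit, not rebuild.
leaf_b.lo, leaf_b.hi = 5.0, 6.0
refit(root)
print(root.lo, root.hi)             # 0.0 6.0
```

Refitting degrades tree quality under large deformations, which is exactly the kind of trade-off we could manage ourselves if the BVH were open.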