Hey guys, new to this forum!
I'm a self-taught graphics programmer: I've released some indie games and worked on offline animation, but not in the games industry.
I've been working on realtime GI for a decade, tried many techniques, and ended up with something slightly similar to the Many LODs paper. Still months of work to prove it's good and fast enough for current-gen consoles...
I've not implemented path tracing because I consider it a very inefficient algorithm: bad caching caused by random memory access, and divergent workloads that are hard to distribute efficiently.
It's simple and elegant, but it's brute force and slow. I doubt this can (or should) be fixed with future hardware acceleration for rays alone.
So, personally I think the solution is to combine multiple techniques, and the recent DXR demos are good examples of that direction.
But on the other hand I understand OCASM's opinion, because the paper and video he mentions imply realtime path tracing is not just around the corner but possible right now.
To back that up with some math, I could naively say:
Take a 16x16 neighbourhood of pixels and a history of 16 frames, and boom: 16^3 = 4096 samples, enough for a noise-free image.
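Just to make the naive math above explicit (the numbers are mine, and the assumption that every neighbour and every history frame is a valid sample is the optimistic part):

```python
# Naive spatiotemporal sample budget: a 16x16 spatial neighbourhood
# reused over a 16-frame history. This counts *candidate* samples;
# real reprojection rejects many of them (disocclusions, edges, motion),
# so the effective count is much lower in practice.
spatial = 16 * 16      # pixels in the neighbourhood
temporal = 16          # frames of history
samples = spatial * temporal
print(samples)         # 4096, i.e. 16^3
```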
So the question is: What are the hidden limitations the paper and video do not point out?
I'd love to hear what you guys think about it.
Personally, I can only speculate:
The video shows only static scenes, so we have no way to see how fast things converge in dynamic scenes.
There are some scenes showing the 'detaching shadows' problem from moving columns, but those scenes are too simple. We cannot assume it holds up as well in a scene from a real game.
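For anyone unfamiliar with why 'detaching shadows' happen at all, here's a minimal sketch of the usual temporal accumulation scheme. The function name, the blend factor, and the single boolean rejection test are all illustrative simplifications of mine, not anything from the paper:

```python
# Minimal temporal accumulation sketch (exponential moving average).
# History is rejected only when the *receiver* pixel fails a geometry
# test (depth/normal mismatch after reprojection). If instead the
# shadow *caster* moved, the receiver looks unchanged, history is kept,
# and the stale shadow lags behind -- the 'detaching shadows' artifact.
def accumulate(history, current, receiver_changed, alpha=1.0 / 16.0):
    if receiver_changed:
        # Disocclusion: drop history, restart from the noisy current sample.
        return current
    # Receiver static: blend, even if the caster moved -> lagging shadow.
    return history + alpha * (current - history)
```

With alpha = 1/16 a fully stale shadow needs dozens of frames to fade, which is exactly why simple column-moving demos don't tell us much about a busy game scene.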
Reflections are - of course - blurry.
Rasterized primary rays mean no depth of field or motion blur. (Saying the game guys can do this in screen space is a weak consolation.)
That's all I can think of. But those are pretty weak points - not enough to prove OCASM wrong!
In fact my own technique has the same limitations, and I accept them without much worry.
So again, any thoughts welcome...