Meh, I'll believe it when I see it. Nobody else seems to be talking about this console RT flexibility, and certainly nobody is using it.
Notice I did not provide an answer to the question:
Can we have a workload which is large enough that it runs faster on a console than on a PC, while still being realtime?
... and that's for the sake of your sanity.
I only made a dumb and simple example to illustrate why improving APIs, lifting black boxes, etc., might give a benefit not only to developers but also to end users, because it increases the number of beans they can then count on whatever their favorite HW is.
Nobody talking about it does not mean it's not used. How the stuff works just isn't interesting to HW fetishists; they only care which HW gives the most fps.
I'm pretty sure console flexibility is used already, e.g. to stream BVH data from disk instead of building it at runtime like the PC has to do.
Then there is 4A Games, who said they used custom traversal code on consoles. IDK for what - likely not for LOD like I would.
And finally, PC holds consoles back too. If you make a cross-platform game, it's not attractive to develop completely different solutions to a problem on each platform. Thus, the extra console flexibility ends up mostly used for low-level optimizations, but the true potential isn't utilized.
All this is just obvious and makes sense, no?
I would not need to repeat it a hundred times if you guys did not constantly claim I'm just talking bullshit.
I'm asking from a consumer point of view, and with games out there using raytracing already, I'm sure there must be some sort of LoD already, right?
Yes. We can use the same standard practice of discrete LODs with RT, which means having multiple models of a character, each with fewer triangles than the previous one. At some distance we just swap those models as they move closer or farther away.
This works, and there are even tricks to hide the visible popping by rasterizing and tracing the model twice, but switching LODs per pixel in a stochastic way. (The free RT Gems 2 book has a chapter on this; a sketch of the idea follows below.)
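To make that concrete, here is a minimal CPU-side sketch of such a stochastic transition. It is not taken from the book or from anyone's engine; all names are made up, and a real renderer would do this per pixel (or per ray) on the GPU. Each pixel compares a cheap hash against the blend factor and keeps exactly one of the two LODs.

```cpp
// Minimal sketch of a stochastic (dithered) LOD transition.
// Hypothetical names; a toy CPU version of a per-pixel / per-ray decision.
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Cheap integer hash -> [0,1), used as a per-pixel random threshold.
static float hashToUnitFloat(uint32_t x, uint32_t y)
{
    uint32_t h = x * 374761393u + y * 668265263u;   // large primes
    h = (h ^ (h >> 13)) * 1274126177u;
    h ^= (h >> 16);
    return (h & 0xFFFFFFu) / 16777216.0f;           // 24 bits -> [0,1)
}

// Blend factor inside a small transition zone:
// 0 = fully LOD n, 1 = fully LOD n+1.
static float lodBlendFactor(float distance, float lodSwitchDistance, float transitionWidth)
{
    float t = (distance - (lodSwitchDistance - 0.5f * transitionWidth)) / transitionWidth;
    return std::clamp(t, 0.0f, 1.0f);
}

// Per pixel: keep geometry from exactly ONE of the two LODs. Averaged over many
// pixels this looks like a cross fade without any transparency. Note both LOD
// meshes still have to be submitted/traced while inside the transition zone,
// which is the extra cost mentioned in the list below.
static int selectLod(uint32_t px, uint32_t py, float distance,
                     float lodSwitchDistance, float transitionWidth, int lodNear)
{
    float t = lodBlendFactor(distance, lodSwitchDistance, transitionWidth);
    return (hashToUnitFloat(px, py) < t) ? lodNear + 1 : lodNear;
}

int main()
{
    // Toy usage: an 8x8 tile exactly halfway through the transition zone
    // should land on LOD 0 and LOD 1 in roughly a 50/50 split.
    int counts[2] = {0, 0};
    for (uint32_t y = 0; y < 8; ++y)
        for (uint32_t x = 0; x < 8; ++x)
            counts[selectLod(x, y, 30.0f, 30.0f, 4.0f, 0)]++;
    std::printf("LOD0: %d pixels, LOD1: %d pixels\n", counts[0], counts[1]);
}
```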
Problems / limitations with this standard approach:
* To hide the popping we need to render the model twice, so we decrease perf instead of increasing it. Consequently, we make the transition zone small, so not everything needs to be rendered twice all the time. Visually that's a compromise: the transitions are still visible because they phase in and out. In contrast, mipmapped texture mapping really does always combine two mips of a texture, so a transition is never visible. We just can't afford the same correct solution for geometry.
* But the big problem is this: Discrete LOD only works for small models. Imagine terrain. We cannot swap the geometric resolution of the whole terrain, because it is always both close and far at the same time. So we would need to divide the terrain into chunks, like 10x10 meter blocks. But if we have blocks at different resolutions, there will be cracks between them. So we need to stitch the vertices to close those gaps, either with extra triangles or by moving vertices, for example (Nanite has a brilliant way to avoid stitching, btw). It's a pretty difficult problem in many ways.
The complexity increases once we realize 10x10 blocks are not enough. At some distance even those become smaller than a pixel, so we still have not solved the problem. What we really need is a hierarchy: 10x10, 20x20, 40x40 blocks and so forth - basically a quadtree (see the sketch below).
You can imagine the problem is hard to solve. And notice that our whole 'static' terrain becomes a dynamic model as the camera moves back and forth: its surface constantly changes, and detail goes up and down in different areas of the terrain model.
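To make the block hierarchy concrete, here is a minimal sketch of selecting terrain chunks from a quadtree by camera distance. It is only an illustration under my own assumptions: all names are hypothetical, and crack stitching, streaming and frustum culling are left out.

```cpp
// Minimal sketch of distance-based chunk selection on a terrain quadtree.
// Hypothetical names/types; stitching between neighboring levels is omitted.
#include <cmath>
#include <cstdio>
#include <vector>

struct Chunk { float centerX, centerZ, size; int level; };

// Recursively walk the quadtree. Near the camera we descend to small blocks
// (10x10), far away we stop early and keep big blocks (40x40, 80x80, ...).
static void selectChunks(float cx, float cz, float size, int level,
                         float camX, float camZ, std::vector<Chunk>& out)
{
    float dx = camX - cx, dz = camZ - cz;
    float dist = std::sqrt(dx * dx + dz * dz);

    // Split while the block is big relative to its distance. The constant
    // controls how aggressively detail increases near the camera.
    bool split = (level > 0) && (dist < size * 1.5f);
    if (!split)
    {
        out.push_back({cx, cz, size, level});
        return;
    }
    float h = size * 0.25f, childSize = size * 0.5f;
    selectChunks(cx - h, cz - h, childSize, level - 1, camX, camZ, out);
    selectChunks(cx + h, cz - h, childSize, level - 1, camX, camZ, out);
    selectChunks(cx - h, cz + h, childSize, level - 1, camX, camZ, out);
    selectChunks(cx + h, cz + h, childSize, level - 1, camX, camZ, out);
}

int main()
{
    // 1280m terrain, 7 levels down to 10m blocks. As the camera moves,
    // the selected set (and thus the geometry) changes frame over frame.
    std::vector<Chunk> chunks;
    selectChunks(0.0f, 0.0f, 1280.0f, 7, -300.0f, 150.0f, chunks);
    std::printf("selected %zu chunks\n", chunks.size());
}
```

The key point for the RT argument: as the camera moves, chunks keep changing their triangle counts, and with the current API each such change means rebuilding that chunk's BVH from scratch, as described next.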
Now the problem is: although the BVH is a hierarchy itself, we cannot map our detail hierarchy onto it, because we have no access to the BVH data structure at all.
All we could do is constantly rebuild the whole BVH of each block which changes detail. We cannot even refit it, because it's different geometry. And we cannot keep the detail levels of blocks which go out of sight around in memory in case the player comes back.
Constantly rebuilding the BVH for the entire scene over time is too expensive and not practical (or if it were, it would be a waste of HW power).
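To illustrate the rebuild-vs-refit point in DXR terms, here is a hedged sketch of how the BLAS build for one terrain chunk might be set up in D3D12. The ChunkBlas struct and all resource handling are hypothetical; only the structs, flags and the build call are actual API. The relevant part is the flag choice: PERFORM_UPDATE (a refit) is only valid while the triangle topology stays the same, so a chunk whose LOD just changed has to pay for a full rebuild.

```cpp
// Sketch only (Windows/DXR): (re)building the BLAS of one terrain chunk when
// its LOD changes. Buffer allocation, barriers and scratch sizing are omitted.
#include <d3d12.h>

struct ChunkBlas                                   // hypothetical bookkeeping
{
    D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer = 0;    // packed float3 positions at current LOD
    D3D12_GPU_VIRTUAL_ADDRESS indexBuffer  = 0;
    UINT vertexCount = 0, indexCount = 0;
    D3D12_GPU_VIRTUAL_ADDRESS blas    = 0;         // destination acceleration structure
    D3D12_GPU_VIRTUAL_ADDRESS scratch = 0;         // sized for a full build
    bool topologyChanged = false;                  // true whenever the LOD level switched
};

void BuildChunkBlas(ID3D12GraphicsCommandList4* cmdList, const ChunkBlas& c)
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = c.vertexCount;
    geom.Triangles.VertexBuffer = { c.vertexBuffer, 3 * sizeof(float) };
    geom.Triangles.IndexFormat  = DXGI_FORMAT_R32_UINT;
    geom.Triangles.IndexCount   = c.indexCount;
    geom.Triangles.IndexBuffer  = c.indexBuffer;

    const D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAGS baseFlags =
        D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE |
        D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC desc = {};
    desc.Inputs.Type        = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    desc.Inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    desc.Inputs.NumDescs    = 1;
    desc.Inputs.pGeometryDescs = &geom;
    desc.DestAccelerationStructureData    = c.blas;
    desc.ScratchAccelerationStructureData = c.scratch;

    if (!c.topologyChanged)
    {
        // Refit: only legal while triangle count and connectivity stay identical,
        // i.e. vertices merely moved. Cheap, but useless for an LOD switch.
        desc.Inputs.Flags = baseFlags |
            D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
        desc.SourceAccelerationStructureData = c.blas;  // update in place
    }
    else
    {
        // LOD switched -> different geometry -> full rebuild from scratch.
        // This cost is paid again and again as the camera moves, because the
        // BVH internals cannot be patched or reused through the API.
        desc.Inputs.Flags = baseFlags;
    }
    cmdList->BuildRaytracingAccelerationStructure(&desc, 0, nullptr);
}
```

Keeping prebuilt BLAS versions of every detail level resident would avoid the rebuilds, but that is exactly the memory cost mentioned above; what is missing is the ability to patch or reuse subtrees of the BVH itself.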
As a result, the DXR API restriction of blackboxing the BVH prevents a solution to the continuous LOD problem, which is one of the key problems in computer graphics.
Nanite successfully solves the LOD problem only for rasterized geometry.
For RT, they have to use low-detail proxy meshes. Those proxy meshes have the same problems, still unsolved, but because their resolution is low and they change detail rarely or not at all, it works well enough to add an RT checkbox to the feature list.
Now you may ask: Is this some kind of new problem? Why have you never heard it's such an issue?
We need to go a bit back in history. There were many such continuous LOD algorithms in the early days. I remember the ROAM algorithm for adaptive geometry resolution on terrain; then there are 'clip maps', the 'transvoxel algorithm', and many more. There also was the game Messiah, which used such an algorithm for its characters, which were very detailed for the time. But it was always complicated, and the solutions were limited: they supported only heightmaps but no general topology, or processing characters on the CPU was expensive and needed an upload to the GPU every frame, while games like Doom 3 achieved similar detail with just normal maps on low-poly meshes.
And GPU power kept increasing, allowing a true solution to the problem to be postponed for decades. Discrete LODs became the standard and were good enough.
Until recently, when Epic showed us we were all fools for ignoring this for too long.
If the whole industry adopts this newer and better LOD solution, or tries similar things like I do, soon some more guys should join my fight about lifting API restrictions.
If the industry sticks with the good-enough solutions from the past, my request will remain a rare one.