From a selfish point of view, I appreciate you pointing that out.
The two things that really let you see the differences between high detail geometry and normal maps are 1) silhouettes and 2) shadows. Screen space shadow traces can get you a ways, but if you try to push them too far they produce various artifacts. One of the things that has actually improved a lot since last year's demo is that we had to rely more heavily on screen space traces back then, whereas this year they are very short rays, with a large amount of detail represented at high precision directly in the virtual shadow maps.
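To make the artifact point a bit more concrete, here's a minimal sketch of a generic screen-space contact shadow trace (purely illustrative, not our actual implementation; the names and numbers are made up). The key limitation is that it can only test against depth that is visible on screen, so the farther you march, the more likely you are to miss occluders that are off-screen or hidden behind foreground geometry.

```cpp
// Illustrative screen-space shadow trace: march a few steps from a pixel
// toward the light in screen space and compare against the depth buffer.
// It only knows about on-screen depth, which is why long traces break down.
#include <cstdio>
#include <vector>
#include <cmath>

struct Vec2 { float x, y; };

// Hypothetical linear depth buffer, row-major.
struct DepthBuffer {
    int width, height;
    std::vector<float> depth;
    float sample(int x, int y) const { return depth[y * width + x]; }
};

// Returns 0 (shadowed) or 1 (unshadowed) for the pixel at (px, py).
// lightDirSS: direction toward the light projected into screen space.
float ScreenSpaceShadow(const DepthBuffer& db, int px, int py, float pixelDepth,
                        Vec2 lightDirSS, int stepCount, float maxDistancePx,
                        float depthBias)
{
    for (int i = 1; i <= stepCount; ++i)
    {
        float t = maxDistancePx * float(i) / float(stepCount);
        int sx = int(std::round(px + lightDirSS.x * t));
        int sy = int(std::round(py + lightDirSS.y * t));
        if (sx < 0 || sy < 0 || sx >= db.width || sy >= db.height)
            break; // Ray left the screen: no information, so stop guessing.

        float sceneDepth = db.sample(sx, sy);
        // Simplified contact-shadow comparison: if the stored depth is closer
        // to the camera than our pixel (minus a bias), something on screen
        // plausibly blocks the light. A real trace would also track the ray's
        // own depth per step.
        if (sceneDepth < pixelDepth - depthBias)
            return 0.0f; // fully shadowed
    }
    return 1.0f; // unshadowed, as far as screen-space data can tell
}

int main()
{
    DepthBuffer db{4, 4, std::vector<float>(16, 10.0f)};
    db.depth[1 * 4 + 2] = 5.0f; // a closer occluder one pixel to the right
    float shadow = ScreenSpaceShadow(db, 1, 1, 10.0f, {1.0f, 0.0f}, 4, 3.0f, 0.1f);
    std::printf("shadow term: %.1f\n", shadow);
}
```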
The discussion of raytracing still seems slightly orthogonal to me... you can't just assume you have a fully-built BVH sitting around, down to the level of detail that these scenes enjoy, for any real game scene. That's completely infeasible, both in terms of memory and in terms of tracing performance. You still want LOD for RT because you don't want every ray to fully diverge by the time it hits the leaves. And you still need to stream those LODs and move objects around freely, which means you must consider the cost of iteratively *building* the BVHs and the associated tradeoffs you have to make between build and trace performance.
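Some rough back-of-envelope arithmetic (all of the numbers below are made up purely for illustration, not measured from the demo) on why a full-detail BVH doesn't fit, and why LOD + streaming still matter on the RT side:

```cpp
// Back-of-envelope sketch (hypothetical but plausible numbers) of why a
// full-detail BVH over a Nanite-class scene is infeasible in memory.
#include <cstdio>

int main()
{
    // Hypothetical scene: counts chosen only for illustration.
    const double instances        = 1.0e4;  // placed meshes in the scene
    const double trianglesPerMesh = 1.0e6;  // full source detail, no LOD
    const double bytesPerTriangle = 64.0;   // rough BVH node + vertex cost

    const double fullDetailTris  = instances * trianglesPerMesh;
    const double fullDetailBytes = fullDetailTris * bytesPerTriangle;

    // With cluster-based LOD, most instances only need a coarse cut; assume
    // (again hypothetically) average resident detail drops by ~100x.
    const double lodReduction  = 100.0;
    const double streamedBytes = fullDetailBytes / lodReduction;

    std::printf("Full-detail BVH: %.1f billion tris, ~%.0f GB\n",
                fullDetailTris / 1e9, fullDetailBytes / 1e9);
    std::printf("With LOD + streaming: ~%.0f GB resident\n", streamedBytes / 1e9);
}
```

The point isn't the specific numbers, just the orders of magnitude: without some form of LOD and streaming, the acceleration structure alone swamps any realistic memory budget before you've traced a single ray.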
Building these data structures is not dissimilar to rasterizing things. It involves a lot of the same work that Nanite is doing, but with an additional layer of complexity and unsolved problems added on top. To be clear, we all want to see those problems solved and allow for more efficient indirect ray traces into high poly Nanite geometry (especially for Lumen, etc.), but it is not an "RT or Nanite" situation; it's a question of how we solve the *additional* problems that RT adds on top of the ones that Nanite already addresses. IMO it's almost guaranteed that an efficient RT solution would also use a lot of the mesh simplification and streaming machinery that Nanite employs.
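As a purely conceptual sketch of that last point (hypothetical types, nothing from the engine): the same error-driven cut through a cluster hierarchy could feed both the rasterizer and an acceleration structure build, with RT paying an *extra* build/refit cost on top rather than replacing any of the simplification/streaming work.

```cpp
// Conceptual sketch: one LOD cut through a cluster hierarchy, two consumers.
#include <vector>

struct Cluster {
    float screenSpaceError;          // error if we stop refining here
    std::vector<Cluster*> children;  // finer clusters (empty at the leaves)
};

// Collect the coarsest clusters whose error is acceptable for the current
// view -- the "cut" used for rendering this frame.
void SelectCut(Cluster* c, float errorThreshold, std::vector<Cluster*>& outCut)
{
    if (c->children.empty() || c->screenSpaceError <= errorThreshold) {
        outCut.push_back(c);
        return;
    }
    for (Cluster* child : c->children)
        SelectCut(child, errorThreshold, outCut);
}

// Both consumers work from the same cut; only the second pays the extra (and
// still largely unsolved) cost of keeping an acceleration structure up to
// date as the cut changes frame to frame.
void RasterizeClusters(const std::vector<Cluster*>&) { /* visibility buffer, etc. */ }
void RebuildOrRefitBVH(const std::vector<Cluster*>&) { /* build vs. trace tradeoff */ }

int main()
{
    Cluster leafA{0.5f, {}}, leafB{0.7f, {}};
    Cluster root{4.0f, {&leafA, &leafB}};

    std::vector<Cluster*> cut;
    SelectCut(&root, 1.0f, cut); // root is too coarse, so both leaves are selected
    RasterizeClusters(cut);
    RebuildOrRefitBVH(cut);
}
```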
Getting more efficient RT is more relevant in a discussion of Lumen than Nanite.
Same team. Since I've been looking at it every day for a couple of years now, it's very obvious when it is missing. Good art and screen space effects (contact shadows, etc.) can help narrow the gap a bit, but normal maps in place of geometry are just not good enough anymore for me...
Right. Many of the quick demos people are showing off so far are less Nanite geometry demos and more GPU culling demos. Even with low-detail geometry you should enable Nanite on everything it supports, as this is key both to way more efficient instancing and to the multi-view/multi-res/sparse rasterization that makes virtual shadow maps so much more efficient.
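For a rough picture of what that bulk culling path is doing (a CPU-side sketch of the concept only, not engine code): think of one thread per instance testing a bounding sphere against the frustum and compacting the survivors into a list that drives indirect draws, rather than the CPU touching every draw call.

```cpp
// Illustrative instance culling pass, written serially for clarity.
#include <vector>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };  // n.x*x + n.y*y + n.z*z + d >= 0 is "inside"
struct Sphere { Vec3 center; float radius; };

struct Instance {
    Sphere bounds;
    int    meshId;  // which mesh/cluster set to draw if visible
};

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool SphereInFrustum(const Sphere& s, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i)
        if (Dot(frustum[i].n, s.center) + frustum[i].d < -s.radius)
            return false; // fully outside this plane
    return true;
}

// On the GPU this loop is one thread per instance, with survivors appended to
// a buffer that feeds indirect draws; here it's just a serial sketch.
std::vector<Instance> CullInstances(const std::vector<Instance>& instances,
                                    const Plane frustum[6])
{
    std::vector<Instance> visible;
    visible.reserve(instances.size());
    for (const Instance& inst : instances)
        if (SphereInFrustum(inst.bounds, frustum))
            visible.push_back(inst);
    return visible;
}

int main()
{
    // Trivial "frustum" (the half-space x >= 0 repeated) just to exercise the code.
    Plane frustum[6];
    for (Plane& p : frustum) p = {{1, 0, 0}, 0.0f};

    std::vector<Instance> instances = {
        {{{ 5, 0, 0}, 1.0f}, 0},  // inside
        {{{-5, 0, 0}, 1.0f}, 1},  // outside, gets culled
    };
    return (int)CullInstances(instances, frustum).size(); // 1 survivor
}
```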
The discussion of displacement, and to what extent we still want it in the future, is a more interesting one. It may well be that we still want displacement for a variety of compression/art/animation/streaming reasons, but that would effectively just add more detail at the very closest levels.
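For what it's worth, this is roughly what I mean by displacement adding detail at the closest levels: a compact heightfield pushed out along the vertex normals on top of the streamed base mesh (an illustrative sketch, nothing engine-specific).

```cpp
// Illustrative heightfield displacement: push each vertex along its normal by
// a sampled height, so a small height texture stands in for the very finest
// geometric detail on top of the base mesh.
#include <vector>
#include <cstddef>

struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3  position;
    Vec3  normal;  // assumed unit length
    float u, v;    // texture coordinates into the height map, in [0,1]
};

// Hypothetical height map: heights in [0,1], row-major, nearest sampling.
struct HeightMap {
    int width, height;
    std::vector<float> texels;
    float sample(float u, float v) const {
        int x = (int)(u * (width  - 1) + 0.5f);
        int y = (int)(v * (height - 1) + 0.5f);
        return texels[(std::size_t)y * width + x];
    }
};

// displacementScale: world-space offset at height = 1.
void DisplaceVertices(std::vector<Vertex>& verts, const HeightMap& hm,
                      float displacementScale)
{
    for (Vertex& v : verts) {
        float h = hm.sample(v.u, v.v) * displacementScale;
        v.position.x += v.normal.x * h;
        v.position.y += v.normal.y * h;
        v.position.z += v.normal.z * h;
    }
}

int main()
{
    HeightMap hm{2, 2, {0.0f, 1.0f, 0.0f, 1.0f}};
    std::vector<Vertex> verts = {
        {{0, 0, 0}, {0, 0, 1}, 1.0f, 0.0f},  // samples the 1.0 texel
    };
    DisplaceVertices(verts, hm, 0.05f);      // vertex moves 0.05 units along +Z
    return 0;
}
```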
I can't see *not* wanting the automatic fine-grained LOD stuff that Nanite is doing in the future... it seems pretty fundamental to efficient rendering IMO and solves some real problems.