Are they doing something like that, i.e. using an SDF-like representation as an intermediate and then polygonizing it with compute at run time? That's very unlikely.
An SDF is volume data, and usually takes more memory than surface data like a mesh. It's also difficult to associate volume data with 2D textures, and a straight 3D texture would need even more storage.
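Some rough back-of-the-envelope numbers of my own to illustrate (not from any source): a dense 512^3 SDF at just one byte per voxel is already 128 MiB for a single asset, while a 1M-triangle mesh with roughly 500K vertices at 32 bytes each (position, normal, UV) plus 32-bit indices comes to about 16 MB + 12 MB, so around 28 MB.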
Extracting an iso-surface from volume data is pretty fast, but it still has a cost. If you work with triangles, your data is very likely just that, aside from some alternative for single-pixel draws like a point hierarchy.
In the debug view of the video we see discrete pops of triangle clusters. Many of them. I assume they have clusters of triangles at multiple discrete LODs.
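Just to illustrate what per-cluster discrete LOD selection could look like, here is a minimal CPU-side sketch; this is my own guess, not anything from the video, and the cluster layout, error metric and threshold are all made up:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical per-cluster data: a bounding sphere plus a precomputed
// geometric error (in world units) for each discrete LOD of the cluster.
struct Cluster
{
    float center[3];
    float radius;
    std::vector<float> lodError; // [0] = finest LOD, last = coarsest
};

// Pick the coarsest LOD whose projected error stays below a pixel threshold.
// projScale is roughly screenHeight / (2 * tan(fov / 2)).
int SelectClusterLod(const Cluster& c, const float viewPos[3],
                     float projScale, float maxErrorPixels = 1.0f)
{
    float dx = c.center[0] - viewPos[0];
    float dy = c.center[1] - viewPos[1];
    float dz = c.center[2] - viewPos[2];
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    dist = std::max(dist - c.radius, 0.001f); // conservative: nearest point of the sphere

    for (int i = (int)c.lodError.size() - 1; i >= 0; --i)
    {
        float errorPixels = c.lodError[i] * projScale / dist;
        if (errorPixels <= maxErrorPixels)
            return i; // coarsest LOD that still stays below the error threshold
    }
    return 0; // nothing coarse enough is acceptable, use the finest
}

A pop happens whenever the returned index changes between frames, which would match the discrete switches visible in the debug view.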
Likely UV seams are preserved. Quixel models have few islands of UV patches; they waste a lot of texture space, but they have a low count of UV boundary edges.
Preserving them avoids the problem of requiring new UVs and textures as the model changes. It works well for models of low genus, and Quixel models are mostly genus 0; nature prefers this in general.
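For illustration, here is a rough sketch of how such seam edges could be found and then locked during simplification; this is just my own example of the general idea (the OBJ-style mesh layout and names are hypothetical), not a claim about their tooling:

#include <algorithm>
#include <cstdint>
#include <map>
#include <set>
#include <utility>
#include <vector>

// Hypothetical indexed mesh with separate position and UV index streams,
// as in typical OBJ-style data.
struct Mesh
{
    std::vector<uint32_t> posIndex; // 3 per triangle
    std::vector<uint32_t> uvIndex;  // 3 per triangle
};

// Returns the set of position-space edges lying on a UV seam, i.e. edges whose
// adjacent triangles reference different UVs for the same positions. A simplifier
// would refuse to collapse these edges, so the original UVs and textures stay
// valid while the model is decimated.
std::set<std::pair<uint32_t, uint32_t>> FindUvSeamEdges(const Mesh& m)
{
    // For each position edge, collect the distinct UV index pairs used along it.
    std::map<std::pair<uint32_t, uint32_t>, std::set<std::pair<uint32_t, uint32_t>>> edgeUvs;
    for (size_t t = 0; t < m.posIndex.size(); t += 3)
    {
        for (int e = 0; e < 3; ++e)
        {
            uint32_t pa = m.posIndex[t + e], pb = m.posIndex[t + (e + 1) % 3];
            uint32_t ua = m.uvIndex[t + e],  ub = m.uvIndex[t + (e + 1) % 3];
            if (pa > pb) { std::swap(pa, pb); std::swap(ua, ub); }
            edgeUvs[{pa, pb}].insert({ua, ub});
        }
    }

    std::set<std::pair<uint32_t, uint32_t>> seams;
    for (const auto& [edge, uvs] : edgeUvs)
        if (uvs.size() > 1) // the two sides use different UVs, so this is a seam
            seams.insert(edge);
    return seams;
}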
Some pages above I posted my theory that each triangle cluster would require 4 modes of triangulation to avoid cracks at cluster boundaries. (I've already forgotten how exactly it works, but if it does, this is really simple, and the modes can be precomputed and stored with little extra space for the permutations.)
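Purely to illustrate what I mean by precomputed modes (this is my speculation, not their method): each cluster could store a few prebuilt index-buffer variants for its boundary strip and pick one per frame from a bitmask of which neighbours are currently at a coarser LOD.

#include <cstdint>
#include <vector>

// Speculative cluster geometry: one interior index buffer plus precomputed
// boundary-stitching variants, indexed by a bitmask of coarser neighbours
// (e.g. 4 variants for 2 boundary edges, matching the "4 modes" above).
struct ClusterGeometry
{
    std::vector<uint32_t> interiorIndices;
    std::vector<std::vector<uint32_t>> stitchVariant;
};

// Pick the precomputed triangulation matching the current neighbour LODs, so
// shared edges line up exactly and no cracks appear at cluster boundaries.
// Assumes coarserNeighbourMask < stitchVariant.size().
const std::vector<uint32_t>& SelectStitching(const ClusterGeometry& g,
                                             uint32_t coarserNeighbourMask)
{
    return g.stitchVariant[coarserNeighbourMask];
}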
Up to this point the system is compatible with raster HW, and it scales to either HW or production limits where micro polygons are not welcome. There is popping, but TAA will hide it very well.
After that comes the revolutionary stuff of 'single pixel triangles'. I assume that's a hierarchy of points, not much different from LiDAR point cloud rendering, the splatting mentioned in the Dreams paper, the splatting of high-poly models beating the HW rasterizer in the Many LoDs paper from 2010, etc.
Pretty simple in comparison to the above. No textures are necessary - just points with material, and streaming can ignore the lower levels of the hierarchy at some distance.
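For reference, the core of such a splatting pass is tiny. Below is a CPU-side sketch of the usual trick (depth in the high bits, point/material id in the low bits, atomic min per pixel); I'm assuming a compute implementation would do roughly the same with a 64-bit atomicMin / InterlockedMin, but that's my guess, nothing confirmed:

#include <atomic>
#include <cstdint>
#include <vector>

// One 64-bit cell per pixel: depth in the high 32 bits, point/material id in
// the low 32 bits, so an atomic min automatically keeps the nearest point.
struct SplatBuffer
{
    int width, height;
    std::vector<std::atomic<uint64_t>> cells;

    SplatBuffer(int w, int h) : width(w), height(h), cells(size_t(w) * h)
    {
        for (auto& c : cells)
            c.store(~0ull); // clear to 'infinitely far'
    }

    // Splat one projected point: x, y in pixels, depth as monotonic 32-bit bits.
    void Splat(int x, int y, uint32_t depthBits, uint32_t payload)
    {
        if (x < 0 || y < 0 || x >= width || y >= height)
            return;
        uint64_t value = (uint64_t(depthBits) << 32) | payload;
        std::atomic<uint64_t>& cell = cells[size_t(y) * width + x];
        uint64_t old = cell.load();
        while (value < old && !cell.compare_exchange_weak(old, value))
        {
            // CAS failed, 'old' now holds the newer contents; retry if still nearer.
        }
    }
};

The point hierarchy then just stops being refined once a node's projected size drops below about a pixel, which is also where streaming can stop fetching the finer levels.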
Remaining questions:
Do they also implement hidden surface removal? With compute rendering they could, by drawing in coarse front-to-back order. They could render small tiles of the screen and terminate once a tile is filled and further data is guaranteed to be behind the current depth.
So it's likely they do it. But that's not compatible with HW rasterized triangle clusters. So, two options:
1. Rasterize triangles with HW first, get the Z-buffer, then start the splatting pass. Optionally divide this process into multiple depth ranges.
2. Rasterize triangle clusters in compute (possibly along with the points), but only for depth and the HSR result. Clusters that turn out potentially visible get appended to a later draw call and rendered with HW and full texturing.
The second option is unlikely, because then a deferred texturing approach would make more sense. And they claimed they still use the HW rasterizer, so I guess it's roughly option one.
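To make the tile-termination idea from above concrete, here is a minimal sketch of the general principle; again this is only my own illustration (tile size, bookkeeping and names are made up), not anything from the video:

#include <cstdint>
#include <vector>

struct Tile
{
    uint32_t coveredPixels = 0;    // pixels in the tile that have been written
    float    maxDepth      = 0.0f; // farthest depth among the written pixels
};

struct TileGrid
{
    int tilesX, tilesY, pixelsPerTile;
    std::vector<Tile> tiles;

    TileGrid(int width, int height, int tileSize)
        : tilesX((width + tileSize - 1) / tileSize),
          tilesY((height + tileSize - 1) / tileSize),
          pixelsPerTile(tileSize * tileSize),
          tiles(size_t(tilesX) * tilesY) {}

    // Conservative occlusion test for a node/cluster touching this tile:
    // if the tile is already full and the node starts behind everything written
    // so far, it cannot contribute and can be rejected (the HSR part).
    bool IsOccluded(int tx, int ty, float nodeMinDepth) const
    {
        const Tile& t = tiles[size_t(ty) * tilesX + tx];
        return t.coveredPixels >= uint32_t(pixelsPerTile) && nodeMinDepth >= t.maxDepth;
    }

    // Called whenever the rasterizer/splatter writes a previously empty pixel.
    void OnPixelWritten(int tx, int ty, float depth)
    {
        Tile& t = tiles[size_t(ty) * tilesX + tx];
        ++t.coveredPixels;
        if (depth > t.maxDepth)
            t.maxDepth = depth;
    }
};

Note this only stays conservative because the traversal itself is coarse front-to-back; with unordered splatting the per-tile max depth would not mean much, which is exactly the point below.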
It's cool to see HSR become a topic again. It was the holy grail of realtime rendering, then GPUs came up and we silently ignored it for the most part.
If they do it, the rendering has to follow a coarse front-to-back order. This rules out simple unordered splatting, and we come close to something like Unlimited Detail. Too bad Bruce Dell left the forum; I guess he was quite shocked too after seeing this.