There has to be a degree of independence between the virtualised geometry and the particular dynamic resolution from frame to frame. Or at least, that's the way I see it.
We may need new nomenclature to describe what is happening. A couple of ideas to describe first:
- Epic mentioned that they use special normal maps for models. Not your usual type, though. So what could they be? For REYES, you need a micropolygon map of your geometry.
- Geometry maps - what could they be? Depending on granularity, they could be a map of fragmented meshes, or a map of the aforementioned micropolygons at an ideal 1:1 triangle-to-pixel quality.
If texture is to texel, what is micropolygon to...? A microcel?
If texture LOD is to mipmaps, what is mesh LOD to...? A meshmap?
They could store meshmaps to represent geometry LOD.
They could store microcels (special normal maps) to represent geometry maps.
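To make the nomenclature concrete, here's a minimal sketch of how such assets could be laid out. This is purely my own guess; every name here (Microcel, Meshmap, GeometryAsset) is invented for illustration, not anything Epic has described:

```cpp
#include <cstdint>
#include <vector>

// A "microcel": a normal-map-like texture whose texels each describe one
// micropolygon (offset + normal), rather than just a surface normal.
struct Microcel {
    uint32_t width  = 0;
    uint32_t height = 0;
    std::vector<float> texels; // packed offset + normal per micropolygon
};

// A "meshmap": one geometry LOD, analogous to a single mip level of a texture.
struct Meshmap {
    uint32_t triangleCount = 0; // density of this LOD
    Microcel microcel;          // micropolygon map at matching granularity
};

// The full chain, analogous to a mipmapped texture: meshmaps[0] is the
// densest LOD (ideally ~1 triangle per pixel), each further entry coarser.
struct GeometryAsset {
    std::vector<Meshmap> meshmaps;
};
```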
At runtime, they would have a target resolution and load the appropriate texture and geometry LODs and maps.
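Picking the geometry LOD could then mirror mip selection: choose the densest meshmap that doesn't overshoot roughly one triangle per covered pixel at the target resolution. A speculative sketch building on the structures above (pickMeshmapLod and the 1:1 target are my assumptions):

```cpp
#include <cstddef>

// Pick the meshmap whose triangle density best matches the object's
// projected size at the current dynamic (target) resolution.
size_t pickMeshmapLod(const GeometryAsset& asset,
                      float projectedAreaPixels) // screen coverage at the
                                                 // target resolution
{
    if (asset.meshmaps.empty())
        return 0; // nothing to pick from

    // Ideal: one triangle per covered pixel.
    const float wantedTriangles = projectedAreaPixels;

    for (size_t lod = 0; lod < asset.meshmaps.size(); ++lod) {
        // Meshmaps are ordered densest first; take the first LOD that
        // does not exceed the wanted density.
        if (asset.meshmaps[lod].triangleCount <= wantedTriangles)
            return lod;
    }
    return asset.meshmaps.size() - 1; // coarsest available
}
```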
For every object to draw, they can test how well it fills its normal map (the appropriate microcel, i.e. the micropolygon map).
Then keep a counter of its fill level for that frame. If the frame is falling behind, choose a contingency lower-quality meshmap.
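In code, that contingency could look something like the following, again purely speculative: a per-frame triangle budget that, when exceeded, steps the object down to coarser meshmaps until it fits.

```cpp
#include <cstdint>

// Hypothetical per-frame budget; the numbers and names are invented.
struct FrameBudget {
    uint64_t trianglesBudget = 20'000'000; // what the GPU can handle per frame
    uint64_t trianglesUsed   = 0;          // fill-level counter this frame
};

size_t pickLodWithBudget(const GeometryAsset& asset,
                         float projectedAreaPixels,
                         FrameBudget& budget)
{
    size_t lod = pickMeshmapLod(asset, projectedAreaPixels);

    // If the ideal LOD would blow the remaining budget, step down to a
    // contingency lower-quality meshmap until it fits (or we hit coarsest).
    while (lod + 1 < asset.meshmaps.size() &&
           budget.trianglesUsed + asset.meshmaps[lod].triangleCount >
               budget.trianglesBudget) {
        ++lod;
    }

    budget.trianglesUsed += asset.meshmaps[lod].triangleCount;
    return lod;
}
```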
Shade the texture with the appropriate texel and mipmap.
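For comparison, the texture side of the analogy is just the familiar mip-level formula, log2 of the texel footprint per pixel; the meshmap selection above applies the same idea to geometry:

```cpp
#include <algorithm>
#include <cmath>

// Standard mip selection: the mip level is log2 of how many texels one
// pixel covers (clamped so magnification stays at the base level).
float mipLevel(float texelsPerPixel)
{
    return std::max(0.0f, std::log2(texelsPerPixel));
}
```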
So, if falling behind with your frames, you'll see geometry artifacts as shown earlier.
Kinda like that is what I'm thinking...