@Andrew Lauritzen I have an *optimization* idea for virtual shadow maps & texturing using the tiled resources API on modern Intel graphics hardware. They have a hardware feature called Tiled Resources Translation Tables (TR-TT) where you can bypass the overhead of updating the page table mappings, which makes the UpdateTileMappings API run fast ...

I'm well aware of it, as you might imagine from my past life. It's been there since... I think Skylake, but perhaps the generation after it. The main issue with using TR for VSM or other sparse rendering is that the rasterizers in GPUs are not optimized with any sort of acceleration structure, or even early-outs related to the tile mappings. So they will still do all the work to rasterize the unmapped pixels and throw them out, in the best case before PS launch, but in the worst case in the ROP itself... Either case is not particularly useful when doing depth-only rendering.
And of course the hardware mappings don't necessarily match the optimal page sizes and so on.
TR unfortunately basically always falls into the realm of "theoretically cool and interesting, but in practice too many issues to get any sort of real gains vs. software methods". The main use it has these days is sadly just bypassing WDDM overheads and legacy "fragmentation" nonsense to be able to back resource heaps with smaller chunks of allocations.
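For reference, here's a minimal sketch of the kind of UpdateTileMappings call being discussed: mapping a single 64 KiB tile of a reserved (tiled) texture, e.g. one virtual shadow map page, onto a physical heap. This is just an illustration of the D3D12 API, not anyone's actual implementation; the function and variable names (`MapOneShadowPage`, `queue`, `reservedTexture`, `physicalHeap`) are made up, and resource/heap creation is omitted.

```cpp
#include <d3d12.h>

// Hypothetical helper: back one tile of a reserved texture with one tile of a heap.
// 'reservedTexture' is assumed to come from CreateReservedResource, and
// 'physicalHeap' is a pool of 64 KiB tiles.
void MapOneShadowPage(ID3D12CommandQueue* queue,
                      ID3D12Resource*     reservedTexture,
                      ID3D12Heap*         physicalHeap,
                      UINT tileX, UINT tileY,   // tile coordinates within subresource 0
                      UINT heapTileOffset)      // which heap tile backs this page
{
    D3D12_TILED_RESOURCE_COORDINATE coord = {};
    coord.X = tileX;
    coord.Y = tileY;
    coord.Z = 0;
    coord.Subresource = 0;

    D3D12_TILE_REGION_SIZE regionSize = {};
    regionSize.NumTiles = 1;
    regionSize.UseBox   = TRUE;
    regionSize.Width    = 1;
    regionSize.Height   = 1;
    regionSize.Depth    = 1;

    D3D12_TILE_RANGE_FLAGS rangeFlag = D3D12_TILE_RANGE_FLAG_NONE;
    UINT rangeTileCount = 1;

    // Each call like this re-specifies page-table mappings on the queue; the
    // per-update cost here is the overhead that a hardware translation table
    // (TR-TT) could in principle reduce.
    queue->UpdateTileMappings(
        reservedTexture,
        1, &coord, &regionSize,            // one region, one tile
        physicalHeap,
        1, &rangeFlag,
        &heapTileOffset, &rangeTileCount,
        D3D12_TILE_MAPPING_FLAG_NONE);
}
```

Note that even with all pages mapped correctly, none of this changes what the rasterizer does, which is the core problem above: unmapped regions still get rasterized and discarded late.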