That seems to be the basis for this Nvidia patent. Page misses on texture requests are serviced for subsequent frames, while the current frame falls back to the next best texel that is already available in local memory.
That is exactly what I personally would want from the hardware (from a developer perspective). I don't want to be checking whether TEX instructions fail in the shader (that is insane IMO), and I also don't want my shader to have to be restarted with different MIP clamps per "subtile" after a page fault just to keep computation efficient (i.e., to ensure that I either don't have to check at all, or don't take repeated page faults on future shader accesses).
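To make that concrete, here's a rough C++ model of the fallback I'm describing (all the names and sizes here are mine, not from the patent): on a miss, the sampler walks up the mip chain to the finest resident page, samples that, and queues the missed page so later frames can fill it in.

```cpp
// Rough, hypothetical model of "next best resident texel" fallback.
// Real hardware would do this inside the texture unit; this is just a sketch.
#include <cstdint>
#include <queue>
#include <vector>

constexpr int kBaseSize = 16384; // texels per side at mip 0 (assumed)
constexpr int kPageSize = 128;   // texels per page edge (assumed)
constexpr int kMipCount = 8;     // mips tracked by the page table (assumed)

struct VirtualTexture {
    // residency[mip] is a bitmap of pages present in local memory.
    std::vector<std::vector<bool>> residency;
    std::queue<uint64_t> missQueue;   // pages to stream in for later frames

    VirtualTexture() : residency(kMipCount) {
        for (int m = 0; m < kMipCount; ++m) {
            int pages = (kBaseSize >> m) / kPageSize;
            residency[m].assign(size_t(pages) * pages, false);
        }
    }

    // Returns the mip level that will actually be sampled this frame.
    int sample(int x, int y, int wantedMip) {
        for (int m = wantedMip; m < kMipCount; ++m) {
            int pages = (kBaseSize >> m) / kPageSize;
            int px = (x >> m) / kPageSize;
            int py = (y >> m) / kPageSize;
            if (residency[m][size_t(py) * pages + px])
                return m;                         // next best resident texel
            // Record the miss so a later frame can page this level in.
            missQueue.push((uint64_t(m) << 48) | (uint64_t(py) << 24) | uint64_t(px));
        }
        return kMipCount - 1;                     // tail mip assumed resident
    }
};
```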
Found an older thread on LRB, with a final post from TomF on the Molly Rocket Forums:
"-In the second pass (which is now the only pass), you don't need 14(?) shader instructions and 2 texture reads, you just read the texture with a perfectly normal texture instruction. If it works, it gives the shader back filtered RGBA data just like a non-SVT system. If it doesn't (which we hope is rare), it gives the shader back the list of faulting addresses, and the shader puts them in a queue for later and does whatever backup plan it wants (usually resampling at a lower mipmap level)."
Note this says "texture instruction" (talking about doing megatextures on Larrabee compared to current GPUs). The question is: how exactly does the texture unit give the shader a list of faulting addresses?
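Here's how I read that flow, sketched in C++. The sampleOrFault() call and its return type are invented for illustration (TomF doesn't show an actual API), and the stub implementation just lets the sketch stand on its own.

```cpp
// Sketch of the "normal read, with a backup plan on fault" flow TomF describes.
#include <cstdint>
#include <optional>
#include <vector>

struct Texel { float r, g, b, a; };

struct SampleResult {
    std::optional<Texel> filtered;        // set if the read succeeded
    std::vector<uint64_t> faultingAddrs;  // otherwise: what page-faulted
};

constexpr int kCoarsestMip = 14;          // tail mip, assumed always resident
std::vector<uint64_t> g_faultQueue;       // drained by the streamer later

// Stand-in for the hardware/firmware path; real behavior is not public here.
SampleResult sampleOrFault(float /*u*/, float /*v*/, int /*mip*/) {
    return { Texel{1.0f, 1.0f, 1.0f, 1.0f}, {} };
}

Texel shadeTexel(float u, float v, int mip) {
    SampleResult r = sampleOrFault(u, v, mip);
    if (r.filtered)
        return *r.filtered;               // common case: behaves like any TEX

    // Backup plan: queue the faulting addresses for later frames, then walk
    // up the mip chain, which is far more likely to be resident.
    g_faultQueue.insert(g_faultQueue.end(),
                        r.faultingAddrs.begin(), r.faultingAddrs.end());
    for (int m = mip + 1; m <= kCoarsestMip; ++m) {
        SampleResult coarse = sampleOrFault(u, v, m);
        if (coarse.filtered)
            return *coarse.filtered;
    }
    return Texel{0.0f, 0.0f, 0.0f, 1.0f}; // nothing resident at all
}
```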
And later he writes,
"... My understanding is that Rage's lighting model is severely constrained because every texture sample costs them so much performance, they can only read a diffuse and a normal map - they just can't afford anything fancier..."
I just don't buy this later comment. I'd bet that if they are limited to diffuse and normal, it's because they lack the room to store all the data (DVDs on the 360), or the ability to decompress and recompress enough of it to meet the real-time streaming requirements, or the ability to do high enough quality re-compression to pack more into two DXT5s. They should be able to get diffuse + mono spec into one, and a two-channel normal with two channels for something else in the other (the Insomniac trick for detail maps) ...
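For what it's worth, here's the kind of packing I have in mind, sketched in C++. The exact channel assignments are my guess, not what id or Insomniac actually ship.

```cpp
// One possible two-DXT5 layout (assumed, for illustration):
//   DXT5 #1: RGB = diffuse, A = monochrome specular level
//   DXT5 #2: G/A = normal X/Y (the usual DXT5 swizzle), R/B = two spare
//            channels, e.g. detail-map blend weights.
#include <cmath>

struct Unpacked {
    float diffuse[3];
    float specLevel;
    float normal[3];
    float spare[2];
};

Unpacked unpack(const float tex1[4], const float tex2[4]) {
    Unpacked o;
    o.diffuse[0] = tex1[0]; o.diffuse[1] = tex1[1]; o.diffuse[2] = tex1[2];
    o.specLevel  = tex1[3];

    // Reconstruct Z from the two stored normal components.
    float nx = tex2[1] * 2.0f - 1.0f;     // G
    float ny = tex2[3] * 2.0f - 1.0f;     // A
    float nz = std::sqrt(std::fmax(0.0f, 1.0f - nx * nx - ny * ny));
    o.normal[0] = nx; o.normal[1] = ny; o.normal[2] = nz;

    o.spare[0] = tex2[0];                 // R
    o.spare[1] = tex2[2];                 // B
    return o;
}
```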
I'll be clearer about my original point. Going back to the texture size limitation: DX11's 16384x16384 max isn't enough, even with virtual texturing, to do megatextures with a single texture. And with just two DXT5s, that's 4GB of data, i.e. the full 32-bit address space. So you'd likely still need a level of indirection in the shader to get around this problem (beyond optionally dealing with software page faults). And this is exactly why I'm not sold on virtual paging for megatexturing, unless the card supports 64-bit addressing for texture memory, at which point I could split my megatexture up into many tiny megatextures and more draw calls.
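To show what I mean by that indirection, here's a minimal sketch (sizes and names are illustrative only): a small page-table texture maps virtual megatexture UVs to a page in a physical cache texture, and the shader does that remap before every sample.

```cpp
// Minimal virtual-to-physical UV remap, modeled on the CPU for clarity.
#include <cstdint>
#include <vector>

constexpr int kVirtPages = 1024;   // virtual texture is kVirtPages^2 pages
constexpr int kPhysPages = 32;     // physical cache is kPhysPages^2 pages

struct PageEntry { uint8_t physX, physY, mipBias, valid; };

std::vector<PageEntry> g_pageTable(size_t(kVirtPages) * kVirtPages);

// Virtual UV in [0,1) -> physical cache UV in [0,1).
void virtualToPhysical(float u, float v, float& pu, float& pv) {
    int vpx = int(u * kVirtPages);
    int vpy = int(v * kVirtPages);
    const PageEntry& e = g_pageTable[size_t(vpy) * kVirtPages + vpx];

    // Fractional position inside the virtual page, reused inside the
    // physical page (ignoring mipBias and page borders to keep this short).
    float fu = u * kVirtPages - vpx;
    float fv = v * kVirtPages - vpy;
    pu = (e.physX + fu) / kPhysPages;
    pv = (e.physY + fv) / kPhysPages;
}
```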
In light of the above problem, I think LRB's virtual texture paging would make more sense with a more classical engine "material system" like, say, Unreal's, where you still use tiled textures + a lightmap, or Uncharted's, with its dual tiled textures + blend mask. But in this case, if on LRB my shader has to deal with page faults, I'd likely want to factor all that work out into a texture streamer and manually stream textures, so I don't eat any unnecessary costs (i.e. page faults) during shading ... even if only because I want a consistent rendering cost when, for example, surfaces with textures applied get un-occluded.
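Something like the following is what I have in mind for the streamer side (purely illustrative): requests get queued, and uploads are capped per frame so a burst of newly un-occluded surfaces can't spike the render cost.

```cpp
// Sketch of a manual texture streamer with a per-frame upload budget.
#include <cstdint>
#include <deque>

struct PageRequest { uint32_t textureId; uint16_t mip, pageX, pageY; };

class TextureStreamer {
public:
    void request(const PageRequest& r) { pending_.push_back(r); }

    // Called once per frame; never uploads more than `budget` pages, which
    // keeps the cost consistent regardless of how much just became visible.
    void tick(int budget) {
        while (budget-- > 0 && !pending_.empty()) {
            PageRequest r = pending_.front();
            pending_.pop_front();
            upload(r);
        }
    }

private:
    void upload(const PageRequest& /*r*/) {
        // Read from disk / decompress / recompress / copy into the cache.
    }
    std::deque<PageRequest> pending_;
};
```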
But who knows, I might be singing another song if I were actually playing with the real hardware.