Spatio-temporal upsampling and/vs object-space texture shading - an amateur's view

Nevod

I am no professional in 3D graphics, but I am intrigued by these approaches and what they might lead to. I'd like to hear from specialists and other knowledgeable people about them.

For those not familiar with the methods, here's a link to the spatio-temporal upsampling paper:

http://www.mpi-inf.mpg.de/~rherzog/Papers/spatioTemporalUpsampling_preprintI3D2010.pdf

From what I can tell, it is an advanced reprojection approach: it carries over and transforms relatively unchanged pixels from the previous frame, uses a special pattern to pick which pixels to fully render in the areas that need refreshing, and then fills the whole area by upsampling from the sparsely rendered pixels. The pixels to render are chosen based on how much detail the area contains. Samples are accumulated over time, which yields the temporal part of the upsampling.
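To make the loop concrete, here is a minimal toy sketch of the reproject / sparsely-refresh / fill idea, assuming a static camera (so reprojection degenerates to a copy) and a hypothetical stand-in shade() function; the actual paper warps by per-pixel motion vectors and uses an edge-aware spatio-temporal filter rather than the nearest-sample fill used here:

```python
# Toy sketch of reproject -> sparse refresh -> upsample (not the paper's
# actual filter); shade() and the static-camera reprojection are stand-ins.
import numpy as np
from scipy.ndimage import distance_transform_edt

H, W = 120, 160            # tiny framebuffer for illustration
SPARSITY = 16              # fully shade 1 of every 16 pixels per frame

def shade(ys, xs, t):
    """Stand-in for the expensive per-pixel shader (hypothetical)."""
    return np.sin(0.1 * xs + t) * np.cos(0.1 * ys)

ys, xs = np.mgrid[0:H, 0:W]
prev = None
for t in range(4):
    curr = np.zeros((H, W))
    valid = np.zeros((H, W), dtype=bool)

    # 1. Reprojection: carry over still-valid pixels from the last frame.
    #    With a static camera this is a copy; a real renderer would warp
    #    by per-pixel motion vectors and reject disoccluded pixels.
    if prev is not None:
        curr, valid = prev.copy(), np.ones((H, W), dtype=bool)

    # 2. Sparse refresh: re-shade a regular pixel pattern whose offset
    #    shifts every frame, so fresh samples accumulate over time.
    mask = ((ys * W + xs) % SPARSITY) == (t % SPARSITY)
    curr[mask] = shade(ys[mask], xs[mask], t)
    valid |= mask

    # 3. Upsampling: fill remaining holes from the nearest shaded sample
    #    (the paper uses a geometry-aware spatio-temporal filter here).
    if not valid.all():
        iy, ix = distance_transform_edt(~valid, return_distances=False,
                                        return_indices=True)
        curr = curr[iy, ix]
    prev = curr
```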

Object space shading has been mentioned by Timothy Lottes:
http://timothylottes.blogspot.ru/2013/02/virtual-texture-space-shading-multi.html
It suggests shading textures in object space, independently of the rendering pass, with the rendering step just sampling from fully shaded textures and doing little further processing (mostly). That decouples shading from the frame rate, and with some tricks it could reduce the required shading rate below that of standard approaches. Timothy mentions that the textures used in a frame total about 4x the framebuffer size for sufficient quality, so object-space shading without any reuse would cost about the same as 4x SSAA.
Funnily enough, until that exact moment I thought this was actually how rendering is done - i.e. everything is processed in object space incrementally rather than regenerated from scratch each frame, and the scene is then just projected onto the display plane.
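Here is a toy sketch of that decoupling, under heavy assumptions (a single UV-mapped object, a hypothetical shade_texel(), and a fake "rasterizer" that just samples the texture): shading runs at its own rate while the display pass samples the latest finished texture every frame.

```python
# Toy sketch: shading runs in texture space at 15 Hz while display samples
# the shaded texture at 60 Hz; shade_texel() and the mapping are stand-ins.
import numpy as np

TEX = 256
texture = np.zeros((TEX, TEX))
tv, tu = np.mgrid[0:TEX, 0:TEX] / TEX             # texel UV coordinates

def shade_texel(u, v, t):
    """Expensive lighting evaluated in object/texture space (stand-in)."""
    return 0.5 + 0.5 * np.sin(8 * u + t) * np.sin(8 * v)

def shading_pass(t):
    """Runs at its own rate, decoupled from the display refresh."""
    texture[:] = shade_texel(tu, tv, t)

def render_pass():
    """Runs every display frame: only projects and *samples*, no shading."""
    sv, su = np.mgrid[0:90, 0:160]                # fake screen->UV mapping
    u, v = su / 160.0, sv / 90.0
    return texture[(v * (TEX - 1)).astype(int), (u * (TEX - 1)).astype(int)]

for frame in range(60):                           # one simulated second
    if frame % 4 == 0:                            # shading at only 15 Hz...
        shading_pass(frame / 60.0)
    image = render_pass()                         # ...display stays at 60 Hz
```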

I'd like to assess the performance of these methods and guess at how they might complement each other. Again, as I'm no specialist, this is just guesstimation.

The upsampling method shows framerate improvements of 10-15x. Assuming (out of the blue) that half of the frame time goes to rendering new pixels and the other half to the method's own analysis, that is a 20-30x reduction in shading rate. Given that we don't really need exceedingly high FPS, we could spend the savings on more effects instead.
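Spelling that arithmetic out (the inputs are my own guesses from above, not measured numbers):

```python
# The 10-15x reported speedup, doubled under the assumed 50/50 split
# between shading new pixels and the method's own analysis overhead.
speedup_lo, speedup_hi = 10, 15
render_fraction = 0.5                      # assumed, not measured
print(speedup_lo / render_fraction,        # 20.0
      speedup_hi / render_fraction)        # 30.0
```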

Now, how would that compare with fully persistent object-space shading? Assuming that projecting already-shaded pixels is free and that there is nothing view-dependent to shade, only some static global illumination, we could just shade textures when they are loaded by the virtual texturing (VT) system.
I'll try to estimate the two main situations that cause new textures to be fetched: camera movement and camera rotation.
Let's say our VT system tracks camera speed and doesn't load overly detailed LODs when the camera moves fast. As a rough estimate, 4 wedges of 7 textures at a typical 128x128 resolution have to be loaded and preprocessed at the nearest LOD each second. Other LODs update at lower rates, but just to be safe, let's say all the distant LODs together add the same fetch rate as the nearest one.
2*4*7*128*128 = 917504 texels per second, about 44% of a 1080p frame. That is roughly 1/9 of the texture cache supposedly required for 1080p, so refreshing to a completely new screen takes about 9 seconds - which seems about right for most cases. At 60 fps this works out to about 15300 texels per frame, roughly 1/135 of the 1080p shading rate with no multi/supersampling AA. The advantage seems insanely high, but that is the ideal case. Clearly, though, object-space shading has its point even with reprojection in use.
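The same numbers as a worked check (every constant here is an assumption from the paragraphs above):

```python
# Fetch-rate guesstimate: 4 wedges x 7 textures of 128x128 at the nearest
# LOD per second, doubled to cover all coarser LODs (assumptions, not data).
texels_per_second = 2 * 4 * 7 * 128 * 128       # 917504
screen = 1920 * 1080                            # 2073600 pixels (1080p)
cache = 4 * screen                              # Lottes' 4x framebuffer rule

print(texels_per_second / screen)               # ~0.44 of a frame per second
print(cache / texels_per_second)                # ~9 s to refill the cache
per_frame = texels_per_second / 60              # ~15292 texels per frame
print(screen / per_frame)                       # ~136x below full-rate shading
```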

Now, camera rotation. Assuming an FPS case, the camera rotation rate should be limited by a human's turn rate. I can turn 720 degrees in one second at best, which is 12 degrees per frame at 60 fps. At a 90-degree FOV, that exposes 1/7.5 of the screen width each frame, and with the 4x texel-to-pixel ratio from above it costs about 4/7.5 of the normal shading rate. That is clearly not much of a saving over normal shading. Hence, to provide a proper advantage, the VT system would have to hold a texture cache not only for the FOV but for the whole 360 degrees around the camera. Although at such a high turn speed motion blur is heavy enough that much coarser LODs would suffice, we still don't have time to shade off-screen content the way we do during movement, as we have to display a proper image on the very next frame after the turn stops.
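And the rotation case, assuming the 4x texel-to-pixel ratio applies to the newly exposed strip:

```python
# New screen area exposed per frame by a fast turn, and its texel cost.
turn_rate = 720.0                    # deg/s, aggressive human turn speed
fps, fov = 60.0, 90.0
deg_per_frame = turn_rate / fps      # 12 degrees per frame
new_fraction = deg_per_frame / fov   # 1/7.5 of the screen width
texel_cost = 4 * new_fraction        # 4x texel-to-pixel ratio (assumed)
print(new_fraction, texel_cost)      # ~0.133, ~0.53 of full shading rate
```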

How could the reprojection/upsampling method help here? Since a frame usually contains view-dependent shading, it's probably pointless to do that part in object space. Reprojection would reduce the number of pixels we have to fully shade each frame, leaving more time for VT shading. The fast-turn case above could also be cushioned by reprojection.

The worst situation for both methods is a scene-wide change of lighting - when some global source changes color or luminosity, or moves. Rapid changes take the maximum toll, as they force a complete reshade of the whole scene, but they can be masked by post-processing that carefully exploits the peculiarities of human vision. Slow changes could be amortized by reprojection. During such a change, object-space shading might not be used at all, as it is better suited to more or less static lighting.
For example, take a scene with several torches in a room. The room is shaded in the VT system; since the torches don't move relative to it, the cost is well amortized over time, and global illumination could be used. Then a player picks up one of the torches and moves. Processing that torch's illumination would move from the VT system to the standard render path. Before that, the torch's light could probably be 'unshaded' from the cached textures, so that the light from the other torches is reused and only the moving torch is processed in real time (amortized by the reprojection system). If there's enough processing power, GI could be used for it; if not, some AO.
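A sketch of that 'unshading' idea, under the assumption that cached lighting is stored additively per texel, so one light's baked contribution can simply be subtracted when it turns dynamic (GI bounce light would make the subtraction only approximate); all names here are hypothetical:

```python
# Additive light cache: baked = sum of static contributions, so a torch
# that starts moving can be 'unshaded' by subtracting its baked term.
import numpy as np

TEX = 128
tv, tu = np.mgrid[0:TEX, 0:TEX] / TEX
PHASES = {"torch_a": 1.0, "torch_b": 3.0}        # hypothetical scene setup

def light_contribution(name, t=0.0):
    """Stand-in for one light's per-texel direct irradiance."""
    p = PHASES[name]
    return 0.3 * (1 + np.sin(4 * tu + p) * np.sin(4 * tv + t))

static = set(PHASES)
baked = sum(light_contribution(n) for n in static)   # VT-cached lighting

def pick_up(name):
    """Player grabs a torch: subtract its baked term, go dynamic."""
    global baked
    baked = baked - light_contribution(name)     # 'unshade' cached texels
    static.discard(name)

pick_up("torch_a")
for frame in range(3):   # per frame, only the moving torch is re-lit;
    lighting = baked + light_contribution("torch_a", frame / 60.0)
```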

I'd like to know whether I'm right in at least some of these assumptions - and if not, what other problems exist with these approaches?
 