Does it ever get sharp? Would it be possible to update hardware texture scaling with something better than bi/trilinear too, and maybe have some gaussian element? Using a super-low-resolution buffer still ends up with obvious texel squares. Presently we need to sample in the shaders, but as it's such a frequent feature, and probably more so going forwards, is there value in updating the texture sampling process to get better results at the atomic level?
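To make "better than bilinear" concrete, here's a minimal CPU-side sketch of cubic B-spline filtering over a single-channel texture. All names here are made up for illustration; in a shader you'd build the same thing out of a handful of fetches, and the point of the question is whether hardware could do it natively.

```c
/* Sketch: cubic B-spline filtering over a tiny single-channel texture.
 * A 4x4 weighted sum instead of bilinear's 2x2, which hides the hard
 * 'texel square' look at the cost of some blur. Names are illustrative. */
#include <math.h>
#include <stdio.h>

#define W 4
#define H 4

static float tex[H][W] = {        /* tiny test texture */
    {0, 0, 0, 0},
    {0, 1, 1, 0},
    {0, 1, 1, 0},
    {0, 0, 0, 0},
};

static float fetch(int x, int y)  /* clamp-to-edge texel fetch */
{
    x = x < 0 ? 0 : (x >= W ? W - 1 : x);
    y = y < 0 ? 0 : (y >= H ? H - 1 : y);
    return tex[y][x];
}

/* Cubic B-spline weight for a texel at signed distance d (|d| < 2). */
static float bspline(float d)
{
    d = fabsf(d);
    if (d < 1.0f) return (4.0f - 6.0f * d * d + 3.0f * d * d * d) / 6.0f;
    if (d < 2.0f) { float t = 2.0f - d; return t * t * t / 6.0f; }
    return 0.0f;
}

/* Sample at continuous texel coordinates (u, v). */
static float sample_bicubic(float u, float v)
{
    int iu = (int)floorf(u), iv = (int)floorf(v);
    float sum = 0.0f;
    for (int j = -1; j <= 2; ++j)
        for (int i = -1; i <= 2; ++i)
            sum += fetch(iu + i, iv + j) *
                   bspline(u - (float)(iu + i)) *
                   bspline(v - (float)(iv + j));
    return sum;
}

int main(void)
{
    /* Scanning across the texture gives a smooth ramp instead of the
     * piecewise-linear kinks bilinear produces. */
    for (float u = 0.0f; u <= 3.0f; u += 0.25f)
        printf("%.2f -> %.3f\n", u, sample_bicubic(u, 1.5f));
    return 0;
}
```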
To make it sharper, he could increase SM resolution or fall back to unique, traditional SMs where required.
But ofc we always end up with quantization artifacts (just like in voxel-based techniques).
To fix / improve this there are some options (generally speaking, independent of the shown demo):
Prefilter the texel contents, e.g. using MSAA, which is just downsampling / averaging a finer quantization. Or doing it 'right', which loses the performance advantage, similar to the problem of cone tracing, which is better done by averaging many rays.
Hiding hard transitions, e.g. exponential shadow maps, blurring etc. (see the ESM sketch after this list).
Hybrid tech like raytracing problematic texels.
Temporal accumulation - so jitter the SM and average over time like TAA. (I wonder why there is so little research about this, considering it's a similar problem to RT denoising.)
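As a sketch of the "hiding hard transitions" option, here's the exponential shadow map idea in a few lines of CPU code. The constant and helper names are made up; the point is that storing exp(c * depth) turns the depth test into a multiplication, so the map can be blurred / prefiltered before the test.

```c
/* Sketch of the exponential shadow map (ESM) idea: store exp(c * depth)
 * in the map so the visibility test becomes a product that tolerates
 * prefiltering. Constants and helpers are illustrative only. */
#include <math.h>
#include <stdio.h>

#define ESM_C 40.0f   /* sharpness constant; higher = harder shadow edge */

static float esm_store(float occluder_depth)   /* what the ESM stores */
{
    return expf(ESM_C * occluder_depth);
}

/* Classic binary test: 1 = lit, 0 = shadowed (with a small bias). */
static float classic_test(float occluder_depth, float receiver_depth)
{
    return receiver_depth <= occluder_depth + 0.001f ? 1.0f : 0.0f;
}

/* ESM test: exp(-c*receiver) * filtered(exp(c*occluder)), clamped to 1. */
static float esm_test(float filtered_esm, float receiver_depth)
{
    float v = expf(-ESM_C * receiver_depth) * filtered_esm;
    return v > 1.0f ? 1.0f : v;
}

int main(void)
{
    /* A receiver sliding past an occluder at depth 0.45: the classic
     * test flips hard, the ESM result fades. Pretend 'filtered_esm'
     * came from a blurred exp-depth map. */
    float filtered_esm = esm_store(0.45f);
    for (float d = 0.40f; d <= 0.60f; d += 0.02f)
        printf("receiver %.2f  classic %.0f  esm %.3f\n",
               d, classic_test(0.45f, d), esm_test(filtered_esm, d));
    return 0;
}
```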
Personally, this problem was the reason for me to use RT for visibility. I tried little framebuffers as in the ManyLoDs paper, but a light can go from one pixel to zero pixels, and neither can capture the intensity well enough for a robust radiosity solver.
But there are always cases where the problems are acceptable, if only it helps to reduce more expensive rays, for example. The problem I face all the time is this: all the tech / algorithms we have suck. Nothing works. But can we combine some things to get acceptable results with acceptable complexity?
A similar technique was proposed in 2014 for virtual shadow map buffers for many lights, using traditional geometry and heavily tiled shadow maps.
I guess you mean this, or other works of Olsson?:
http://www.cse.chalmers.se/~d00sint/more_efficient/clustered_shadows_tvcg.pdf
That's a good example of my questioning of acceptable complexity. You really end up thinking: "All this, just for efficient shadows?" Ofc. it always depends on what you need.
The big problem I see with SMs is the need to update all of them every frame if they see some movement. We want stochastic updates everywhere to trade IQ/lag vs. constant frame times. RT shadows can have more options here, but they are not perfect either - nothing is... :|
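For what that trade could look like in code, here's a hypothetical sketch: instead of re-rendering every shadow map that saw movement, spend a fixed per-frame budget on the stalest dirty ones and accept some lag. The struct layout, budget, and scoring are all made up.

```c
/* Hypothetical budgeted SM update scheduler: a fixed number of
 * re-renders per frame keeps frame time flat, at the cost of lag for
 * the lights that miss the budget. Everything here is illustrative. */
#include <stdio.h>

#define NUM_LIGHTS         8
#define UPDATES_PER_FRAME  2   /* the fixed budget */

typedef struct {
    int dirty;         /* something moved in this light's frustum */
    int frames_stale;  /* frames since the SM was last re-rendered */
} ShadowMapState;

static void render_shadow_map(int i) { printf("  re-render SM %d\n", i); }

static void update_shadow_maps(ShadowMapState sm[NUM_LIGHTS])
{
    for (int n = 0; n < UPDATES_PER_FRAME; ++n) {
        int best = -1, best_score = -1;
        for (int i = 0; i < NUM_LIGHTS; ++i) {
            if (!sm[i].dirty) continue;
            int score = sm[i].frames_stale;  /* crude priority: stalest wins */
            if (score > best_score) { best_score = score; best = i; }
        }
        if (best < 0) break;                 /* nothing dirty, budget unused */
        render_shadow_map(best);
        sm[best].dirty = 0;
        sm[best].frames_stale = 0;
    }
    for (int i = 0; i < NUM_LIGHTS; ++i)
        if (sm[i].dirty) sm[i].frames_stale++;  /* the rest lag a bit */
}

int main(void)
{
    ShadowMapState sm[NUM_LIGHTS] = {0};
    for (int i = 0; i < 5; ++i) sm[i].dirty = 1;  /* 5 lights saw movement */
    for (int frame = 0; frame < 3; ++frame) {
        printf("frame %d:\n", frame);
        update_shadow_maps(sm);
    }
    return 0;
}
```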
IMG has a bi-cubic Vulkan extension.
I really want this. It could help a lot with making lower resolutions look much better. I guess we'll see this from NV / AMD as well sometime...
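For reference, using it is just a sampler setting. A minimal sketch, assuming the VK_IMG_filter_cubic extension is enabled on the device and the sampled format reports cubic-filter support; the settings beyond the filter are placeholder boilerplate.

```c
/* Sketch: a sampler using the cubic filter from VK_IMG_filter_cubic.
 * Assumes the extension is enabled and the image format supports
 * cubic filtering; everything else here is placeholder boilerplate. */
#include <vulkan/vulkan.h>

VkResult create_cubic_sampler(VkDevice device, VkSampler* out_sampler)
{
    VkSamplerCreateInfo info = {0};
    info.sType        = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO;
    info.magFilter    = VK_FILTER_CUBIC_IMG;   /* cubic magnification */
    info.minFilter    = VK_FILTER_LINEAR;      /* keep minification simple */
    info.mipmapMode   = VK_SAMPLER_MIPMAP_MODE_LINEAR;
    info.addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
    info.addressModeV = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
    info.addressModeW = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE;
    info.anisotropyEnable = VK_FALSE;          /* required with cubic filtering */
    info.maxLod       = VK_LOD_CLAMP_NONE;
    return vkCreateSampler(device, &info, NULL, out_sampler);
}
```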
Cutscenes aren’t gameplay.
Yeah, death to cutscenes! Make games, not movies! Movie makers should be envious of what we can do, not the other way around!
Looking at the fairy UE4 RT movie, it shows a frame time graph. They tailor every scene so they get the EXACT frame time they can use. Interesting - surely a big reason real games never look as good as those (almost pointless) demos.