In terms of the silicon used to implement it, DLSS seems inefficient to me.
whoa, hold up on judgement for a second there.
a) DLSS upscales all the way to 4x the resolution here, and it can scale even higher if desired. It's not just anti-aliasing, it's super sampling, and it's more accurate in the process.
b) DLSS can be done on regular compute as well; it doesn't need to run on tensor cores. But its performance is easily 20x better on them, as with any other workload built around DNN models.
c) If you consider how much the tensor cores can do, it's easy to see them handling RT denoising (which hasn't been enabled yet) alongside DLSS, plus any other AI models: animation, physics, or even on-the-fly content creation.
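To put rough numbers on why tensor throughput matters here, a back-of-envelope sketch. The layer shapes below are made up purely for illustration; the real DLSS network is not public:

```python
# Rough FLOPs for one frame of a hypothetical DLSS-style convolutional
# upscaler producing 4K output (3840x2160). Layer channel counts are
# invented for illustration only.

def conv_flops(h, w, c_in, c_out, k=3):
    """Multiply-add FLOPs for one k x k convolution over an h x w map."""
    return 2 * h * w * c_in * c_out * k * k  # factor 2: one mul + one add

H, W = 2160, 3840
layers = [(3, 32), (32, 32), (32, 32), (32, 3)]  # hypothetical channels

total = sum(conv_flops(H, W, cin, cout) for cin, cout in layers)
print(f"~{total / 1e12:.2f} TFLOPs per frame")
print(f"at 60 fps: ~{total * 60 / 1e12:.0f} TFLOPS sustained")
```

Even this tiny made-up network needs on the order of 20 TFLOPS sustained at 60 fps, which is why running it on dedicated tensor hardware rather than general shader compute makes sense.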
to put things into perspective: Tesla's self-driving uses the Drive PX system, which is 2x Tegra X2 and, I think, possibly 2 Pascal GPUs (not sure about this), but the X2s are 1.5 TF each, so we're looking at a total of < 10 TF for that whole system. The combined FLOPS of that system is nowhere near the tensor-core power of a 2070, rated at 60 tensor TFLOPS. NN accelerators can be built so simply that Tesla is now designing its own ASIC DNN accelerators, because it's more power for less silicon.
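Spelling out that comparison with the post's own (approximate) figures; the per-Pascal-GPU number is a guess, and this mixes FP16 tensor throughput with general-purpose FLOPS:

```python
# Rough FLOP comparison using the figures stated above (approximate).
tegra_x2_tflops = 1.5                 # per Tegra X2, as stated
pascal_gpu_tflops = 3.0               # guess per discrete Pascal GPU
drive_px_tflops = 2 * tegra_x2_tflops + 2 * pascal_gpu_tflops
rtx_2070_tensor_tflops = 60           # FP16 tensor throughput, as stated

print(drive_px_tflops)                # well under 10 TF total
print(rtx_2070_tensor_tflops / drive_px_tflops)  # several times more
```

Even with a generous guess for the Pascal GPUs, the whole Drive PX box lands around 9 TF, several times less than a single 2070's tensor throughput.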
You'd be lucky to get very effective computer vision at such low computing power, let alone to write conventional compute algorithms that solve it. NNs are super efficient not because the algorithm is some magical thing, but because it turns out that performance scales very well with tons of data.
TLDR: you're not going to get 60 TF of NN power out of pure compute, so this is actually very efficient.
The downside is it needs Tensor silicon to implement and a supercomputer to train the neural nets.
This is false, or at least it should be false; if it works like that, it's because Nvidia mandates it for $$$. Once DirectML is released, it will work on any hardware setup. Training can be done on any setup; the faster the setup, the faster the training.
If you have time, I do recommend watching:
http://on-demand.gputechconf.com/si...-gpu-inferencing-directml-and-directx-12.html
Hardware requirement to run DirectML: any GPU that supports DX12.
The hardware in question is very low end; I think it's a laptop, it looks like a Lenovo.
Nvidia only provided the model to be run against the hardware.
AI Hair
AI Denoising for Shadows, AO, and Reflections