Aren't the tensor cores a key component in their RTRT solution for de-noising?
> Aren't the tensor cores a key component in their RTRT solution for de-noising?
Also VR tracking.
hm... nVidia have a VR tracking solution?
And in this scenario, I'm not sure how much power is needed to run AI models, or what the expectation is. I can assume that more power will run things faster, i.e. a higher frame rate or higher resolution, but it's not exactly 1:1. If you add too much power, it may never be used.
> Superscaling GameDVR/streams?
lol sure.
How about training bots to fill in the gaps for larger multiplayer games?
> How about training bots to fill in the gaps for larger multiplayer games?
Thread: Should cloud AI players be created to beef up multiplayer games?
300 Spartans :v
> BFV does not use tensors here - they implemented their own denoising in compute. (Why?)
To make room for the DLSS to be used on the "Tensor Cores".
> To make room for the DLSS to be used on the "Tensor Cores".
I don't believe that was the reason at all. They started implementing RT into their engine long before (8+ months) Turing was available, so their de-noising was developed in compute.
> They started implementing RT into their engine long before (8 months+) Turing was available so their de-noising was developed over compute.
They developed it on Volta, which had Tensor Cores as well. NVIDIA was probably planning DLSS into the mix long before Turing was unveiled.
> They developed it on Volta, which had Tensor Cores as well. NVIDIA was probably planning DLSS into the mix long before Turing was unveiled.
Considering that we have a demo of an iGPU that can do it, I doubt it requires 110 tensor TFLOPS of power to perform DLSS. Pretty sure ML in the middle of a render pipeline is not ready until DirectML is released.
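For what it's worth, the idea being debated here — an upscaling model sitting between the renderer and the display — can be sketched in a few lines. This is a toy illustration only: the `render_low_res` and `upscale_2x` names are made up for the sketch, the upscaler is a nearest-neighbour repeat standing in for a trained super-resolution network, and none of this is DLSS or DirectML code.

```python
import numpy as np

def render_low_res(width, height):
    # Stand-in for the engine rendering a frame at reduced resolution.
    return np.zeros((height, width, 3), dtype=np.float32)

def upscale_2x(frame):
    # Stand-in for an ML super-resolution step (the DLSS-style part).
    # A real pipeline would run a trained network here instead of a
    # nearest-neighbour repeat.
    return frame.repeat(2, axis=0).repeat(2, axis=1)

low = render_low_res(960, 540)   # render internally at 960x540
high = upscale_2x(low)           # present at 1920x1080
print(high.shape)                # (1080, 1920, 3)
```

The point of the sketch is only that the upscale is a discrete stage in the frame loop — which is why the post above argues it needs a first-class API like DirectML to slot in cleanly.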
> considering that we have a demo of a iGPU that can do it, i have little doubt that it requires the usage of 110 TensorFlops of power to perform DLSS.
iGPU running a DLSS-like feature? Care to link? I want to see this.
> iGPU running DLSS-like feature? Care to link, I want to see this
http://on-demand.gputechconf.com/si...-gpu-inferencing-directml-and-directx-12.html
> http://on-demand.gputechconf.com/si...-gpu-inferencing-directml-and-directx-12.html
Cheers, they didn't mention what it was running on though? (at least when skipping to 14x)
I think 14:35 is the actual demo. Everything before it explains how it works and why DirectML exists.
> Cheers, they didn't mention what it was running on though? (at least when skipping to 14x)
It looked like a Lenovo laptop. Lol. That's more or less what I thought it was. I could be wrong though.
> It looked like a Lenovo laptop. Lol. That's more or less what I thought it was. I could be wrong though.
Yeah, but many Lenovo laptops include a dGPU.
Seemed to be running the game at low settings and not super well either. Honestly not sure. They mentioned that it was a seemingly regular setup.
> Yeah, but many Lenovo laptops include a dGPU.
Oh LOL, I didn't know, hahaha. That probably explains it.