Captured from different, resized YT videos:
[Screenshot comparison: Regular vs. DLSS]
Could be driver issues, could be too heavy optimizations. But WTH is that blurfest on the NVIDIA side?
Do you have a supporting link that it's DLSS? It looks more like an artistic choice on the part of the developers.
He just told you he took the screens from a compressed, resized YouTube video.
Not sure where to put this.
New research paper from Nvidia that shows AI-accelerated ray tracing:

We propose neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration. So far, the core challenge of applying the method of control variates has been finding a good approximation of the integrand that is cheap to integrate. We show that a set of neural networks can face that challenge: a normalizing flow that approximates the shape of the integrand and another neural network that infers the solution of the integral equation. We also propose to leverage a neural importance sampler to estimate the difference between the original integrand and the learned control variate. To optimize the resulting parametric estimator, we derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice. When applied to light transport simulation, neural control variates are capable of matching the state-of-the-art performance of other unbiased approaches, while providing means to develop more performant, practical solutions. Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.

Source: https://research.nvidia.com/publication/2020-11_neural-control-variates
Youtube video:
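For context on the "method of control variates" the abstract refers to, here is the standard textbook formulation (my summary, not quoted from the paper): instead of estimating the integral of f directly, one integrates a cheap approximation g analytically and only samples the residual f - g.

```latex
% Standard control-variates estimator (textbook form, not from the paper).
% g approximates f and has a known integral G; x_i are N samples from density p.
\[
  F = \int f(x)\,\mathrm{d}x
    = G + \int \bigl(f(x) - g(x)\bigr)\,\mathrm{d}x
    \approx G + \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i) - g(x_i)}{p(x_i)}
\]
```

The estimator stays unbiased for any g, and its variance shrinks as g approaches f. In the abstract's terms, the normalizing flow models g, a second network infers G, and the neural importance sampler provides p for sampling the residual.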
Extremely smart
4.3 Dynamic Programming
While the naive algorithm is straightforward, it is also woefully inefficient. For instance, n = 7 results in a total of 1.15 million recursive function calls and an even larger number of temporary solutions that are immediately discarded afterwards. To transform the algorithm into a more efficient form that produces an identical result, we make three important modifications to it:
• Remove the recursion and perform the computation in a predetermined order instead.
• Represent S and P as bitmasks, where each bit indicates whether the corresponding leaf is included in the set.
• Memoize the optimal solution for each subset, using the bitmasks as array indices.
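To make those three modifications concrete, here is a minimal C++ sketch of the resulting bottom-up computation. This is my illustration, not the paper's code: leafCost and mergeCost are hypothetical placeholders for the paper's actual SAH cost terms.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical placeholder costs; a real implementation would derive
// these from the leaves' bounding boxes (SAH surface-area terms).
float leafCost(int /*leaf*/)    { return 1.0f; }
float mergeCost(uint32_t /*s*/) { return 1.0f; }

// Optimal cost for every subset of n leaves, indexed by bitmask.
// With n = 7 there are only 2^7 = 128 entries to fill.
std::vector<float> optimalCosts(int n) {
    const uint32_t full = (1u << n) - 1;
    std::vector<float> opt(full + 1, 0.0f);  // memoized solutions

    // Base cases: singleton subsets.
    for (int i = 0; i < n; ++i)
        opt[1u << i] = leafCost(i);

    // Predetermined order: each subset depends only on numerically
    // smaller bitmasks, so one increasing loop replaces the recursion.
    for (uint32_t s = 3; s <= full; ++s) {
        if ((s & (s - 1)) == 0) continue;  // skip singletons, already done
        float best = std::numeric_limits<float>::infinity();
        // Enumerate each split of s into partition p and complement s ^ p,
        // visiting every unordered pair {p, s ^ p} exactly once.
        for (uint32_t p = (s - 1) & s; p > (s ^ p); p = (p - 1) & s)
            best = std::min(best, opt[p] + opt[s ^ p]);
        opt[s] = best + mergeCost(s);  // cost of the merged node
    }
    return opt;
}
```

A practical version would also record, for each subset, the partition p that achieved the minimum, so the optimal tree can be reconstructed by walking the table back down from the full set.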
It would be interesting to see what benefit DPX could bring to this.

I doubt NVidia has waited 9 years to put this research into practice.
TRBVH (the technique described in the paper) was actually used in OptiX until version 6.x, but I don't know if it's still used in 7.x, since the API changed entirely to support hardware ray tracing.
OptiX 7 dates from 2019; the bitwise instructions in Turing look like candidates for implementing these techniques, or at the very least some of them.
Now we know Nvidia has a patent on hardware SBVH traversal, and they have DPX instructions that could potentially accelerate TRBVH construction. Even if the performance isn't good enough for games, they could bring it back for OptiX anyway.
So what does DPX bring, relevant to TRBVH, that isn't already in Turing?
No DXR/VK support though.
Why though? A Vk extension seems like a good idea.
Yeah, of course it's possible with a vendor-specific extension.
I don't get it: why isn't a DXR 1.1 inline shader that does programmable traversal enough?