Nvidia Turing Product Reviews and Previews: (Super, Ti, 2080, 2070, 2060, 1660, etc.)

First RTX On OctaneBench results by Otoy. Up to 3x higher scores depending on the scene.

Jules Urbach also writes: "This speed is scene specific, and most scenes don't get more than 1.25x RTX with path tracing. That might change with more tuning (we never thought we'd get near 3x in any real world scene a few months back). I would say it's safe for 1080 Ti users for a while, as it will take many releases before we get RTX finalized for RNDR."

 
Oh, it seems we already forgot he claimed an 8x speedup when Turing was launched.
This guy has a reputation for overblowing, second only to Jen-Hsun himself :)
He did compare it to Pascal and not to Turing without RTX, so that might be part of the cause.
And yes, early limited tests are not 'always' indicative of the final performance. :D
 
He did compare it to Pascal and not to Turing without RTX, so that might be part of the cause.
And yes, early limited tests are not 'always' indicative of the final performance. :D
Voxilla is right though. Urbach is known to be an overhyping lunatic in the VFX industry.
 
Improving Temporal Antialiasing with Adaptive Ray Tracing (Presented by NVIDIA)

We discuss a pragmatic approach to real-time supersampling that extends common temporal antialiasing techniques with adaptive ray tracing. We have integrated our solution into Unreal Engine 4, and demonstrate how it removes the blurring and ghosting artifacts associated with standard temporal antialiasing, achieves quality approaching 16x supersampling, and operates within a 16ms frame budget.

Takeaway
Attendees will learn how to add next generation antialiasing to their game engine by improving TAA with adaptive real-time ray tracing.

https://schedule.gdconf.com/session...aptive-ray-tracing-presented-by-nvidia/865249
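The adaptive part is the interesting bit: plain TAA blends the history buffer everywhere, while this approach only spends rays on the pixels where the history has to be rejected. A minimal sketch of that selection logic (my own illustration, not NVIDIA's implementation; the function name and threshold are made up):

```python
# Sketch of adaptive-ray-traced TAA pixel classification. Where the
# reprojected history is trustworthy, do a cheap temporal blend; where
# it isn't (the pixels that would blur or ghost), queue the pixel for
# ray-traced supersampling instead.

def adaptive_taa(current, history, threshold=0.1, blend=0.9):
    """Return (resolved_frame, rays_needed) for a 1-D strip of pixels."""
    resolved = []
    rays_needed = []  # indices of pixels that get adaptive rays
    for i, (c, h) in enumerate(zip(current, history)):
        if abs(c - h) > threshold:
            # History rejected: plain TAA would ghost here, so this
            # pixel is queued for ray-traced supersampling.
            rays_needed.append(i)
            resolved.append(c)  # placeholder until the rays resolve it
        else:
            # History accepted: standard exponential temporal blend.
            resolved.append(blend * h + (1.0 - blend) * c)
    return resolved, rays_needed
```

In the real renderer the flagged pixels would be handed to the ray tracer; the point is that only a small fraction of pixels per frame should need rays, which is how it can fit a 16 ms budget.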

 
Voxilla is right though. Urbach is known to be an overhyping lunatic in the VFX industry.
Good to know; I'll take an additional pinch of salt in future.
Improving Temporal Antialiasing with Adaptive Ray Tracing (Presented by NVIDIA)



https://schedule.gdconf.com/session...aptive-ray-tracing-presented-by-nvidia/865249
Will be interesting to see how this works.
Although I do wonder how usable this will be in complex scenes.
 
February 18, 2019
Nvidia recently released a new version of OptiX, which finally adds support for the much-hyped RTX cores on the Turing GPUs (RTX 2080, Quadro RTX 8000, etc.), which provide hardware acceleration for ray-BVH and ray-triangle intersections.

First results are quite promising. One user reports a speedup between 4x and 5x when using the RTX cores (compared to not using them). Another interesting revelation is that the speedup gets larger with higher scene complexity (geometry-wise, not shading-wise):
http://raytracey.blogspot.com/2019/02/nvidia-release-optix-60-with-support.html
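For context on what those units actually accelerate: the per-ray inner loop is BVH traversal plus a ray-triangle test for each candidate triangle. A plain-software version of the classic Möller-Trumbore intersection test (my own sketch, not OptiX code) shows the arithmetic that otherwise runs on the shader cores for every ray, which is also why the speedup grows with geometric complexity:

```python
# Möller-Trumbore ray-triangle intersection in pure Python.
# This is the per-triangle work the RT cores execute in fixed-function
# hardware; without them it runs as shader ALU code, millions of times
# per frame.

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                         a[2]*b[0]-a[0]*b[2],
                         a[0]*b[1]-a[1]*b[0])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-8):
    """Return hit distance t along the ray, or None if there is no hit."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direc, q) * inv       # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv          # distance along the ray
    return t if t > eps else None
```

The BVH traversal wrapped around this test is the other half of what the hardware offloads, and it's the half that dominates as scene geometry grows.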
 
If it's not just a database artifact that attributes a fixed number of TCs to each full n/64 of the shader count, similar to TMUs.
 
Can we expand the thread title to incorporate the 1660 Ti? If it gets its own thread, crossposting will ensue.
 
If it's not just a database artifact that attributes a fixed number of TCs to each full n/64 of the shader count, similar to TMUs.

If the estimates people have made in this thread about the die size are correct, TU116 scales fairly linearly with the CUDA core count compared to TU106. Perhaps it even has the RT cores, but disabled.
 
If the estimates people have made in this thread about the die size are correct, TU116 scales fairly linearly with the CUDA core count compared to TU106. Perhaps it even has the RT cores, but disabled.
Or it has a higher SM count with disabled units, and the area difference comes from the BVH and tensor units.

Or a combination of the two.
 
Personally I would expect the 1660 Ti to be fully enabled, especially if they have two lower bins of the chip, but yeah, it's a possibility.
 