Nvidia Turing Architecture [2018]

Ice Lake (i7-1065G7) does. Pre-everything (drivers, 3DMark VRS feature test, etc.): "3DMark VRS 66.89 fps (VRS off) - 94.09 fps (VRS on)" in its 25 W configuration, so around a 40% uplift.
 
At SIGGRAPH 2019, RTX was introduced to the following apps:

  • Adobe Substance Painter relies on RTX ray tracing to accelerate baking, up to 192× faster than CPU baking according to Nvidia.
  • Autodesk Flame: RTX Tensor Cores improve visual effects and compositing workflows by accelerating Autodesk's new machine learning technology in Flame v2020.1, helping artists isolate, extract, and change common objects in moving footage.
  • Blender Cycles: Nvidia OptiX 7 (RTX) with CUDA accelerates Blender’s open-source renderer.
  • Dimension 5 D5 Fusion: RTX ray tracing via UE4’s implementation of DXR allows architects and designers to simulate ground truth lighting and shadows.
  • Daz 3D Daz Studio: Nvidia Iray allows creators to assemble scenes with interactive RTX accelerated ray tracing to quickly build their artistic composition and render out in full fidelity.
  • Foundry MODO: RTX performance through OptiX in the completely redesigned MODO path-traced renderer offers a significant performance boost over CPU rendering.
  • Luxion KeyShot: RTX accelerated ray tracing and AI denoising for photorealistic visualization of 3D data for product design reviews, marketing, animations, illustrations and more via OptiX support in KeyShot 9.
  • Adobe Dimension CC and Chaos Group V-Ray let users create fluidly and with great realism, powered by the RT Cores in RTX hardware. Nvidia says Adobe Substance Designer can deliver speed increases of up to 200 percent compared with earlier CPU-based baking by integrating RTX through DXR for light baking.
  • In addition, Blackmagic Design and Nvidia say DaVinci Resolve's DaVinci Neural Engine implements its AI features using Nvidia's AI libraries, with inferencing accelerated by the Tensor Cores on Nvidia RTX GPUs.

    https://www.jonpeddie.com/news/nvidia-brings-ray-tracing-and-ai-to-siggraph-again
[Image: Blender RTX performance comparison]
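
Several of the integrations above (Blender Cycles, MODO, KeyShot) go through Nvidia's OptiX 7 API. As a rough illustration of what that looks like on the application side, here is a minimal, hypothetical sketch of OptiX 7 initialization and context creation; it is not taken from any of the renderers listed, and error handling is trimmed.

Code:
#include <cuda_runtime.h>
#include <optix.h>
#include <optix_function_table_definition.h>  // defines the OptiX function table (include in one TU only)
#include <optix_stubs.h>
#include <cstdio>

int main() {
    // Initialize CUDA implicitly and load the OptiX 7 entry points from the driver.
    cudaFree(nullptr);
    if (optixInit() != OPTIX_SUCCESS) {
        std::fprintf(stderr, "OptiX 7 not available (driver too old or no supported GPU)\n");
        return 1;
    }

    // Create a device context on the current CUDA context (0 = "use current").
    OptixDeviceContextOptions options = {};
    OptixDeviceContext context = nullptr;
    if (optixDeviceContextCreate(0, &options, &context) != OPTIX_SUCCESS) {
        std::fprintf(stderr, "Failed to create OptiX device context\n");
        return 1;
    }

    // A real renderer would now build acceleration structures and a pipeline
    // (raygen/miss/hit programs) and launch rays; RT Cores accelerate the traversal.
    std::printf("OptiX 7 context created\n");
    optixDeviceContextDestroy(context);
    return 0;
}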
 
3DMark for Windows update v2.11.6846 available - UL adds Variable-Rate Shading Tier 2
December 5, 2019
Today, UL is adding a new option to use a more versatile and sophisticated form of Variable-Rate Shading, Tier 2, in the VRS feature test.
...
With Variable-Rate Shading, a single pixel shader operation can be applied to a block of pixels, for example shading a 4×4 block of pixels with one operation rather than 16 separate operations. By applying the technique carefully, VRS can deliver a big performance boost with little impact on visual quality. With VRS, games can run at higher frame rates, in a higher resolution, or with higher quality settings. You need Windows 10 version 1903 or later and a DirectX 12 GPU that supports Variable-Rate Shading to run the 3DMark VRS feature test. Tier 1 VRS is supported by NVIDIA Turing-based GPUs and Intel Ice Lake CPUs.

Tier 2 VRS is currently only available on NVIDIA Turing-based GPUs.
https://www.guru3d.com/news-story/download-3dmark-for-windows-v2-11-6846-available.html
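
For reference, a minimal sketch of what the two VRS tiers look like from the D3D12 side; this is not 3DMark's code. It assumes cmdList (an ID3D12GraphicsCommandList5) and shadingRateImage were created elsewhere, and the 4x4 rate additionally requires the AdditionalShadingRatesSupported cap.

Code:
#include <windows.h>
#include <d3d12.h>

void setCoarseShading(ID3D12GraphicsCommandList5* cmdList, ID3D12Resource* shadingRateImage)
{
    // Tier 1: one per-draw base rate. One pixel shader invocation covers a 4x4 block,
    // i.e. 1 invocation instead of 16.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_4X4, nullptr);
    // ... draw geometry that tolerates coarse shading, then restore full rate:
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);

    // Tier 2: a screen-space shading-rate image (R8_UINT, one texel per tile) chooses the
    // rate per region; combiners decide how per-draw, per-primitive and image rates merge.
    D3D12_SHADING_RATE_COMBINER combiners[D3D12_RS_SET_SHADING_RATE_COMBINER_COUNT] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // per-draw vs. per-primitive rate
        D3D12_SHADING_RATE_COMBINER_OVERRIDE     // let the shading-rate image win
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(shadingRateImage);
}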
 
Tested using https://www.testufo.com/ghosting#ba...tion=160&pps=960&graphics=bbufo.png&pursuit=1

Recorded via NVENC in GeForce Experience and NVENC + x264 (CPU) in OBS, running on a GTX 1660 SUPER, driver 441.41, Windows 10 1909:

  • NVENC (OBS)

  • NVENC (GeForce Experience)

  • x264 (CPU, OBS)
Recordings made using NVENC show stutters, while the recording made with x264 on the CPU was smooth.

Googling around, people have been complaining about NVENC stutter since the first half of 2019:

https://www.google.com/search?q=nvenc+stutter

https://www.google.com/search?q=RTX+nvenc+stutter



Is NVENC on Turing (RTX and the GTX 1660 SUPER) defective on some cards? How's yours?
 
3DMark for Windows update v2.11.6846 available - UL adds Variable-Rate Shading Tier 2
Nice.

With high-quality VRS the visual drop is pretty much invisible, but the performance gain is very scene dependent.
Medium gets quite a nice boost, with a small amount of smoothing in places.
Low quality is quite blurry on some details, yet high-contrast areas keep their look quite nicely.

Would love to see a VRS demo with proper TAA; pretty sure that with materials which handle minification properly this would look very nice.
Oh, and DoF/motion blur with VRS to reduce shading of very blurry areas.
 
Re: the NVENC recording stutter on Turing reported above:

Workaround fix: limit the fps to 60 via the RTSS frame limiter.
 
Nvidia has some short videos on RT basics. The 3rd one talks a bit about RT core capabilities.

Yup, really nice series.
The 2nd video has a nice comparison of rasterization and RT. (Doesn't do the usual "RT simulates light" crap.)

One interesting thing in the 3rd video is the limit of one level of instancing; I hope we will see improvement there in the future.
I remember seeing a more in-depth video on this subject, but sadly couldn't find it again. (Basically, RT needs compute when going from layer to layer.)
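
For context on the one-level-of-instancing point: DXR uses a fixed two-level acceleration structure, where top-level (TLAS) instances can only point at bottom-level structures (BLAS), never at another TLAS. A hedged sketch of the D3D12 structures involved, with the buffer and BLAS addresses assumed to exist elsewhere:

Code:
#include <windows.h>
#include <d3d12.h>

// Fill one TLAS instance that references an already-built BLAS.
D3D12_RAYTRACING_INSTANCE_DESC makeInstance(D3D12_GPU_VIRTUAL_ADDRESS blasAddress)
{
    D3D12_RAYTRACING_INSTANCE_DESC inst = {};
    // Row-major 3x4 object-to-world transform (identity here).
    inst.Transform[0][0] = inst.Transform[1][1] = inst.Transform[2][2] = 1.0f;
    inst.InstanceID = 0;
    inst.InstanceMask = 0xFF;
    inst.InstanceContributionToHitGroupIndex = 0;
    inst.Flags = D3D12_RAYTRACING_INSTANCE_FLAG_NONE;
    // Must point at a bottom-level AS; a TLAS cannot be instanced inside another TLAS,
    // which is the single-level-of-instancing limit discussed above.
    inst.AccelerationStructure = blasAddress;
    return inst;
}

// TLAS build inputs over an array of such instance descs living in a GPU buffer.
D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS makeTlasInputs(
    D3D12_GPU_VIRTUAL_ADDRESS instanceBuffer, UINT instanceCount)
{
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_TOP_LEVEL;
    inputs.Flags = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs = instanceCount;
    inputs.InstanceDescs = instanceBuffer;
    return inputs;
}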
 
Nvidia GeForce GPU refresh on the cards as Asus lists 8GB RTX 2060
It looks like Asus is planning on updating the Nvidia RTX 2060 with a full 8GB of 14Gbps GDDR6 video memory. That will push it further ahead of the latest Radeon, the AMD RX 5600 XT, and could even potentially put it on par with the AMD RX 5700. It’s possible that this is coming directly from Nvidia, and there will be 8GB versions of the RTX 2060 coming from all of the green team’s graphics card partners, but so far we’ve only seen details of three different Asus Republic of Gamer cards.
[Image: Asus ROG RTX 2060 8GB card]


So, where would these new 8GB cards sit? At the moment, thanks to our graphics card comparison charts, you can see that the current RTX 2060 sits in between the RX 5600 XT and the RX 5700. With a bit of a memory upgrade you could see the newer edition pulling alongside the higher spec AMD Navi card and leaving the other in its wake.

https://www.pcgamesn.com/nvidia/geforce-rtx-2060-8gb-asus-rog-refresh
 
AnandTech: DirectX 12 Ultimate feature set
March 19, 2020
To be sure, what’s being announced today isn’t a new API – even the features being discussed today technically aren’t new – but rather it’s a newly defined feature set that wraps up several features that Microsoft and its partners have been working on over the past few years. This includes DirectX Raytracing, Variable Rate Shading, Mesh Shaders and Sampler Feedback. Most of these features have been available in some form for a time now as separate features within DirectX 12, but the creation of DirectX 12 Ultimate marks their official promotion from in-development or early adaptor status to being ready for the masses at large.
...
All told – and much to the glee of NVIDIA – DirectX 12 Ultimate’s feature set ends up looking a whole heck of a lot like their Turing architecture’s graphics feature set. Ray tracing, mesh shading, and variable rate shading were all introduced for the first time on Turing, and this represents the current cutting edge for GPU graphics functionality. Consequently, it’s no mistake that this new feature level, which Microsoft is internally calling 12_2, follows the Turing blueprint so closely. Feature levels are a collaboration between Microsoft and all of the GPU vendors, with feature levels representing a common set of features that everyone can agree to support.

Ultimately, this collaboration and timing means that there is already current-generation hardware out there that meets the requirements for 12_2 with NVIDIA’s GeForce 16 and 20 series (Turing) products. And while AMD and Intel are a bit farther behind the curve, they’ll get there as well. In fact in a lot of ways AMD’s forthcoming RDNA2 architecture, which has been at the heart of this week’s console announcements, will serve as the counterbalance to Turing as far as 12_2 goes. This is a feature set that crosses PCs and consoles, and while NVIDIA may dominate the PC space, what AMD is doing with RDNA2 is defining an entire generation of consoles for years to come.
https://www.anandtech.com/show/15637/microsoft-intros-directx-12-ultimate-next-gen-feature-set
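
As a rough illustration (not from the article), this is approximately how an application can test for the 12_2 / DirectX 12 Ultimate feature set by querying the four headline features individually; it assumes a recent Windows SDK and an already-created device:

Code:
#include <windows.h>
#include <d3d12.h>

// Approximate check for the DirectX 12 Ultimate feature set on an existing device.
bool supportsDx12Ultimate(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {}; // ray tracing
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {}; // variable-rate shading
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {}; // mesh shaders, sampler feedback

    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6))) ||
        FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7))))
        return false;

    return opts5.RaytracingTier          >= D3D12_RAYTRACING_TIER_1_1
        && opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2
        && opts7.MeshShaderTier          >= D3D12_MESH_SHADER_TIER_1
        && opts7.SamplerFeedbackTier     >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;
}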
 
DLSS 2.0 is officially announced

DLSS 2.0 Features :

Superior Image Quality - DLSS 2.0 offers native resolution image quality using half the pixels. It employs new temporal accumulation techniques for sharper image details and improved stability from frame to frame.

Customizable Options - DLSS 2.0 offers users 3 image quality modes (Quality, Balanced, Performance) that control render resolution, with Performance mode now enabling up to a 4X super resolution.

Great Scaling Across All RTX GPUs and Resolutions - a new, faster AI model more efficiently uses Tensor Cores to execute 2X faster than the original, improving frame rates and removing restrictions on supported GPUs, settings, and resolutions.

One Network for All Games - While the original DLSS required per-game training, DLSS 2.0 offers a generalized AI network that removes the need to train for each specific game. This means faster game integrations and more DLSS titles.
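
To put rough numbers on the "half the pixels" and "4X super resolution" claims, here is a back-of-the-envelope sketch using commonly cited per-axis render-scale factors for the three modes (about 0.667 / 0.58 / 0.5); these factors are assumptions, and the exact values come from the DLSS SDK rather than this announcement:

Code:
#include <cstdio>

struct Mode { const char* name; float scale; }; // scale = per-axis render/output ratio (assumed)

int main() {
    const int outW = 3840, outH = 2160; // 4K output as an example
    const Mode modes[] = { {"Quality", 0.667f}, {"Balanced", 0.58f}, {"Performance", 0.5f} };
    for (const Mode& m : modes) {
        const int inW = static_cast<int>(outW * m.scale);
        const int inH = static_cast<int>(outH * m.scale);
        const double pixelRatio = double(outW) * outH / (double(inW) * inH);
        // Performance mode: 1920x1080 -> 3840x2160, i.e. 4x the pixels ("4X super resolution").
        std::printf("%-12s renders %4dx%4d for %dx%d output (%.1fx pixel upscale)\n",
                    m.name, inW, inH, outW, outH, pixelRatio);
    }
    return 0;
}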

DLSS 2.0 is already used in Wolfenstein: Youngblood and Deliver Us The Moon, and the results are very impressive. MechWarrior 5 is based on Unreal Engine 4 and will get the DLSS 2.0 treatment through the new SDK.

It was a long wait, but it looks like this feature will be implemented in a lot of upcoming games.

Edit: oh, I forgot to mention that DLSS 2.0 no longer needs to be trained per game, which will accelerate adoption.
 
DLSS was criticized when it launched, but now it seems really amazing. Is this tech also in the XSX, or is that a long shot since it's NV-based? There are big performance gains to be had.
 
This specific implementation is NVIDIA-specific, but Microsoft has demonstrated similar super-resolution tech on DirectML, which would run on any compatible hardware, including the XSX.
 