Nvidia Turing Product Reviews and Previews (Super, Ti, 2080, 2070, 2060, 1660, etc.)

RTX operates on top of DXR/Vulkan; it doesn't replace them. Based on everything shown so far, all of it should work regardless of your video card manufacturer, assuming it supports DXR. Vulkan might be trickier since there's no standardized way yet (NVIDIA is offering their extensions for this, I think).

Correct, Microsoft's DirectX Raytracing (DXR) and Nvidia's RTX are two different things. And that is my overall point...

If you want to make use of Turing's properties, you will need Nvidia's proprietary API.
 
No, that's what we're trying to point out, you don't.
That's right. All RT operations go through DXR and are cross-vendor/GPU-architecture compatible, but on Turing GPUs some of the calls are automatically translated to OptiX (CUDA) through the driver and accelerated by the (still mysterious) RT Cores.
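To make that concrete, here's a rough sketch (my own illustration, not code from any of these projects) of how an application checks for ray tracing support: it only ever talks to plain Direct3D 12, and whether the driver routes the work to RT Cores or to some other path is invisible to the app.

```cpp
// Minimal sketch of a cross-vendor DXR capability check through plain D3D12.
// No vendor-specific API involved; the driver decides how the work is run.
// Build with: cl /EHsc dxr_check.cpp d3d12.lib
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    // ID3D12Device5 is the interface that exposes the DXR entry points
    // (CreateStateObject, acceleration-structure builds, DispatchRays, ...).
    ComPtr<ID3D12Device5> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No suitable D3D12 device.\n");
        return 1;
    }

    // The cross-vendor capability check: the app asks "does this driver
    // expose DXR?", never "is this an RTX card?".
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &options5, sizeof(options5))) &&
        options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
    {
        std::printf("DXR supported (tier %d).\n",
                    static_cast<int>(options5.RaytracingTier));
    }
    else
    {
        std::printf("DXR not supported on this device/driver.\n");
    }
    return 0;
}
```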

Here's an example with Chaos Group's (V-Ray) Project Lavina real-time RT renderer, which interestingly doesn't use OptiX AI denoising but their own cross-vendor AI denoising solution (V-Ray Next does support both for production rendering):


"What is Lavina built on?

Project Lavina is written entirely within DXR, which allows it to run on GPUs from multiple vendors while taking advantage of the RT Core within the upcoming NVIDIA “RTX” class of Turing GPUs. You will notice that there’s no noise or “convergence” happening on the frames, which is thanks to a new, real-time Chaos denoiser written in HLSL that also allows it to run on almost any GPU. With this, we aim to eventually deliver noise-free ray tracing at speeds and resolution suitable for a VR headset with Lavina."


"So, how much faster is it?

Lavina is already seeing a big boost from the RT Core on NVIDIA’s Turing GPU. How much of an increase is a little tricky to calculate right now because we don’t have a version that doesn’t use it. But, with some sleuthing, we believe we’re seeing about a doubling of performance beyond what the new GPU generation is already giving us – which is the equivalent of leapfrogging several years in hardware evolution. One thing’s for sure: the performance curve plotted by Moore’s Law just got a vertical stair step added to it, and Chaos is set to exploit it!"

https://www.chaosgroup.com/blog/ray-traced-tendering-accelerates-to-real-time-with-project-lavina
 
Given how long it usually takes for new techniques to make it into games, even if they shipped software a year after getting access to production hardware it would still be impressive. I really don’t understand what people are complaining about (except the prices lol). Developers seem to be really stoked at the prospect of having another powerful tool in the box and I’m optimistic we’ll see exciting stuff soon.

These "complaints other than price" you mention, where are they?

All I saw was a completely wrong statement that was promptly corrected. Raytracing on BFV didn't take 2 weeks. It took 8 months.
Where are the posts complaining about 8 months being too long?
 
I love that graphic preset 'Overkill' in the Cyberpunk menu! :cool:
 
Cyberpunk 2077 will support both NVIDIA's RTX and NVIDIA's HairWorks, as shown in the graphics menu captured in yet-to-be-released footage:
https://www.dsogaming.com/news/cybe...-real-time-ray-tracing-and-nvidias-hairworks/

I haven't checked, but I hope HairWorks has significantly improved since The Witcher 3; weird strands of rope aren't hair :(

That being said, I do hope raytraced reflections show up in Cyberpunk. It looks great, except for how diffuse everything is and how many reflections are missing. The efficacy of raytraced reflections in smaller enclosed levels without time-of-day changes, like BFV's, is questionable, especially given how much they cost (ooh look, a puddle has somewhat better reflections!). But for open-world games they're a good solution to a very difficult problem.
 
NVIDIA GeForce RTX 2080 3DMark TimeSpy Score Leaked – Clocked At 2GHz And Beats A GTX 1080 Ti Without AI Cores

NVIDIA RTX 2080 Time Spy preliminary benchmark leaked – 37% faster than a GTX 1080 and 6% faster than a GTX 1080 Ti without using AI core (DLSS)
...
Since synthetics cannot at this time take advantage of DLSS, it is worth noting that a very large part of the die (namely the Tensor cores) is not being used during this run. This means that what you are looking at is probably the lower bracket of the performance uplift you can expect from Turing.
[Image: NVIDIA GeForce RTX 2080 Time Spy benchmark leak]

https://wccftech.com/nvidia-geforce...ghz-and-beats-a-gtx-1080-ti-without-ai-cores/
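For anyone who wants to sanity-check headline percentages like these once real scores surface, the math is just a ratio of Time Spy graphics scores. A throwaway sketch below; the numbers in it are placeholders I picked to roughly match the quoted ratios, not the actual leaked results.

```cpp
// Throwaway sketch of the arithmetic behind "37% faster than a 1080, 6%
// faster than a 1080 Ti". Scores are placeholders, NOT the leaked figures.
#include <cstdio>

// Percentage uplift of `newer` over `older`.
static double uplift_pct(double newer, double older)
{
    return (newer / older - 1.0) * 100.0;
}

int main()
{
    const double gtx1080_score   = 7350.0;  // placeholder graphics score
    const double gtx1080ti_score = 9500.0;  // placeholder graphics score
    const double rtx2080_score   = 10050.0; // placeholder graphics score

    std::printf("2080 vs 1080:    +%.0f%%\n",
                uplift_pct(rtx2080_score, gtx1080_score));
    std::printf("2080 vs 1080 Ti: +%.0f%%\n",
                uplift_pct(rtx2080_score, gtx1080ti_score));
    return 0;
}
```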
 
If that's the 2080, that's pretty concerning about the performance. I would expect it to beat the 1080Ti at stock clocks.

If that's the 2070, sure that seems great.
 
If that's the 2080, that's pretty concerning about the performance. I would expect it to beat the 1080Ti at stock clocks.

If that's the 2070, sure that seems reasonable.

None of those clocks appear to be at stock for any of the cards. Not exactly sure what that pic is showing. Are those max boost clocks?
 
Expectations are quite high if people want the 2070 to be faster than a 1080ti IMO.
 
Well, Pascal is two years old (even if the Ti is only one), and Nvidia basically skipped Volta and is releasing an n+2 gen.
How does Turing qualify as an n+2 gen? Both Volta and Turing would have been in development concurrently, and they both use the same process and a very similar structure. Just because Volta was released last year doesn't make it a complete generation between Pascal and Turing. What makes a "generation" is a significant architecture change, usually combined with a process advancement.

Volta should just be ignored, especially since there was no consumer version.
 
How does Turing qualify as an n+2 gen? Both Volta and Turing would have been in development concurrently, and they both use the same process and a very similar structure. Just because Volta was released last year doesn't make it a complete generation between Pascal and Turing. What makes a "generation" is a significant architecture change, usually combined with a process advancement.

Volta should just be ignored, especially since there was no consumer version.


My guess is they delayed Volta / cancelled it for gamers and kept pushing Pascal because Vega was a dud. So they worked on Turing instead and are delivering it now. In a world where AMD / RTG were competitive, it would have been Vega vs Volta, and they would have waited for 7nm to release Turing against AMD's next-gen thing.
But without anyone in front of them, they did this.

And for me Turing is a gen after Volta because of RT; it's a pretty big change...

Yes of course, it's only my opinion.
 
My guess is they delayed Volta / cancelled it for gamers and kept pushing Pascal because Vega was a dud. So they worked on Turing instead and are delivering it now. In a world where AMD / RTG were competitive, it would have been Vega vs Volta, and they would have waited for 7nm to release Turing against AMD's next-gen thing.
But without anyone in front of them, they did this.

And for me Turing is a gen after Volta because of RT; it's a pretty big change...

Yes of course, it's only my opinion.

Why would Volta be a dud? Turing is basically Volta + RT + Tensor, so Volta rasterization performance with the same number of SMs should be very similar, with a smaller die size.

If Volta were a dud, then so is Turing.

It does seem, though, that Nvidia recalibrated its releases after seeing Pascal's competition. This was probably the best time to gamble on RT transistors.

Thanks @Geeforcer for the fix, meant Volta not Vega :)
 