Nvidia Turing Speculation thread [2018]

RT is definitely not a cure-all. Even in the BFV demo there was a lot of low-detail and flat geometry. Then there's the relatively poor state of physics and the low levels of interactivity across the board.

A ton of developer and artist effort in the past few years has been focused on really smart and really complex ways to fake good lighting and shadows. Hopefully much of that energy can now be repurposed to other important things.
 
I wonder if Nvidia will come up with a decently fast / optimized DXR driver for Pascal, or if instead the DXR fallback layer (known to be slow; more of a reference implementation) will be used.
Also wondering if the raytracing comparisons between Turing and Pascal that were shown were based on the DXR fallback.
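For reference, an app can query at runtime whether the driver exposes hardware DXR at all. A minimal sketch, assuming an already-created D3D12 device and the Windows 10 (1809) SDK headers; the feature query is the real D3D12 API, the helper name is mine:

```cpp
#include <d3d12.h>

// Returns true if the driver reports native DXR support; false means an
// app would have to use Microsoft's open-source compute-shader fallback
// layer instead.
bool HasNativeDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false; // pre-DXR runtime: no raytracing support at all

    return options5.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}
```

So the question boils down to whether NV ever flips Pascal from NOT_SUPPORTED to TIER_1_0 in the driver, or leaves non-Turing cards to the fallback layer.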
 
I wonder if Nvidia will come up with a decently fast / optimized DXR driver for Pascal, or if instead the DXR fallback layer (known to be slow; more of a reference implementation) will be used.
Also wondering if the raytracing comparisons between Turing and Pascal that were shown were based on the DXR fallback.
Yup, the question is: will any non-Turing GPU (NV and AMD) get proper DXR drivers, or simply use the fallback path?
But don't pay too much attention to any data point or performance info that isn't from a third party with no financial interest in the matter. Those raytracing comparisons are meaningless without any info on the scenes used, the build versions of the demos, etc.
E.g. the Star Wars Reflections demo was different from the one at GDC (it was most probably optimized & tweaked a lot since March):
GDC (old): [screenshot pMUoQtp.jpg]

Siggraph/GC (new): [screenshot XvB3ib0.jpg]
 
Yup, the question is: will any non-Turing GPU (NV and AMD) get proper DXR drivers, or simply use the fallback path?
Of course they will, once they have the hardware. MS doesn't have a habit of building a completely new DirectX API component for one vendor.
 
Of course they will, once they have the hardware. MS doesn't have a habit of building a completely new DirectX API component for one vendor.
Unless we're misunderstanding each other here, I'm talking about current non-Turing GPUs (Pascal, Volta, Fiji, Polaris, Vega, etc.) getting publicly available DXR drivers, not future ones.
 
Unless we're misunderstanding each other here, I'm talking about current non-Turing GPUs (Pascal, Volta, Fiji, Polaris, Vega, etc.) getting publicly available DXR drivers, not future ones.
Oh, sorry, I did misunderstand.

Yup, the question is: will any non-Turing GPU (NV and AMD) get proper DXR drivers, or simply use the fallback path?
But don't pay too much attention to any data point or performance info that isn't from a third party with no financial interest in the matter. Those raytracing comparisons are meaningless without any info on the scenes used, the build versions of the demos, etc.
E.g. the Star Wars Reflections demo was different from the one at GDC (it was most probably optimized & tweaked a lot since March):
GDC (old): [screenshot pMUoQtp.jpg]

Siggraph/GC (new): [screenshot XvB3ib0.jpg]

Forget the pointed-out door - look at the stormtrooper in front; that's like a night and day difference. Was the Turing demo actually running at low res and upscaled?
 

This looks good, but there's a lot of whining about performance. For RTX, though, this is no different than a launch title for a new console generation. Of course performance isn't going to be great: devs are working with beta drivers, beta tools (as far as these features go), and new hardware. Over time people will learn what works best and how to utilize the RT cores most efficiently. This freaking out about 1080p and sub-60fps is complete nonsense.


Maybe someone can correct me if I'm wrong, but my understanding is that the gigarays performance quote is in addition (parallel) to the shader performance: you get ~14 TFLOPS from the shaders and 8 gigarays from the RT cores. At least that's how NVIDIA portrayed it in the graphs. Could be total bullshit...
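Taking the quoted number at face value, here's a quick back-of-envelope sketch (my own arithmetic, assuming the 8 gigarays/s figure is a sustainable peak rather than a marketing best case):

```cpp
#include <cstdio>

int main()
{
    const double rays_per_second = 8e9;       // NVIDIA's quoted 8 gigarays/s
    const double pixels = 1920.0 * 1080.0;    // 1080p frame
    const double frames_per_second = 60.0;

    // Peak ray budget per pixel per frame at 1080p60:
    const double rays_per_pixel = rays_per_second / (pixels * frames_per_second);
    std::printf("~%.0f rays per pixel per frame\n", rays_per_pixel); // ~64
    return 0;
}
```

~64 rays per pixel per frame sounds like a lot, but shadows, reflections and GI each eat several rays, and real traversal of real scenes is unlikely to hit the peak figure - which would line up with the sub-60fps numbers people are complaining about.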
 
This freaking out about 1080p and sub-60fps is complete nonsense.
When it comes down to it, many PC graphics enthusiasts - me being one of them - have no trouble running games locked @30fps, or unlocked up to 60fps, if it means access to better visuals. We already have several games where a consistent locked 1440p60 is impossible on a 1080Ti @Max visuals.

You can't consistently run Kingdom Come Deliverance at max settings @1440p60 even on a 1080Ti; for better consistency your only options are to lock to 30 or bear the sub-60 performance. You face the same choice in Ark Survival Evolved if you run at Epic visuals. Same for Gears Of War 4 with Insane Reflections and DoF, where 1440p60 is unattainable on a 1080Ti. Quantum Break suffers the same fate if you run it at Ultra with non-scaled resolution; 1440p60 is not really possible on a 1080Ti.

Final Fantasy 15, Agents Of Mayhem and Watch_Dogs 2 also can't maintain 1440p60 @Max settings with GameWorks effects, mainly VXAO and HFTS. Alone or together they really take a toll on the fps, especially if you run them with TXAA.

Some people also upgrade visuals at the cost of performance through mods, especially in games like Fallout 4, Skyrim, STALKER, GTA 4 and GTA 5. Some run close to 100 mods in Skyrim and Fallout 4 for a vastly superior visual experience, but performance @1440p suffers greatly.

During this early period, running 1440p@30 locked with RTX, or 1440p unlocked up to 60fps, is totally doable for these people - in single-player games, of course.
 
I wonder if Nvidia will come up with a decently fast / optimized DXR driver for Pascal, or if instead the DXR fallback layer (known to be slow; more of a reference implementation) will be used.
Also wondering if the raytracing comparisons between Turing and Pascal that were shown were based on the DXR fallback.

I would be shocked if they did. There’s no reason to spend engineering hours optimizing a DXR compute shader path that would still be very slow.

It would be interesting to know whether the RT cores are programmable in any way. From a layman's point of view, BVH construction and traversal seem pretty similar to voxelization and cone tracing. Could SVOGI / VXGI / VXAO potentially get a boost from the RT cores as well?
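For anyone unfamiliar, a minimal generic BVH traversal sketch is below - a textbook version of my own, not NVIDIA's actual RT core design, which isn't publicly documented. The tight loop of box tests and node fetches is what dedicated hardware would accelerate, and it is structurally similar to walking a voxel octree for cone tracing, hence the question:

```cpp
#include <algorithm>
#include <vector>

struct BvhNode {
    float bmin[3], bmax[3];       // node bounding box
    int   left = -1, right = -1;  // child indices; -1 on both marks a leaf
    int   primitive = -1;         // primitive index stored in leaves
};

// Standard slab test: does the ray (origin, 1/direction) hit the node's box?
static bool RayHitsBox(const BvhNode& n, const float org[3], const float invDir[3])
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (n.bmin[a] - org[a]) * invDir[a];
        float t1 = (n.bmax[a] - org[a]) * invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Returns the first leaf primitive whose bounds the ray enters, or -1.
// (The exact triangle test at the leaf is elided; nodes[0] is the root.)
int TraceRay(const std::vector<BvhNode>& nodes,
             const float org[3], const float invDir[3])
{
    std::vector<int> stack = {0};
    while (!stack.empty()) {
        int idx = stack.back();
        stack.pop_back();
        const BvhNode& n = nodes[idx];
        if (!RayHitsBox(n, org, invDir))
            continue;                      // prune this whole subtree
        if (n.left < 0 && n.right < 0)
            return n.primitive;            // leaf reached
        stack.push_back(n.left);           // interior: visit both children
        stack.push_back(n.right);
    }
    return -1;                             // ray escaped the scene
}
```

Whether Turing's fixed-function unit is general enough to walk a sparse voxel octree instead of a triangle BVH is exactly the open question.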
 
That's true, but I'm just not expecting that in the first round of 7nm chips.

That will likely depend on AMD's competitive position at the time. If Navi breaks through the 4096 SP limit of GCN then we just might have a battle on our hands. If not, then I'm inclined to agree with you. I think in that case NV would return to the release pattern of previous cycles - midsize GPU first, then big GPU 9 months or so later.
 
Way too much chit-chat, IMO, and actually very little detail on the questions that were asked.

And this recent attitude of "thanks to RT everything feels real and like you're actually in the game, so before this moment everything was crap" is kind of annoying.

It's kind of true. To a degree, at least. Lighting was the last unconquered realm of real-time graphics IMHO, in terms of crossing over into the uncanny valley. The performance probably won't be there on 1st-gen hardware, but I could see ray tracing being here to stay, unlike, say, stereoscopic 3D or perhaps even VR.
 
Yup, the question is: will any non-Turing GPU (NV and AMD) get proper DXR drivers, or simply use the fallback path?
But don't pay too much attention to any data point or performance info that isn't from a third party with no financial interest in the matter. Those raytracing comparisons are meaningless without any info on the scenes used, the build versions of the demos, etc.
E.g. the Star Wars Reflections demo was different from the one at GDC (it was most probably optimized & tweaked a lot since March):
GDC (old): [screenshot pMUoQtp.jpg]

Siggraph/GC (new): [screenshot XvB3ib0.jpg]

Details and shadowing seem worse, but the reflections are actually better. WTF! LOL.
 
I'm not convinced raytracing can't be fast on Pascal.

If the current performance numbers are correct, raytracing isn't really fast even on Turing... how could it be on Pascal, without dedicated RT cores?
 