What I'm very curious about is what NV's next (Ampere) will pack. Their RT from 2018 seems comparable to RDNA2 in ray tracing (judging by the XSX and PS5 showcases). I guess that Ampere will only improve from there.
I'm finding it hard to get direct comparisons of RT performance, as all we've got at the moment is the MS Minecraft demo, vs lots of stuff online with DLSS involved! I get the feeling that pure RT performance from RDNA2 at the XSX level is probably above a 2060 (maybe?), but I guess we'll find out whether Nvidia's RT cores have less impact on the rest of the GPU for hybrid rendering in due course.
Ampere will be very interesting, and I'm personally eyeing it up for my next GPU. Given the way DLSS is coming on, it's going to be hard to justify RDNA2 unless it's a lot cheaper.
Part of the reason I said 720p or 900p was because it doesn't have the same level of tensor performance as RTX cards.
Would be nice to have an indication of whether it would have enough to realistically do it from 720p or lower.
Even if it was based on what is currently the best-case scenario of DLSS 2.0.
As you say, it would also be interesting to hear how motion vectors can be integrated into engines, and what else they can be used for, etc.
It's really hard to know, isn't it? DF have an interesting DLSS graph that shows that even though the 2080Ti has about double the INT8 performance of the 2060S, it's only about 50% faster at 1080p DLSS output from 540p (I think). As the base resolution and output resolution increase, more of the 2080Ti's Tensor performance seems to come into play. Maybe there's some fixed function element getting in the way at lower resolutions.
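For what it's worth, here's the sort of fixed element I mean, as a quick sketch (purely my own toy model, not anything DF have said): if the DLSS pass were a fixed chunk plus a part that scales with Tensor INT8 rate, roughly a third of the 2060S's DLSS time would need to be fixed for 2x the throughput to only show up as ~1.5x.

```python
# Toy model (my assumption, not DF's): DLSS time = fixed overhead + part that scales with INT8 rate.
# How big would the fixed part need to be for a card with 2x the INT8 rate to only be ~1.5x faster?

def fixed_fraction(throughput_ratio, observed_speedup):
    """Fraction of the slower card's DLSS time that would have to be fixed overhead."""
    # Normalise the slower card's time: fixed + scaled = 1
    # Faster card: fixed + scaled / throughput_ratio = 1 / observed_speedup
    scaled = (1 - 1 / observed_speedup) / (1 - 1 / throughput_ratio)
    return 1 - scaled

# 2080Ti at ~2x the 2060S INT8 rate, but only ~1.5x faster at 540p -> 1080p (per DF's graph)
print(fixed_fraction(2.0, 1.5))  # ~0.33, i.e. about a third of the time would be fixed
```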
The 2060S's Tensor cores should be about 7 times faster at INT8 than a hypothetical Lockhart at 4TF FP32, but that's pitting the entire MS GPU against just the Tensor cores in the Nvidia chip. But then again, at a base resolution (before MLSS) of 540p you'd probably have low utilisation of the 3D pipeline and be able to make good use of async compute to regain some of the utilisation lost to 540p rendering (huge pixels compared to polygons, so inefficient for rasterisation and all that).
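Roughly where I'm pulling "about 7 times" from, by the way. These are all back-of-envelope numbers and assumptions on my part (including Lockhart getting 4:1 packed INT8 on RDNA2), so happy to be corrected:

```python
# Back-of-envelope INT8 comparison -- all rough/assumed figures.
tensor_cores = 272                  # RTX 2060 Super: 34 SMs x 8 Tensor cores
boost_clock_ghz = 1.65
fp16_ops_per_core_per_clock = 128   # 64 FMAs per Tensor core per clock
tensor_fp16_tflops = tensor_cores * fp16_ops_per_core_per_clock * boost_clock_ghz / 1000  # ~57
tensor_int8_tops = tensor_fp16_tflops * 2                                                 # ~115

lockhart_fp32_tflops = 4.0
lockhart_int8_tops = lockhart_fp32_tflops * 4   # assuming 4:1 packed INT8 for RDNA2

print(tensor_int8_tops / lockhart_int8_tops)    # ~7.2
```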
While we're on the speculation train, let's stay on for one more stop!
The 2060S in Death Stranding is giving DF figures of a 0.736 ms cost for the DLSS pass. Taking this at face value (assuming it's not a separate stage with the full cost hidden), if ML upscaling could only use half your Lockhart GPU, and the figures between the two are directly comparable (probably not), that'd be about 14 x 0.736 ms = ~10.3 ms. Or less than one third of a 30 fps frame. Would this be better than native 1080p or 900p with sharpening for a 30 fps game?
Errr ... maybe? (And it might let you get away with shockingly low res textures and less time lost to RT too...)
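Spelling out the napkin maths from above, with the same caveats about the figures probably not being directly comparable (and the 7x ratio being my own guess):

```python
# Napkin maths for a hypothetical Lockhart ML upscale cost -- all speculative inputs.
dlss_cost_2060s_ms = 0.736     # DF's Death Stranding figure for the 2060S
int8_ratio = 7                 # 2060S Tensor INT8 vs a 4TF Lockhart, from the earlier guess
gpu_share_for_ml = 0.5         # assume the upscale only gets half the Lockhart GPU

lockhart_upscale_ms = dlss_cost_2060s_ms * int8_ratio / gpu_share_for_ml
frame_budget_30fps_ms = 1000 / 30

print(lockhart_upscale_ms)                          # ~10.3 ms
print(lockhart_upscale_ms / frame_budget_30fps_ms)  # ~0.31 -> under a third of the frame
```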