It's not, unfortunately.
The latest testing on Final Fantasy 15 shows a hit of about 35% versus native 1440p; however, it still gives about 25% more performance than native 4K with TAA.
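Just to show how those two numbers relate, with native 1440p normalised to 100 (throwaway arithmetic, the percentages are the ones quoted above):

```cpp
// Relative framerates implied by the quoted percentages (illustrative only).
#include <cstdio>

int main()
{
    const double native1440 = 100.0;                      // baseline
    const double dlss4k     = native1440 * (1.0 - 0.35);  // ~35% hit -> 65
    const double native4k   = dlss4k / 1.25;              // DLSS is ~25% faster than this -> 52

    printf("native 1440p: %.0f\n", native1440);
    printf("DLSS 4K     : %.0f\n", dlss4k);
    printf("native 4K   : %.0f\n", native4k);
    // So DLSS 4K lands roughly halfway between native 1440p and native 4K.
    return 0;
}
```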
35% is a whole lot, but ultimately irrelevant for me. I would rather spend that performance on something more useful than upscaling (notice that with texture space shading, upscaling is no longer necessary).
I have been following AI-based tech. I remember papers about SSAO, AA and other screen-space techniques done with AI. They took twice the time and delivered half the image quality.
Spending 35% of the GPU's power just on upscaling is a joke. But how can this happen at all? I would assume the Tensor cores process the previous frame asynchronously while the rest of the GPU works on the next. Another disappointment.
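To illustrate why that surprises me, here is the back-of-the-envelope version with made-up numbers (10 ms of shading assumed; the upscale cost is whatever a 35% fps hit implies; post-processing at output resolution is ignored):

```cpp
// Serial vs. overlapped upscaling, with assumed numbers (not measurements).
#include <algorithm>
#include <cstdio>

int main()
{
    const double shade_ms   = 10.0;                    // assumed shading time per frame
    const double upscale_ms = shade_ms * 0.35 / 0.65;  // ~5.4 ms: what a 35% fps hit implies
                                                       // if upscaling runs back to back with shading

    const double serial_ms  = shade_ms + upscale_ms;            // what we apparently get
    const double overlap_ms = std::max(shade_ms, upscale_ms);   // what full async overlap would give

    printf("serial : %.1f ms -> %.1f fps\n", serial_ms,  1000.0 / serial_ms);   // ~15.4 ms, ~65 fps
    printf("overlap: %.1f ms -> %.1f fps\n", overlap_ms, 1000.0 / overlap_ms);  // 10.0 ms, 100 fps
    return 0;
}
```

If the upscale of frame N really ran fully async next to the shading of frame N+1, throughput would barely drop; the observed ~35% hit looks much more like the serial case.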
I really can't get excited about any of this. This is like Apple selling one-button solutions to the masses as a status symbol: 'It's hip, easy and it works - show you can afford it'.
That matches quite well with the 29 FPS I calculated (plus or minus rounding error).
Sorry if I did not follow your thoughts exactly; the same goes for David regarding the 400% performance increase with Turing. Sure, there is improvement, but in the end it does not contribute enough.
I need to make one more correction - this time not to the math or the numbers, just to the unit names:
2070 at 19.8 fps: (2.5 x 1.4 = 3.5 MPix) / 50 ms per frame = 0.07 MPix of RT work per millisecond
1080 Ti at 10.1 fps: (3.8 x 2.1 = 7.98 MPix) / 100 ms per frame = 0.079 MPix of RT work per millisecond
My math here is much easier to grasp than yours:
The GTX 1080 Ti traces roughly twice the pixels and gets half the framerate, thus both are equally fast.
Period. And there is no need to look at what affects those sums in detail and how. Only the sums matter.
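For anyone who finds those units confusing, here is the same back-of-the-envelope metric spelled out (the resolutions and frame times are the rounded values from above, so treat it as illustration, not a benchmark):

```cpp
// "RT work per millisecond": megapixels traced per ms of frame time.
#include <cstdio>

static double rtWorkPerMs(double widthK, double heightK, double msPerFrame)
{
    return (widthK * heightK) / msPerFrame;   // MPix / ms
}

int main()
{
    // 2070 at 1440p (~19.8 fps -> ~50 ms), 1080 Ti at 4K (~10.1 fps -> ~100 ms)
    printf("2070   : %.4f MPix/ms\n", rtWorkPerMs(2.5, 1.4, 50.0));   // 0.0700
    printf("1080 Ti: %.4f MPix/ms\n", rtWorkPerMs(3.8, 2.1, 100.0));  // 0.0798
    return 0;
}
```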
I was talking about 2560x1440 (I mentioned this several times), while 80 FPS with RTX Ultra can be achieved on a Titan V only at 1080p and below, which is 1.78x fewer rays to trace (2560x1440 / 1920x1080 ≈ 1.78).
No, according to the poster he uses all his many NV GPUs at the same resolution. His intention is not to downplay NV at all. He is a fan just enjoying how everything works.
Sure, there is a constant per-frame cost such as transforms and BVH updates. I ignore that intentionally. Within my own GI work the cost of those things is totally negligible. With RTX it is surely higher, but never high enough to explain roughly equal end results.
No matter what other pipeline details you bring up to discuss, they can never contribute enough to justify this.
So all this comes down to a simple thing: how heavy the ray-triangle intersection part really is.
I'm not aware of a single proper raytracing implementation where ray-triangle checks are the bottleneck. Instead, the ray-box checks already cost more, and cache-missing traversal costs the most, assuming a simple implementation.
A complex implementation (MBVH, sorting many rays to larger tree branches, sorting ray hits by material, etc.) has additional costs, but those techniques are key to good performance.
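To make that concrete, here is a minimal single-ray closest-hit traversal in the shape I mean (my own simplified sketch of a 2-wide BVH, nothing RTX-specific). Note where the time goes: every visited node costs a memory fetch plus two box tests, while the ray-triangle test only runs for the few leaves a ray actually reaches.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3     { float x, y, z; };
struct Aabb     { Vec3 lo, hi; };
struct BvhNode  { Aabb bounds[2]; int32_t child[2]; };  // child < 0: leaf, triangle index = ~child
struct Triangle { Vec3 v0, v1, v2; };
struct Ray      { Vec3 org, dir, invDir; float tMax; };

static Vec3  sub  (Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot  (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Slab test: a handful of multiply/compare ops, cheap per call, but it runs
// for every child of every visited node.
static bool intersectBox(const Aabb& b, const Ray& r, float tBest)
{
    float tx0 = (b.lo.x - r.org.x) * r.invDir.x, tx1 = (b.hi.x - r.org.x) * r.invDir.x;
    float ty0 = (b.lo.y - r.org.y) * r.invDir.y, ty1 = (b.hi.y - r.org.y) * r.invDir.y;
    float tz0 = (b.lo.z - r.org.z) * r.invDir.z, tz1 = (b.hi.z - r.org.z) * r.invDir.z;
    float t0 = std::max({ std::min(tx0, tx1), std::min(ty0, ty1), std::min(tz0, tz1), 0.0f });
    float t1 = std::min({ std::max(tx0, tx1), std::max(ty0, ty1), std::max(tz0, tz1), tBest });
    return t0 <= t1;
}

// Moller-Trumbore ray-triangle test: heavier per call, but only reached for
// the few leaves a ray actually gets to.
static bool intersectTri(const Triangle& t, const Ray& r, float& tHit)
{
    Vec3  e1 = sub(t.v1, t.v0), e2 = sub(t.v2, t.v0);
    Vec3  p  = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;
    float inv = 1.0f / det;
    Vec3  s = sub(r.org, t.v0);
    float u = dot(s, p) * inv;      if (u < 0.0f || u > 1.0f)     return false;
    Vec3  q = cross(s, e1);
    float v = dot(r.dir, q) * inv;  if (v < 0.0f || u + v > 1.0f) return false;
    tHit = dot(e2, q) * inv;
    return tHit > 1e-4f;
}

// Closest-hit traversal with an explicit stack.
float traceClosest(const std::vector<BvhNode>& nodes,
                   const std::vector<Triangle>& tris, const Ray& ray)
{
    float   best = ray.tMax;
    int32_t stack[64];
    int     sp = 0;
    stack[sp++] = 0;                               // root node

    while (sp > 0)
    {
        const BvhNode& node = nodes[stack[--sp]];  // node fetch: where cache misses stall traversal
        for (int i = 0; i < 2; ++i)                // two ray-box tests per visited node
        {
            if (!intersectBox(node.bounds[i], ray, best))
                continue;
            if (node.child[i] < 0)                 // leaf: only now pay for a ray-triangle test
            {
                float tHit;
                if (intersectTri(tris[~node.child[i]], ray, tHit) && tHit < best)
                    best = tHit;
            }
            else
            {
                stack[sp++] = node.child[i];
            }
        }
    }
    return best;                                   // still ray.tMax if nothing was hit
}
```

An MBVH variant widens that to 4 or 8 boxes per node fetch and adds the ray/hit sorting machinery on top, which is where the real engineering effort (and the real cost) sits.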
We do not know what NV does here at all, or how the work is distributed between RT cores and compute under the hood. My guess is that RT cores do the box and triangle checks in parallel while compute handles the batching. Pure speculation - but so is yours.
The guy does mention, however, that Titan performance drops in some spots, if I understand him correctly - not sure. But such problems can be solved. RT cores are useful but not justified, IMHO. Looking at those results, I could even say they are just cooling pads.
I also agree that with higher geometric complexity the RT cores will beat the older GPUs. But the proper solution here is a data structure that combines LOD, BVH and geometry, and RTX does not allow for this. If compute is equally fast, I prefer to use it to solve problems directly instead of hiding their symptoms with brute force.
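A rough sketch of what I mean by such a combined structure (purely hypothetical layout, just to illustrate the idea, not anything that exists in my code or anyone else's):

```cpp
// Hypothetical node layout: one structure serving as LOD hierarchy, BVH and
// geometry container at the same time (all names and fields made up).
#include <cstdint>

struct Aabb { float lo[3], hi[3]; };

struct HierarchyNode
{
    Aabb     bounds;         // BVH part: used for the ray-box test
    uint32_t firstChild;     // index of the first child node, 0 if this is a leaf
    uint16_t childCount;
    uint16_t lodLevel;       // LOD part: 0 = full detail at the leaves
    uint32_t proxyFirstTri;  // geometry part: coarse triangle cluster that
    uint32_t proxyTriCount;  //   approximates everything below this node
};

// A compute traversal can stop per ray at whatever level matches the ray's
// footprint and intersect the coarse proxy instead of descending to the
// full-resolution triangles. With DXR/RTX the acceleration structure is an
// opaque, driver-built black box, so that per-ray LOD decision is not
// expressible there.
```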
I hope AMD and Intel draw the proper conclusions here: DXR support yes, improved compute scheduling yes, RT cores no - better to spend that on more flexible CUs or on lower power draw.
And I hope the next-gen consoles will have 'old school' GCN, and that devs find the time to work on RT based on that. Personally, I'll do my best here...