It wouldn't make sense to issue FP16 instructions to tensor cores. They're set up for much denser matrix math. The "normal" FP32 CUDA cores can do double-speed FP16 just fine.
I thought Nvidia already supported double-rate fp16.
> This is 1080p at 1spp. Already looks far superior to rasterized lighting, IMO.

I'm trying some reverse math... This is at best a very rough approximation, but has there been any indication of how many rays per pixel you need to be able to calculate for ray tracing to look good enough?
With 10 Gigarays per second (2080 Ti) at 4K and 60 fps, this translates to roughly 20 rays per pixel (10e9 / 3840 / 2160 / 60). Is that good enough? Maybe in the real world assume half that, plus the denoiser...
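That arithmetic can be sketched as a quick script. The 10 Gigarays/s figure is Nvidia's marketing number for the 2080 Ti, not a measured one, and the resolution/framerate are the ones assumed above:

```python
# Rough rays-per-pixel estimate, using the figures from the post:
# 10 Gigarays/s (Nvidia's claimed 2080 Ti number), 3840x2160, 60 fps.

def rays_per_pixel(gigarays_per_s: float, width: int, height: int, fps: int) -> float:
    """Rays available per pixel per frame at a given ray throughput."""
    return gigarays_per_s * 1e9 / (width * height * fps)

print(round(rays_per_pixel(10, 3840, 2160, 60), 1))  # ~20.1 at 4K/60
print(round(rays_per_pixel(5, 3840, 2160, 60), 1))   # ~10.0 with the "assume half that" caveat
```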
AMD needs to announce something DXR-related soon imho.
> I'm trying some reverse math... This at best may be a very rough approximation. But has there been any indication of how many rays per pixel you need to be able to calculate for ray tracing to look good enough?
> With 10 Gigarays per second (2080 Ti) at 4K and 60 fps, this translates to 20 rays per pixel (10e9/3840/2160/60). Is that good enough? Maybe in the real world assume half that plus the denoiser...
AMD needs to get their "traditional" rendering performance up to speed on PC solutions well before they go after a feature that will only be present, and barely usable, on $1k graphics cards and 3 games.
> User-centric market values:

Well, the 2070 is priced considerably lower than the 1080 Ti...
> Then maybe that would be a good time to get a new GPU.

Well... RT is such a step up. Having a non-RT-capable GPU will feel like having nothing within 1-2 GPU refreshes.
> Yeah, Nvidia already supports that, but they artificially limited fp16 on the GeForce line. Since the tensor cores in the 2080 Ti apparently have the capability to push 110 TFLOPS of FP16, I was wondering how it can be used to assist other graphical functions beyond denoising for RT.

The GPUs in the GeForce line have different SMs to the SMs in P100 and V100, so it's not artificially limited; there simply is no capability to do it fast. Tensor cores are "weird" in more ways than one if you want to look at them for something like this. As already said, they are optimised for matrix operations, meaning you can't simply pack arbitrary scalar code onto them. They are CUDA-only and there is no equivalent in D3D (but you can interop). They require huge amounts of internal bandwidth, to the point that using them naively in CUDA will result in only about half of the peak performance. Also, it's not completely FP16: they take FP16 input, but the math and output are done at FP32.
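The "FP16 input, but the math and output at FP32" point can be illustrated without any GPU. A pure-Python sketch using the `struct` module's IEEE 754 half-precision format (`'e'`) shows why accumulating a long dot product entirely in FP16 goes wrong, while FP16 inputs with a wider accumulator (the tensor-core scheme) stay exact. The 4096-element dot product is an arbitrary example:

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float to IEEE 754 half precision and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Dot product of 4096 pairs of 1.0 * 1.0 -- exact answer is 4096.
a = [1.0] * 4096
b = [1.0] * 4096

# Naive all-FP16 accumulation: once the running sum reaches 2048,
# adding 1.0 rounds back to 2048 (FP16 spacing there is 2.0).
acc16 = 0.0
for x, y in zip(a, b):
    acc16 = to_fp16(acc16 + to_fp16(x) * to_fp16(y))

# FP16 inputs, full-precision accumulation -- analogous to the
# FP16-in/FP32-accumulate scheme described above.
acc32 = 0.0
for x, y in zip(a, b):
    acc32 += to_fp16(x) * to_fp16(y)

print(acc16, acc32)  # 2048.0 4096.0
```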
I will skip this gen but it’s extremely impressive that there are at least 3 top tier titles in the works and launching soon. Hopefully it doesn’t end here.
I see a lot of griping about nvidia wasting transistors on tech that will be hardly used or that will be too slow. But what’s the other option? If we wait until general compute is fast enough for raytracing it’ll probably not happen for at least another decade.
The real kicker will be the next console generation. If there’s no support for RT acceleration in consoles (or by AMD) then it might as well be dead until 2030. I’m optimistic that raytracing is actually easier to implement than all the hacks and that there are enough raytracing fanboys in the developer community to have DXR really take off.
User-centric market values:
1080 Ti -> 700 EUR (settled prices)
2070 -> 640 EUR (one-vendor only price)
Yes, I am aware that the presentation spoke of 499 US-$. But that's without tax, without tariffs, and most importantly it's for the cheapest-possible AIB models (blower fan, plastic shroud, maybe a dumbed-down PCB). The better AIB models won't get near that mark for a while.
Then maybe that would be a good time to get a new GPU.
Yeah, Nvidia already supports that, but they artificially limited fp16 on the GeForce line. Since the tensor cores in the 2080 Ti apparently have the capability to push 110 TFLOPS of FP16, I was wondering how it can be used to assist other graphical functions beyond denoising for RT.
> Not only 3 games, there are more coming.

There are more games with raytracing? I got the impression that only those few used raytracing. They announced a bunch more with the "developed for RTX" moniker, but it seems this can be as vague as just supporting deep learning super sampling.
Still, yes, it's going to be a niche for some time, but AMD needs to support it eventually.
For RTX to succeed, they need proper implementation. And it seems that NVIDIA has been working hard with many game developers to get the new technology supported. The slide shown lists 21 titles, with Shadow of the Tomb Raider and Battlefield V having support at launch from what we heard.
> NVIDIA is listing 21 Games with RTX Support

Note that I believe most of them are DLSS, not RT. Marketing at work with their "RTX".