I think it's important to be able to separate the two features - ML isn't necessary for ray tracing; it's effective at approximating highly complex tasks. A BVH, on the other hand, is an actual requirement for ray tracing: the 2080 Ti can run ray-traced games at 1080p60 without ML. Getting that to ~4K, however, would require some sort of ML algorithm.
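To make the BVH point concrete, here's a very rough CPU-side sketch of what an acceleration structure buys you. It's purely illustrative - nothing like the actual RT core hardware path, and every struct and function name in it is made up for the example - but it shows the core idea: the ray skips whole boxes it never crosses, so only a handful of triangles ever get tested.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Hypothetical minimal structures for illustration only -- real GPU BVHs
// are far more compact and wide-branching than this toy binary version.
struct AABB { float min[3], max[3]; };
struct Node {
    AABB bounds{};
    int  left  = -1;              // child indices; -1 marks a leaf
    int  right = -1;
    int  firstTri = 0, triCount = 0;
};
struct Ray { float origin[3], dir[3], invDir[3]; };

// Slab test: does the ray cross this node's bounding box at all?
bool hitAABB(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (b.min[a] - r.origin[a]) * r.invDir[a];
        float t1 = (b.max[a] - r.origin[a]) * r.invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Iterative traversal: only descend into boxes the ray actually crosses,
// so most triangles are never tested. This is the work a BVH unit accelerates.
void traverse(const std::vector<Node>& nodes, const Ray& r,
              void (*testTriangle)(int)) {        // hypothetical callback
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;                              // start at the root
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!hitAABB(r, n.bounds)) continue;      // whole subtree skipped
        if (n.left < 0) {                         // leaf: test its few triangles
            for (int i = 0; i < n.triCount; ++i)
                testTriangle(n.firstTri + i);
        } else {                                  // interior: push both children
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
}

int main() {
    // One-node "BVH": a single leaf whose box spans [-1,1]^3 holding one triangle.
    std::vector<Node> nodes(1);
    nodes[0].bounds   = {{-1, -1, -1}, {1, 1, 1}};
    nodes[0].triCount = 1;

    // invDir is 1/dir precomputed; a large value stands in for 1/0.
    Ray r{{0, 0, -5}, {0, 0, 1}, {1e30f, 1e30f, 1.0f}};
    traverse(nodes, r, [](int tri) { std::printf("would test triangle %d\n", tri); });
}
```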
True, and we still don't have any solid metrics to determine whether denoising, AA, and upscaling are better done via ML or conventional algorithms. Nvidia's DLSS is a perfect example: we know it provides solid results, but how does it compare to spending the same mm² on general compute?
I agree that it's important to distinguish the two, but given ML's inclusion in Nvidia's idealised vision of RTX, I'm willing to believe it's likely to play some role in RTRT. It might not, but you and others have posted enough compelling cases to convince me.
And that's why I'm excited. Assuming BVH acceleration can be added to CUs, that's all that's needed to evolve the MI60 into a capable RTRT GPU. I know that's a massive oversimplification, and that it would probably perform worse than Nvidia's RTX cards, but it still puts some form of RTRT hardware tantalisingly within the grasp of the next-generation consoles.
ML can be used for a variety of tasks, and it's entirely possible that, using the MI60 as a template, next gen could have something similar to it: ML that helps improve games and move them forward in a variety of ways, while being flexible enough not to force developers to use it.
Exactly. Even just an MI60-style GPU would be interesting for a console: hybrid algorithmic and ML approaches to all kinds of things, most obviously the ones I mentioned above - AA, upscaling, and denoising. I think you've also posted a couple of videos covering other uses.
And the point you've highlighted - not forcing developers to use anything - is key to console success, which is another reason I'm quite excited by this development.
The 'multi' function cores are an interesting topic, because earlier we were discussing the waste of silicon on fixed functions. Here we get significantly more flexibility - with less performance in those specific areas, but all of the silicon can be put to full use whether or not you decide to use the features. Tensor cores are very specific: they're aimed at running TensorFlow-style workloads, I don't know how useful they are outside of that, and they're limited to 16-bit precision. Some ML algorithms may require up to 64-bit. So the honest answer is, I don't know - I don't know what games would require or how it would be used.
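For anyone unfamiliar, the primitive a tensor core hard-wires is a small matrix multiply-accumulate (on Volta/Turing, 4x4 tiles with FP16 inputs and FP32 accumulation). Here's a conceptual plain-float sketch of that operation - just to show what the silicon is specialised for, not how it's actually implemented:

```cpp
#include <array>
#include <cstdio>

// Conceptual sketch of the tensor-core primitive: D = A * B + C on a small
// tile. On Volta/Turing the tile is 4x4 with FP16 inputs and FP32
// accumulation; here everything is plain float purely for illustration.
using Tile = std::array<std::array<float, 4>, 4>;

Tile mma(const Tile& A, const Tile& B, const Tile& C) {
    Tile D = C;                                    // start from the accumulator
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                D[i][j] += A[i][k] * B[k][j];      // the fused multiply-adds
    return D;
}

int main() {
    Tile A{}, B{}, C{};                            // zero-initialised tiles
    for (int i = 0; i < 4; ++i) { A[i][i] = 1.0f; B[i][i] = 2.0f; }  // identity * 2
    Tile D = mma(A, B, C);
    std::printf("D[0][0] = %.1f\n", D[0][0]);      // expect 2.0
}
```

The hardware does many of these tiny matrix products per clock, which is why it's so fast for neural-network inference and so narrowly useful for anything that isn't matrix maths at low precision.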
I don't know either, but the important thing is: no-one knows. Who knows what amount of ML will be useful for every game, or even every genre? No-one can answer that, but put control over that ratio in the hands of developers and they can answer it on a per-title basis.
The same can be said for RTRT. Assuming the CUs can be modified to accelerate BVHs, developers can do the same there too: some ratio of rasterisation, some ratio of ML, and some ratio of RT.
In the PC space, Nvidia's fixed-function approach is fine, and probably yields better results. People can just keep upgrading their rigs, or changing and disabling per-game settings.
In the console space, they have to provide a platform that straddles the broadest markets for a minimum of six years, and developer flexibility is key to that. "Time to triangle" popped up time and again in PS4 launch interviews with Mark Cerny, and it's a philosophy that's served them well: it's undone the negative perception of PlayStation caused by the PS3 and secured them shedloads of content. At the same time, the flexibility of compute has given one of their studios, MM, the chance to render in an entirely new way with SDFs.
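For anyone curious about the SDF side, here's the textbook sphere-tracing loop over a signed distance field - a toy single-sphere scene, not MM's actual renderer (Dreams does far more elaborate things with its SDF data), just the core idea that the scene is a function returning distance to the nearest surface:

```cpp
#include <cmath>
#include <cstdio>

// Textbook sphere tracing over a signed distance field -- illustrative only.
struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)       { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Toy scene: a unit sphere at the origin. The SDF returns the distance
// from point p to the nearest surface (negative inside).
float sceneSDF(Vec3 p) { return length(p) - 1.0f; }

// March along the ray, stepping by the distance the SDF guarantees is empty.
bool sphereTrace(Vec3 origin, Vec3 dir, float& tHit) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        float d = sceneSDF(add(origin, mul(dir, t)));
        if (d < 1e-3f) { tHit = t; return true; }  // close enough: a hit
        t += d;                                    // safe step size
        if (t > 100.0f) break;                     // ray escaped the scene
    }
    return false;
}

int main() {
    float t;
    bool hit = sphereTrace({0, 0, -3}, {0, 0, 1}, t);
    std::printf("hit=%d t=%.3f\n", hit, t);        // expect a hit at t ~= 2
}
```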
The MI60 gives us an indication that the same approach is likely to be taken next generation. If BVH acceleration can be bolted on to the CUs, ML + BVHs may be that generation's compute.