Yeah it would have to be almost real-time to be used in instant replays.
The "live" nature of TV sports transmissions would make any post-processing steps less appealing, IMO.
I’m pretty sure he was using the slide to show that RT has been accelerated (the time graphs), not what the real layout looks like.
Just in case you are not being sarcastic: that's just a grossly oversimplified illustration to show how different parts of the calculation can overlap. In fact, RT and Tensor cores are integrated into the individual SMs. Even under the assumption that the chip shot is just an equally oversimplified artist's interpretation and does not resemble reality at all, it would take large amounts of energy to move all that data around for a single frame. The 24 bright spots in the upper and lower horizontal middle, for example, are most likely the Raster Backends/ROPs.
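To make the overlap point concrete, here is a minimal sketch with made-up stage timings (purely hypothetical, not measured Turing numbers) of how much frame time overlapping the FP32, RT, and tensor work can save versus running the stages back to back:

```python
# Toy illustration of how overlapping the FP32 shading, RT, and tensor (denoise/DLSS)
# portions of a frame shortens frame time. The millisecond figures are invented
# purely for illustration; they are not measured Turing numbers.
stages_ms = {"fp32_shading": 10.0, "rt_traversal": 6.0, "tensor_inference": 3.0}

serial_ms = sum(stages_ms.values())      # stages run back to back: 19 ms
overlapped_ms = max(stages_ms.values())  # idealized full overlap: bounded by the slowest stage

print(f"serial:     {serial_ms:.1f} ms ({1000 / serial_ms:.0f} fps)")
print(f"overlapped: {overlapped_ms:.1f} ms ({1000 / overlapped_ms:.0f} fps)")
```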
If your technical judgement is so clouded by emotions, why bother to engage in these kinds of discussions?
I learned not to believe anything that Nvidia or others affiliated with them say unless I see a PoC. That fiasco of the 5xxx series was literally the last nail for me when it comes to that company.
Yeah it would have to be almost real-time to be used in instant replays.
That deep slomo network requires at least an order of magnitude more calculations and cannot be done in real time. (At least not on a single GPU.)
Curious that we haven't heard yet about another NN possibility, namely Deep Learning Super Slomo.
Cue Jensen: "The more you buy, the more you save."
That deep slomo network requires at least an order of magnitude more calculations and cannot be done in real time. (At least not on a single GPU.)
It’s a large, deep network.
Here’s the paper: https://arxiv.org/pdf/1712.00080.pdf
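For a sense of the gap between a trivial interpolation and that network, here is a minimal sketch (plain NumPy, dummy frame data) of the naive cross-fade baseline; the actual Super SloMo model from the paper above instead predicts bidirectional optical flow with a CNN and warps both frames toward the intermediate time, which is where the extra order of magnitude of compute goes:

```python
import numpy as np

def naive_midpoint_frame(frame0: np.ndarray, frame1: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Crude stand-in for frame interpolation: a plain cross-fade between two frames.

    Super SloMo (the paper linked above) instead predicts bidirectional optical flow
    with a CNN and warps both frames toward time t before blending, which is why it
    needs vastly more compute than this one-liner.
    """
    blended = (1.0 - t) * frame0.astype(np.float32) + t * frame1.astype(np.float32)
    return blended.astype(frame0.dtype)

# Two dummy 1080p RGB frames: black and white; the naive midpoint is uniform gray.
f0 = np.zeros((1080, 1920, 3), dtype=np.uint8)
f1 = np.full((1080, 1920, 3), 255, dtype=np.uint8)
mid = naive_midpoint_frame(f0, f1, t=0.5)
print(mid[0, 0])  # [127 127 127]
```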
Yes, they are used for that, but Jensen also suggested it was, or at least could be, used for raytrace denoising in some capacity. Nvidia is a bit vague about it.
I think the tensor cores are used for what they're calling DLSS (deep-learning super-sampling), which seems to be an upscale rather than real super sampling.
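A rough back-of-envelope sketch of why an upscale is so much cheaper than real super sampling (the resolutions and factors below are illustrative assumptions, not NVIDIA's actual DLSS internals):

```python
# Back-of-envelope shading cost: true super sampling vs. rendering low and upscaling.
# Resolutions and factors are illustrative assumptions, not NVIDIA's DLSS internals.
target_pixels = 3840 * 2160          # 4K output

ssaa_4x = target_pixels * 4          # real 4x super sampling: 4 shaded samples per output pixel
upscaled_from_1440p = 2560 * 1440    # render at a lower internal resolution, upscale with the network

print(f"4x SSAA shaded samples:     {ssaa_4x:,}")
print(f"1440p-then-upscale samples: {upscaled_from_1440p:,}")
print(f"~{ssaa_4x / upscaled_from_1440p:.0f}x fewer shaded samples (plus the tensor-core inference cost)")
```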
You seem to imply Tensors couldn't be used for both?
Yes, they are used for that, but Jensen also suggested it was, or at least could be, used for raytrace denoising in some capacity. Nvidia is a bit vague about it.
Maybe it's just something they are still developing, and we will see AI-based denoising of raytracing at a later time.
You seem to imply Tensors couldn't be used for both?
https://image.slidesharecdn.com/jhh...rce-rtx-launch-event-21-638.jpg?cb=1534805756
The Tensor Cores are used for denoising in practically all of the RT demos shown (Pica Pica, Star Wars, Cornell box, etc.). Now can they do both denoising and DLSS at the same time? We don't know yet.
Wonder if it's a future feature, since it's included in the NGX SDK. Currently there are only details for DLSS and AI Painting, but also included in the stack are placeholders for AI Slow-Mo and AI Res-Up.
That deep slomo network requires at least an order of magnitude more calculations and cannot be done in real time. (At least not on a single GPU.)
I think a big limiter will be bandwidth in the 7nm parts. NVIDIA is already using 14 Gbps GDDR6; going to the current max of 18 Gbps is only a modest bump.
They may need to go to HBM in the 102 class GPU to get the bandwidth.
A theoretical 18 Gbps/384-bit part would see memory bandwidth rise to 864 GB/s from 616 GB/s, a healthy 40% increase and well within the realm of possibility.
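For reference, a small sketch of the arithmetic behind those figures (the 14 Gbps/352-bit baseline is presumably the 2080 Ti-style configuration; the 18 Gbps/384-bit part is hypothetical):

```python
# bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8 bits-per-byte
def gddr_bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

current = gddr_bandwidth_gb_s(14, 352)  # 616 GB/s (2080 Ti-style configuration)
future = gddr_bandwidth_gb_s(18, 384)   # 864 GB/s (hypothetical 18 Gbps on a full 384-bit bus)

print(current, future, f"+{(future / current - 1) * 100:.0f}%")  # 616.0 864.0 +40%
```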
Way too much chit-chat, IMO, and little detail about the questions asked, actually.