Nvidia Turing Speculation thread [2018]

Everyone is saying that DLSS upsamples a sparse render and therefore increases performance vs rendering the equivalent number of pixels.

Why couldn’t that same tech be used to simply AA an image with no upscaling funny business, similar to FXAA?
 
With tensor cores, you can only reduce the number of rays by filling in the gaps.

You can also intelligently bias where you're going to focus your packets of rays. A de-noising filter is nice, but it's probably not going to handle very well the edge cases that really exemplify what ray-traced global illumination can do that rasterization cannot. In particular I'm talking about cases where almost all of the scene is lit indirectly from a small area source (a crack in a doorway into a dark room, light through a keyhole, etc.). Markov chains and "Metropolis Light Transport" are what spring to mind as the more classic statistical attempts to help with this, and I'd imagine having a DL neural net attack that problem might be a faster hack.
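For anyone who hasn't met the Metropolis idea before, here's a minimal toy sketch in Python (my own illustration, not anything Nvidia has described; the "path contribution" function and all names are made up) of the classic Metropolis-Hastings acceptance rule. The point is just that mutated samples automatically concentrate where the contribution is large, which is exactly what you want when nearly all the light enters through a keyhole:

```python
import random
import math

# Toy "path contribution" function: nearly all energy comes through a
# narrow slit around x = 0.5 (think: light through a keyhole).
def contribution(x):
    if not 0.0 <= x <= 1.0:
        return 0.0
    return math.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2)) + 1e-4  # tiny ambient term

def metropolis_samples(n_samples, step=0.05, seed=1):
    """Classic Metropolis-Hastings: mutate the current sample and accept the
    mutation with probability min(1, f(new)/f(old)). The samples end up
    distributed proportionally to the contribution function."""
    rng = random.Random(seed)
    x = rng.random()
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)           # small mutation of the "path"
        a = contribution(x_new) / contribution(x)  # acceptance ratio
        if rng.random() < min(1.0, a):
            x = x_new
        samples.append(x)
    return samples

if __name__ == "__main__":
    samples = metropolis_samples(20_000)
    in_slit = sum(1 for s in samples if abs(s - 0.5) < 0.03)
    print(f"{in_slit / len(samples):.1%} of samples landed in the bright slit")
```

A learned model could, speculatively, play a similar role by proposing the mutations (or directly predicting where to spend rays) instead of blind Gaussian jitter.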
 
Everyone is saying that DLSS upsamples a sparse render and therefore increases performance vs rendering the equivalent number of pixels.

Why couldn’t that same tech be used to simply AA an image with no upscaling funny business, similar to FXAA?
Infiltrator doesn't use raytracing and it takes advantage of DLSS. It's a super-resolution type of technology, not a denoiser.
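To make the distinction concrete, here's a minimal numpy sketch (placeholder filters only; the function names are made up, and a real DLSS-style network would be a trained model, not a box blur or nearest-neighbour upsample). The point is the input/output shapes: a denoiser keeps the resolution, a super-resolution pass raises it:

```python
import numpy as np

def fake_denoise(noisy_frame):
    """Denoiser: same resolution in and out; only the noise changes.
    (Stand-in: a 3x3 box blur instead of a trained network.)"""
    h, w = noisy_frame.shape
    padded = np.pad(noisy_frame, 1, mode="edge")
    out = np.zeros_like(noisy_frame)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def fake_super_resolve(low_res_frame, scale=2):
    """Super-resolution (the DLSS-style path): lower-resolution input,
    higher-resolution output. (Stand-in: nearest-neighbour upsample
    instead of a trained network.)"""
    return np.repeat(np.repeat(low_res_frame, scale, axis=0), scale, axis=1)

if __name__ == "__main__":
    full = np.random.rand(1080, 1920)   # "noisy" native-res render
    low  = np.random.rand(720, 1280)    # cheaper low-res render
    print(fake_denoise(full).shape)          # (1080, 1920) -> (1080, 1920)
    print(fake_super_resolve(low, 2).shape)  # (720, 1280)  -> (1440, 2560)
```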
 
[Attached image: 14.jpg]


This clearly shows that the ray tracing is done on a separate part of the Turing chip (green) and that the Tensor cores (purple) are on a different part of the chip.
Just in case you are not being sarcastic: that's just a grossly oversimplified illustration showing how different parts of the calculation can overlap. In fact, the RT and Tensor cores are integrated into the individual SMs. Even under the assumption that the chip shot is an equally oversimplified artist's interpretation that doesn't resemble reality at all, it would take large amounts of energy to move all that data around for a single frame. The 24 bright spots in the upper and lower horizontal middle, for example, are most likely the raster backends/ROPs.
 
Epic representatives again repeated the same thing: the demo originally required four Volta GPUs, and now it runs on one Turing. There is no conspiracy here, just more optimizations, the inclusion of DLSS, and RT acceleration on the RT cores.
I learned not to believe anything that Nvidia or others affiliated with them say unless I see a proof of concept. That fiasco with the 5xxx series was literally the last nail for me when it comes to that company.
 
I learned not to believe anything that Nvidia or others affiliated with them say unless I see a proof of concept. That fiasco with the 5xxx series was literally the last nail for me when it comes to that company.

You're still mad about the ~15 year old FX series?
 
I think that largely depends on two factors: technological departure and competitive environment. As such, I think G80 presents a good analogue:
  • Significant departure from previous architecture
  • Very large die size by the day's standards (both were the largest consumer GPUs to date at launch)
  • Relative competitive vacuum: launched as the clear-cut performance leader, with the competition's counterparts many months away.
  • While not a simultaneous 3-tier release, the "super high end" part did come within about six months, into a largely intact competitive landscape despite the R600 launch.



I think the biggest mistake is calling the $1,000 card a "Ti", a price point which has historically been reserved for Titan-class cards and which is by far the biggest departure from the $650-700 price point these cards have historically sat at. They should have called it a "Titan" and then released a similar-specced "Ti" card early next year at an $800 price point, and ruffled a lot fewer feathers.

Depending on the performance, $1,000 may or may not be fair value for the 2080 Ti, but regardless they messed up their own price tier/naming convention for no good reason.

The point where I'd disagree with the above is that the performance difference between G80- and G70-based SKUs went up to 3x at the ultra-high resolutions of the time, while the former also delivered several significant improvements in image quality. It's not like Pascal or other recent generations had filtering quality as crapalicious as everything NV4x/G7x-based.
 
Infiltrator doesn't use raytracing and it takes advantage of DLSS. It's a super-resolution type of technology, not a denoiser.

Curiously, we haven't yet heard about another NN possibility, namely Deep Learning Super Slomo.
Inconveniently this also abbreviates to DLSS; Deep Learning Super Framerate would also do. :)
This would be another one in the series of deep fakes, but it can look surprisingly good.
The advantage in game rendering would be that you only need to render at 30 FPS to get fluid 60 FPS.
It would not reduce the frame latency (it would actually make it worse), though.
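A quick toy calculation (my own sketch with made-up pacing, not any shipping implementation) of why the latency point holds: the in-between frame needs both of its neighbours, so every rendered frame has to be held back before it can be displayed:

```python
# Toy frame-pacing sketch: render at 30 FPS, interpolate one frame between
# each rendered pair to present at 60 FPS.
RENDER_MS = 1000 / 30   # one rendered frame every ~33.3 ms

def presentation_times(num_rendered):
    events = []
    for n in range(num_rendered - 1):
        done_n   = (n + 1) * RENDER_MS   # frame n finishes rendering
        done_np1 = (n + 2) * RENDER_MS   # frame n+1 finishes rendering
        # Frame n is delayed half an interval so that the interpolated frame
        # (which cannot exist before frame n+1 is done) slots in between.
        events.append((done_n + RENDER_MS / 2,
                       f"rendered frame {n} (+{RENDER_MS / 2:.1f} ms extra latency)"))
        events.append((done_np1, f"interpolated frame {n}.5"))
    return events

if __name__ == "__main__":
    for t, label in presentation_times(4):
        print(f"{t:6.1f} ms  {label}")
```

The presented frames end up evenly spaced ~16.7 ms apart (so motion looks like 60 FPS), but every real frame reaches the screen later than it would have without interpolation.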
 
DL super slow motion looks fantastic, and I can see that being picked up for sports very quickly. The NFL should be all over that.
 
On the contrary, it's always better to actually film at very high fps in the first place. I'd expect media and other professionals to be able to afford that.

I view DL super slow motion instead as a fit for hobbyists and, more importantly, cloud services like Google Photos. The latter can just scan through your material and propose enhancements such as slow-motion-ifying portions of your latest holiday videos.
 
Editor's day stuff ...
NVIDIA Turing gets a bigger L2 cache ... 2x Pascal cache
https://videocardz.com/newz/nvidia-upgrades-l1-and-l2-caches-for-turing

Assuming the block diagram is a 1:1 representation of the port arrangement of the LSUs for each respective architecture, it appears NV has transitioned from a single 4-wide LSU to dual 2-wide units, each connected to its own slightly smaller pool of L1/shared memory and a 2x larger block of L2. I'm not a GPU architect, but I would guess this has to do with the new instruction/core types featured in Turing.
 
On the contrary, it's always better to actually film at very high fps in the first place. I'd expect media and other professionals to be able to afford that.
Since apparently they don't have it already, or at least not at very high fps, I'd say it would be a lot easier and cheaper to add it as a post-process at the end of the chain rather than replace a lot of equipment throughout the chain.
 
Since apparently they don't have it already, or at least not at very high fps, I'd say it would be a lot easier and cheaper to add it as a post-process at the end of the chain rather than replace a lot of equipment throughout the chain.

I'm not familiar with the market for high-fps digital video recorders, but I'd presume it has recently been growing in adoption and variety (so "they don't have it already" is not such a strong argument). The "live" nature of TV sports broadcasts would make any post-processing steps less appealing, IMO.
 