Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Or the elegance of DLSS as an outside generic solution.
What elegance? Devs have to send nVidia a whole load of game data for their massive supercomputer to process, and for that data to be applied in-game using large amounts of silicon. In-engine reconstruction techniques use perfect in-engine data for comparable results with a titchy silicon footprint. NN-based image reconstruction makes a lot of sense when applied to movies and the like, where there's no deep data to process, but it strikes me as grossly inefficient for game rendering, at least as it currently stands.
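To make the "perfect in-engine data" point concrete, here's a minimal sketch of what one pixel of an in-engine temporal reconstruction step can look like, assuming the engine already renders per-pixel motion vectors and depth; the names and the single-weight blend are purely illustrative, not any particular shipping technique:

```cpp
// Illustrative only: one pixel of an in-engine temporal reconstruction pass.
// The engine supplies exact motion vectors and depth, so the history sample
// can be fetched and validated without any learned model or offline training.
struct Color { float r, g, b; };

static Color lerp(const Color& a, const Color& b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// currentSample : this frame's (jittered, lower-resolution) shading result
// historySample : last frame's accumulated output, fetched where the engine's
//                 motion vector says this surface was last frame
// depthMatches  : engine depth comparison used to reject disoccluded history
Color reconstructPixel(const Color& currentSample,
                       const Color& historySample,
                       bool depthMatches)
{
    // Typical exponential history blend; the weight is a tuning choice.
    const float historyWeight = depthMatches ? 0.9f : 0.0f;
    return lerp(currentSample, historySample, historyWeight);
}
```

The point is that the history fetch and rejection are driven by exact engine data, so no learned model or external training pass is involved.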
 
What elegance? Devs have to send nVidia a whole load of game data for their massive supercomputer to process
I meant the part where it requires minimal effort from the developers. It's all on NVIDIA. Developers on PC have been reluctant to engage with checkerboarding, temporal injection, and the like. Now NVIDIA is handling those efforts.
but it strikes me as grossly inefficient for game rendering, at least as it currently stands.
Which circles back to the point that DLSS was never an afterthought experiment by NVIDIA. There are more efficient and cost-conscious afterthought experiments NVIDIA could have created instead of DLSS.
 
They didn't add Tensor and RT cores because games need those, they've added RTX and DLSS because they had to make use of the additional hardware to justify the cost increase.
I’ve always been a fan of ray tracing and I’m delighted to learn that it’s important and profitable enough in the HPC space to justify adding cost to gaming, Nvidia’s largest revenue generating segment by far.

Would you mind clarifying which HPC application is so heavy on ray tracing?
 
So the RT cores are there only because of pricing and marketing? The RT cores don't cost that much, and the architecture wasn't changed for that alone. If you want yesterday's solution, there is already Pascal. There's no moving forward by repeating the same thing again.
 
Which circles back to the point that DLSS was never an afterthought experiment by NVIDIA. There are more efficient and cost-conscious afterthought experiments NVIDIA could have created instead of DLSS.

Do you have any evidence that tensor cores can co-issue with shader cores, in view of this post?
DLSS, as a piece of software, surely must be an afterthought, as tensor cores were created to speed up neural network training in Volta. It might have been a consideration to bring the tensor cores to Turing, if that is what you mean.
 
I meant the part where it requires minimal effort from the developers. It's all on NVIDIA. Developers on PC have been reluctant to engage with checkerboarding, temporal injection, and the like. Now NVIDIA is handling those efforts.
Why didn't they create a compute-based upscaling system, as they did with TXAA? Perhaps because they didn't need to when they could sell their new hardware on its ability to upscale via an NN solution on the already-present Tensor cores...

Which circles back to the point that DLSS was never an afterthought experiment by NVIDIA. There are more efficient and cost-conscious afterthought experiments NVIDIA could have created instead of DLSS.
Huh? Do you agree that DLSS is "grossly inefficient for game rendering, at least as it currently stands"? That there are leaner solutions that could have been employed instead?

This discussion probably needs to be refocussed. Let's pose another question - if nVidia's goals were to produce a GPU capable of hardware reconstruction to get better framerates at comparable quality, would the best way to do that be to include NN learning-focussed cores, or to work on improving compute-based solutions and maybe add specific hardware improvements to that end?
 
tensor cores were created to speed up neural network training in Volta. It might have been a consideration to bring the tensor cores to Turing, if that is what you mean.
Of course it was, but it was brought to Turing to help with RTX, not as an afterthought or a marketing gimmick.
 
DLSS, as a piece of software, surely must be an afterthought, as tensor cores were created to speed up neural network training in Volta. It might have been a consideration to bring the tensor cores to Turing, if that is what you mean.

DLSS is an application, like Super-Res, Slowmo and every other possibility. And nVidia talked about it at Siggraph last year: https://blogs.nvidia.com/blog/2017/07/31/nvidia-research-brings-ai-to-computer-graphics/

TensorCores are here to stay. nVidia is even rebranding their GPUs to "TensorCore" GPUs.
 
Why didn't they create a compute-based upscaling system, as they did with TXAA? Perhaps because they didn't need to when they could sell their new hardware on its ability to upscale via an NN solution on the already-present Tensor cores...
Because it requires engine support and requires developers to implement it with them. TXAA is nothing like temporal reconstruction; TXAA is just TAA combined with MSAA.
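To flesh out the "engine support" point, here's a rough, hypothetical sketch of the per-frame data a temporal reconstruction or upscaling pass needs the engine to hand over; wiring this up (and rendering with sub-pixel jitter in the first place) is the integration work developers have to do. The field names are made up for illustration:

```cpp
#include <cstdint>

// Hypothetical illustration of the engine-side integration a temporal
// reconstruction / upscaling pass needs: every frame the renderer has to
// provide data that only the engine can supply (field names are made up).
struct ReconstructionFrameInputs {
    const float*  colorLowRes;    // jittered, lower-resolution colour buffer
    const float*  depthLowRes;    // matching depth buffer
    const float*  motionVectors;  // per-pixel screen-space motion
    float         jitterX;        // sub-pixel camera jitter applied this frame
    float         jitterY;
    std::uint32_t renderWidth;    // internal render resolution
    std::uint32_t renderHeight;
    std::uint32_t outputWidth;    // display resolution to reconstruct to
    std::uint32_t outputHeight;
    bool          resetHistory;   // e.g. on a camera cut or teleport
};
```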
Huh? Do you agree that DLSS is "grossly inefficient for game rendering, at least as it currently stands"? That there are leaner solutions that could have been employed instead?
Yes I agree, but for NVIDIA, not for the developers.
if nVidia's goals were to produce a GPU capable of hardware reconstruction to get better framerates at comparable quality,
NVIDIA's goal is to produce a GPU capable of ray tracing. RT and Tensor cores were needed for that.

to work on improving compute-based solutions and maybe add specific hardware improvements to that end?
That again would require heavy developer involvement. DLSS doesn't require that.
 
Is deep learning the future of realtime rendering? Link to a presentation from last year's SIGGRAPH below. To me this looks like a long-term play that is in its infancy. Following those conferences, there is interesting research that seems to be turning into reality in shipping games right around now. It will be very interesting to see how GDC and SIGGRAPH 2019 turn out.

 
TensorCores are here to stay. nVidia is even rebranding their GPUs to "TensorCore" GPUs.

It's definitely a different strategy from the one mobile SoC vendors follow.
They resort to a dedicated, more fixed-function coprocessor to accelerate neural network computation, outside of the CPU or GPU.
Even Nvidia does this with Xavier, where there are neural network coprocessors outside the GPU; they are more power efficient and can reduce memory bandwidth by using weight compression/decompression.
Xavier, based on Volta, kept the tensor cores in the GPU, but I would expect these to get removed once neural network technology matures.
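As a rough illustration of the bandwidth argument, a common trick (not a claim about Xavier's actual compression scheme) is to store trained weights as 8-bit integers plus a per-tensor scale, so roughly 4x fewer bytes have to be streamed from memory than with FP32 weights:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative 8-bit weight quantization (not any specific hardware scheme):
// store one scale per tensor plus int8 values instead of float32.
struct QuantizedWeights {
    float scale;                       // dequantization factor
    std::vector<std::int8_t> values;   // 1 byte per weight instead of 4
};

QuantizedWeights quantize(const std::vector<float>& weights) {
    float maxAbs = 0.0f;
    for (float w : weights) maxAbs = std::max(maxAbs, std::fabs(w));

    QuantizedWeights q;
    q.scale = (maxAbs > 0.0f) ? maxAbs / 127.0f : 1.0f;
    q.values.reserve(weights.size());
    for (float w : weights)
        q.values.push_back(static_cast<std::int8_t>(std::lround(w / q.scale)));
    return q;
}

// Dequantize on the fly as each weight is consumed.
inline float dequantize(const QuantizedWeights& q, std::size_t i) {
    return q.values[i] * q.scale;
}
```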
 
Xavier, based on Volta, kept the tensor cores in the GPU, but I would expect these to get removed once neural network technology matures.
I think the tensor cores will be in Xavier for quite a while, based on recent AGX Xavier "wins" in the manufacturing industry. AI requires tensor cores, and the expansion to "autonomous machines" lends credence to the idea that there is a use case outside autonomous vehicles.
What’s so special about AGX Xavier?
The big difference with the new platform is that it’s “based on the Xavier SoC, while the previous Drive PX was based on the Parker SoC,” said Magney. “In addition, the DevKit comes in single or dual SoC versions (Pegasus).”
...
Nvidia’s efforts in developing these AI infrastructure building blocks are paying off nicely as Nvidia expands its eco-system for development. Magney said, “You get the hardware, software tools, access to libraries, and network training tools. Of all the suppliers, Nvidia’s advantage is in the diversity of the DevKit as no one has this collection of hardware, software and development support.”
 
NVIDIA's goal is to produce a GPU capable of ray tracing. RT and Tensor cores were needed for that.
Yet realtime graphics aren't based on raytracing and won't be for a year or two minimum (it needs to become mainstream-affordable), suggesting that the creation of this GPU wasn't for the game market. However, rather than create a new GPU designed for the game market, nVidia has looked at using their RT-focussed part in ways to augment non-raytraced rendering, resulting in the development (from existing offline AI-based image reconstruction that's been going on for a while now, ever since the explosion of ML) of an upscaling system.

A posit you dismiss out of hand as 'extremely silly'. :???: You can disagree with the theory, but to dismiss it as nonsense is just biased thinking. It's a very plausible option.
 
Yet realtime graphics aren't based on raytracing and won't be for a year or two minimum (it needs to become mainstream-affordable), suggesting that the creation of this GPU wasn't for the game market. However, rather than create a new GPU designed for the game market, nVidia has looked at using their RT-focussed part in ways to augment non-raytraced rendering, resulting in the development (from existing offline AI-based image reconstruction that's been going on for a while now, ever since the explosion of ML) of an upscaling system.

That theory doesn't make any sense to me. Nvidia is certainly not shy about building chips that are fit for purpose. We also know that they can add/remove SM features at will so there was literally nothing stopping them from creating Turing RT GPUs for Quadro and Turing non-RT chips for Geforce. I also think it's silly to believe they bent over backwards to invent gaming use cases and worked with Microsoft / game developers on DXR as a second thought for hardware that they could've easily just cut out of the chip(s)... come on.

Why is it so hard to accept that nVidia is just genuinely pushing real-time RT in games? Isn't that a simpler, more plausible version of reality? The argument that RT is "too early" because it runs at 1080p 60fps is a ridiculously arbitrary standard given that 1080p is still by far the most popular gaming resolution. It's here, it works and it's not going anywhere.
 
The argument that RT is "too early" because it runs at 1080p 60fps is a ridiculously arbitrary standard given that 1080p is still by far the most popular gaming resolution. It's here, it works and it's not going anywhere.
Well the counter-argument is that it takes a $1200 GPU to run at that speed. Those enthusiast gamers aren't running at 1080p.

Nvidia going Turing on the full stack (so far down to a 2050?) seems to be an indicator that they're going all in on RT, even though the lower-end versions seemingly aren't going to be very useful for RT games. Perhaps that will change once we see some RT implementations with quality settings for sample counts, reflection resolution, etc.
 
but to dismiss it as nonsense is just biased thinking. It's a very plausible option.
Nope, it's not. The fact is, looking at history, NVIDIA NEVER does anything for pure marketing reasons; they do it because it gives them a competitive advantage. They did that with PhysX, Tessellation, CUDA, GSync, HBAO+, PCSS+, HFTS, VXAO, TXAA, and now RTX and DLSS. That's how they operate and that's how they do things. They are always pushing advanced effects that hammer the GPUs and add visual flair. RTX is no exception, and it will not be the last.

Yet realtime graphics aren't based on raytracing and won't be for a year or two minimum (it needs to become mainstream-affordable), suggesting that the creation of this GPU wasn't for the game market
When 11 games start supporting RTX, then I say it's about damn time. NVIDIA won't stop at 11 games, though; there will be more on the way. And with both DXR and Vulkan being ready, I say it's about damn time as well.

NVIDIA just doesn't care whether mid-range GPUs support their tech: the 8600GT tanked with PhysX on, the 460 didn't do extreme tessellation well, and the 970/1060 can't do VXAO and HFTS at decent fps. That's how NVIDIA introduces their tech. RTX is again no exception; it might be absent from mainstream cards this gen, but next gen it won't be.

This discussion probably needs to be refocussed.
I will summarize all the points below:

*RTX and DLSS are an afterthought marketing gimmick transported from the professional line to save costs.
-Doesn't make sense given the increased support required from NVIDIA to properly implement and spread RTX and DLSS, which raises costs significantly.

*NVIDIA's splitting of gaming and professional lines increases costs.
-NVIDIA made that model extremely profitable, and it has helped them maintain the leadership position far longer than before. It doesn't make sense to risk abandoning that model for a marketing trick that could potentially jeopardize their competitive advantage, with larger dies, higher power demands, higher production costs, higher prices AND the added burden of supporting RTX and DLSS.

*RTX is created to sell more cards.
-RTX is the culmination of a long line of GameWorks effects, arriving alongside DXR. RTX is just the hardware acceleration of the standard DXR API, designed to gain the upper hand in that API.

*DLSS is created as a marketing gimmick to justify the tensor cores
-Tensor cores were repurposed and brought in to assist with RTX; DLSS is just icing on the cake if the game doesn't use RTX.

*DLSS could be replaced with a compute solution.
-It could, but it would require the deep participation of developers, something they seem unwilling to give.
 
That theory doesn't make any sense to me. Nvidia is certainly not shy about building chips that are fit for purpose. We also know that they can add/remove SM features at will so there was literally nothing stopping them from creating Turing RT GPUs for Quadro and Turing non-RT chips for Geforce. I also think it's silly to believe they bent over backwards to invent gaming use cases and worked with Microsoft / game developers on DXR as a second thought for hardware that they could've easily just cut out of the chip(s)... come on.

Why is it so hard to accept that nVidia is just genuinely pushing real-time RT in games? Isn't that a simpler, more plausible version of reality? The argument that RT is "too early" because it runs at 1080p 60fps is a ridiculously arbitrary standard given that 1080p is still by far the most popular gaming resolution. It's here, it works and it's not going anywhere.
They are pushing RT in games. The question is, why? My suggestion is because that better serves their goals in the more lucrative markets, and having settled on that design for those markets, nVidia looked at maximising profits by considering how to use that same design in a huge-margin 'prosumer' PC GPU. They also didn't 'bend over backwards to invent gaming uses' - they are already working on these as part of their extensive ML campaign, and realtime raytracing has value in productivity. Everything in Turing was happening anyway, whether 2070/2080 were released or not. Releasing them to the gaming space helps in several ways.

The reason to suggest it's too early - these dies are massive and expensive! Games have to target a viable mainstream tech level, and realtime raytracing is far beyond that mainstream (<$300 GPUs?). There are lots of techs that could have been implemented earlier in history if IHVs had ignored economic limits and crammed crazy amounts of silicon in. What's different now is nVidia can afford to cram crazy amounts of silicon in for the professional markets, creating a bonkers big chip, which they can also sell to a tiny portion of the PC gaming space by using that chip to drive a couple of features.

Nope, it's not. The fact is, looking at history, NVIDIA NEVER does anything for pure marketing reasons,...
But no-one ever suggested that. :???: It's not done for marketing. Turing was developed for their professional, highly lucrative businesses. They then looked at ways to use that same hardware in the gaming space, and came up with DLSS.

When 11 games start supporting RTX then I say it's about damn time. NVIDIA won't stop at 11 games though, there will be more on the way. And with both DXR and Vulkan being ready, I say it's about good damn time as well.
When the only reason devs implement raytracing is that nVidia are funding it, and they wouldn't otherwise because the costs aren't recovered by the minuscule install base paying for it, it's too early. It's nice of nVidia to invest in realtime raytracing, but it's (probably) going to be a long time before raytracing becomes mainstream.

I will summarize all the points below:
With a hideously slanted, prejudiced, and confrontational interpretation.

The real suggestion is Turing was developed 100% for the professional markets - ML, automotive, and professional imaging - with that design being intended to occupy the new flagship range of gaming GPUs to make more profit from the same design in the PC space. nVidia have looked at the hardware features of Turing and considered how best to make them work in the PC space, resulting in the development of DLSS to make use of the Tensor cores. The end result is a profit-maximising strategy from the one uniform architecture.

Whatever, I'm out of this discussion now. People are gonna believe what they wanna believe and no-one's going to change their mind; certainly not if an alternative to their opinions is wholeheartedly considered childish or ridiculous. I'm going back to the more intelligent and considered discussions of the console forums. ;)
 
They are pushing RT in games. The question is, why? My suggestion is because that better serves their goals in the more lucrative markets, and having settled on that design for those markets, nVidia looked at maximising profits by considering how to use that same design in a huge-margin 'prosumer' PC GPU. They also didn't 'bend over backwards to invent gaming uses' - they are already working on these as part of their extensive ML campaign, and realtime raytracing has value in productivity. Everything in Turing was happening anyway, whether 2070/2080 were released or not. Releasing them to the gaming space helps in several ways.

The reason to suggest it's too early - these dies are massive and expensive! Games have to target a viable mainstream tech level, and realtime raytracing is far beyond that mainstream (<$300 GPUs?). There are lots of techs that could have been implemented earlier in history if IHVs had ignored economic limits and crammed crazy amounts of silicon in. What's different now is nVidia can afford to cram crazy amounts of silicon in for the professional markets, creating a bonkers big chip, which they can also sell to a tiny portion of the PC gaming space by using that chip to drive a couple of features.

But no-one ever suggested that. :???: It's not done for marketing. Turing was developed for their professional, highly lucrative businesses. They then looked at ways to use that same hardware in the gaming space, and came up with DLSS.

When the only reason devs implement raytracing is that nVidia are funding it, and they wouldn't otherwise because the costs aren't recovered by the minuscule install base paying for it, it's too early. It's nice of nVidia to invest in realtime raytracing, but it's (probably) going to be a long time before raytracing becomes mainstream.

With a hideously slanted, prejudiced, and confrontational interpretation.

The real suggestion is Turing was developed 100% for the professional markets - ML, automotive, and professional imaging - with that design being intended to occupy the new flagship range of GPUs to make more profit from the same design in the PC space. nVidia have looked at the hardware features of Turing and considered how best to make them work in the PC space, resulting in the development of DLSS to make use of the Tensor cores. The end result is a profit-maximising strategy from the one uniform architecture.

Whatever, I'm out of this discussion now. People are gonna believe what they wanna believe and no-one's going to change their mind; certainly not if an alternative to their opinions is wholeheartedly considered childish or ridiculous. I'm going back to the more intelligent and considered discussions of the console forums. ;)

I agree 100% with what you just established, and I think it's the more informed/reasoned viewpoint, backed by technical facts and an understanding of basic business/profit maximization.
At $800/$1200, these are essentially glorified Quadro cards. The lowest-end Quadro was cheaper than $800.
They essentially pushed every segment into higher prices and margins.
None of the cards released are capable of doing ray tracing in parallel without an impact on FPS.
The higher the FPS, the smaller the time window for ray tracing and the uglier the results... thus the negative impact on FPS.
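A back-of-the-envelope version of that frame-budget argument, with purely made-up example costs:

```cpp
#include <cstdio>

// Illustrative frame-budget arithmetic: the higher the target frame rate,
// the less of each frame is left for ray-traced passes once the raster work
// is done. The costs below are made-up example numbers, not measurements.
int main() {
    const double rasterCostMs = 8.0;           // hypothetical raster + post cost
    const int targetFps[] = { 60, 120, 144 };

    for (int fps : targetFps) {
        const double frameBudgetMs = 1000.0 / fps;        // e.g. 16.7 ms at 60 fps
        const double rtBudgetMs    = frameBudgetMs - rasterCostMs;
        std::printf("%3d fps -> %.1f ms/frame, %.1f ms left for ray tracing\n",
                    fps, frameBudgetMs, rtBudgetMs);
    }
    return 0;
}
```

At 60 fps there's roughly 16.7 ms per frame; push the target to 144 fps and the whole budget shrinks below 7 ms, leaving little or nothing for ray-traced passes on top of the raster work.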

Lastly, DLSS is nothing more than another bold push to tether people's hardware to a cloud service.
It's a demonstration that their cards aren't up to snuff and thus they have to pre-bake compute.

Across the board, I am learning to keep my opinions to myself when it comes to how foolish certain products are from a valuation standpoint.
Let the people who want to blow their money do so.
I'm better served in buying the stock of companies that have the biggest skews and milk their consumers the most.

This is a big card release for developers and offline rendering.
For realtime, it's a clear and present money grab, and the people who reason about it frankly don't care... which is fine.
 
None of the cards released are capable of doing ray tracing in parallel without an impact on FPS.
If you expected ray tracing to be free of charge, you need to have your expectations checked.
With a hideously slanted, prejudiced, and confrontational interpretation.
Apologies if I sounded confrontational; I didn't mean to. However, there is nothing prejudiced in what I said; they're just basic counter-arguments.
 
They are pushing RT in games. The question is, why? My suggestion is because that better serves their goals in the more lucrative markets, and having settled on that design for those markets, nVidia looked at maximising profits by considering how to use that same design in a huge-margin 'prosumer' PC GPU. They also didn't 'bend over backwards to invent gaming uses' - they are already working on these as part of their extensive ML campaign, and realtime raytracing has value in productivity. Everything in Turing was happening anyway, whether 2070/2080 were released or not. Releasing them to the gaming space helps in several ways.

The reason to suggest it's too early - these dies are massive and expensive! Games have to target a viable mainstream tech level, and realtime raytracing is far beyond that mainstream (<$300 GPUs?). There are lots of techs that could have been implemented earlier in history if IHVs had ignored economic limits and crammed crazy amounts of silicon in. What's different now is nVidia can afford to cram crazy amounts of silicon in for the professional markets, creating a bonkers big chip, which they can also sell to a tiny portion of the PC gaming space by using that chip to drive a couple of features.

But no-one ever suggested that. :???: It's not done for marketing. Turing was developed for their professional, highly lucrative businesses. They then looked at ways to use that same hardware in the gaming space, and came up with DLSS.

When the only reason devs implement raytracing is that nVidia are funding it, and they wouldn't otherwise because the costs aren't recovered by the minuscule install base paying for it, it's too early. It's nice of nVidia to invest in realtime raytracing, but it's (probably) going to be a long time before raytracing becomes mainstream.

With a hideously slanted, prejudiced, and confrontational interpretation.

The real suggestion is Turing was developed 100% for the professional markets - ML, automotive, and professional imaging - with that design being intended to occupy the new flagship range of GPUs to make more profit from the same design in the PC space. nVidia have looked at the hardware features of Turing and considered how best to make them work in the PC space, resulting in the development of DLSS to make use of the Tensor cores. The end result is a profit-maximising strategy from the one uniform architecture.

Whatever, I'm out of this discussion now. People are gonna believe what they wanna believe and no-one's going to change their mind; certainly not if an alternative to their opinions is wholeheartedly considered childish or ridiculous. I'm going back to the more intelligent and considered discussions of the console forums. ;)

I've been saying this (here) all along. Turing is a "Quadro" GPU and its architectural design was primarily based on the needs of content creators (10x increase in baking maps, replacing CPU-based render farms, etc.) and machine learning.
 