Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Curious to see if the results just get better and better with each title; if so, perhaps transfer learning is happening.
Or it really is done on a per-title basis and they have to restart.
I suspect it shouldn't matter. They are training it against SSAA, so ideally there's enough commonality for the AI to carry over from title to title.
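By way of illustration, the supervised setup being described would look something like this (the data below is random placeholder, and the "network" is a trivial nearest-neighbour upscale standing in for the real model):

# Sketch of training against SSAA: the ground-truth target is the SSAA frame,
# and training minimizes the difference between the upscaled output and it.
# All data here is random placeholder.
import numpy as np

rng = np.random.default_rng(0)
ssaa_reference = rng.random((1080, 1920, 3)).astype(np.float32)   # "ground truth" SSAA frame
low_res = ssaa_reference[::2, ::2]                                 # pretend half-resolution input

naive_upscale = low_res.repeat(2, axis=0).repeat(2, axis=1)        # stand-in for the network's output
loss = np.mean((naive_upscale - ssaa_reference) ** 2)              # the quantity training would minimize
print(loss)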
 
Also curious how this DLSS "sharpness slider" works in Monster Hunter. Not sure if this is the first implementation or whether another game used this feature.
 
Also curious how this DLSS "sharpness slider" works in Monster Hunter. Not sure if this is the first implementation or whether another game used this feature.
There's no indication the Sharpness slider is tied to DLSS? Many games have a sharpness slider.
 
There's no indication the Sharpness slider is tied to DLSS? Many games have a sharpness slider.
Yea, it is in many games. I just noted that they mentioned "another DLSS feature with a new sharpness slider", so I wasn't quite sure.
 
Yea, it is in many games. I just noted that they mentioned "another DLSS feature with a new sharpness slider", so I wasn't quite sure.
Yeah, not sure either. No idea whether their article is purely based on the screenshots, with the active slider option just below DLSS being why they made that assumption, or whether it's something Nvidia has added. Since some of the negativity towards DLSS has been due to the blurring, one wonders if they (Nvidia) decided to add a post-process sharpening filter like CAS to help.
 
If we're keeping this technical, DLSS will continue to improve as they get better at it. I'd rather this topic not become RIS vs DLSS (or the politics of feature support). They are two entirely separate techniques and can have very different outputs for some items where we would expect similarity. I might actually be able to talk about DLSS technically, especially if more screenshots and the like come up, but I'm not going to do this RIS vs DLSS thing.

There are a lot of opportunities for machine learning to be leveraged in games. DLSS is an interesting technique, but its greatest weakness is the fixed compute time to run through the NN (we would call DLSS an edge-based NN, similar to saying 'Hey Google': those devices are locally equipped with an NN to detect it). In a typical upscale/AA/sharpening scenario where we either had (a) way more compute or (b) much more time, DLSS would produce sufficiently superior results. But that's not the interesting topic to discuss in the field, since in a way that's already solved; what's interesting is getting superior results in a frame time that needs to fit in 16 ms or less.
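To put rough numbers on that constraint, here's a back-of-envelope sketch of how a fixed inference cost eats into a 60 fps frame budget (the millisecond figures are made-up placeholders, not measured DLSS or GPU timings):

# Illustrative frame-budget arithmetic; all timings are assumptions.
FRAME_BUDGET_MS = 1000.0 / 60.0    # ~16.7 ms available per frame at 60 fps

render_ms = 11.0                   # assumed cost of rendering the lower-resolution frame
inference_ms = 3.0                 # assumed fixed cost of running the upscaling network

total_ms = render_ms + inference_ms
headroom_ms = FRAME_BUDGET_MS - total_ms

print(f"total: {total_ms:.1f} ms, headroom: {headroom_ms:.1f} ms")
print("fits the 60 fps budget" if headroom_ms >= 0 else "misses the 60 fps budget")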

DLSS is a good topic for that type of discussion.
 
What's with all the hate towards nvidia's solutions regarding RT/DLSS? I think they are or will be very similar to what AMD has or is going to have.
None of us hate the technology, especially RT. What many of us dislike is marketing bullshit, which Nvidia are experts at.

Anyone should be able to question how something is implemented rather than simply blindly bow to the green God.

The jury is still out on DLSS IMO, especially given the CAS feature AMD has provided to devs, which is seemingly producing comparable quality and performance with a lot less complexity. That's not to say DLSS doesn't have its strengths, but CAS wins out by far on implementation time/complexity.

RT itself, well, it's the first dedicated hardware-accelerated RT implementation we've seen for consumers, so there's not much we can compare to or analyze. There has been some question about the black-box approach and lack of flexibility, but it's still very early days in the gaming world.

So is what we're seeing exciting for gamers? Definitely. Should we simply regurgitate Nvidia's marketing? No. Question everything.
 
What's with all the hate towards nvidia's solutions regarding RT/DLSS? I think they are or will be very similar to what AMD has or is going to have.
There's no hate. It's about getting correct data. The impact of DLSS is measured both in quality of result and in industry adoption. A list of titles quantifying industry adoption is only as useful as it is accurate. If that list is inaccurate, you get a misrepresentation of the state of industry adoption. There is thus hate towards marketing numbers that obfuscate the truth and make correct analysis difficult/impossible.

It's a given that some titles would get DLSS support at launch, heavily backed by nVidia to launch the tech. Since then, has there been any independent movement within the industry, or do devs see adding DLSS support as an inefficient use of time/resources? Is the external training requirement a significant barrier to entry? Is the limitation of the technique to RTX cards uneconomical? If so, does that mean NN-based solutions don't have much of a future?
 
It's a given that some titles would get DLSS support at launch, heavily backed by nVidia to launch the tech. Since then, has there been any independent movement within the industry, or do devs see adding DLSS support as an inefficient use of time/resources? Is the external training requirement a significant barrier to entry? Is the limitation of the technique to RTX cards uneconomical? If so, does that mean NN-based solutions don't have much of a future?
Still too early to judge this one. But with DirectML now out in the wild, developers could be aiming to produce their own in-house variants of AA or upscaling.
I've no illusions about the number of machine learning engineer jobs in game studios; I've been looking around and there are quite a few. That doesn't necessarily mean the work is upscaling and AA, but we're definitely in for more usage of machine learning in games in the coming 4-5 years.
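As a sketch of what the simplest "in-house variant" might look like, here's a hypothetical per-frame inference call. ONNX Runtime is used purely as a stand-in for a DirectML-backed runtime, and the model file, tensor layout, and resolutions are all made up for illustration:

# Hypothetical "roll your own" upscaling pass; upscaler_2x.onnx, the tensor
# layout, and the resolutions are assumptions for illustration only.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("upscaler_2x.onnx")    # hypothetical trained model
input_name = sess.get_inputs()[0].name

# Fake low-resolution frame: NCHW layout, RGB, values in [0, 1].
low_res = np.random.rand(1, 3, 540, 960).astype(np.float32)

# One inference per frame; the model is assumed to output a 1080p frame.
high_res = sess.run(None, {input_name: low_res})[0]
print(high_res.shape)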
 
Opinion (which I've already expressed a year ago when DLSS was announced): ML AA/upscaling/hallucination for real-time gaming scenarios (un-recorded/not pre-rendered) is the dumbest thing ever. Nvidia knows that (or else they wouldn't have demonstrated it only with pre-recorded demos/benchmarks, when they had all the time in the world to train it on any other game of their choice or even build a simple playable tech demo. They never did.) Top-of-the-line CBR implementations, and now things like CAS (which can be used for upscaling when directly implemented in the engine), are certainly the more logical and cost-effective way to go. DLSS is/was one way to justify the silicon cost of the Tensor Cores in consumer GPUs.
 
I'm not sure why it's considered the dumbest thing ever; it's clear that it works. Every new technology will have growing pains, and nvidia was the one to take it on (foolishly, I would say). A lot of developers may not be interested in supporting a feature that is RTX-only, but that doesn't mean a third-party company couldn't come along and do the exact same thing using DirectML and support all GPUs.

I don't see DLSS as being any more expensive or less cost-effective than other solutions. You build the game as you see fit and let the AI company do the work. You take their model, integrate it back into your own engine, and add it to the tail end of your pipeline. Effort on behalf of the developer is quite minimal.
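Purely as an illustration of that integration story (everything below is a hypothetical stand-in, not a real engine API or the actual DLSS SDK), the trained model ends up as one more pass bolted onto the end of the frame:

# Hypothetical stand-ins only; no real engine API or the actual DLSS SDK is shown.
import numpy as np

def rasterize(scene):                       # render at a lower internal resolution
    return np.zeros((720, 1280, 3), dtype=np.float32)

def tone_map(frame):                        # stand-in for the existing post-process chain
    return np.clip(frame, 0.0, 1.0)

def ml_upscale(frame):                      # stand-in for the vendor-trained network
    return frame.repeat(2, axis=0).repeat(2, axis=1)   # pretend 2x spatial upscale

def render_frame(scene):
    frame = rasterize(scene)
    frame = tone_map(frame)
    frame = ml_upscale(frame)               # final pass, tacked onto the tail end
    return frame

print(render_frame(scene=None).shape)       # (1440, 2560, 3)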

Solving this as a general problem sounds trivial, but there will be edge cases that require refinement on all titles. That will also still be on nvidia to solve. But as they get better at solving these problems, they can apply the model forward to the next title, and the next.
 
I'm not sure why it's considered the dumbest thing ever; it's clear that it works. Every new technology will have growing pains, and nvidia was the one to take it on (foolishly, I would say). A lot of developers may not be interested in supporting a feature that is RTX-only, but that doesn't mean a third-party company couldn't come along and do the exact same thing using DirectML and support all GPUs.

I don't see DLSS as being any more expensive or less cost-effective than other solutions. You build the game as you see fit and let the AI company do the work. You take their model, integrate it back into your own engine, and add it to the tail end of your pipeline. Effort on behalf of the developer is quite minimal.

Solving this as a general problem sounds trivial, but there will be edge cases that require refinement on all titles. That will also still be on nvidia to solve. But as they get better at solving these problems, they can apply the model forward to the next title, and the next.
No sane developer will hand out its game code to a third party like that unless they are getting $$ in exchange, which is literally what is being done now. Unless I'm mistaken, every single one of the games with DLSS is part of Nvidia's TWIMTBP (whatever this crap is called now) program.
There's not one advantage to this thing. Not one, especially given the poor results after months (!) of training relative to other, more straightforward solutions that are available and produce better results.
 
I'm not sure why it's considered the dumbest thing ever;
I'd certainly like some clarification on the technical complaints rather than just a straight pooh-poohing of the idea.

I don't see DLSS as being any more expensive or less cost-effective than other solutions. You build the game as you see fit and let the AI company do the work. You take their model, integrate it back into your own engine, and add it to the tail end of your pipeline. Effort on behalf of the developer is quite minimal.
In terms of total effort, it's apparently a lot, with the training being processor-intensive and time-consuming. I'm also not sure it's minimal effort; if it were, why aren't more titles adding it?

Solving this as a general problem sounds trivial, but there will be edge cases that require refinement on all titles. That will also still be on nvidia to solve. But as they get better at solving these problems, they can apply the model forward to the next title, and the next.
But as a way to solve the upscaling problem, any progress in that department is in competition with the reconstruction methods that are proving far more efficient overall and have better adoption. At this point, one really needs to point to a legitimate reason to invest in ML solutions. What advantages will they bring to the table? We can look at the costs/benefits of ray tracing and see the place it has in the future of graphics, but I'm struggling to see any advantages to ML-based upscaling or IQ enhancement. If it could be advanced enough, it could be a one-size-fits-all solution for both upscaling and frame interpolation, but I doubt it could ever be advanced enough.
 
For the time being, both NVIDIA and Microsoft (through DirectML) think these techniques have a strong future. Microsoft has repeatedly expressed that in various presentations.
nVidia can be discounted because they're trying to sell hardware, so they will promote whatever USPs they have. As for MS, I don't recall anything in particular about game upscaling solutions. DML is about making ML openly available to be used however devs want, without any particular emphasis. Upscaling is presented as a use for ML, but that doesn't constitute any evaluation of its suitability. Have you a link to MS suggesting ML-based upscaling is/will be superior to other solutions?
 
That's referencing nVidia's work. The example is a static photo being upscaled. We're now past that SIGGRAPH 2018 talk and into the realm of real-world application of nVidia's ML upscaling. MS obviously believe there's a future in ML (there is), but I'm still not seeing anything to say that they believe the future of realtime game upscaling is ML-based.
 
There's not one advantage to this thing. Not one,
That's a gross overstatement. Sharpness filters (including CAS) have IQ problems in many areas; they exaggerate shimmering and noise as well. Also, their 1440p-to-4K scaling is atrocious. NN solutions provide better IQ in this specific area.
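As a toy illustration of the noise/shimmering point, a fixed-strength sharpen amplifies whatever noise is already present (this is a generic unsharp mask, not AMD's CAS, which adapts its strength to local contrast):

# Toy 1D example: fixed-strength sharpening boosts noise along with detail.
# Generic unsharp mask, not AMD's CAS implementation.
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(0.0, 1.0, 64)              # smooth gradient standing in for an image
noisy = signal + rng.normal(0.0, 0.01, 64)      # add mild noise

blur = np.convolve(noisy, np.ones(5) / 5, mode="same")
sharpened = noisy + 1.5 * (noisy - blur)        # unsharp mask with fixed strength

interior = slice(4, -4)                          # ignore convolution edge effects
print("noise std before:", np.std((noisy - signal)[interior]))
print("noise std after: ", np.std((sharpened - signal)[interior]))   # noticeably larger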
 
MS obviously believe there's a future in ML (there is), but I'm still not seeing anything to say that they believe the future of realtime game upscaling is ML-based.
Who here said it is the only solution? You somehow turned the argument from "DLSS has no future" into "DLSS is the only solution for upscaling". No one said it's the only solution; we are responding to the baseless, premature judgment that DLSS and NN solutions are dead in the water.
 