Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I don't see DLSS as being any more expensive or less cost effective than other solutions. You build the game as you see fit, you let the AI company do the work. You take their model, integrate it back into your own engine and add it to the tail end of your pipeline. Effort on behalf of the developer is quite minimal.
Disagree. A while back someone here pointed out that the cost of DLSS at 4K (upscaled from 1440p IIRC) was about 6ms. That's really a lot, but I don't know if this number is similar for other games.
What would be the cost to do a bicubic upscale and then something like CAS? I guess 2 ms or less. Would it look much worse, or worse at all? Likely not. How much time does it take to develop this? If you take AMD's code, a day. Or a bit more if you do fancy temporal reconstruction to increase quality. And you save the weeks of waiting on results from the AI company.
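For anyone wanting to try the kind of path described above, a rough CPU-side sketch of "bicubic upscale + sharpen" is only a few lines of Python with OpenCV. To be clear, the unsharp mask below is only a stand-in for AMD's CAS (the real thing is a GPU shader that adapts sharpening strength per pixel), and the 1440p-to-4K sizes are just the example from this discussion:

```python
import cv2

def upscale_and_sharpen(img_bgr, target_w, target_h, amount=0.5):
    """Bicubic upscale followed by a simple sharpening pass.

    The unsharp mask here is only a crude approximation of CAS, which
    adapts its strength per pixel based on local contrast; 'amount' is
    a fixed, hand-tuned strength instead.
    """
    # Bicubic upscale, e.g. 2560x1440 -> 3840x2160.
    up = cv2.resize(img_bgr, (target_w, target_h), interpolation=cv2.INTER_CUBIC)

    # Unsharp mask: sharpened = original + amount * (original - blurred).
    blur = cv2.GaussianBlur(up, (0, 0), sigmaX=1.0)
    return cv2.addWeighted(up, 1.0 + amount, blur, -amount, 0)

# Usage on a frame grab:
# frame = cv2.imread("frame_1440p.png")
# out = upscale_and_sharpen(frame, 3840, 2160)
# cv2.imwrite("frame_4k_sharpened.png", out)
```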

With upscaling still being the only application of tensor cores, their existence remains questionable. Upscaling is too simple to require NNs; human-written code can do it more efficiently.
I would prefer a smaller and cheaper chip, or more general purpose performance (which could handle that tiny bit of ML we need in games as well).
Either that, or finally a real application that sells me those cores.
 
In terms of total effort, it's apparently a lot, with the training being processor-intensive and time-consuming. I'm also not sure the developer effort is minimal - if so, why aren't more titles adding it?
They may have. But it's on Nvidia to release the drivers, which is a second strike against their solution and not necessarily against ML.

Developer requirements for ML may not need to be as extravagant as AA + upscale. Nvidia is attempting a one-size-fits-all solution here; not in the way that the AI adapts, but in terms of what developers want from ML. They only offer a single solution, entirely post-processed.

We invest in ML for the same reason we invested in ML in other industries: it's an adaptable solution that responds well, producing desirable behaviour in scenarios we can't account for.

That doesn't mean it necessarily happens right away. The industry is deep in research to develop solutions that are better than algorithmic ones. Part of the equation will require compute power. The other part will require ingenuity; the same ingenuity that has been at the forefront of graphics for the last several decades.

The fact is that within a year of launching, DLSS is competing very well. This is without console support, which is where we expect the majority of its growth to begin. Right now we are still talking about a niche hardware market. If you think about a very long timeline in which ML hardware solutions have a chance to gain widespread adoption, developers no longer need to focus on AA or reconstruction: just build the game native and let the AI do the work. That would be the ideal end-case scenario; right now DLSS has to compete against established algorithms that are embedded deep in many engines.
 
Disagree. A while back someone here pointed out that the cost of DLSS at 4K (upscaled from 1440p IIRC) was about 6ms. That's really a lot, but I don't know if this number is similar for other games.
What would be the cost to do a bicubic upscale and then something like CAS? I guess 2 ms or less. Would it look much worse, or worse at all? Likely not. How much time does it take to develop this? If you take AMD's code, a day. Or a bit more if you do fancy temporal reconstruction to increase quality. And you save the weeks of waiting on results from the AI company.
First generation of tensor cores is taking 6ms to perform AA and scale up to 4K, with strong results as of the latest Metro and Anthem showings.

If it is so trivial and simple to do, you are free to do a bicubic and follow up with CAS and see if you get better results than DLSS. The reality is, we should have seen it by now, or will soon if this is true; we don't leave low-hanging fruit around for fun.
 
First generation of tensor cores is taking 6ms to perform AA and scale up to 4K, with strong results as of the latest Metro and Anthem showings.

If it is so trivial and simple to do, you are free to do a bicubic and follow up with CAS and see if you get better results than DLSS. The reality is, we should have seen it by now, or will soon if this is true; we don't leave low-hanging fruit around for fun.
Isn't this literally what has been discussed over the past few days (and already posted in this thread)?
The "best" DLSS implementation in Metro (months of ML training etc.) vs RIS (https://www.techspot.com/article/1873-radeon-image-sharpening-vs-nvidia-dlss/), where regular upscaling + RIS is consistently better than DLSS in IQ?
 
Who here said it is the only solution? You somehow turned the argument from "DLSS has no future" into "DLSS is the only solution for upscaling". No one said it's the only solution; we are responding to the baseless, premature judgment that DLSS and NN solutions are dead in the water.
Okay, let me rephrase that to say I don't see anything from MS saying they believe in the value of NN upscaling, contrary to your statement that both nVidia and MS believe NN upscaling has a strong future. MS have used upscaling as one example of the applications of ML AFAICS. I don't see anything saying they feel it has a strong future as an upscaling solution.
 
The example is a static photo being upscaled.
No, not just a static photo, it was live gameplay.

I don't see anything saying they feel it has a strong future as an upscaling solution.
Did you bother to even check the link?

We couldn’t write a graphics blog without calling out how DNNs can help improve the visual quality and performance of games. Take a close look at what happens when NVIDIA uses ML to up-sample this photo of a car by 4x. At first the images will look quite similar, but when you zoom in close, you’ll notice that the car on the right has some jagged edges, or aliasing, and the one using ML on the left is crisper. Models can learn to determine the best color for each pixel to benefit small images that are upscaled, or images that are zoomed in on. You may have had the experience when playing a game where objects look great from afar, but when you move close to a wall or hide behind a crate, things start to look a bit blocky or fuzzy – with ML we may see the end of those types of experiences.

Also:

This year, we’re furthering our commitment to enable ML in games by making DirectML publicly available for the first time.

Many new real-time inferencing scenarios have been introduced to the developer community over the last few years through cutting edge machine learning research. Some examples of these are super resolution, denoising, style transfer, game testing, and tools for animation and art.

https://devblogs.microsoft.com/directx/directml-at-gdc-2019/
 
The "best" DLSS implementation in Metro (months of ML training etc etc) vs RIS (https://www.techspot.com/article/1873-radeon-image-sharpening-vs-nvidia-dlss/) where regular upscaling + RIS is constantly better than DLSS in IQ?
TechSpot has always considered 1800p scaling + a sharpen filter to be better than DLSS, so why would they say anything different after CAS? They basically replaced the sharpen filter with CAS and regurgitated the same thing: 1800p + CAS is better than DLSS, while admitting that 1440p-to-4K DLSS is far better than CAS.
 
No. I'm free (and so is anybody else) to discuss Nvidia DLSS IQ in the friggin' thread titled "Nvidia DLSS antialiasing discussion".
I don't have an issue discussing DLSS, but nothing about the recent discussion has been around the technology. It's largely just around the politics of adoption and the marketing surrounding it. The discussion isn't about its strengths or weaknesses. It's not about what it's doing or how it's done, or how much compute is required, or how feasible it is on non-RTX hardware.

It most certainly is about judging the technology.

Sure be at it then. I will gently ignore it.
 
No not just a static photo, it was a live gameplay.

Did you bother to even check the link?
Take a close look at what happens when NVIDIA uses ML to up-sample this photo of a car by 4x.

Static photo, like I said.

But I already mentioned that was nVidia's work. Everything else you've quoted is MS listing possible uses. Where does it say they feel there's a strong future in upscaling? You've put words into MS's mouth saying they feel NN upscaling has a strong future. ML has a strong future, for sure, but upscaling is not particularly backed or promoted. No one knows how well NN upscaling will progress over the years because it's virgin territory, which is why only nVidia, as the only entrant with a horse in this race, are really pushing it as a solution. If ongoing development with NN upscaling empowered through DML happens and it proves itself, the story may change. As for my original question though, whether NN upscaling has much of a future, MS aren't backing it the way you suggest. When you said they've presented several presentations, I thought you had content focussed on upscaling rather than general ML.
 
Static photo, like I said.
Sigh, Nope! Check the GDC presentation itself! Is this really hard to do?
Where does it say they feel there's a strong future in upscaling?
MS literally listed Super Resolution (upscaling) as the first use case for ML in games, and repeatedly talked about it in two blogs, what does that say about it? They wouldn't bother mentioning it like that if they thought it was an exercise in futility.

No-one knows how well NN upscaling will progress over the years because its virgin territory
Oh really? Then what were all those 100%-certain statements that it's a dead-in-the-water solution?
 
Sigh, Nope! Check the GDC presentation itself! Is this really hard to do?

MS literally listed Super Resolution (upscaling) as the first use case for ML in games, and repeatedly talked about it in two blogs, what does that say about it?
It says it's a use for ML and something devs can look into. MS are talking about machine learning as a technology, giving examples for devs to investigate.
They wouldn't bother mentioning it like that if they thought it was an exercise in futility.
Of course not, but that doesn't mean it'll ever prove itself to be a useful solution. It's an area for devs to investigate. Saying, "this is possible," is not the same as saying, "this is brilliant." Work in ML upscaling is well proven for static images, but not in realtime situations where DLSS is bleeding edge and currently being evaluated for performance and viability.

Oh really? then what were all those 100% sure statements that it's a dead in the water solution?
I never said it was dead in the water. I only asked questions - the questions raised when looking at industry adoption of ML upscaling.
 
I'll try some stuff instead of playing games.
If you're curious, the example that MS provides for DML is super resolution:
https://github.com/Microsoft/DirectML-Samples

I'll compare it to a standard TensorFlow Python super resolution algo and see what we're getting in terms of performance on a 1070.
This is just super resolution, to be clear.
Figuring out anti-aliasing is going to be rougher, but I think with enough reading of white papers I can perhaps come up with a shitty version. I'd be pretty happy if I can pull it off. Happier if I could do SSAA followed by the upscale. haha

edit: just so you guys know, not understanding DX12 and how it works is making this whole process brutal AF - reading through the code, there's a lot of resource-management code before we even get to the ML code.
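For anyone curious what a "standard TensorFlow Python super resolution algo" baseline looks like, a minimal SRCNN-style network is only three conv layers. This is just a sketch for rough timing comparisons; the layer widths follow the original SRCNN paper, not whatever Nvidia trained for the DirectML sample:

```python
import time
import numpy as np
import tensorflow as tf

def build_srcnn(channels=3):
    """Minimal SRCNN-style network. The input is a frame that has already
    been upscaled conventionally (e.g. bicubic 540p -> 1080p); the network
    only learns to restore detail on top of that."""
    inputs = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv2D(channels, 5, padding="same")(x)
    return tf.keras.Model(inputs, outputs)

model = build_srcnn()

# Rough per-frame inference cost on whatever GPU TensorFlow picks up.
frame = np.random.rand(1, 1080, 1920, 3).astype(np.float32)
model.predict(frame)                      # warm-up / graph build
start = time.perf_counter()
for _ in range(10):
    model.predict(frame)
print("ms per frame:", (time.perf_counter() - start) / 10 * 1000)
```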
 
This may explain some results:

Something I'll have to take note of: even if we moved DLSS off of the RTX tensor cores, we might not get the same result as with pure compute.
Massive implications for the amateur DS folks looking for hardware to train with; massive as in being blindsided by getting different results when moving off RTX.
 
Personally, if nobody mentions IEEE I expect flushed denormals and worse.
I don't think I respected IEEE until now, now that I see the need for it in certain applications.
The difference between being an armchair hobbyist and being a professional: perspective, needs and requirements are very different; and price points!

edit: finished reading through the deck. Overall pretty good for FP16, no need to worry unless you're pushing FP32, which is already 1/2 rate compared to FP16 on tensor cores IIRC.
 
I don't think I respected IEEE until now, now that I see the need for it in certain applications.
The difference between being an armchair hobbyist and being a professional: perspective, needs and requirements are very different; and price points!

edit: finished reading through the deck. Overall pretty good for FP16, no need to worry unless you're pushing FP32, which is already 1/2 rate compared to FP16 on tensor cores IIRC.

Nobody would say it's useless (you didn't, I'm exaggerating). There are a variety of faster modes or specific instructions which cut precision corners in "dirty" places. The detail is that, as an IHV, you have to be transparent about it, because without that insight it's hardly possible to trust your own judgement regarding results and guarantees in code you wrote yourself.
Bizarrely, this leads to similar reverse-engineering efforts from individuals, simply brute-force testing through the whole number space to get an idea of what's going on. It's a waste of time, because the information could have been provided easily.
Limited-precision FP math is already very challenging. When you don't know what's actually going on, science becomes handwaving. There certainly are limited cases of iterative algorithms which behave strictly inside bounds and have certain error self-correction properties (e.g. Newton-Raphson, Halley's method). NNs lend themselves sufficiently well to corrective feedback, but all in all I'd place them in the class of (sometimes erratic) approximations - not because of the low precision, but because they are inherently approximative.
I wonder if an algorithm dropped onto those tensor cores can even be fully understood: "change a bit here and tell me the effect"-type uncertainty. That definitely excludes it from use in some areas, but it's (coincidence of choice? :) ) well suited to not-so-scientific workloads ... such as games.

Allow me the moderately dismissive (and not too serious) stance here. :) A more serious proposal would be: for as long as you can understand the operation, you can use it for anything. Knowing or not knowing is the only thing that matters for constructive use.
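As a toy illustration of the error-self-correction point (using NumPy's float16 as a stand-in for whatever reduced-precision mode the hardware actually runs, so purely an assumption about the format): a Newton-Raphson reciprocal square root still converges to within float16's epsilon even though every intermediate is rounded.

```python
import numpy as np

def rsqrt_newton_fp16(x, iterations=5):
    """Newton-Raphson for y = 1/sqrt(x), with every intermediate rounded
    to float16. Each step roughly doubles the number of correct digits,
    so rounding errors from earlier steps get corrected later."""
    x = np.float16(x)
    y = np.float16(1.0)                    # deliberately poor initial guess
    half_x = np.float16(0.5) * x
    for _ in range(iterations):
        y = y * (np.float16(1.5) - half_x * y * y)
        y = np.float16(y)                  # keep the iterate in float16
    return y

approx = rsqrt_newton_fp16(2.0)
exact = 1.0 / np.sqrt(2.0)
print(approx, exact, abs(float(approx) - exact))
# The final error ends up well within float16's epsilon (~1e-3),
# despite every multiply and subtract being rounded.
```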
 
Prelim update: why this probably won't work unless I get a new GPU.
The issue is my hardware; unless I can get my hands on a Maxwell this isn't going to be an objective test.
The DirectML demo is the same one shown at GDC that upscales a 540p image to 1080p. Approximately double the resolution, 4x the pixels.
It requires FP16 performance to be solid, and Pascal is significantly neutered there compared to Maxwell. I'm looking at about 10fps right now. Looking at the hardware comparison, Maxwell is 2:1 on half-float shaders while Pascal is 1:64. I'm getting terrible performance; I think an iGPU with proper 1:1 FP16 performance would easily outperform my card here.

Anyway, small roadblock; some preliminary thoughts here.

Nvidia did provide this pre-trained model to MS, likely trained on FH3. But they only trained the super resolution part and not the anti-aliasing part. So while I can't exactly determine the cost of doing anti-aliasing followed by super resolution, we can get an idea of how long the super resolution takes (as soon as I can get a card that can run this test properly).

In that sense a pure upscale solution leveraging this model would be closer to TechSpot's comparisons between native/RIS/DLSS, mainly because all of them would then be using the same AA methods, with the only difference being the upscaling, so you can do a proper comparison to the source. With the actual released DLSS, they take Metro with no AA, strip TAA off, and apply their own AA before scaling up. You can't even disable TAA in Metro, so how Nvidia got their native shots is...? As I'm reading, people are attempting to turn it off using weird methods and still getting TAA, just less of it. It's a very interesting discussion piece: if developers are designing their whole pipeline around TAA (to the point that players can't change their AA methods), then it could provide some insight as to why some developers are behind on implementing DLSS, or why DLSS could look dramatically different from the native game (which could be designed around TAA entirely).

It's an interesting note to be mindful of, because if you're used to seeing TAA, then in screenshots you might be used to seeing the artifacts caused by TAA and consider that level of noise/blur/ghosting acceptable, since we're calling TAA "native".

Anyway, I've got to think about my next steps here. For me to add AA to the super resolution, I'd have to train a new model. I can leverage some of the layers from the code provided for super resolution, but I've got to figure out how they did the AA part. So that's the next step.

The goal here for me is to see if DLSS can be re-created in DML with similar performance. I'm going to assume they made DLSS with CUDA, so that's been on my mind; but I could be wrong.
If this gets close enough, then we could see how it would perform on Radeon VII or Navi cards and get an idea of the cost if they decide to use something like this on console, etc.
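For anyone who wants to follow along, the retraining step described above would look roughly like the Keras sketch below. Everything in it is hypothetical: the paired data (aliased low-res captures against SSAA ground truth) stands in for whatever Nvidia actually trains on, and the little pixel-shuffle network is just a placeholder, not the DirectML sample's architecture.

```python
import tensorflow as tf

# Hypothetical dataset: pairs of (low-res aliased frame, high-res SSAA frame).
# In practice these would be engine captures; random tensors stand in here.
def fake_pairs(n=8):
    for _ in range(n):
        yield tf.random.uniform((270, 480, 3)), tf.random.uniform((1080, 1920, 3))

ds = tf.data.Dataset.from_generator(
    fake_pairs,
    output_signature=(
        tf.TensorSpec(shape=(270, 480, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(1080, 1920, 3), dtype=tf.float32),
    ),
).batch(2)

# Small network with a learned 4x upsample (pixel-shuffle style).
inputs = tf.keras.Input(shape=(None, None, 3))
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(3 * 16, 3, padding="same")(x)
outputs = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 4))(x)
model = tf.keras.Model(inputs, outputs)

# Training against supersampled targets is what would teach the network
# both to upscale and to smooth edges, i.e. the "AA part".
model.compile(optimizer="adam", loss="mae")
model.fit(ds, epochs=1)
```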
 
So how nvidia got their native shots is...?
I believe the devs are provided a tool that hooks into the game engine and generates the massive SSAA shots to be submitted to Nvidia for training.

I'm confused as to how DML compares to DLSS. Who is doing the training with DML? Is it using a general trained model that is meant to be used across all games, with DML on the client applying that model for temporal upscaling? Surely that would be significantly inferior to a game-specific trained model like DLSS?
 
First generation of tensor cores is taking 6ms to perform AA and scale up to 4K, with strong results as of the latest Metro and Anthem showings.

Has anyone done blind subjective testing between DLSS, checkerboarding, TAAU, etc.? Otherwise how do you distinguish strong from weak? Comparing against half-a-decade-old TAA is a bit of a joke.
 