Nvidia DLSS 3 antialiasing discussion

I really like how you can have a very nice gain even with a CPU bottleneck, that's smart. Now I hope we'll find that the AI-generated frames are very nice 99% of the time. Like DF said, it will be complicated to evaluate the quality of all this...

I will stay with my 3090 for a while, but it's nice tech. Like nVidia or not, they're on a roll, tech-wise...
 
"We"? I think everyone here knows how Gamersnexus and other outlets will see DLSS 3, Raytracing and other features of Lovelace. Their opinion havent changed since Turing. So why should i care about some youtuber telling me that DLSS 3 isnt worth it? They said the same about DLSS and temporal upscaling. The present has proven them wrong. I hope they have learned from it.
^^This^^ one million times!
If you put yourself in Nvidia's shoes for a single second, giving DF an exclusive was the right thing to do in order to explain the technology. Sites like Hardware Unboxed, as of today, still dismiss RT, and it took them 2 years to acknowledge the benefits of DLSS (and only because FSR2 was launched lol). These benchmark bar generators (not reviewers, they don't deserve that title) don't have any clue about the technology; they just care about value for money at a given instant. They obviously have an audience, but they bring zero to the table for the industry and for how graphics rendering is evolving. They always fall back on the chicken-and-egg problem of hardware/new tech versus software/adoption. Terrible for a graphics enthusiast, and anti-progress :cry:

Regarding DLSS3, it's not perfect, but it's extremely impressive and it brings another choice. Some will use it immediately everywhere, some only occasionally, and some won't go near it. And that's fine! We are not all the same, but we should agree that more options are always better for the consumer. Looking at the screenshot below, I'm really surprised that DLSS3 easily destroys the reference offline AI solution. It shows what a hard problem this is to solve, and Nvidia got it 95% right from the start. Quite an achievement!

[Attached image: Chronos SlowMo V3 vs DLSS3, Spider-Man comparison]

Of course some quality is lost when pixel-peeping still images, but overall, in motion, most people will activate it, just like most people activate DLSS2, and that's a big win for Nvidia in the long run (already waiting for the AMD equivalent, which should come in... 2 years?)
 
I'm getting flashbacks...
This is an optional tech, with upsides, downsides, compromises and potential improvements upon further development.
In short, not dissimilar to what we've seen since the dawn of computer graphics.

There is validity in being sceptical when it comes to latency/responsiveness under certain conditions.
Benchmarks with actual numbers are not that far down the line.

The value of the complete outcome is highly subjective.
The additive effect of allowing new or improved tech to be viable is not.
Image reconstruction and ray tracing, for example, go hand in hand.
We will soon know if the addition of frame generation can bring new or better things to the table, as well as the tradeoffs that come with it.

Point being, dismissing new tech on speculation alone seems, to me at least, counterproductive, although, weirdly enough, not in any way unexpected if history is any indication.

When it's all said and done in a few months, if your individual preferences dictate that none of those things matter to you, or that they detract from rather than enhance your experience, then benchmarks that focus on rasterization performance alone are all you need, and they will be readily available for you to make an informed purchasing decision.
 
I'm getting flashbacks...
This is an optional tech, with upsides, downsides, compromises and potential improvements upon further development. [...]

I agree, but there is a dynamic in this industry where people are bothered if a technology they don't personally value is getting too much buzz. RT is a good example of this. Even if the tech itself is interesting, there's always some consumer-market impact that people also care about.

With DLSS3 it’s even more complicated because it’s not just an evolution of the rendering pipeline. It fundamentally changes the rules of the game. It’s reasonable for people to be bothered by the prospect of interpolated frames inflating fps charts because there’s no objective way to demonstrate the value of those extra frames.

If we continue down this road what’s stopping Nvidia from generating two interpolated frames between each pair of rendered frames?
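
To put rough numbers on that concern, here's a minimal back-of-the-envelope sketch (the 60 fps baseline and the generated-frame counts are assumptions for illustration, not measurements of anything): the displayed frame rate scales with the number of inserted frames, while the game-state update rate does not.

```python
# Illustrative only: how inserting generated frames inflates the displayed
# frame rate while the game simulation still ticks only on rendered frames.
# The 60 fps baseline is an assumed example, not a measurement.

def displayed_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Displayed frames per second when N generated frames are inserted
    for every rendered frame."""
    return rendered_fps * (1 + generated_per_rendered)

rendered_fps = 60  # frames the game actually simulates and renders
for n in (0, 1, 2, 3):
    print(f"{n} generated per rendered: "
          f"{displayed_fps(rendered_fps, n):.0f} fps displayed, "
          f"{rendered_fps} Hz game-state updates")
# 60, 120, 180, 240 fps on the chart, yet the game state still updates
# only 60 times per second in every case.
```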
 
With DLSS3 it’s even more complicated because it’s not just an evolution of the rendering pipeline. It fundamentally changes the rules of the game. It’s reasonable for people to be bothered by the prospect of interpolated frames inflating fps charts because there’s no objective way to demonstrate the value of those extra frames.

If we continue down this road what’s stopping Nvidia from generating two interpolated frames between each pair of rendered frames?
I can say the exact same thing for DLSS.
What is stopping nVidia/AMD/Intel from upscaling from 340p to 4K?

Re-read my post...
 
Well... probably nothing.
But is that a bad thing? The goal is to increase player comfort; if the hardware develops to the point where it can interpolate 2/3/5/etc. frames with minimal artifacts, then... why not?

There isn't anything inherently bad about the tech. The problem only arises when you start muddling things by claiming "4x performance!" 4x performance means nothing if it's a mix of "good" frames and "minimal artifact, no state update" frames.

DLSS1/2 is easier to argue for because it is just another method of attaining the same result. By definition, DLSS3 frame generation can never produce the same result, as the game itself is not updating any faster. So putting both numbers in the same bar graph is extremely misleading.

I can say the exact same thing for DLSS.
What is stopping nVidia/AMD/Intel from upscaling from 340p to 4K?

Re-read my post...

If they can come anywhere near target IQ that would be great.
 
I'm getting flashbacks...
This is an optional tech, with upsides, downsides, compromises and potential improvements upon further development. [...]

Checkerboarding, with its obvious artifacts, didn't receive the same responses here. It's a small but vocal group that's very much against new technologies, it seems, on platforms they don't even play games on.
NV sees the writing on the wall: it's going to be harder and harder to keep improving performance due to die/chip costs. You need new tech to keep improving performance across the board, both CPU- and GPU-wise (hence DLSS3). I mean, look at UE5; it's murdering the consoles' 8-core x86 Zen 2 CPUs, limiting them to 30fps... Yeah, we can get UE5 to 60fps on mid-range hardware, but then we're giving up fidelity... CPU requirements skyrocket, and DLSS3 is mitigating that.
Ray tracing has enabled us to keep making large graphics advances; without it we'd be stuck with lighting from one to two decades ago. We'd never be able to achieve that without hardware acceleration; the performance cost would be too high, or the fidelity too low in SW mode.

In fact it seems only NV is pioneering the way for new technologies to move forward. Intel is following suit, and AMD will have to as well (they are implementing HW RT, just not on the same level yet). Ray tracing and AI are the new techs, like pixel and vertex shaders two decades ago... it's going to improve.
And in all honesty, DLSS3 truly looks awesome, as Richard said. Who'd have imagined getting such performance boosts at image quality where it's almost impossible to tell which is which?
That's not to say NV is standing still raw compute/raster-wise either; it's still a healthy boost in raster performance.

It's the prices that can be complained about, perhaps. Though the 4090 is actually priced very well against what the 3090 was at launch; it's just the 4080 that's the problem. It's too early to say how the Lovelace line's pricing will play out down the line. Right now NV has a mountain of Ampere GPUs to clear, which are still very capable GPUs nonetheless.

Seeing what kind of visual fidelity can be achieved, like Racer X or the Portal overhaul or even 2077's new settings, that's beyond anything current consoles will ever do. The fact that modders can freely give us enhanced versions of old games is truly something as well. Something DF was all over, too.
 
...

With DLSS3 it’s even more complicated because it’s not just an evolution of the rendering pipeline. It fundamentally changes the rules of the game. It’s reasonable for people to be bothered by the prospect of interpolated frames inflating fps charts because there’s no objective way to demonstrate the value of those extra frames.

If we continue down this road what’s stopping Nvidia from generating two interpolated frames between each pair of rendered frames?

We know the value of extra frames. Better apparent smoothness in animation, reduced camera judder, reduced motion blur. That's true for rendered frames and generated ones. The generated frames have the added benefit of better hardware scaling (more frames with less rendering resources in terms of compute and bandwidth).

The downside of generated frames is latency and image quality. Latency can be measured pretty easily with a tool like Nvidia LDAT. Image quality is the only subjective component in this. Do you notice these generated frame artifacts during gameplay? People are sensitive to different things.

As for whether the generated frames count as better performance, I'd argue that they probably do. They just come with different caveats than other technologies on the GPU. People use the term "performance" broadly to mean different things. I could uncap my frames and let my GPU hit 100%, and I get higher fps but way worse input lag, which is how review sites typically benchmark. Or I can cap my fps at some target where my GPU never hits 100%; the fps is significantly lower, but the input lag is always lower and stable. Which is performing better? Those two approaches make different trade-offs in terms of image quality (smoothness, judder, blur) vs input lag.
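
A toy model of that capped-vs-uncapped trade-off, in case numbers help (the fps values, queue depths and the latency model below are all made-up assumptions for illustration, not measurements of any real game or GPU):

```python
# Hypothetical numbers only: a crude model where input lag grows with frame
# time and with how many frames sit queued between input sampling and display.

def approx_input_lag_ms(fps: float, frames_in_flight: float) -> float:
    frame_time_ms = 1000.0 / fps
    return frames_in_flight * frame_time_ms

# Uncapped, GPU pegged at 100%: higher fps, but the render queue tends to fill.
uncapped = approx_input_lag_ms(fps=140, frames_in_flight=3)
# Capped below the GPU limit: lower fps, but only ~1 frame queued at a time.
capped = approx_input_lag_ms(fps=120, frames_in_flight=1)

print(f"uncapped ~140 fps: ~{uncapped:.0f} ms of queue-induced lag")
print(f"capped   ~120 fps: ~{capped:.0f} ms of queue-induced lag")
# The lower-fps capped case wins on input lag in this toy model; which one is
# "performing better" depends on what you value.
```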
 
The problem only arises when you start muddling things by claiming "4x performance!" 4x performance means nothing if it's a mix of "good" frames and "minimal artifact, no state update" frames
Performance claims are for competing marketing departments.
What is more fun is baselines and targets.

The baseline is where you are; the target is what you want to accomplish.
Once those are clear (personally, as in individual preferences, or even industry-wide), things get a lot more interesting.
 
Does it matter? What if you can't "feel" the difference in latency but you can save $500 on the CPU?

If who can't feel it? The reviewer drawing the bar graphs, or the person reading the review? Or is it the Nvidia marketing guy making the slides?

The fact is, putting different kinds of data points in the same graph is fundamentally misleading. The current definition of a frame includes a state update of the game. A DLSS3 frame doesn't abide by that definition.
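
A minimal sketch of that distinction, assuming a conventional update-then-render loop and treating frame generation as pure interpolation between two already-rendered frames (update/render/interpolate/present are placeholder stubs, not any real API):

```python
# Hypothetical sketch: every rendered frame runs a simulation update, while a
# generated frame is interpolated from the previous rendered frames and never
# touches game state. All functions below are placeholder stubs.

state_updates = 0
frames_displayed = 0

def update(state):                 # advance input, physics, AI
    global state_updates
    state_updates += 1
    return state + 1

def render(state):                 # stand-in for actual rendering
    return f"frame@{state}"

def interpolate(a, b):             # stand-in for optical-flow frame generation
    return f"interp({a},{b})"

def present(frame):                # stand-in for display/scanout
    global frames_displayed
    frames_displayed += 1

def run(rendered_frames=60, generated_per_rendered=1):
    state, prev = 0, None
    for _ in range(rendered_frames):
        state = update(state)
        curr = render(state)
        if prev is not None:
            for _ in range(generated_per_rendered):
                present(interpolate(prev, curr))   # no update() on this path
        present(curr)
        prev = curr

run()
print(frames_displayed, "frames displayed,", state_updates, "state updates")
# -> 119 frames displayed, 60 state updates: an fps chart counts the former,
#    but the game only reacts to input at the rate of the latter.
```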
 
Nvidia brought a graphics revolution with RTX and DLSS. DLSS 3 looks great and takes away the CPU bottleneck. I'm guessing DLSS 4 will upscale 1080p to 8K or 12K at 450fps lol ✅😁
 
We know the value of extra frames. Better apparent smoothness in animation, reduced camera judder, reduced motion blur. That's true for rendered frames and generated ones. The generated frames have the added benefit of better hardware scaling (more frames with less rendering resources in terms of compute and bandwidth).

I’m not debating the benefits of the generated frames. I’m looking forward to 3rd party verification of those benefits.

As for whether the generated frames count as better performance, I'd argue that they probably do. They just come with different caveats than other technologies on the GPU.

Yes, and one huge caveat here is that it does not actually update the state of the game each frame. I think that distinction makes any performance comparison fundamentally moot, because you're no longer comparing the same things.
 
Does it matter? What if you can't "feel" the difference in latency but you can save $500 on the CPU?

Of course it matters if what's being compared is not the same thing, but something that offers benefits while also carrying compromises.

By this rationale we could ditch 'native' rendering comparisons entirely, and just compare reconstruction - even if it's against a competitor's native. Then of course, if those compromises between reconstructed and native are largely irrelevant, so are DLSS's advantages compared to FSR/XeSS! Close enough, after all.
 
Re: Options

It's good to have options, yes! That's why we put up with the hassles of PC as a gaming platform - often brought about by all these options. :)

Things that have inherent compromises in bringing about a potentially better experience are always best as an option. People being 'against the technology' is such a silly strawman that's been erected in this thread. Once again, this notion that this technology exists in some vacuum that's not tied to a commercial product (and one that's been marketed as the prime selling feature of enthusiast-class cards that the majority of gamers will not have access to, especially in this economic climate) and thus we shouldn't employ the requisite skepticism is so odd. This is especially so when the company producing this technology has made a considerable effort to obscure the generational uplift you are receiving without employing this 'choice'.

I think what some see as 'against the choice' of DLSS3 is rather skepticism towards the notion that these cards - the 4080 16/12Gb in particular, are significant jumps over their predecessors because of DLSS3 and as such, their asking prices are therefore warranted. It's not either/or; you can be interested and even quite positive about the development of motion interpolation in games, but still somewhat balk at getting excited about this "choice" you've been given, because it's currently tied to what is seen as a considerable price hike over previous new generations. When DLSS3 is more 'democratized' (in a sense) by being available on midrange cards at some point in 2023, then the argument of "well, just ignore it if you don't want to use it", and the comparison to DLSS as a cost-saving measure for your res/fps target, will be more applicable. Currently though, this technology is joined at the hip with the most expensive debut of any new generation of GPUs.

Although not necessarily on this forum, the other argument I've seen is that DLSS3 is actually reducing choice due to the silicon budget devoted to this. Basically, the argument is that Nvidia could have made considerable advancements in non-reconstructed rendering at the same or lower price points if they just weren't so wed to their 'obsession' with AI and reconstruction tech in general. This is hardly compelling to me either, as it seems to me we are facing hard limits in available bandwidth, and the choice is either to devote an assload of die space to very fast cache, or...? Myself and others have arguments against Nvidia's product segmentation choices with Ada, but this theory will be put to the test in just over a month - it basically requires you to believe AMD has had a breakthrough in their Infinity Cache/chiplet architecture that will completely blindside Nvidia/Intel from a price/performance perspective.

So we'll see. But yeah, I'd say skepticism on that front is uh, more than warranted too.
 
That's why we put up with the hassles of PC as a gaming platform

There's not much 'hassle' at all anymore. You'd know if you tried.

such a silly strawman

Look in the mirror.

the 4080 16/12Gb in particular, are significant jumps over their predecessors because of DLSS3 and as such

They are faster than any Ampere GPU out there, and it's not because of DLSS3.

Although not necessarily on this forum, the other argument I've seen is that DLSS3 is actually reducing choice due to the silicon budget devoted to this. Basically, the argument is that Nvidia could have made considerable advancements in non-reconstructed rendering at the same or lower price points if they just weren't so wed to their 'obsession' with AI and reconstruction tech in general. This is hardly compelling to me either, as it seems to me we are facing hard limits in available bandwidth, and the choice is either to devote an assload of die space to very fast cache, or...? Myself and others have arguments against Nvidia's product segmentation choices with Ada, but this theory will be put to the test in just over a month - it basically requires you to believe AMD has had a breakthrough in their Infinity Cache/chiplet architecture that will completely blindside Nvidia/Intel from a price/performance perspective.

Normal raster still saw a healthy improvement. And no, ray tracing and AI reconstruction are not 'obsessions'; they are modern technologies embraced not only by NV but by the whole tech market. I'd say the over-engineered SSD in the PS5 is more of an obsession, and it leads to nowhere near the advantage over what other vendors give you.
 