Nvidia DLSS 3 antialiasing discussion

If Nvidia can quickly calculate a half step between two real frames, why not two additional quarter steps as well? Generate three frames between two real ones.

That will probably be added in future GPU generations.

Do you have any evidence to back up such claims?
 
If Nvidia can quickly calculate a half step between two real frames, why not two additional quarter steps as well? Generate three frames between two real ones.

That will probably be added in future GPU generations.

It’s a reasonable next step that probably requires an even beefier OFA and moar tensors. It’s basically just chopping up the optical flow and motion vectors into smaller chunks and looping over them.
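For the sake of argument, here's a minimal sketch of what "chopping up the flow and looping over it" could look like: toy C++, with names and structure entirely made up for illustration and nothing to do with Nvidia's actual OFA/tensor pipeline. Each intermediate frame i of N reuses the same flow field, just scaled by i/(N+1), and the unfilled pixels are exactly the disoccluded regions that would need fancier handling.

```cpp
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

struct Frame {
    int width = 0, height = 0;
    std::vector<float> pixels;                  // grayscale, for simplicity
};

// Hypothetical helper: forward-warp `src` by `flow * t` with a nearest-neighbour splat.
Frame warpByScaledFlow(const Frame& src, const std::vector<Vec2>& flow, float t) {
    Frame out{src.width, src.height, std::vector<float>(src.pixels.size(), 0.0f)};
    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            int idx = y * src.width + x;
            int nx = x + static_cast<int>(flow[idx].x * t + 0.5f);
            int ny = y + static_cast<int>(flow[idx].y * t + 0.5f);
            if (nx >= 0 && nx < src.width && ny >= 0 && ny < src.height)
                out.pixels[ny * src.width + nx] = src.pixels[idx];
        }
    }
    return out;                                 // zeros left behind are disoccluded holes
}

// Generate N intermediate frames between two real frames by scaling one flow field.
std::vector<Frame> generateIntermediates(const Frame& prev, const std::vector<Vec2>& flow, int n) {
    std::vector<Frame> result;
    for (int i = 1; i <= n; ++i)
        result.push_back(warpByScaledFlow(prev, flow, float(i) / float(n + 1)));
    return result;
}

int main() {
    Frame prev{4, 4, std::vector<float>(16, 1.0f)};
    std::vector<Vec2> flow(16, Vec2{2.0f, 0.0f});             // everything moves two pixels right
    auto quarterSteps = generateIntermediates(prev, flow, 3); // three "quarter step" frames
    std::printf("generated %zu intermediate frames\n", quarterSteps.size());
}
```

Each extra in-between frame is essentially the same warp with a different t, which is why "more OFA throughput plus more tensor work" plus better hole filling seems like a plausible path to it.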
 
I'm getting flashbacks...
This is an optional tech, with upsides, downsides, compromises and potential improvements upon further development.
In short, not dissimilar to what we've seen since the dawn of computer graphics.

[...]
Point being, dismissing new tech on speculation alone seems, to me at least, counterproductive, although weirdly enough, not in any way unexpected if history is any indication.
Completely agree, this is the DLSS 1.x/Turing RT moment: a very exciting new technology which is far from perfect but has so much room to grow and improve. People are looking at it only as the current implementation vs. much more developed technologies, which is fair, as we can only compare what we have, not what it could be; however, dismissing it completely is not fair. We are far closer to the top of the S-curve for some technologies (traditional rasterisation, maybe DLSS 2?) and near the bottom on others (RTRT, AI upscaling/frame generation), and given Nvidia's track record developing DLSS it's fair to think it'll improve significantly over the next 2 or 3 years; maybe DLSS 4 will be that next big step. It might take a while to become the overall better solution, or it might be (likely is) another stepping stone to something greater, I don't know, but if people are dismissing new tech because it's not better in every way than more developed solutions, that's very close-minded.

Just like some dismissed RT: first-generation RT hardware and software vs. 20+ years and 15+ generations of rasterisation hardware and software development, where the performance cost is significant and won't be worth it for most people. That doesn't mean the technology as a whole should be dismissed; 4th or 5th gen RT/ML upscaling/frame gen will be worlds away from their progenitors, and in 5 years they'll be far more "normal".
 
Simple thought experiment.

DLSS2 + fast CPU: 120fps
DLSS3 + slow CPU: 120fps

Are they the same?

Latency-wise, using Reflex alone on the slow CPU will make the latency comparable to the fast CPU with DLSS2 and no Reflex.

However, your slow CPU now has a chance to deliver smoother output: reducing judder and blur, overcoming CPU-limited scenes, overcoming fps locks on cut scenes and fps caps during gameplay for old or new games, and, most importantly, delivering a smooth display in heavily path traced/ray traced games. If this is our way to have plenty of path traced games right now, so be it.

Not all games need the tightest of latency; Red Dead Redemption 2 stands as a prime example of this, and even our fastest CPUs today crumble in heavy single-threaded workloads for current and old games. This will help tremendously in those cases.
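To put rough numbers on that thought experiment (idealized frame intervals only; real input-to-photon latency also involves the render queue, Reflex, display scanout and so on):

```cpp
// Back-of-the-envelope comparison for the thought experiment above.
#include <cstdio>

int main() {
    // Case A: DLSS2 on a fast CPU, every displayed frame is a real frame.
    double a_display_fps = 120.0, a_sim_fps = 120.0;

    // Case B: DLSS3 frame generation on a slow CPU; every other displayed
    // frame is interpolated, so the game state only advances at half the rate.
    double b_display_fps = 120.0, b_sim_fps = 60.0;

    std::printf("Case A: new image every %.1f ms, game state every %.1f ms\n",
                1000.0 / a_display_fps, 1000.0 / a_sim_fps);
    std::printf("Case B: new image every %.1f ms, game state every %.1f ms\n",
                1000.0 / b_display_fps, 1000.0 / b_sim_fps);
    // Both cases present every ~8.3 ms, but Case B only samples input and
    // advances the game state every ~16.7 ms, and interpolation has to hold a
    // real frame back before presenting, which is the gap Reflex is offsetting.
}
```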
 
Latency-wise, using Reflex alone on the slow CPU will make the latency comparable to the fast CPU with DLSS2 and no Reflex.

Why would you choose to turn off Reflex with DLSS2 but turn it on for DLSS3?

However, your slow CPU now has a chance to deliver smoother output: reducing judder and blur, overcoming CPU-limited scenes, overcoming fps locks on cut scenes and fps caps during gameplay for old or new games, and, most importantly, delivering a smooth display in heavily path traced/ray traced games. If this is our way to have plenty of path traced games right now, so be it.

The slow CPU is doing none of those things just because DLSS3 is enabled. It’s still slow and still updating game state slower than the fast CPU. DLSS3 doesn’t fix that.
 
Why would you choose to turn off Reflex with DLSS2 but turn it on for DLSS3?



The slow CPU is doing none of those things just because DLSS3 is enabled. It’s still slow and still updating game state slower than the fast CPU. DLSS3 doesn’t fix that.

Not 100% sure, but I think a lot of games update the game state at a fixed rate regardless of fps, so movement, physics, etc. remain consistent regardless of performance (e.g. driving at 60 fps feels the same as at 120 fps). Animation usually runs at the same rate as the fps so it doesn't look weird, but I think the event loop that polls for user input etc. would be fixed rate.
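For reference, the pattern being described is the classic fixed-timestep/accumulator loop; a minimal sketch (placeholder hooks, not any particular engine's API):

```cpp
#include <chrono>
#include <cstdio>

// Placeholder hooks for illustration only.
static void pollInput() {}
static void updateGameState(double dt) { std::printf("sim step %.4f s\n", dt); }
static void renderFrame(double alpha) { (void)alpha; /* draw here, optionally blending states */ }

int main() {
    using clock = std::chrono::steady_clock;
    const double fixedDt = 1.0 / 60.0;   // game state always advances in 60 Hz steps
    double accumulator = 0.0;
    auto previous = clock::now();

    for (int frame = 0; frame < 300; ++frame) {   // a bounded stand-in for the real loop
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run however many fixed-size simulation steps the elapsed time allows,
        // so physics and input polling behave the same at 60 fps and at 120 fps.
        while (accumulator >= fixedDt) {
            pollInput();
            updateGameState(fixedDt);
            accumulator -= fixedDt;
        }

        // Rendering (and animation tied to it) runs at whatever rate the GPU manages.
        renderFrame(accumulator / fixedDt);
    }
}
```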
 

Mikkel Gjoel extracted the generated frames and turned them into a video. You can follow the link to download it. It looks pretty good.

Definitely interested to see more examples of how it handles occlusion in other games going forward; hopefully they can improve this, or it's just a particular issue with this game atm.

As a % of frames and per the entire video, which doesn't have Peter occluded most of the time, the effect is likely very minimal especially at that framerate. However, every DLSS3 generated frame in that segment has pretty significant artifacting:

[attached screenshots of the generated frames showing the artifacting]


Damn Peter's socks are being blown off!

[attached screenshot]
 
Definitely interested to see more examples of how it handles occlusion in other games going forward; hopefully they can improve this, or it's just a particular issue with this game atm.

As a % of frames and per the entire video, which doesn't have Peter occluded most of the time, the effect is likely very minimal especially at that framerate. However, every DLSS3 generated frame in that segment has pretty significant artifacting:
If only it were just the protagonist, but it can mess up even static geometry, worse than any of those examples, as seen when he's running up the windows of some building to the roof.
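For what it's worth, one common generic way interpolators flag these trouble spots (no claim that this is what Nvidia actually does) is a forward-backward flow consistency check: pixels whose forward and backward motion vectors don't roughly cancel have no reliable match in one of the real frames, which is exactly where the interpolator has to guess and where feet and window frames go missing. A tiny illustrative sketch:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Mark pixels whose flow from frame A->B and B->A are inconsistent.
std::vector<bool> occlusionMask(const std::vector<Vec2>& forward,
                                const std::vector<Vec2>& backward,
                                int width, int height, float tolerance = 0.5f) {
    std::vector<bool> occluded(forward.size(), false);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int idx = y * width + x;
            int tx = x + static_cast<int>(std::lround(forward[idx].x));
            int ty = y + static_cast<int>(std::lround(forward[idx].y));
            if (tx < 0 || tx >= width || ty < 0 || ty >= height) {
                occluded[idx] = true;               // flows out of frame
                continue;
            }
            const Vec2& back = backward[ty * width + tx];
            float ex = forward[idx].x + back.x;     // should cancel to ~0
            float ey = forward[idx].y + back.y;
            occluded[idx] = std::sqrt(ex * ex + ey * ey) > tolerance;
        }
    }
    return occluded;
}

int main() {
    int w = 4, h = 1;                               // a toy 4x1 image
    std::vector<Vec2> fwd{{1, 0}, {1, 0}, {0, 0}, {0, 0}};
    std::vector<Vec2> bwd{{0, 0}, {-1, 0}, {-1, 0}, {0, 0}};
    auto mask = occlusionMask(fwd, bwd, w, h);
    for (bool o : mask) std::printf("%d ", o ? 1 : 0);
    std::printf("\n");                              // prints "0 0 1 0": pixel 2 has no consistent match
}
```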
 
You're not even going to see those artifacts when they're slotted in between native-looking frames at much higher framerates.

Watching this tech evolve is going to be something I tell ya.

You can find artifacts with any reconstruction or even upscaling technology, even on other platforms, especially when zooming in that much on stills. And yeah, it's the first iteration, not even out to the public yet, but ok.
 
You're not even going to see those artifacts when they're slotted in between native-looking frames at much higher framerates.

Perhaps - which I mentioned regardless. Considering we routinely pixel-peep here wrt zooming into distant objects to determine rendering inconsistencies when discussing different reconstruction methods, I do think it's not out of the question to note a new reconstruction addition might cause the protagonist to occasionally lose a foot, no matter how difficult it might be for you to hypothetically notice. :)

As for the "still less artifacts than the PS5 version" (I literally took this poster off ignore an hour ago, took them 10 minutes), of course not. As I've shown, Spiderman can have more artifacts than the PC version with just DLSS2, let alone 3.

Again I'm not expecting DLSS3 to be flawless, but that's just yet another troll reply.

[attached screenshot]

Watching this tech evolve is going to be something I tell ya.

Potentially! I certainly hope we're going to see DLSS 2 continue to evolve as well though; there's definitely still room for improvement there. I mean, there are some hard ceilings here.
 