Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

That seems extremely cheap. I'd pick it up in a heartbeat if I had an RTX GPU. As it stands I'm tempted to pick it up even with my paltry 1070 but I already have a long queue of games waiting to be played.

Also the minimum specs are interesting. The AMD spec seems to be much lighter than the Nvidia spec in that regard. I'm guessing it must be pretty friendly to GCN vs Maxwell. I'd love to know how far it will scale down in Classic mode.
It's a very fun game and looks very decent without raytracing. I already had a lot of fun with it yesterday.
 
I really don't think any such jury is still out, since Hardware Unboxed tested this very specifically and in multiple guises. FSR was inarguably better.
It's still out.
A. I have zero faith in HUB's ability to properly assess IQ matters, especially when it comes to things like IHV-provided upscaling solutions. Their bias is just too obvious.
B. The sample of games tested so far is too small to make such wide claims.

And DLSS 1.0 indeed had overall similar image quality to just lowering the resolution somewhat (like from 4K to 1800p or so) with a similar performance profile.
No, it didn't. I've tried it myself in whatever games it was implemented in. You could either get a similar image quality at lower performance (+ the usual TAA issues) or similar performance with lower IQ.

Not in every title (FFXV would be an exception, for example), but in quite a lot.
So it didn't, yeah. There weren't "quite a lot" of DLSS1 titles to begin with.

Adding sharpening a la CAS wasn't the 'equalizer'; that was what could actually put it ahead in some cases.
It was an equalizer. It was also often used improperly by HUB and the like, as if you couldn't apply sharpening to the DLSS1 result.

And FSR seems to be a step ahead of this, at least using the Ultra Quality option. Which does make it useful.
Well, duh: FSR UQ is using a higher source resolution than anything available in DLSS1 or DLSS2.
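For reference (assuming the published scale factors of 1.3x per axis for FSR Ultra Quality and 1.5x for DLSS Quality): at a 4K output, FSR UQ renders roughly 3840/1.3 × 2160/1.3 ≈ 2954×1662, while DLSS Quality renders 2560×1440, so FSR UQ starts from about a third more pixels.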
The problem is that it's not really DLSS which FSR will be competing with.
DLSS will still be implemented as an Nv exclusive feature providing the best overall result.
What remains to be seen is if FSR will be implemented when there is already a different solution in place.
 
DLSS at least did some form of advanced AA and reconstruction of certain objects without needing the TAA-modified input. FSR does nothing of the sort and fails especially in scenarios without enough information. Claiming that FSR delivers better picture quality than DLSS is wrong.
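To make the "not enough information" point concrete, here is a purely illustrative CUDA sketch (neither FSR's nor DLSS's actual algorithm; FSR 1.0 is an edge-adaptive spatial upscale plus sharpening, DLSS a motion-vector-fed neural network, and all names below are made up): a spatial-only upscaler can only ever work with the pixels of the current frame, while a temporal approach also blends in reprojected history and therefore has more samples to reconstruct from.

```cuda
#include <cuda_runtime.h>

// Nearest-neighbour fetch with clamping, just to keep the sketch self-contained.
__device__ float3 fetch(const float3* img, float u, float v, int w, int h) {
    int x = min(max(__float2int_rn(u), 0), w - 1);
    int y = min(max(__float2int_rn(v), 0), h - 1);
    return img[y * w + x];
}

// Spatial-only upscale: every output pixel is built from the current
// low-res frame alone, so detail that was never rendered cannot come back.
__global__ void upscale_spatial(const float3* lowres, float3* out,
                                int lw, int lh, int ow, int oh) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= ow || y >= oh) return;
    out[y * ow + x] = fetch(lowres, (x + 0.5f) * lw / ow, (y + 0.5f) * lh / oh, lw, lh);
}

// Temporal accumulation: blend the current frame with history reprojected
// through motion vectors, so extra samples accumulate across frames.
__global__ void upscale_temporal(const float3* lowres, const float3* history,
                                 const float2* motion, float3* out,
                                 int lw, int lh, int ow, int oh, float alpha) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= ow || y >= oh) return;
    float3 cur = fetch(lowres, (x + 0.5f) * lw / ow, (y + 0.5f) * lh / oh, lw, lh);
    float2 mv  = motion[y * ow + x];                  // where this pixel was last frame
    float3 prv = fetch(history, x - mv.x, y - mv.y, ow, oh);
    float3 o;                                         // simple exponential history blend
    o.x = alpha * cur.x + (1.f - alpha) * prv.x;
    o.y = alpha * cur.y + (1.f - alpha) * prv.y;
    o.z = alpha * cur.z + (1.f - alpha) * prv.z;
    out[y * ow + x] = o;
}
```

The second kernel is the rough shape of work that gives temporal reconstructors extra information to play with; the first one is stuck with whatever the current frame contains.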
 
As of June 25, 2021

DLSS 2.0 supported titles - 51
  • Amid Evil
  • Anthem
  • Battlefield V
  • Bright Memory
  • Call Of Duty: Black Ops Cold War
  • Call Of Duty: Modern Warfare
  • Call Of Duty: Warzone
  • Chernobylite
  • Control
  • CRSED: F.O.A.D
  • Crysis Remastered
  • Cyberpunk 2077
  • Death Stranding
  • Deliver Us The Moon
  • Edge Of Eternity
  • Enlisted
  • Everspace 2
  • F1 2020
  • Final Fantasy XV
  • Fortnite
  • Ghostrunner
  • Gu Jian Qi Tan Online
  • Into The Radius
  • Iron Conflict
  • Justice
  • LEGO Builder’s Journey
  • Marvel’s Avengers
  • Mechwarrior V: Mercenaries
  • Metro Exodus / Metro Exodus Enhanced Edition
  • Minecraft
  • Monster Hunter: World
  • Moonlight Blade
  • Mortal Shell
  • Mount & Blade II Bannerlord
  • Necromunda: Hired Gun
  • Nioh 2: The Complete Edition
  • No Man’s Sky
  • Outriders
  • Pumpkin Jack
  • Rainbow Six Siege
  • Redout: Space Assault
  • Scavengers
  • Shadow of the Tomb Raider
  • The Fabled Woods
  • The Medium
  • The Persistence Enhanced
  • War Thunder
  • Watch Dogs Legion
  • Wolfenstein Youngblood
  • Wrench
  • Xuan-Yuan Sword VII

Upcoming DLSS 2.0 Games - 16
  • Atomic Heart
  • Boundary
  • Doom Eternal
  • Dying: 1983
  • FIST: Forged In Shadow Torch
  • Five Nights At Freddy’s Security Breach
  • Icarus: First Cohort
  • JX3
  • Naraka: Bladepoint
  • Ready Or Not
  • Red Dead Redemption 2
  • Rust
  • System Shock (2021, available now in the demo)
  • The Ascent
  • The Persistence
  • Vampire: The Masquerade – Bloodlines 2

I’m pretty sure BFV isn’t using DLSS 2.0. Is there a source for this list?
 
Since I missed a few, below is the official listing of DLSS titles/apps as of June 24, 2021.
NVIDIA RTX: List Of All Games, Engines And Applications Featuring GeForce RTX-Powered Technology And Features | GeForce News | NVIDIA

Supported Games/Apps - 63

  • AMID EVIL (Game): DLSS
  • Anthem (Game): DLSS
  • Aron's Adventure (Game): DLSS
  • Battlefield V (Game): RT, DLSS
  • Bright Memory (Game): RT, DLSS
  • Call of Duty: Black Ops Cold War (Game): RT, DLSS
  • Call of Duty: Modern Warfare (Game): RT, DLSS
  • Call of Duty: Warzone (Game): DLSS
  • Chernobylite (Game): DLSS
  • Control (Game): RT, DLSS
  • CRSED: F.O.A.D (Game): DLSS
  • Crysis Remastered (Game): RT, DLSS
  • Cyberpunk 2077 (Game): RT, DLSS
  • Dabanjia BIM (App): RT, DLSS
  • Death Stranding (Game): DLSS
  • Deliver Us The Moon (Game): RT, DLSS
  • Dimension 5 Techs D5 Render (App): RT, DLSS
  • Edge of Eternity (Game): DLSS
  • Enlisted (Game): DLSS
  • Everspace 2 (Game): RT, DLSS
  • F1 2020 (Game): DLSS
  • Final Fantasy XV (Game): DLSS
  • Fortnite (Game): RT, DLSS
  • Ghostrunner (Game): RT, DLSS
  • Into the Radius VR (Game): DLSS
  • Iron Conflict (Game): DLSS
  • Justice (Game): RT, DLSS
  • LEGO Builder's Journey (Game): RT, DLSS
  • Marvel's Avengers (Game): DLSS
  • Mechwarrior 5: Mercenaries (Game): RT, DLSS
  • Metro Exodus PC Enhanced Edition (Game): RT, DLSS
  • Minecraft with RTX (Game): RT, DLSS
  • Monster Hunter World (Game): DLSS
  • Moonlight Blade (Game): RT, DLSS
  • Mortal Shell (Game): RT, DLSS
  • Mount & Blade II: Bannerlord (Game): DLSS
  • NARAKA: BLADEPOINT (Game): DLSS
  • Necromunda: Hired Gun (Game): DLSS
  • Nine To Five (Game): DLSS
  • Nioh 2 The Complete Edition (Game): DLSS
  • No Man's Sky (Game): DLSS
  • NVIDIA Omniverse (App): RT, DLSS & AI
  • Outriders (Game): DLSS
  • Pumpkin Jack (Game): RT, DLSS
  • Gu Jian Qi Tan Online (Game): DLSS
  • Ready or Not (Game): DLSS
  • Redout: Space Assault (Game): DLSS
  • Scavengers (Game): DLSS
  • Shadow of the Tomb Raider (Game): RT, DLSS
  • SheenCity Mars (App): RT, DLSS
  • Supraland (Game): DLSS
  • System Shock Demo (Game): DLSS
  • The Fabled Woods (Game): RT, DLSS
  • The Medium (Game): RT, DLSS
  • The Persistence (Game): RT, DLSS
  • Tom Clancy's Rainbow Six Siege (Game): DLSS
  • Unity (App, Beta): RT, DLSS
  • Unreal Engine (App): RT, DLSS
  • War Thunder (Game): DLSS
  • Watch Dogs: Legion (Game): RT, DLSS
  • Wolfenstein: Youngblood (Game): RT, DLSS
  • Wrench (Game): RT, DLSS
  • Xuan-Yuan Sword VII (Game): RT, DLSS
 
It has been demonstrated that DLSS 1.0 could be used on any Nvidia card, yet it was limited to the RTX 20xx series; it didn't need the tensor cores at all.
Not true. DLSS 1.0 (Anthem, Battlefield, Tomb Raider etc.) certainly did use tensor cores. The only version of DLSS that did not need tensor cores was found in one game, Control. This was not DLSS 1.0 but 1.3.8.0, or "1.9" as HUB dubbed it.
 
Even that can be debated. I do not think Apple's OS is 'better' by any means than Windows. Quite the opposite.



True, it's hardware accelerated, which does speed things up. It's a combination of both.



Exactly, but not on RDNA2 architectures; they won't be able to match GPUs that have a hardware block to accelerate this. PS5 is out of luck on this one as well.



Can't call it high end at least. The GTX 1060 launched in mid-2016 as Pascal's lowest offering at the time. It's still not a bad GPU if you don't want the highest settings etc.
It's hardware accelerated because the existence of DLSS is mostly tied in with raytracing. The GPUs supporting DLSS 2.0 are freaking good. Even a non-RT 1660/1650 can offer fine framerates at 1440p nowadays, I think? My GTX 1060 3GB plays many current games at that resolution with fine framerates: NBA 2K21, FIFA 21, etc.

DLSS has been the most coveted recent technology I've seen. You read many people saying: "ufff, if consoles had DLSS it could save them". Yet just a few games support it, maybe because Nvidia wants it to become a vehicle for RT rather than a more universal technology.

RT on the other hand is so immature that very few games implement it.

Yet it has been known for ages. In the late 90s I saw some SIGGRAPH presentations and renders (that took days) in PC magazines implementing it, and it was called the Holy Grail of 3D rendering, rightly so.

But RT is inherently extremely computationally demanding. In fact, from what I gathered reading some knowledgeable people here, RT is much easier to program or implement than techniques such as rasterization or even scanline rendering.

I very much like RT. Do I care now? No.

It's laughable when they use a 24-year-old game (Quake 2) to show off RTX this late in its life cycle. Of course it runs like a modern AAA game in terms of framerate; safe to say running great RT on a modern game is pretty far off.

That's where DLSS comes in, and yet if you compare how many developers (consoles aside, which will help with adoption) are on the FSR bandwagon, it makes DLSS sound like technical jargon rather than a winner.
 
You can say the same about GSync Ultimate
The situation here remains the same too: RTX GPUs remain the only GPUs capable of running every upscaling solution out there, in addition to providing superior image quality.

Yet just a few games support it
Nope, DLSS is in more than 50 titles right now.

RT on the other hand is so immature that very few games implement it.
The number of games has exceeded 42 titles and counting; most AAA games have RT now, and some even require RT-capable cards.
 
It's hardware accelerated because the existence of DLSS is mostly tied in with raytracing. The GPUs supporting DLSS 2.0 are freaking good. Even a non-RT 1660/1650 can offer fine framerates at 1440p nowadays, I think? My GTX 1060 3GB plays many current games at that resolution with fine framerates: NBA 2K21, FIFA 21, etc.

DLSS has been the most coveted recent technology I've seen. You read many people saying: "ufff, if consoles had DLSS it could save them". Yet just a few games support it, maybe because Nvidia wants it to become a vehicle for RT rather than a more universal technology.

RT on the other hand is so immature that very few games implement it.

Yet it has been known for ages. In the late 90s I saw some SIGGRAPH presentations and renders (that took days) in PC magazines implementing it, and it was called the Holy Grail of 3D rendering, rightly so.

But RT is inherently extremely computationally demanding. In fact, from what I gathered reading some knowledgeable people here, RT is much easier to program or implement than techniques such as rasterization or even scanline rendering.

I very much like RT. Do I care now? No.

It's laughable when they use a 24-year-old game (Quake 2) to show off RTX this late in its life cycle. Of course it runs like a modern AAA game in terms of framerate; safe to say running great RT on a modern game is pretty far off.

That's where DLSS comes in, and yet if you compare how many developers (consoles aside, which will help with adoption) are on the FSR bandwagon, it makes DLSS sound like technical jargon rather than a winner.

DLSS, even 1.0, uses the tensor hardware block on RTX GPUs. If NV isn't straight up lying, of course :p

I think that DLSS and RT go hand in hand because RT is a rather huge performance hog in any game, if you're not willing to be limited to just reflections or something. But you can use DLSS without ray tracing as well; many do.

RT is being implemented (immature or not) in just about every game nowadays, even on consoles, which have very weak RT performance. It seems to me that next-generation games without RT have become rare by now.

Anyway, if you don't need/like RT, it's (for now at least) very possible to disable it or not use it, same for upscaling and reconstruction technologies ;)

Also, like I've said before, FSR is a very nice addition to the PC gaming space. Not everyone has an RTX GPU and not every engine supports TAAU/DLSS etc; FSR is something that's going to complement an already great landscape of PC tech.
 
Exactly, but not on RDNA2 architectures; they won't be able to match GPUs that have a hardware block to accelerate this.
For about the gazillionth time.
Matrix crunchers are not required for DLSS or any other AI work; they weren't designed for DLSS or image scaling, and they're not the only way to accelerate AI workloads either.
Current DLSS builds run on matrix crunchers, but we don't know if they're optimal hardware for it or even if they actually accelerate it much. It only needs to be faster than running it on CUDA cores to make sense, because the cores are there anyway for the professional cards built on the same GPUs. It could be several times faster, but it could also not be; we don't know. Yet you and a few others praise them like they're some alien technology dropped from the heavens for the peasants for this very specific workload (hint: the idea to utilize tensors for DLSS came a lot later than the cores themselves).
We don't know if they would be any faster than regular cores for any other hypothetical future load either. The only thing we can be sure of is that they're dedicated units, which can make them more useful, but IIRC they also steal some other resources, which can end up hobbling the rest of the GPU in a worst-case scenario (hypothetical; I don't know if there are actual loads like this at this time).
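Purely as an illustration of that point (nothing to do with DLSS's actual code), here is a minimal CUDA sketch of the same half-precision matrix multiply written twice: once as a plain per-thread loop that runs on any CUDA-capable GPU, and once through the wmma tensor-core intrinsics. The math is identical either way; the tensor path is just a dedicated, potentially faster route to the same result.

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// Plain-CUDA path: one thread per output element, runs on any CUDA-capable GPU.
__global__ void matmul_plain(const half* A, const half* B, float* C, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= N || col >= N) return;
    float acc = 0.0f;
    for (int k = 0; k < N; ++k)
        acc += __half2float(A[row * N + k]) * __half2float(B[k * N + col]);
    C[row * N + col] = acc;
}

// Tensor-core path (needs sm_70 or newer, N a multiple of 16).
// For brevity, a single warp (launch <<<1, 32>>>) computes only the
// top-left 16x16 tile of C; the math is the same A*B as above.
__global__ void matmul_wmma(const half* A, const half* B, float* C, int N) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);
    for (int k = 0; k < N; k += 16) {
        wmma::load_matrix_sync(a, A + k, N);      // rows 0..15, cols k..k+15 of A
        wmma::load_matrix_sync(b, B + k * N, N);  // rows k..k+15, cols 0..15 of B
        wmma::mma_sync(acc, a, b, acc);           // tile multiply-accumulate on tensor cores
    }
    wmma::store_matrix_sync(C, acc, N, wmma::mem_row_major);
}
```

How much faster the second kernel actually ends up being in a real workload is exactly the open question being argued here.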
 
It could be several times faster, but it could also not be; we don't know. Yet you and a few others praise them like they're some alien technology dropped from the heavens for the peasants for this very specific workload (hint: the idea to utilize tensors for DLSS came a lot later than the cores themselves).

Well, Nvidia says so. You need RTX GPUs to take full advantage of the new tech. I will take it as is and leave the conspiracy theories to people like you. This isn't the thread for stuff like that either; take it somewhere else perhaps.

If AMD had dedicated hardware acceleration for such tasks, they would have used it.

https://developer.nvidia.com/dlss

"DLSS is powered by dedicated AI processors on RTX GPUs called Tensor Cores."
 
Well, Nvidia says so. You need RTX GPUs to take full advantage of the new tech. I will take it as is and leave the conspiracy theories to people like you. This isn't the thread for stuff like that either; take it somewhere else perhaps.

If AMD had dedicated hardware acceleration for such tasks, they would have used it.

https://developer.nvidia.com/dlss

"DLSS is powered by dedicated AI processors on RTX GPUs called Tensor Cores."
No one questioned that quote; like I said, "Current DLSS builds run on matrix crunchers", aka tensor cores. DLSS 1.9 didn't, though.
You don't need to be a conspiracy theorist to know that general-purpose GPU cores support tensor math and thus can run the exact same math. How much faster tensors actually are in DLSS is anyone's guess; you just choose to assume it's a world of difference.
AMD has their own matrix crunchers too, they just don't think they're worth the die size on consumer GPUs.
 
No one questioned that quote; like I said, "Current DLSS builds run on matrix crunchers", aka tensor cores. DLSS 1.9 didn't, though.
You don't need to be a conspiracy theorist to know that general-purpose GPU cores support tensor math and thus can run the exact same math. How much faster tensors actually are in DLSS is anyone's guess; you just choose to assume it's a world of difference.
AMD has their own matrix crunchers too, they just don't think they're worth the die size on consumer GPUs.

It probably would/will run without tensor hardware; the question is how fast. Probably slower, hence the need for the later DLSS iterations. AMD's next GPUs might see some sort of this too.
 
No one questioned that quote; like I said, "Current DLSS builds run on matrix crunchers", aka tensor cores. DLSS 1.9 didn't, though.
You don't need to be a conspiracy theorist to know that general-purpose GPU cores support tensor math and thus can run the exact same math. How much faster tensors actually are in DLSS is anyone's guess; you just choose to assume it's a world of difference.
AMD has their own matrix crunchers too, they just don't think they're worth the die size on consumer GPUs.

Matrix multiplications are actually quite useful for general image filtering work, especially if your filter kernel is not some simple math construct.
The tensor cores also have another obvious benefit in that they are generally unused in normal shader computations, so they can be seen as "free."
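As a toy illustration of that point (not DLSS's actual network; the kernel below is hypothetical): a KxK image filter can be rewritten as a single matrix multiply by unrolling each pixel's neighbourhood into a row (the classic im2col trick), which is exactly the GEMM-shaped work a matrix unit is built for.

```cuda
// im2col: each output pixel's KxK neighbourhood becomes one row of a matrix,
// so a whole filter pass becomes one GEMM of shape (H*W x K*K) * (K*K x 1).
__global__ void im2col(const float* img, float* patches, int H, int W, int K) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;
    int r = K / 2;
    float* row = patches + (size_t)(y * W + x) * K * K;  // one row per output pixel
    for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx) {
            int sy = min(max(y + dy, 0), H - 1);          // clamp at image borders
            int sx = min(max(x + dx, 0), W - 1);
            row[(dy + r) * K + (dx + r)] = img[sy * W + sx];
        }
}
// Multiplying 'patches' by the flattened KxK filter weights (e.g. with cublasSgemv
// or a wmma kernel) then produces the filtered image in one matrix operation.
```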
 
Matrix multiplications are actually quite useful for general image filtering work, especially if your filter kernel is not some simple math construct.
The tensor cores also have another obvious benefit in that they are generally unused in normal shader computations, so they can be seen as "free."
I'm pretty sure someone here said they can hog some resources the rest of the GPU would otherwise use, which should mean there is some cost. But like I said earlier myself, as long as they're faster than running it on CUDA cores it's worth it, since the units are there anyway. AMD apparently thinks, at least at this time, that they're not worth the die space, since timeframes suggest their matrix crunchers would have been ready in time for RDNA2 just as well as CDNA, where they did implement them.
 
Even that can be debated. I do not think Apple's OS is 'better' by any means than Windows. Quite the opposite.

*whoosh*
You missed the point of the quote, my dude.

Well, Nvidia says so. You need RTX GPUs to take full advantage of the new tech. I will take it as is and leave the conspiracy theories to people like you. This isn't the thread for stuff like that either; take it somewhere else perhaps.

Nothing he said was anywhere near a conspiracy or a conspiracy theory.

His point was: it stands to reason that tensor cores are not necessary to implement DLSS. Nvidia says that they’re optimal — and most of us here would believe that.
 
What does it even mean?
DLSS works in parallel with other work in Wolfenstein Youngblood via async compute, which goes against that someone's words.
In less crude terms, it means tensor cores utilize some of the bandwidth, caches, and/or other resources of the GPU shared with the CUDA cores, which should mean that while tensor work can even be free at times, it isn't necessarily so, and if given priority it could slow down other parts of the GPU.
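As a loose analogy in CUDA terms (DLSS itself is dispatched through the game's DX12/Vulkan queues, not CUDA streams, and both kernels here are made-up stand-ins): work issued on separate streams can overlap, but the kernels still draw on the same memory bandwidth and caches, which is where the "not entirely free" caveat comes from.

```cuda
#include <cuda_runtime.h>
#include <math.h>

__global__ void matrix_work(float* a, int n) {        // stand-in for tensor/matrix work
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = a[i] * 1.0001f + 0.5f;
}

__global__ void shading_work(float* b, int n) {       // stand-in for regular shader work
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] = sqrtf(b[i] + 1.0f);
}

int main() {
    const int n = 1 << 24;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Issued on different streams, so the GPU may run them concurrently,
    // but both still compete for the same DRAM bandwidth and L2 cache.
    matrix_work<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    shading_work<<<(n + 255) / 256, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1); cudaStreamDestroy(s2);
    cudaFree(a); cudaFree(b);
    return 0;
}
```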
 
In less crude terms, it means tensor cores utilize some of the bandwidth, caches, and/or other resources of the GPU shared with the CUDA cores
I guess he meant register file bandwidth, but any of those combinations (2x FP32, INT32 + FP32, or tensor ops) will fully saturate reg-file bandwidth, so it is kind of pointless to think about resource sharing in terms of reg-file bandwidth; this all comes down to execution speed in a given task, and the faster path wins, which is obviously the TCs in the case of matrix multiplications.
As for caches and other resources, I just hinted above at some real use cases where multiple resources are being shared and this works just fine with async.
 