Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

Tags:
  1. Flappy Pannus

    Regular

    Joined:
    Jul 4, 2016
    Messages:
    329
    Likes Received:
    568
I'm automatically more suspicious of any video that's sponsored by the company whose product it's critiquing, but I gotta say that's easily the best breakdown of DLSS I've seen yet. Extremely thorough in covering all the upsides and downsides, thanks for reposting.
     
    PSman1700 likes this.
  2. Communism

    Newcomer

    Joined:
    Feb 1, 2014
    Messages:
    15
    Likes Received:
    7
Despite the facade, everything about a cutting-edge technology coming from "third-party sources" is likely to be some filtering of what an influencer ("journalist", "blogger", "news outlet", "reviewer", "critic") was told by the engineers of the entity that created it, filtered through the marketing department (in the best case).

    Anyone who pretends otherwise is simply dishonest.

I'd imagine Nvidia is working on something to wow people when the next-gen Nvidia cards (Ampere or whatever) come out, related to DLSS, as they'd have much more leeway in terms of hardware headroom to make a solution that is somewhat less compromised than the current one (maybe DLSS 2X or something else).

It would be a great way to add extra pizzazz to games built around "current-gen" tech levels and performance targets, during the gap between now and when AAA development fully switches to something like Xbox Series X-level performance.

Xbox Series X is a 12 TFLOP RDNA2 GPU with probably a 16-20x PCIe 4.0-like Infinity Fabric connection to the two 4-core Zen 2 clusters, probably using the cache configuration that Renoir is/will be using (less cache than server CPUs), with a 384-bit(ish) GDDR6 interface.

This means that the current DX11-style AAA games (which already hit CPU limits with the fastest CPUs when paired with a GV100), released between now and the gradual shift in game design toward the new consoles, will allow much more "extra" PC options in the meantime, at least at 2560x1440, while 4K games will be able to utilize the new top-end GPUs without needing those "extras" and without hitting hard CPU bottlenecks.
     
    #702 Communism, Feb 13, 2020
    Last edited by a moderator: Feb 13, 2020
  3. bgroovy

    Banned

    Joined:
    Oct 15, 2014
    Messages:
    799
    Likes Received:
    626
But is it better than using the die space spent on the tensor cores for more shader units, and still using AI to train your upscaling but running it on regular compute?
     
  4. iroboto

    iroboto Daft Funk
    Legend Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    14,834
    Likes Received:
    18,634
    Location:
    The North
I think the answer is 'it depends'.
If Nvidia is able to really crank out DLSS effectively for all titles, then the dedicated silicon is worth it. The amount of regular compute needed to keep up with the fidelity of a high-quality NN running on tensor cores just won't leave enough headroom to keep the frame rate up.

There are also other aspects: if you decide to use AI for, say, texture up-res, that is once again another opportunity to get more usage out of the hardware. There are a great many cases in which it could be used, but unless tensor cores go mainstream, I don't see it going that route. Tensor cores on the 20xx series may have arrived one console generation too early.
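As a rough back-of-envelope for the compute-vs-tensor-cores question above: even a tiny convolutional network at 4K is expensive on shader ALUs. The layer sizes and the 10 TFLOPS figure below are made-up assumptions purely for scale, not NVIDIA's actual network or hardware numbers.

```python
# Back-of-envelope: per-frame cost of a small reconstruction CNN at 4K
# if run on regular shader ALUs. Layer sizes and the 10 TFLOPS figure
# are hypothetical, chosen only to illustrate the order of magnitude.

def conv_flops(h, w, c_in, c_out, k=3):
    """FLOPs for one k x k convolution over an h x w image (2 ops per MAC)."""
    return 2 * h * w * c_in * c_out * k * k

h, w = 2160, 3840                                  # 4K output
layers = [(3, 32), (32, 32), (32, 32), (32, 3)]    # tiny hypothetical CNN
total = sum(conv_flops(h, w, ci, co) for ci, co in layers)

shader_flops = 10e12                               # ~10 TFLOPS FP32, assumed
ms = total / shader_flops * 1000
print(f"~{total / 1e9:.0f} GFLOPs/frame -> ~{ms:.1f} ms at 10 TFLOPS (ideal)")
```

Even this toy network eats roughly a whole 30 fps frame budget at ideal utilization, which is the gist of the argument for running it on dedicated matrix hardware instead.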
     
    PSman1700 likes this.
  5. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
Apparently, the new DLSS version doesn't require per-game training and instead uses a general-purpose algorithm that can be updated across the board for all DLSS titles without developer involvement. It also doesn't put restrictions on resolution (it works at ALL resolutions), it doesn't restrict DLSS to certain RTX GPU models, and most importantly it provides a much, much larger performance boost.

Hardware Unboxed, who previously called DLSS dead, are now extremely impressed with the new DLSS model.

     
  6. Remij

    Regular

    Joined:
    May 3, 2008
    Messages:
    684
    Likes Received:
    1,268
Just saw that. Really quite impressive. If, as they say, this new version of DLSS is easier for developers to implement, then hopefully we should see more titles soon.
     
  7. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,119
    Likes Received:
    3,093
DLSS is like RT: rough in the beginning because it's new tech, but things are being greatly improved upon.
     
  8. Miniature Kaiju

    Newcomer

    Joined:
    Jun 10, 2019
    Messages:
    67
    Likes Received:
    70
I'm going to guess this new "DLSS" actually has very little, if any, deep learning going on.
     
    milk likes this.
  9. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
No, it runs on the tensor cores and uses machine learning algorithms, and it's much more flexible and covers many more possibilities (and games) than the shader version used in Control.
     
  10. manux

    Veteran

    Joined:
    Sep 7, 2002
    Messages:
    3,034
    Likes Received:
    2,276
    Location:
Self Imposed Exile
GDC is soon. Out of curiosity I checked, and there are many interesting presentations. One of them is this:

    https://schedule.gdconf.com/session...with-deep-learning-presented-by-nvidia/873813
     
    dorf and pharma like this.
  11. Miniature Kaiju

    Newcomer

    Joined:
    Jun 10, 2019
    Messages:
    67
    Likes Received:
    70
I don't doubt they're using both; I doubt they're doing actual deep learning, i.e. that they're running a NN model to do the upscaling rather than, say, a genetic-algorithm-optimized stateful convolution algorithm. Using a "generalized model" kinda reinforces that.

I've dealt enough with NN and DL to be really, really skeptical about running a model small enough to be performant in real time and getting anything better than a hand-tuned algo. DLSS 1.0 definitely looked like they were attempting to do just that, with failure modes consistent with what I've seen before. This new stuff, nah.
     
    Kyyla, techuse, milk and 1 other person like this.
  12. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,894
    Likes Received:
    4,549
    Thanks for the schedule link. Really looking forward to more DLSS 2.0 technical information.
     
    manux likes this.
  13. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    So is it actually going to be open now so someone (not me) can do a proper comparison against TAAU with MIP biasing?
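For context on the MIP biasing mentioned above: upscalers that render at a lower internal resolution typically apply a negative texture LOD bias of log2(render width / display width), so texture detail matches the output resolution rather than the render resolution. A minimal sketch of that formula:

```python
import math

# When a temporal upscaler (TAAU, DLSS, etc.) renders at a lower internal
# resolution, texture sampling should use a negative MIP/LOD bias so that
# texture sharpness matches the *output* resolution. The commonly used
# formula is bias = log2(render_width / display_width).

def mip_bias(render_width, display_width):
    return math.log2(render_width / display_width)

print(mip_bias(1920, 3840))   # -1.0 for 1080p -> 4K
print(mip_bias(2560, 3840))   # ~-0.585 for 1440p -> 4K
```

Without that bias, a lower-resolution render samples coarser mip levels, which is one reason naive upscaling comparisons can look unfairly blurry.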
     
  14. dorf

    Newcomer

    Joined:
    Dec 21, 2019
    Messages:
    126
    Likes Received:
    417
    Nvidia definitely hasn't been scolded enough by tech sites/youtubers for the outright fraudulent marketing where they announced 20+ games getting DLSS support well over a year ago and delivered it only to a couple of them. I'm very glad Hardware Unboxed brought up that criticism.

Also great to hear new details about the tech. I really hope Nvidia can up their game and start spreading DLSS 2.0 to a large number of titles in 2020. However, seeing Metro Exodus launch a new update & DLC still with the same old DLSS 1.0 isn't a very promising sign. I'm pretty disappointed about this; imo Metro really would have benefited from the new DLSS.
     
  15. Radolov

    Newcomer

    Joined:
    Jul 30, 2019
    Messages:
    12
    Likes Received:
    13
This means that they've changed strategy, but I have a feeling that NVIDIA's answers aren't completely truthful.
(Note: these aren't perfect quotes and I've shortened some; just watch the videos at the timestamps.)

11:37 Why did NVIDIA go back to AI? They reached the limits of the shader version, and with tensor cores they get better image quality, better handling of motion, support for all resolutions, etc. The implementation in Control required hand-tuning and didn't work well in other games.

24:47 NVIDIA says it's not their plan to have a shader version for other cards as well; it doesn't work well in other games.

27:45 Initial implementations were "more difficult than expected" and "quality was not where we wanted it to be". They decided to focus on improving it rather than adding it to more games.

It seems to me that they're using parts of the algorithm in Control as a baseline, then using AI on the tensor cores to make it a bit cleaner.

Why I'm making such a controversial claim:
    a) NVIDIA said so themselves in the blog about Control: “With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.”

b) The algorithm in Youngblood produces the exact same artifacts on sparks and embers that their Control algorithm produces, something NVIDIA highlights/shows that their slower AI research algorithm doesn't do.
[image: sparks/embers artifact comparison]

c) These parts look a bit similar. I've stretched one out so the timescales are roughly the same, just to make things a bit clearer. (The frame captures are from dorf, earlier in this thread.)
The left part is the new algorithm, whose artifact persists for a much shorter time than the old one in Control (right). It also appears sooner than the one in Control, which would make sense given what was said in a). I also just noticed this might be what their new GDC talk mentions: "DLSS, deep learning based reconstruction algorithm, achieves better image quality with less input samples than traditional techniques."
[image: side-by-side artifact timeline comparison]

Hopefully we'll get some answers during NVIDIA's talk.

Their explanation for why DLSS is only available on RTX cards is that it must use the tensor cores.
Yet when they were doing ray tracing, they had no problem whatsoever showing that RT could run on slower cards with no RT cores, just with many times lower performance.

They have a full shader-based implementation in Control, yet they won't let you run it on 1600 series cards. Isn't that peculiar? Wouldn't they want to show that RTX is far, far superior to GTX, as they did with ray tracing? They would just need to enable it on the 1600 series and the performance would be many, many times worse, right? Well, we know that adding the 1600 series to Control would show that the tensor cores aren't adding anything in that situation. They obviously would execute the new version faster; we just don't know how much faster.

Besides, even if they really wanted to prove that the new version is much faster on RTX, they could use DirectML to create a general version. AMD actually supports DirectML upscaling on everything newer than Vega. During SIGGRAPH 2018, Microsoft presented a DirectML super-resolution model, saying "This particular super-resolution model was developed by NVIDIA." I find it very unfortunate that NVIDIA has had multiple opportunities to show how much better RTX is, and yet failed to show it.

As for me, I think the main problem today is that ML and rendering don't seem to be executed in parallel. If, for example, a 4K image takes 16 ms to render natively, then rendering a lower-resolution frame in 8 ms and upscaling it with ML in another 8 ms ends up the same if done sequentially. However, if we render frame 1, then upscale frame 1 at the same time as we render frame 2, we'd be doubling the frame rate. Now it goes from "meh" to "amazing". Of course, the architecture needs to allow such execution.
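The sequential-vs-pipelined arithmetic above can be sketched with the poster's hypothetical 8 ms + 8 ms numbers:

```python
# Toy timing model for the sequential-vs-pipelined argument above.
# The 8 ms render and 8 ms upscale figures are the poster's hypotheticals.

render_ms, upscale_ms = 8.0, 8.0

# Sequential: each frame pays render + upscale before the next can start.
sequential_frame_time = render_ms + upscale_ms        # 16 ms -> 62.5 fps

# Pipelined: upscale frame N while rendering frame N+1; steady-state
# throughput is limited by the slower of the two stages.
pipelined_frame_time = max(render_ms, upscale_ms)     # 8 ms -> 125 fps
# (Per-frame latency is still render + upscale; only throughput doubles.)

print(1000 / sequential_frame_time, 1000 / pipelined_frame_time)
```

One caveat the toy model makes visible: pipelining doubles throughput only when the two stages are balanced, and it leaves per-frame latency unchanged.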
     
  16. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    609
    Likes Received:
    1,142
Control uses an upscaling technique without reconstruction. The difference between DLSS and the "DLSS" in Control is huge. And I don't see a reason why NVIDIA should activate it for other hardware. It would be slower than just using the native resolution...
     
    xpea and pharma like this.
  17. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
There might not even be a DLSS as such, outside of a bare-bones implementation heavily customized by NVIDIA for the big titles.

There are so many knobs to tweak in how you implement SRAA: with forward/backward renderers, with selectively running rendering passes at lower or full resolution, etc.
     
  18. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
I too think NVIDIA withheld DLSS from GTX cards as a measure of product segmentation, and also to cut support costs and limit them to a certain degree.
     
    Jensen Krage likes this.
  19. Radolov

    Newcomer

    Joined:
    Jul 30, 2019
    Messages:
    12
    Likes Received:
    13
Can you add a source or proof for that statement? And which version of DLSS are you referring to right now, all of them? (Let's say there are at least three versions of DLSS: one before Control, one in Control, and one after Control.)

I'm at least having a hard time seeing how, for example, a 1660 Ti would be slower in Control. Say we have two parts, X and Y. Let's say that X is the time it takes to render the lower-resolution frame, and Y is the time it takes to upscale it.
A 1660 Ti, compared to RTX cards, doesn't have tensor cores, and since they don't seem to be used in Control, we should expect the time of Y relative to X to remain roughly the same.
However, the 1660 Ti also doesn't have RT cores, so if X contains RT operations, as Control's may, then we should expect X to become much larger.
This means that in Control, the 1660 Ti might see even larger gains than the RTX cards, which would make 1600 series owners very happy, unless I'm missing something, of course.

As for the new algorithm in Wolfenstein, I said I couldn't know for sure how much slower it would be if it ran on a 1600 series card.
We still have the problem of missing RT cores increasing X, and now Y utilizes the tensor cores too. So both get larger, but we know that RT doesn't occupy 100% of X, and at the same time we know that the tensor cores aren't fully utilized for all of Y.
We don't even know whether it could reorganize the extra time spent on FP16 operations to reduce the time spent on other types of operations, making the overall impact less painful.
I would call it a tricky situation, where the only answer comes from actually testing with the cards.
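As a toy illustration of the X/Y argument above (all timings below are made-up assumptions, not measurements):

```python
# Toy model of the X/Y argument: X = time to render the low-res frame,
# Y = time to upscale it. All numbers are hypothetical, for illustration only.

def speedup(native_ms, lowres_ms, upscale_ms):
    """Frame-rate gain from rendering low-res (X) then upscaling (Y),
    relative to rendering at native resolution."""
    return native_ms / (lowres_ms + upscale_ms)

# Hypothetical RTX card: native 4K 20 ms, 1440p 10 ms, shader upscale 2 ms.
rtx_gain = speedup(20.0, 10.0, 2.0)     # ~1.67x

# Hypothetical 1660 Ti with RT effects emulated on shaders: render times
# triple, but the shader-based upscale cost (Y) stays roughly the same.
gtx_gain = speedup(60.0, 30.0, 2.0)     # ~1.88x

print(rtx_gain, gtx_gain)
```

Under these assumed numbers the relative gain on the tensor-less card is actually larger, which is the point being argued; real behavior would of course have to be measured.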

I'm not suggesting anything unreasonable. I'm suggesting NVIDIA should be consistent with their marketing and allow at least 1600 series cards to run DLSS, be it faster or slower. It should be in NVIDIA's interest to get people to try out RTX features, even if they run slower on current hardware, because NVIDIA says "if you buy the RTX cards, they will run faster".
Besides, I don't want games removing graphics settings because "maybe they might make your PC run slower" or something along those lines. If I want to run RT at 30 FPS, or if I want to see the power of ML with DLSS, then let me do that. Unless it's something the hardware physically can't run, don't restrict it.

Just a quick note: NVIDIA's guidelines say that you should use all caps when writing NVIDIA. Capitalizing the second letter doesn't make NVIDIA happy. Same goes for people writing rDNA; just why?


Are you suggesting that hardware manufacturers are withholding features to make people buy the higher-margin products? This must be some new kind of thing. ;)
     
  20. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,894
    Likes Received:
    4,549
DLSS tests in Deliver Us the Moon and Wolfenstein: Youngblood: a significant improvement in quality
    February 17, 2020
    https://www.hardwareluxx.ru/index.p...blood-znachitelnoe-uluchshenie-kachestva.html
     
    Lightman and PSman1700 like this.