Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    157
    Likes Received:
    33

I think it was more because you sound exactly like Nvidia and, in your rebuttals, are toeing their marketing line almost verbatim.

In reality though (no matter what you or Nvidia think), gamers do NOT care about reflections in windows or water... they want better gameplay, bigger screens and more fps. Nobody I know (or have even heard of) is demanding realism or perfect shadows in their games. So John Q. Public is not going to pay for it, no matter how much marketing Nvidia throws at it.

Turing itself is NOT a GPU meant for gaming. We all know this, and Nvidia has taken its marketing too far. The ONLY reason to upgrade to RTX is USB-C.



Anyone using G-Sync and needing ultimate frame rates will have no choice but to upgrade to an RTX card. Those who have not yet purchased their (FreeSync 2/G-Sync) monitors are most likely waiting two more months until AMD releases its 7nm Radeons, figuring that, as gamers, the public will probably get a more gaming-focused die for their money.
     
  2. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
For the longest time, AMD has been promoting Vega 20 as a GPU that’s for deep learning and pretty much deep learning only.

    Since you are waiting for a very dedicated 7nm die that will show up in 2 months, the only reasonable conclusion is that you think that the new RTX GPUs don’t have enough dedicated deep learning resources.

    And that makes total sense in some way, because for a gamer who is clearly not interested in image quality, there’s no reason at all to buy a traditional GPU to begin with: if you put all the settings to “low”, there’s no game that’s not playable at very high frame rates on your 4K FreeSync monitor, and no reason to upgrade at all.
     
  3. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,042
    Likes Received:
    3,114
    Location:
    Pennsylvania
Please don't speak for all gamers. I'm personally very excited about what this will eventually bring to gaming, including reflections in windows that truly mirror what is behind me instead of a fake cubemap that's reused everywhere. I care more about those things than 100fps. That doesn't mean I'm buying into RTX, but I'm very much looking forward to having true lighting, shadows and reflections in games in the coming years.
     
  4. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    Seems obvious to me why a company would capitalize on a technology advantage that the competition can't match. This debate would make sense if we were talking about some promised future but RT and DLSS are working right now. It's a real, tangible thing that people can actually use. That's a lot more than can be said for other features that never materialized even after several years.

    They would likely be massive and expensive even without RT. The larger, faster caches, improved compression and independent INT pipeline aren't free. Turing is clearly more efficient than Pascal so those changes were worthwhile. Of course we don't know how much RT and tensors cost in terms of die size but they certainly don't deserve all the blame for the size and cost of Turing chips.
     
  5. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,125
    Likes Received:
    2,885
    Location:
    Well within 3d
    DLSS is described as a post-processing method, which only happens after all shading is done to give that phase the necessary inputs.
    Since it's only one frame, I suspect the usual concurrent start of frame N+1's rendering would be happening in reality.

    I haven't been able to absorb the full range of details on the technique, but perhaps that is something that can be constrained by where it sits in the frame at this early-adopter stage.
    Some of the artifacting may be worsened by a complex scene's broadly constructed ground truth causing unwanted interference between the behaviors of certain materials or geometry that sporadically produce similar inputs. Perhaps something like a primitive buffer or primitive ID (Nvidia's task and mesh shaders do allow for the possibility) could help focus the learning process, or more cleanly represent idealized weights for specific subsets of the world like grass or finely detailed surfaces. It's not as automatic, but it may give the developer a chance to feed the network more indications of their intent. Perhaps, if this can somehow be applied in different phases, shaders could be constructed to key into certain execution or data patterns that flag an area as being more amenable to a specific set of weights, despite the pixel output being ambiguous as to what fed into it.
     
    matthias likes this.
  6. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    157
    Likes Received:
    33
But you yourself claim that real-life shadows & reflections are not a top priority.

Having reflections in a window (or a water puddle) does not have to be perfectly real, or perfectly rendered, for you to see their movement out of the corner of your eye. The point is, Nvidia is selling unnecessary fluff... because shadows & reflections are already in games.



    In addition, having "perfect" and ultra-realistic shadows and reflections is just marketing hype. It is something that makes sense only when the games themselves are rendered in such detail. Consequently, DLSS brings nothing new to the table, just another proprietary form of AA cheat, to make something look good instead of rendering for utmost quality...

    RTX won't look the same under 7nm; Nvidia will listen and make a GPU for gaming, separate from their ML bins. The reviews don't lie: Turing is a powerful chip, yes. But for the money, it is not meant specifically for gaming, and the price premium isn't justified. Naming the top half-dozen games, is there any reason for someone who owns a 1080 Ti to spend $900 for nearly the same performance?

    Those who were waiting to see what Nvidia's 2000 series would bring might still opt for the RTX 2080 instead of the cheaper 1080 Ti, just to get USB-C and a better board. But if that is not a concern, then expect the 1080 Ti to power all the sub-4K G-Sync displays.
     
  7. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    The reflection of the flame thrower in the BF5 demo was awesome, and impossible to do with fake techniques.

    You call that unnecessary, and that’s fine, because you already admitted to not caring about image quality.

But when people are spending hours on custom improvements for Skyrim or The Witcher 3, claiming that nobody else cares either is just projection.

    And that utmost quality is achieved in movies by rendering at 64x SS. Which you know full well is impossible to do in real time. So you look for alternatives that are good enough with a lower performance impact. And DLSS is just another way of doing that.
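For scale, the cost gap between those options can be sketched with quick arithmetic. All figures below are mine and purely illustrative; the DLSS line assumes it shades roughly half the samples of native rendering.

```python
# Back-of-envelope shaded-sample counts per frame at 4K.
# All numbers are illustrative assumptions, not vendor figures.
W, H = 3840, 2160

native = W * H          # one shaded sample per pixel
ss64 = 64 * native      # 64x supersampling: 64 shaded samples per pixel
dlss = native // 2      # DLSS input: assumed half the native sample count

print(f"native: {native:,}")   # native: 8,294,400
print(f"64x SS: {ss64:,}")     # 64x SS: 530,841,600
print(f"DLSS:   {dlss:,}")     # DLSS:   4,147,200
```

Half a billion shaded samples per frame is why offline-quality supersampling stays offline, and why every real-time option is some cheaper approximation.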

    A year or two from now, you’re going to look back and think “man, what was I thinking?” Just like your predictions about these awesome multi-die Vega GPUs we were supposed to see by end of 2017. (https://forum.beyond3d.com/threads/amd-vega-hardware-reviews.60246/page-51#post-1996101)

    I expect 7nm gaming GPUs will have more SMs (obviously), more RT cores per SM, and the same or more Tensor cores.

    And the reason is very simple: these GPUs are already being designed today, and it’s way too early to declare victory or defeat today about ray tracing becoming entrenched.

    Similarly, there are no machine learning accelerators that you can buy today to shove under your desk other than those from Nvidia. And a gaming GPU is perfect to spread the R&D cost for that.

In the end, your biggest beef is with the one thing that can be changed at a moment’s notice. If Nvidia prices the RTX GPUs 20% cheaper (which they could easily do if forced to), none of your other arguments are relevant anymore.
     
  8. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    From what I can tell, once the tensor cores are fully utilized there is no possibility for the shader cores to be active:
    One tensor core needs the equivalent register read/write bandwidth of 16 FP ALUs (or 16 INT ALUs).
    A tensor core does a 4x4 matrix multiply/add, AxB+C=D: A and B each need 16 16-bit reads, C needs 16 32-bit reads, and the result D needs 16 32-bit writes.
    An SM has 64 FP ALUs, 64 INT ALUs, and 8 tensor cores. Doing the math, the 8 tensor cores use as many register ports as the 64 FP + 64 INT ALUs combined. So when the 8 tensor cores are active, there is no room to overlap shader computations on the SM.
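As a sanity check, that port-counting argument can be redone in a few lines. The accounting model here is an assumption on my part (every operand element, FP16 or FP32, occupies one register port slot, and a tensor core retires one 4x4 multiply/add per cycle), chosen to match the figures in the post above, not taken from any NVIDIA spec.

```python
# Rough register-port accounting for one tensor-core op vs scalar FMAs.
# Assumption: each operand element uses one port slot, regardless of width.

def port_slots_per_tensor_op():
    a = 4 * 4   # A: 16 FP16 element reads
    b = 4 * 4   # B: 16 FP16 element reads
    c = 4 * 4   # C: 16 FP32 element reads
    d = 4 * 4   # D: 16 FP32 element writes
    return a + b + c + d          # 64 slots per op

def port_slots_per_scalar_fma():
    return 3 + 1                  # scalar FMA: 3 reads + 1 write

ratio = port_slots_per_tensor_op() / port_slots_per_scalar_fma()
print(ratio)        # 16.0 -> one tensor core ~ 16 ALUs' worth of ports
print(8 * ratio)    # 128.0 -> 8 tensor cores ~ the 64 FP + 64 INT ALUs
```

Under this model the 8 tensor cores saturate exactly the port budget of the SM's 128 scalar ALUs, which is the overlap argument in a nutshell; a different accounting (e.g. packing two FP16 elements per 32-bit port) would give a smaller ratio.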
     
  9. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,741
    Likes Received:
    11,222
    Location:
    Under my bridge
    This thread is not about the value of ray-tracing to gamers. Stick to discussing DLSS.
     
  10. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,933
    Likes Received:
    1,629
    https://www.pcgamer.com/what-is-ray-tracing/



     
    OCASM likes this.
  11. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
It's time we got some honest image quality tests again, so that objective tests can assess anti-aliasing quality instead of marketing rhetoric.

For example, a good test for understanding what anti-aliasing does is this one:
    https://www.testufo.com/aliasing-vi...&background=000000&antialiasing=0&thickness=4

    Running this at DLSS 4K compared to plain 4K (even without AA) would simply show more aliasing.
    More aliasing means fewer segments defining the line, and thus more pronounced stair-casing with DLSS.
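The segment-counting point can be made concrete with a toy model: rasterize a shallow line y = slope*x at full and half horizontal resolution and count the distinct stair steps. The model and numbers are mine, purely illustrative.

```python
# Toy model of the stair-casing the testufo pattern makes visible:
# count how many distinct pixel rows a shallow line crosses.

def step_count(width, slope):
    # distinct integer rows hit by y = slope * x over the given width
    rows = {int(slope * x) for x in range(width)}
    return len(rows)

slope = 0.05
full = step_count(3840, slope)   # steps when rendered at native 4K width
half = step_count(1920, slope)   # steps when rendered at half width
print(full, half)                # 192 96
```

Upscaling the half-width image back to 4K cannot recover the missing steps, so each stair segment simply appears twice as long on screen.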
     
    #131 Voxilla, Sep 24, 2018
    Last edited: Sep 24, 2018
  12. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    464
    I mean...come on.
     
    xpea, trinibwoy, pharma and 4 others like this.
  13. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,186
    Likes Received:
    1,841
    Location:
    Finland
It's multiple frames, like TAA; it just does some cleanup to remove the afterimages. Here's a quote from the whitepaper, with my bolding:
     
  14. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    Multiple frames of half the resolution to be precise.
     
  15. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,186
    Likes Received:
    1,841
    Location:
    Finland
I'm not sure of that limitation. We know FFXV does it at half the resolution, but I'm not sure anyone has confirmed the Infiltrator or Star Wars demo resolutions, for example. The whitepaper at least makes no mention of having to use exactly half the res.
     
  16. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
DLSS 2x is full res; regular DLSS is half res.
     
    jlippo, BRiT and Ike Turner like this.
  17. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,791
    Likes Received:
    2,602
It really just shows that many of the pedantic complaints against RTX (other than price) exist because of its development by NVIDIA.
     
    LeStoffer and pharma like this.
  18. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,125
    Likes Received:
    2,885
    Location:
    Well within 3d
This seems like a relatively neutral outcome compared to a post-process compute shader heavily utilizing the SM, or to what would happen on prior architectures. It reduces the throughput of other rendering work by occupying either the standard SIMD units or the register ports they would need. The GPU can still opportunistically switch to other threads if the tensor/compute shader stalls, and the overall GPU/driver load balancing (controlling how many SMs may run a given invocation) remains.

    That was in reference to the image attached to the post I replied to, which was a simplified marketing diagram of a frame being rendered by Turing. Even if it draws from multiple frames, the post-process phase would be executed at the tail end of the frame that is the last input.
     
    #138 3dilettante, Sep 24, 2018
    Last edited: Sep 24, 2018
  19. shiznit

    Regular Newcomer

    Joined:
    Nov 27, 2007
    Messages:
    313
    Likes Received:
    64
    Location:
    Oblast of Columbia
    #139 shiznit, Sep 28, 2018
    Last edited: Sep 28, 2018
  20. ECH

    ECH
    Regular

    Joined:
    May 24, 2007
    Messages:
    682
    Likes Received:
    7
    Is it not true that DLSS needs to be implemented in the game engine itself?
    I'm curious as to why it's not something that can be added through Nvidia's driver control panel?
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.