Nvidia Turing Speculation thread [2018]

Discussion in 'Architecture and Products' started by Voxilla, Apr 22, 2018.

Tags:
Thread Status:
Not open for further replies.
  1. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,931
    Likes Received:
    5,533
    Location:
    Pennsylvania
    Yeah it would have to be almost real-time to be used in instant replays.
     
  2. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    I’m pretty sure he was using the slide to show that RT has been accelerated (the time graphs), not what the real layout looks like.

    If your technical judgement is so clouded by emotions, why bother to engage in this kind of discussion?
     
  3. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    Anything based on deep learning also has the potential to generate artefacts, particularly in rare cases not seen during training.
    If you look at the DL slo-mo footage of the falling ice hockey skater, there are huge tearing artefacts on the skates.
    As good as it looks, it might not be good enough to cover everything.
     
    Lightman and Malo like this.
  4. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    That deep slomo network requires at least an order of magnitude more calculations and cannot be done in real time. (At least not on a single GPU.)

    It’s a large, deep network.

    Here’s the paper: https://arxiv.org/pdf/1712.00080.pdf
     
    Cat Merc, Alexko, Lightman and 2 others like this.
  5. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    ^^ Slo-mo cannot be done in realtime anyway, without changing a few physics constants *runaway*
     
    Cat Merc, dobwal, Alexko and 4 others like this.
  6. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Cue Jensen: "The more you buy, the more you save" :)
    I assume it comes down to whether the spanking new large DGX node can be centralised at the broadcasting station, away from the hi-res TV/film camera feed, giving greater flexibility.
    Those hi-res broadcast cameras used for best fidelity in sports are (or at least used to be) shockingly expensive, even before considering high-speed motion capture with slo-mo playback.
     
  7. Lorens

    Joined:
    Aug 24, 2018
    Messages:
    4
    Likes Received:
    0
    I am very confused about which part of the denoising is done with the help of AI rather than by normal shaders doing the work.

    In the Quadro Turing presentation they showed that only global illumination is denoised by an AI-based denoiser, with reflections and all other lighting effects handled in compute. But the slide at the Turing event made no mention at all of AI denoising, not even for global illumination.

    Are the tensor cores too slow for it in real-time gaming situations?
     
  8. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,680
    I think the tensor cores are used for what they're calling DLSS (deep-learning super-sampling), but which seems to be an upscale instead of real super sampling.
     
    Heinrich04 likes this.
  9. Lorens

    Joined:
    Aug 24, 2018
    Messages:
    4
    Likes Received:
    0
    Yes, they are used for that, but Jensen also suggested they are used for ray-tracing denoising, or at least could be in some capacity. Nvidia is a bit vague about it.

    Maybe it's just something they are still developing, and we will see AI-based denoising of ray tracing at a later time.
     
  10. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,931
    Likes Received:
    5,533
    Location:
    Pennsylvania
    You seem to imply Tensors couldn't be used for both?
     
  11. Lorens

    Joined:
    Aug 24, 2018
    Messages:
    4
    Likes Received:
    0
    No, I am saying they could be, but during the presentation the slide said they don't, which would imply that they only use the tensor cores for DLSS.
     
  12. Ike Turner

    Veteran

    Joined:
    Jul 30, 2005
    Messages:
    2,110
    Likes Received:
    2,304
    The Tensor Cores are used for denoising in practically all of the RT demos shown (Pica Pica, Star Wars, Cornell box, etc.). Now, can they do both denoising and DLSS at the same time? We don't know yet.
     
  13. Lorens

    Joined:
    Aug 24, 2018
    Messages:
    4
    Likes Received:
    0
    https://image.slidesharecdn.com/jhh...rce-rtx-launch-event-21-638.jpg?cb=1534805756

    This slide seems to imply they don't.

    Compare that to the Quadro presentation slide, where it does say AI below global illumination.
     
  14. Clukos

    Clukos Bloodborne 2 when?
    Veteran

    Joined:
    Jun 25, 2014
    Messages:
    4,688
    Likes Received:
    4,353
    Honestly, that's what I expect from the 7nm TU102 replacement:
    • Increased clockspeed to around 2350-2500 MHz boost, 2050 base
    • 2 RT cores per SM
    • 64 SMs (4096 cc)
    • 16GB HBM2 or 24GB GDDR6
    • Identical tensor core count
    That should be a significant leap over Pascal in "normal" FP32 workloads while offering faster RT perf than Turing at a -hopefully- smaller die size and lower cost.
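    As a rough sanity check on that spec (a sketch using the speculated numbers above; the 2400 MHz figure is just the midpoint of the quoted boost range):

    ```python
    # Back-of-the-envelope FP32 throughput for the speculated 7nm TU102 replacement.
    # Each CUDA core can retire one FMA (2 FLOPs) per clock.
    cuda_cores = 4096          # 64 SMs x 64 cores
    boost_clock_hz = 2400e6    # midpoint of the speculated 2350-2500 MHz boost range

    fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
    print(f"{fp32_tflops:.1f} TFLOPS FP32")  # ~19.7 TFLOPS
    ```

    For comparison, that would be well above the ~13.4 TFLOPS of a 2080 Ti at its rated boost, mostly on clock speed alone.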
     
    #534 Clukos, Aug 25, 2018
    Last edited: Aug 26, 2018
  15. McHuj

    Veteran Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,613
    Likes Received:
    869
    Location:
    Texas
    I think a big limiter will be bandwidth in the 7nm parts. NVIDIA is already using 14 Gbps GDDR6, going to the current max of 18 Gbps is only a modest bump.

    They may need to go to HBM in the 102 class GPU to get the bandwidth.
     
  16. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,894
    Likes Received:
    4,548
    Wonder if it's a future feature, since it's included in the NGX SDK. Currently there are details only for DLSS and AI Painting, but the stack also includes placeholders for AI Slow-Mo and AI Res-Up.
    https://developer.nvidia.com/rtx/ngx
     
    #536 pharma, Aug 25, 2018
    Last edited: Aug 25, 2018
    Heinrich04 likes this.
  17. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    A theoretical 18Gbps/384-bit part would see memory bandwidth go up to 864GB/s from 616GB/s, a healthy 40% increase and well within the realm of possibility.
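    The arithmetic behind those figures checks out: peak GDDR6 bandwidth is the per-pin data rate times the bus width in bytes (the 616 GB/s baseline corresponds to 14 Gbps on a 352-bit bus, as on the RTX 2080 Ti):

    ```python
    def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
        """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) x bus width in bytes."""
        return data_rate_gbps * bus_width_bits / 8

    current = peak_bandwidth_gb_s(14, 352)  # 616.0 GB/s (2080 Ti)
    future = peak_bandwidth_gb_s(18, 384)   # 864.0 GB/s (hypothetical part)
    print(f"{future / current - 1:.0%} increase")  # 40% increase
    ```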
     
  18. McHuj

    Veteran Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,613
    Likes Received:
    869
    Location:
    Texas
    That's true, but I'm just not expecting that in the first round of 7nm chips.
     
  19. manux

    Veteran

    Joined:
    Sep 7, 2002
    Messages:
    3,034
    Likes Received:
    2,276
    Location:
    Self Imposed Exhile
     
    OCASM and pharma like this.
  20. eloyc

    Veteran

    Joined:
    Jan 23, 2009
    Messages:
    2,551
    Likes Received:
    1,705
    Way too much chit-chat, IMO, and actually little detail about the questions asked.

    And this recent attitude of "thanks to RT everything feels real and like you're actually in the game, so before this moment everything was crap" is kind of annoying.
     
    Lightman likes this.