Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

  1. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    822
    Likes Received:
    462
    I'm afraid it's not that simple; solving nonlinear partial differential equations is quite different from linear algebra.
     
    Jawed likes this.
  2. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    822
    Likes Received:
    462
    Old stuff, yes, and I'm not sure it will catch on. Houdini Pyro doesn't use anything like that, AFAIK.
    Even Nvidia's own GPU fluid simulator does not do this...
     
  3. Man from Atlantis

    Regular

    Joined:
    Jul 31, 2010
    Messages:
    849
    Likes Received:
    488
  4. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,359
    Likes Received:
    244
    Location:
    San Francisco
    Solving PDEs with neural networks is still in its infancy but it's making rapid progress.

    For instance, this was a recent breakthrough by a Caltech and Purdue University collaboration (Anima Anandkumar, who led this project, is a Director of Research at NVIDIA):


    So yes, in the not-so-distant future these workloads might progressively move to tensor-core-like HW.
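
    As a rough sketch of the core trick in that line of work (my own reading of the Fourier Neural Operator idea, in PyTorch 1.8+; the class name SpectralConv1d and all parameters here are made up for illustration, this is not the authors' code): the network learns its convolution kernel directly in Fourier space, FFT-ing the input field, mixing a handful of low-frequency modes with learned complex weights, and inverse-FFT-ing back. The mode mixing is dense linear algebra, which is what makes it a natural fit for tensor-core-like hardware.

        import torch

        class SpectralConv1d(torch.nn.Module):
            def __init__(self, channels: int, modes: int):
                super().__init__()
                self.modes = modes
                scale = 1.0 / (channels * channels)
                # Learned complex weights for the retained low-frequency modes.
                self.weights = torch.nn.Parameter(
                    scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, channels, grid) -- a sampled 1D field, e.g. u(x, t).
                x_ft = torch.fft.rfft(x)                      # to Fourier space
                out_ft = torch.zeros_like(x_ft)
                # Pointwise linear mix of channels, per retained mode.
                out_ft[:, :, : self.modes] = torch.einsum(
                    "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
                )
                return torch.fft.irfft(out_ft, n=x.size(-1))  # back to the grid

        layer = SpectralConv1d(channels=4, modes=16)
        u = torch.randn(8, 4, 64)       # batch of fields on a 64-point grid
        print(layer(u).shape)           # torch.Size([8, 4, 64])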
     
    PSman1700, Lightman, Newguy and 5 others like this.
  5. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,237
    Likes Received:
    2,046
    Location:
    Guess...
    Great news all round!
     
    PSman1700, Lightman and Krteq like this.
  6. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,949
    Likes Received:
    2,933
    Getting Immediate Speedups with NVIDIA A100 TF32
    November 13, 2020
    https://developer.nvidia.com/blog/getting-immediate-speedups-with-a100-tf32/
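
    For context on what the linked post covers, a minimal sketch (assuming PyTorch 1.7+ on an Ampere card; the two flags below are real PyTorch switches, but the snippet is mine, not from the blog post): TF32 keeps FP32's 8-bit exponent but rounds the mantissa to 10 bits, so ordinary FP32 matmuls and convolutions can run on Ampere's tensor cores with no code changes beyond opting in.

        import torch

        torch.backends.cuda.matmul.allow_tf32 = True   # matmuls may use TF32
        torch.backends.cudnn.allow_tf32 = True         # cuDNN convs may use TF32

        a = torch.randn(4096, 4096, device="cuda")
        b = torch.randn(4096, 4096, device="cuda")
        c = a @ b   # dispatched to tensor cores in TF32 on A100 / GeForce Ampere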
     
    #2506 pharma, Nov 20, 2020 at 11:10 AM
    Last edited: Nov 20, 2020 at 11:24 AM
  7. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    822
    Likes Received:
    462
    The poor PhD student felt so embarrassed by his professor that he already apologized:
    "Thanks! Sorry for the high-pitched expression on Twitter.. In the paper, we are much careful about the wording."

    I read the part of the paper regarding Navier-Stokes: they simulate a fluid in 2D on a 64x64 grid.
    One iteration step seems to take 0.005 s, as quoted.
    My GPU simulator simulates 304x304x304 (while also rendering it in 4K) at 160 FPS on a 3090.
    That is 0.00625 s per iteration; without the rendering, the simulation step alone would also take at most 0.005 s.
    So we both spend 0.005 s per iteration, but this NN method does it for a 64x64 grid while my method does it for 304x304x304.

    That makes my method 304^3 / 64^2 ≈ 7000x faster than this NN method, or about 7,000,000x faster than the solvers the NN method compares itself with.
    Now I expect to get 7000 times more likes here than the post before :)
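
    Back-of-the-envelope check of the numbers above (a throwaway Python snippet; the 3090 figures are the poster's own, and the 0.005 s NN step is as quoted from the paper):

        nn_cells = 64 ** 2     # 2D grid in the paper's Navier-Stokes test
        my_cells = 304 ** 3    # 3D grid in the poster's GPU simulator
        nn_step = 0.005        # seconds per NN iteration, as quoted
        my_step = 1 / 160      # 160 FPS -> 0.00625 s, including 4K rendering

        speedup = (my_cells / my_step) / (nn_cells / nn_step)
        print(f"{speedup:,.0f}x")  # ~5,500x including rendering; ~7,000x at 0.005 s/step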
     
    #2507 Voxilla, Nov 20, 2020 at 12:14 PM
    Last edited: Nov 20, 2020 at 1:51 PM
    Kej, Lightman, Naed and 1 other person like this.
  8. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,359
    Likes Received:
    244
    Location:
    San Francisco
    I believe you're missing the forest for the trees. These are early but tremendously encouraging developments. No one has claimed you should throw away your code.
    Also, it's a research paper that presents a new idea, not the most optimized way to perform a specific task on some given HW.
     
    PSman1700, tinokun, manux and 2 others like this.
  9. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    822
    Likes Received:
    462
    Why are these developments tremendously encouraging? If your job is data scientist, maybe.
    Some data scientists seem to think: we can solve any problem, and the cool thing is, we don't need to know anything about the problem. Give us the billion expected outputs for a billion inputs, and we can solve it. At least, we can create a solution that approximately produces those billion expected outputs for the billion inputs, and we hope that for the infinitely many other inputs it will approximately produce the right outputs too.
    For some problems that can be a good approach, especially those for which no exact solution is known, or where getting it wrong for some inputs is not a big deal.
    But for some problems, like fluid simulation, there are existing fast solutions that can be computed up to the exact answer.
    And there it gets a bit murky, because there the idea is to replace those solutions, and also the experts who know how to solve the problem, or how to find better solutions in the future. The solutions the data scientists produce can get it right to a certain degree, but not beyond that, for lack of data, or because the network gets too big to compute. That said, I'm not against NNs: if they produce solutions better than the existing ones and are fast to compute, they should be considered.
     
    #2509 Voxilla, Nov 21, 2020 at 8:24 AM
    Last edited: Nov 21, 2020 at 11:53 AM
    tekyfo and nutball like this.
  10. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,412
    Likes Received:
    1,408
    Location:
    Earth
    Many things are not pursued widely because they are seen as impossible/impractical. Once someone shows something is possible, the problem changes and rapid progress can be made because the foundation is there. Often it's a different team/person who is good at foundational research versus taking a known approach/paper and optimizing the hell out of it for a real-world application. The 4-minute mile was impossible for a long time, but once it was broken things changed dramatically.

    The viewpoint really differs depending on whether one is looking at what kind of games ship in 2022, or at research, trying to guess what could happen a few more years down the line.

    One thing that keeps DNNs out of games is hardware. There is no way in hell to go deep into DNNs in games until tensor-core-like solutions are mainstream. AMD's new instructions are a nice step forward but still very slow compared to tensor cores.

    One solution I can imagine in the near(ish) future is a really good DNN for upscaling materials/meshes: store textures/meshes at lower res/as metadata and resolve the extra details at runtime. Something like: "give me a brick/brick-wall texture and mesh here", start from something low-res, and let the DNN at runtime resolve the smaller-res texture and mesh into a much higher-res one. In essence, wildly trade disk space/artist time for a descriptive solution where details are resolved programmatically. I suspect that if this takes off, the first indication will be some kind of build-time baking solution; once hardware allows, it will move to realtime, making game install sizes much smaller as well.
     
    #2510 manux, Nov 21, 2020 at 7:23 PM
    Last edited: Nov 21, 2020 at 7:37 PM
    nnunn and T2098 like this.
  11. troyan

    Regular Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    255
    Likes Received:
    485
    PSman1700, Plano, nnunn and 3 others like this.
  12. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    822
    Likes Received:
    462
    That is a ridiculous statement, in case you are not aware.
    Rendering in those graphs is slow for all cards because raytracing is on.
    Look at the graphs without raytracing and see the difference; only under those conditions can the shader TFLOPs be maximised.
     
    Lightman likes this.
  13. troyan

    Regular Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    255
    Likes Received:
    485
  14. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    498
    Likes Received:
    295
    Lightman and Voxilla like this.
  15. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    601
    Likes Received:
    291
    I really don't find them "tremendously encouraging" at all. ML is good for what it is: nested probability estimations, and generating those from a dataset so people don't have to.

    This "ML all the stuff!" approach, though, just doesn't fundamentally make sense. If you need entirely known, perfectly predictable results, you wouldn't use an essentially statistics-based approach to begin with. It's the same reason "neural rendering" is nigh abandoned at this point already: you don't need to guess the rendering equation, you have it and need to evaluate it as fast as possible.

    Now, once you have enough results, you can use statistics to get a close guess for the rest of the correlated data. Thus denoising. And you could probably "denoise" a bunch of other things as well, adding a bunch of "close enough" samples to a Navier-Stokes simulation after you reach a good threshold. But it's not a fundamentally good starting point for many things. That's like asking a self-driving car to go somewhere without cameras.
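
    To make the "enough results first" point concrete, a toy illustration in Python (pure illustration, not any renderer's code): a Monte Carlo estimator's error falls off as 1/sqrt(N), so past some sample count each extra sample buys very little, which is exactly the regime where a statistical "guess the rest" pass such as denoising pays off.

        import numpy as np

        rng = np.random.default_rng(0)
        true_radiance = 0.5
        for n in (1, 4, 16, 64, 256, 1024):
            # 10,000 independent pixels, each averaging n uniform samples
            samples = rng.uniform(0.0, 1.0, size=(10_000, n))
            err = np.abs(samples.mean(axis=1) - true_radiance).mean()
            print(f"{n:5d} samples/pixel -> mean abs error {err:.4f}")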
     
    #2515 Frenetic Pony, Nov 25, 2020 at 2:24 AM
    Last edited: Nov 25, 2020 at 2:32 AM
    Voxilla likes this.
  16. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,359
    Likes Received:
    244
    Location:
    San Francisco
    Well, I guess we'll have to shut down research centers, university departments and corporate research labs. No doubt there is a lot of poor work around, but there is also a flux of new results that were simply unthinkable only 5 years ago.
    No Monte Carlo rendering for you.
    Neural rendering is a hot topic making progress at breakneck speed. I am not sure how you can even dream of saying it is being abandoned.

    If you think people use ML/DL in rendering to guess the rendering equation, you are in for a surprise. Sure, there is the odd paper here and there that pretends to know nothing about gfx, but it's hardly representative of the best work.
     
    tinokun, manux, McHuj and 2 others like this.
  17. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    822
    Likes Received:
    462
    Yes:

    You've just mentioned 'hardware-assisted ray tracing being available for Crysis Remastered at launch'. Can you go into a bit more detail? Is this using Turing and Ampere's RT Cores, and what kind of improvement have you seen by enabling them?

    [SH] We are using the Vulkan extensions to enable hardware ray tracing on NVIDIA RTX cards. This gives us a significant performance boost in the game. The differences you will see are in the reflections of animated objects, besides the main character, and in performance.

    Hardware support gives us a 5-9 ms rendering-time performance boost with ray tracing enabled. In areas where ray tracing is not 100% present, like on a wooden floor, you won't see many differences in performance between software and hardware ray tracing, but for 95% of the game you will feel the performance benefits.

    Why did you opt to go with NVIDIA's Vulkan 'VKRay' extension for ray tracing instead of using Microsoft's DXR API?

    [SH] We developed the game with DX11 and our own CRYENGINE API in place. The Vulkan extension was a great fit for us to build everything on top of our current solution to improve performance.
     
    Lightman and PSman1700 like this.