Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

Tags:
  1. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Because you could fill it with FP64 units, especially if you already need a full 32-bit multiplier when it's supposed to double as INT32. ;)
     
  2. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,627
    Likes Received:
    226
    The flops and wattages we compare must be the figures Nvidia advertises, not other numbers. If you say 16.5 for the 2080 Ti, you could also say 25.5 for the 3070, or whatever.
     
  3. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Without them telling how they achieve that number, it's quite bold talk.
    MS's "13 TF of compute" really works out to up to 380G ray-box and 95G ray-triangle intersections per second peak. According to someone I've learned to trust to know what he's talking about, the 2080 Ti would reach around 444G ray-box intersections peak (68*4*1.635GHz); ray-triangle is a little fuzzier, but his assumption was that it would be around 1/4 of ray-box, like the XSX.
    Assuming his 2080 Ti numbers aren't way off, there's no way the 2080's RT cores are anywhere near the "equivalent of 34TF of compute" using the same method MS did for their numbers.
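    The arithmetic above can be sketched out; the SM count, per-clock ray-box rate, and boost clock are the figures quoted in the post (not official Nvidia numbers), and the 1/4 ray-triangle ratio is the poster's assumption:

```python
# Back-of-envelope math for the peak intersection rates quoted above.
# Assumed figures (from the post, not official Nvidia numbers): 68 SMs on the
# 2080 Ti, 4 ray-box tests per SM per clock, 1.635 GHz boost clock, and a
# ray-triangle rate of 1/4 the ray-box rate, mirroring the XSX split.

sms = 68
ray_box_per_sm_per_clock = 4
boost_clock_ghz = 1.635

ray_box_gps = sms * ray_box_per_sm_per_clock * boost_clock_ghz  # ~444.7 G/s
ray_tri_gps = ray_box_gps / 4                                   # ~111.2 G/s

print(f"2080 Ti peak ray-box:      {ray_box_gps:.0f} G/s")
print(f"2080 Ti peak ray-triangle: {ray_tri_gps:.0f} G/s")

# XSX figures quoted in the post, for comparison:
xsx_ray_box_gps, xsx_ray_tri_gps = 380, 95
print(f"2080 Ti / XSX ray-box ratio: {ray_box_gps / xsx_ray_box_gps:.2f}x")
```

    At a ~1.17x ray-box ratio, the gap is nowhere near the 260%+ that the marketing comparison would imply, which is the poster's point.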
     
    Lightman likes this.
  4. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    Depends on what you compare to what, and at what clocks Ampere, and the 3070 specifically, will actually run. Again, we need to know the technical details.
     
    PSman1700 likes this.
  5. Dictator

    Regular

    Joined:
    Feb 11, 2011
    Messages:
    681
    Likes Received:
    3,969
    It will be a cross-architecture comparison, but I have heard from different types of sources that the RT performance differential referenced by Nvidia is going to be indicative of what we see in real-world games on average. Hopefully there is an early-generation game that offers RT on XSX that we can test against the PC version. Preferably on and off, and preferably with the same quality settings.

    Perhaps we will know more as soon as the first GDC presentations about next gen RT pop up next year.
     
    PSman1700, pharma and DavidGraham like this.
  6. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London


    1080 Ti versus 980 Ti is about 70% in these tests, though I think it's fair to say the variance is stronger, principally because the 980 Ti is short of memory at 4K.
    I presume that LOD will not help with decompression, since textures would be stored once for all MIPs in a block that needs to be decompressed, no matter which MIP level is required on screen.

    The reduced memory of the cheaper cards will also hurt in terms of "scratchpad space while decompressing", but that effect should be reduced with the right kind of pipelining. Decompression workloads do tend to be bursty in nature, though, even at the finest granularity.

    I think it's going to be a couple of years before we see AAA games making heavy use of DirectStorage. So those GA106s will be dead anyway.
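    A minimal sketch of that pipelining idea, in Python for illustration: overlap reading the next compressed block with decompressing the current one, so the scratchpad only ever holds a couple of blocks instead of a whole burst. The block layout, zlib codec, and helper names here are hypothetical stand-ins, not the DirectStorage API:

```python
# Hypothetical double-buffered decompression pipeline. While block i is being
# decompressed on this thread, the read of block i+1 is in flight on a worker.
import zlib
from concurrent.futures import ThreadPoolExecutor

def read_block(blocks, i):
    # Stand-in for an asynchronous disk read of one compressed block.
    return blocks[i]

def decompress_pipelined(blocks):
    out = []
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(read_block, blocks, 0)  # kick off first read
        for i in range(len(blocks)):
            data = pending.result()  # wait for block i
            if i + 1 < len(blocks):
                # Prefetch block i+1 before we start decompressing block i.
                pending = io.submit(read_block, blocks, i + 1)
            out.append(zlib.decompress(data))  # overlaps with the prefetch
    return out

# Round-trip check with small synthetic blocks.
blocks = [zlib.compress(bytes([i]) * 4096) for i in range(8)]
assert decompress_pipelined(blocks) == [bytes([i]) * 4096 for i in range(8)]
```

    The point is that peak scratchpad use stays at roughly two blocks regardless of burst length, which is why a card with less VRAM can still keep up if the pipeline is set up this way.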
     
  7. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    I don't believe this methodology is accurate at all. NVIDIA didn't state the output of each RT core.

    They would know how Microsoft derived their numbers, given the collaboration between NVIDIA and Microsoft.
     
    PSman1700 and pharma like this.
  8. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Well, it is coming from someone who should know what he's talking about, but it's of course 2nd (or rather 3rd) hand information.

    NVIDIA knows for sure, but without telling the details of how they came up with their numbers it's still marketing speak, and may or may not be misleading. I for one don't believe for a second, without proof, that even the 2080 is over 260% the speed of the XSX in RT.
     
  9. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,090
  10. chris1515

    Legend

    Joined:
    Jul 24, 2005
    Messages:
    7,157
    Likes Received:
    7,965
    Location:
    Barcelona Spain
  11. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,090
  12. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Depends on whether 7nm or 8nm ... :rolleyes:
     
    Lightman and PSman1700 like this.
  13. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    That's for Ray Intersection, what about BVH Traversal?
     
  14. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    #1114 CarstenS, Sep 2, 2020
    Last edited: Sep 2, 2020
  15. If the NVENC encoder hasn't been improved, I may hunt for a 2080 Ti in the future. I'm planning on upgrading from a 1080p monitor to a 1440p monitor, as 4K is (I think) a waste of resources at 40 cm from the monitor. The 3080 looks nice, but holy moly at those power requirements, and I feel the 8GB of the 3070 may be a disadvantage in the near future.
     
  16. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,090
    It was a 3070 Ti with 16GB of VRAM. I didn't catch any price, if there was one? The page got taken down fast. I think NV wants to announce their products themselves.

    But, will the 3090 run Crysis Remastered? :p
     
    BRiT likes this.
  17. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    The encoder stayed the same, says Nvidia; only the decoder was enhanced (for example, AV1).
    Read the fine print: "2 - Recommendation is made based on PC configured with an Intel Core i9-10900K processor. A lower power rating may work depending on system configuration." Maybe it helps if your processor does not guzzle 250-ish watts alone. :)
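    That fine print invites a bit of back-of-envelope math. The 320 W board power and 750 W PSU recommendation for the 3080 are Nvidia's published figures; the ~250 W CPU draw is the post's estimate for a heavily loaded i9-10900K, and the rest-of-system draw and headroom factor below are rough assumptions for illustration:

```python
# Rough PSU sizing behind Nvidia's 750 W recommendation for the 3080.
# gpu_w is Nvidia's published board power; cpu_w is the "250-ish watts"
# estimate from the post; rest_w and the headroom factor are guesses.
gpu_w = 320
cpu_w = 250        # heavily loaded i9-10900K, per the post
rest_w = 75        # motherboard, RAM, drives, fans: rough guess
headroom = 1.25    # margin so the PSU isn't run near its limit

recommended = (gpu_w + cpu_w + rest_w) * headroom
print(f"Suggested PSU: {recommended:.0f} W")  # ~806 W with these assumptions

# With a more typical ~125 W gaming-load CPU, a smaller supply plausibly works,
# which is what "a lower power rating may work" is hinting at:
print(f"With modest CPU: {(gpu_w + 125 + rest_w) * headroom:.0f} W")  # 650 W
```

    The numbers line up with the footnote: size for the worst-case CPU and you land around 750-800 W; size for a typical gaming load and something smaller suffices.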
     
    #1117 CarstenS, Sep 2, 2020
    Last edited: Sep 2, 2020
    Cuthalu, Lightman, sonen and 5 others like this.
  18. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    PSman1700 likes this.
  19. shiznit

    Regular

    Joined:
    Nov 27, 2007
    Messages:
    345
    Likes Received:
    95
    Location:
    Oblast of Columbia
    It rarely exceeds 100W in games. I'm not sure why we're making a big deal about Cinebench consumption. But I'm curious about the potential PCIe 3.0 bottleneck.

    Edit: looks like the upper bound is ~150W, with average consumption ~120W when juiced up to 5.2 GHz @ 1.4 V. A little tweaking and you could get it closer to 100W. Not bad for top-tier gaming perf in my book.

     
    #1119 shiznit, Sep 2, 2020
    Last edited: Sep 2, 2020
    PSman1700 likes this.
  20. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    13,878
    Likes Received:
    4,724
    Yeah, the jump from the STG-2000 to the Riva 128 was a game changer. The performance upgrade was drastic. It's hard to find benchmarks from back then, however.
     
    Lightman likes this.