Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

  1. Accord1999

    Newcomer

    Joined:
    Jun 21, 2003
    Messages:
    133
    Likes Received:
    6
    With Nvidia's Power Limit capability, you can go in the opposite direction: set the maximum board power you want to use, let the video card downclock itself to a stable level, and then tweak upwards.
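
    A minimal sketch of that workflow through NVML (the library nvidia-smi sits on), assuming GPU 0 and a hypothetical 270 W starting cap; setting the limit needs root/admin rights:

        // power_cap.cpp - build with: g++ power_cap.cpp -lnvidia-ml
        // Sketch: cap board power first, then tune upwards from a stable baseline.
        #include <nvml.h>
        #include <cstdio>

        int main() {
            nvmlInit_v2();
            nvmlDevice_t dev;
            nvmlDeviceGetHandleByIndex_v2(0, &dev);          // GPU 0 (assumption)

            unsigned int minMw = 0, maxMw = 0;
            nvmlDeviceGetPowerManagementLimitConstraints(dev, &minMw, &maxMw);
            printf("allowed power limit range: %u..%u mW\n", minMw, maxMw);

            // Start well below the default board power (hypothetical 270 W),
            // let the card settle to stable clocks, then raise in small steps.
            unsigned int capMw = 270000;                     // needs root/admin
            if (nvmlDeviceSetPowerManagementLimit(dev, capMw) == NVML_SUCCESS)
                printf("power limit set to %u mW\n", capMw);

            nvmlShutdown();
            return 0;
        }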
     
    Kyyla, LeStoffer, PSman1700 and 3 others like this.
  2. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,679
    Good point. I haven't seen anyone do it that way, but it might be interesting. I'd assume power supply recommendations account for people overclocking, so maybe a 15-20% power reduction would work on a 650W.
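
    Back-of-envelope for that idea, assuming the 3080's 320 W board power and a guessed ~200 W for the rest of the system; just a sketch, not a sizing guide:

        // psu_headroom.cpp - rough check whether a power-capped card fits a PSU.
        #include <cstdio>

        int main() {
            const double card_tdp_w  = 320.0;  // RTX 3080 board power
            const double reduction   = 0.15;   // the 15% power-limit cut above
            const double rest_of_sys = 200.0;  // assumed CPU + rest of system
            const double psu_w       = 650.0;

            double card_w  = card_tdp_w * (1.0 - reduction);   // 272 W
            double total_w = card_w + rest_of_sys;             // ~472 W
            // Leftover headroom also has to absorb transient spikes,
            // which is presumably why vendor recommendations are padded.
            printf("capped card: %.0f W, system: %.0f W, headroom: %.0f W\n",
                   card_w, total_w, psu_w - total_w);
            return 0;
        }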
     
  3. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Cyan, Lightman, PSman1700 and 3 others like this.
  4. arandomguy

    Regular Newcomer

    Joined:
    Jul 27, 2020
    Messages:
    252
    Likes Received:
    355
    My understanding is that AV1 encoding is still rather immature at this stage: there are still a lot of gains happening (several factors' worth) in performance and performance/quality just from improvements in how it's done. I'd guess real-time AV1 encoding on something like a consumer GPU (which is rather transistor- and power-sensitive) likely doesn't make sense at this point (if it's even possible given practical constraints) due to that immaturity, especially given the lack of usage.

    I might be wrong about VP9, but the interest there is that Twitch (and possibly other streaming platforms?) might be looking to start implementing VP9 relatively soon, while wider AV1 adoption might not come until closer to 2025. Then again, maybe H.264 encode improvements can essentially outrace the benefits of moving to VP9?

    Many (if not most?) reviewers, including rather notable ones, still use total system power consumption for power measurements. It's also not always clear what they are actually measuring: some kind of average, a peak figure, or whether they even have the capability of capturing data points beyond basically eyeballing a readout.
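
    For what it's worth, GPU-only board power with a real average and peak is easy to log through NVML rather than eyeballed; a sketch, assuming GPU 0 and a 10 ms polling interval:

        // power_log.cpp - build with: g++ power_log.cpp -lnvidia-ml
        // Logs GPU-only board power and reports average and peak samples,
        // instead of eyeballing a readout or measuring at the wall.
        #include <nvml.h>
        #include <chrono>
        #include <thread>
        #include <cstdio>

        int main() {
            nvmlInit_v2();
            nvmlDevice_t dev;
            nvmlDeviceGetHandleByIndex_v2(0, &dev);   // GPU 0 (assumption)

            double sumMw = 0.0;
            unsigned int peakMw = 0;
            int n = 0;
            for (int i = 0; i < 3000; ++i) {          // ~30 s at 10 ms steps
                unsigned int mw = 0;
                if (nvmlDeviceGetPowerUsage(dev, &mw) == NVML_SUCCESS) {
                    sumMw += mw;
                    if (mw > peakMw) peakMw = mw;
                    ++n;
                }
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
            if (n > 0)
                printf("average: %.1f W, peak sample: %.1f W (%d samples)\n",
                       sumMw / n / 1000.0, peakMw / 1000.0, n);
            nvmlShutdown();
            return 0;
        }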
     
    swaaye and CarstenS like this.
  5. Man from Atlantis

    Regular

    Joined:
    Jul 31, 2010
    Messages:
    960
    Likes Received:
    853
    The 3080 is 42% (25-62%) faster than the 2080 Ti on average in CompuBench.
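
    For clarity on how a figure like that is built, here is the aggregation over per-test score ratios; the ratios below are placeholders, not actual CompuBench data:

        // speedup_summary.cpp - build with: g++ -std=c++17 speedup_summary.cpp
        // Mean and range of per-test speedups, as quoted above.
        #include <algorithm>
        #include <cstdio>
        #include <vector>

        int main() {
            // Hypothetical 3080/2080Ti score ratios, one per sub-test.
            std::vector<double> ratios = {1.25, 1.31, 1.38, 1.47, 1.55, 1.62};
            double sum = 0.0;
            for (double r : ratios) sum += r;
            double mean = sum / ratios.size();
            auto [lo, hi] = std::minmax_element(ratios.begin(), ratios.end());
            printf("+%.0f%% on average (range +%.0f%% to +%.0f%%)\n",
                   (mean - 1) * 100, (*lo - 1) * 100, (*hi - 1) * 100);
            return 0;
        }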

     
    #1285 Man from Atlantis, Sep 6, 2020
    Last edited: Sep 6, 2020
    Kugai Calo, Lightman, nnunn and 3 others like this.
  6. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Re: RTX 3080 Compubench benchmarks

    https://www.notebookcheck.net/Nvidi...d-Big-Navi-a-hard-target-to-hit.492276.0.html
     
    Lightman and PSman1700 like this.
  7. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    I was expecting more of an advantage in synthetic Compubench tests given the massive flops increase. Clearly the bottleneck is elsewhere.
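
    A quick sanity check against the published specs supports that: FP32 throughput more than doubled while bandwidth grew by barely a quarter, so bandwidth-bound tests can't scale with the flops.

        // bottleneck_check.cpp - theoretical FP32 throughput vs memory bandwidth.
        #include <cstdio>

        int main() {
            // Published reference boost clocks and core counts.
            double tf_2080ti = 2 * 4352 * 1.545e9 / 1e12;   // ~13.4 TFLOPS
            double tf_3080   = 2 * 8704 * 1.710e9 / 1e12;   // ~29.8 TFLOPS
            double bw_2080ti = 616.0;   // GB/s, 14 Gbps GDDR6 @ 352-bit
            double bw_3080   = 760.0;   // GB/s, 19 Gbps GDDR6X @ 320-bit

            printf("FP32: +%.0f%%, bandwidth: +%.0f%%\n",
                   (tf_3080 / tf_2080ti - 1) * 100,         // ~+121%
                   (bw_3080 / bw_2080ti - 1) * 100);        // ~+23%
            return 0;
        }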
     
    Picao84 and Lightman like this.
  8. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    While doing F@H with a borrowed 2080 Ti, I did this all the time: power limit to 90%, or my 450W PSU would shut off when the card hit high load. With my similarly underrated (and now deceased) old PSU, I had to cap my Vega 56 as well, at 95%. Edit: yeah, with my next build I won't cheap out on PSU wattage any longer.
     
    #1288 CarstenS, Sep 6, 2020
    Last edited: Sep 6, 2020
    sonen, Lightman, PSman1700 and 2 others like this.
  9. dorf

    Newcomer

    Joined:
    Dec 21, 2019
    Messages:
    126
    Likes Received:
    417
    Any ideas whether the CPU load (BVH stuff) in real-time raytracing will be about the same as with Turing, or shifted more to the GPU?
     
  10. Picao84

    Veteran

    Joined:
    Feb 15, 2010
    Messages:
    2,109
    Likes Received:
    1,195
    Do we know whether we'll get reviews of the RTX 3070 as well when the NDA drops, or only in October?
     
  11. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,679
  12. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    (And I love a single big 12V rail; simpler to manage than multiple rails, IMO.)
     
  13. fellix

    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,552
    Likes Received:
    514
    Location:
    Varna, Bulgaria
    At this rate of Tensor logic investment, what are the chances that at some point in the future Nvidia will just fold all arithmetic ALUs into yet more Tensor arrays?
    The MMA programming model is already compliant with the standard grid/warp ordering of conventional SIMT scheduling.
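
    That compliance shows in the WMMA API: a Tensor Core MMA is just a warp-wide operation issued from an ordinary kernel with the usual grid/block launch. A minimal sketch (FP16 inputs, FP32 accumulate, one 16x16x16 tile per warp):

        // wmma_sketch.cu - build with: nvcc -arch=sm_70 wmma_sketch.cu
        // One warp computes one 16x16x16 tile; the launch is plain SIMT.
        #include <cuda_fp16.h>
        #include <mma.h>
        using namespace nvcuda;

        __global__ void tile_mma(const half* A, const half* B, float* C) {
            wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
            wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
            wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

            wmma::fill_fragment(acc, 0.0f);
            wmma::load_matrix_sync(a, A, 16);    // warp-collective loads
            wmma::load_matrix_sync(b, B, 16);
            wmma::mma_sync(acc, a, b, acc);      // Tensor Core issue, per warp
            wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
        }

        // Launched like any other kernel: tile_mma<<<1, 32>>>(dA, dB, dC);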
     
    #1293 fellix, Sep 6, 2020
    Last edited: Sep 6, 2020
  14. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,502
    Likes Received:
    24,399
    Or why not just sell separate cards that only have Tensors, so now gamers need to buy a GPU and a TPU in order to game?
     
    Lightman and egoless like this.
  15. iroboto

    iroboto Daft Funk
    Legend Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    14,833
    Likes Received:
    18,633
    Location:
    The North
    Not even the AI line of GPUs would do this. Standard compute is still very necessary; not all machine learning uses the same types of computation. Even the Vega cards are very good at certain types of algorithms.
    Throwing out compute in favour of tensor cores is unlikely to ever happen. You need compute flexibility.

    Or to put it another way: tensor cores accelerate one type of machine learning, but we are always developing new methods and algorithms. The need for flexible compute is the enabler for that.
     
    #1295 iroboto, Sep 6, 2020
    Last edited: Sep 6, 2020
  16. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    Guess this cements it as the average, outside of raytracing titles.

    Looking at the numbers, if we take transistor count as a measure, the chip is 50% bigger than a 2080 Ti and performs 40% better (a bit better than that in raytracing). The die sizes versus the process shrink roughly match up, so despite all the changes to parallelization, performance per relative die size hasn't improved at all; it's just a bigger chip. Performance per watt is better, but nowhere near as huge a jump as claimed: around 20% improvement over the Ti, if the TDP rating for the 3080 is accurate. The exception is raytracing performance, which apparently does better.

    So outside of raytracing, Ampere doesn't seem like a huge jump over Turing in sheer engineering terms. Thankfully there's competition from AMD now to drive a jump in benefits to consumers; on that front it does deliver, improving even on the Turing Super series.
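
    Putting rough numbers on that, with the published transistor counts (TU102 18.6B, GA102 28.3B; both cards use partially disabled dies) and the ~40% CompuBench average from above:

        // perf_per_transistor.cpp - is Ampere doing more per transistor than Turing?
        #include <cstdio>

        int main() {
            double xtors_tu102 = 18.6e9;   // 2080 Ti die
            double xtors_ga102 = 28.3e9;   // 3080 die
            double perf_ratio  = 1.40;     // ~+40% from the CompuBench average

            double size_ratio = xtors_ga102 / xtors_tu102;            // ~1.52
            printf("transistors: +%.0f%%, perf/transistor: %.2fx\n",
                   (size_ratio - 1) * 100, perf_ratio / size_ratio);  // ~0.92x
            return 0;
        }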
     
    Lightman and Kugai Calo like this.
  17. kalelovil

    Regular

    Joined:
    Sep 8, 2011
    Messages:
    568
    Likes Received:
    104
    Memory bandwidth has only improved by 24%, though.
    And the RTX 3080 is more cut down than the RTX 2080 Ti was: two memory channels disabled versus one, and 20% of SMs disabled versus 5%. RTX 3090 vs Titan RTX is probably a better comparison.
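
    Backing those percentages out of the published die configs (GA102: 84 SMs and a 384-bit bus; TU102: 72 SMs and 384-bit), a quick sketch:

        // cutdown_compare.cpp - how cut down is each card relative to its full die?
        #include <cstdio>

        int main() {
            // Full die vs shipping config (SMs, 32-bit memory channels).
            int sm_ga102 = 84, sm_3080   = 68;   // GA102 -> RTX 3080
            int sm_tu102 = 72, sm_2080ti = 68;   // TU102 -> RTX 2080 Ti
            int ch_ga102 = 12, ch_3080   = 10;   // 384-bit -> 320-bit: 2 channels off
            int ch_tu102 = 12, ch_2080ti = 11;   // 384-bit -> 352-bit: 1 channel off

            printf("3080:    %.0f%% of SMs disabled, %d channels off\n",
                   100.0 * (sm_ga102 - sm_3080) / sm_ga102, ch_ga102 - ch_3080);
            printf("2080 Ti: %.0f%% of SMs disabled, %d channels off\n",
                   100.0 * (sm_tu102 - sm_2080ti) / sm_tu102, ch_tu102 - ch_2080ti);
            return 0;   // prints ~19% vs ~6%
        }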
     
    xpea likes this.
  18. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    604
    Likes Received:
    1,123
    GA104 is 392.5 mm² and has 61% more transistors than TU106. The RTX 3070 will be around 70% faster than a 2060 Super in games while having the same bandwidth. Every transistor spent has resulted in a matching performance increase. That is actually really good after the transition from Pascal to Turing.
     
    xpea likes this.
  19. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    +50% according to Nvidia, so maybe a bit less when looking at independent review summaries.
     