Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

  1. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,092
    It leads me to believe they aren't giving a false reflection of the performance either. They let the benchmarks speak, and that's what impressed people: an 80% to 100% increase in raw raster performance in multiple well-known modern games this early on, a 200% increase in ray tracing, and then DLSS going into berserk mode.
    All that at prices that seem humane again. Let's not forget all the other features they added along the way.
    The GPU design seems very good too, with the airflow and build quality on top of that.
    I'm sure those 20-36 TFLOPS can come in handy in many UE5 games.

    People expected a 30% increase before that day.
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    No doubt, it's the end result that matters.
     
    PSman1700 likes this.
  3. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,092
    And most importantly, price. I think NV feels the heat from AMD, who I think are advancing in the GPU department. I wouldn't mind another '9700 Pro' moment, shaking up the PC GPU space a bit more.

    Edit: very good memories of the 9700 Pro; that thing tore through DX9 games like HL2, which was the game of the century. It held up very well in other titles too.
     
  4. LiXiangyang

    Newcomer

    Joined:
    Mar 4, 2013
    Messages:
    87
    Likes Received:
    48
    Well, the compiler can emit different instructions for different GPU architectures, and in practice it can reorganize instructions to fit each architecture's pipeline better. A good programmer can also get the most out of the target architecture by allocating resources carefully, adjusting read/write patterns, and hiding latency better for a particular GPU, giving the compiler plenty of optimization hints without ever writing machine code.

    Benchmarking via OpenCL/CUDA is quite a bit different from game benchmarking. In the latter case, the programmer mostly deals with a pile of APIs and usually doesn't need to get involved with the underlying compute resources, while the code behind those APIs has already been fine-tuned and optimized by the vendor.
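    As a concrete illustration of the kind of compiler hint described above, here's a minimal sketch in plain C (not CUDA; the file name and function are made up for illustration). The `restrict` qualifier promises the compiler that the three pointers never alias, which frees it to reorder loads/stores and vectorize the loop without any hand-written machine code:

    ```c
    #include <stdio.h>

    /* 'restrict' tells the compiler a, b, and out never overlap,
     * so it may vectorize and reorder this loop aggressively --
     * an optimization hint, not a behavioral change. */
    static void saxpy(int n, float alpha,
                      const float *restrict a,
                      const float *restrict b,
                      float *restrict out)
    {
        for (int i = 0; i < n; ++i)
            out[i] = alpha * a[i] + b[i];
    }

    int main(void)
    {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float out[4];
        saxpy(4, 2.0f, a, b, out);
        for (int i = 0; i < 4; ++i)
            printf("%g ", out[i]);   /* prints: 12 24 36 48 */
        printf("\n");
        return 0;
    }
    ```

    Without `restrict`, the compiler has to assume `out` might overlap `a` or `b` and generate more conservative code; the hint costs nothing at runtime.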
     
    Deleted member 90741 likes this.
  5. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,320
    Likes Received:
    525
    R300 was special, and unfortunately for NVIDIA, NV30 was a different kind of special. While both companies had their share of misses, pairings like R300/NV30 and G80/R600 happen... well, once a decade. Quite frankly, I don't think we will ever see another such perfect alignment of great and terrible in the same generation from the duopoly.
     
  6. Cyan

    Cyan orange
    Legend

    Joined:
    Apr 24, 2007
    Messages:
    9,734
    Likes Received:
    3,460
    Sorry, you are right. I posted the news in a hurry before reading the rest of the posts.
     
    BRiT likes this.
  7. Cyan

    Cyan orange
    Legend

    Joined:
    Apr 24, 2007
    Messages:
    9,734
    Likes Received:
    3,460
    Shouldn't be worried for now, because I think the gold fever around Bitcoin is so 2017 and things aren't the same as they were. Back then, even some gamers used to say "I don't care about gamers" as soon as they saw a dime.

    Some people have started to mine again recently now that Bitcoin has recovered slightly. I've built some mining rigs for others back in the day, and mining is something that tires me; it's a boring topic. I'd rather invest in Bitcoin and buy some than mine it, but to each their own.
     
  8. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    I think it's a matter of target audience. The general (gamer) public is a far larger group than the tech nerds. Look at LTT's success in terms of subscribers and YouTube views. Those people care more about 4K gaming and are hyped by 8K being thrown into the mix, I guess.

    Meanwhile, Nvidia did show something in their virtual tech-day sessions that at least hints at the possibilities. I'll link the slide here:
    https://www.hardwareluxx.de/images/...on-00143_81D3C28EF3204D7D87D13319AC4CDDD8.jpg

    [In other news: I am aware that caches and memory, as well as the datapaths inside the SMs, can also sizeably affect those results.]
     
    Lightman and pharma like this.
  9. OlegSH

    Regular

    Joined:
    Jan 10, 2010
    Messages:
    801
    Likes Received:
    1,631
    LuxMark results are astonishing, and as far as I remember it doesn't even support HW-accelerated RT.
     
    Lightman and PSman1700 like this.
  10. OlegSH

    Regular

    Joined:
    Jan 10, 2010
    Messages:
    801
    Likes Received:
    1,631
    I remember that FP64 energy cost per ALU was very minor compared with data movement, per Bill Dally's presentations. The same goes for ALU area cost.
    Also, there are the much more computationally dense tensor cores, and they pose no energy trouble since they don't move more bits around than the SIMD units do.
    The data paths were already there in Turing, so I guess adding these ALUs cost next to nothing in area and energy, and it couples well with the doubled L1 bandwidth and other changes.
     
  11. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Nope, that's purely OpenCL (and I don't think it has an extension for RT). Goes to show how much the combination of 2x FP32 and a larger, 2x faster L1 (IMO the main factors here) can yield when you're not rasterizing.
     
    Lightman and LeStoffer like this.
  12. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,679
    Power consumption matters, it just doesn't matter more than price. The other thing is that people are only looking at the numbers theoretically right now; they haven't actually put the GPU in their case and had to deal with the impact on their CPU, or the noise of their fans.
     
    Cuthalu and DegustatoR like this.
  13. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
  14. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Naturally, once you increase fps, CPU utilization increases accordingly.
     
  15. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    I only skimmed the video and didn't notice, but did he in fact show the card even once? A graphics card read-out through the driver I can driver-mod for you in under 2 minutes (plus install time).
     
    Lightman likes this.
  16. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,891
    Likes Received:
    4,539
    https://www.tomshardware.com/features/nvidia-ampere-architecture-deep-dive
     
    Cyan and PSman1700 like this.
  17. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Don't think so.
     
    CarstenS likes this.
  18. T2098

    Newcomer

    Joined:
    Jun 15, 2020
    Messages:
    55
    Likes Received:
    115
    I would be pretty hesitant to read anything into these Ashes of the Singularity results.
    The MSI Gaming X Trio 2080 Ti has the same memory clocks as the FE 2080 Ti, and only ~7% higher boost clocks on the core.

    ... and somehow almost 15% higher FPS? I think people are comparing apples to oranges here, on different CPU / system memory configs.
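    A quick back-of-the-envelope check of those numbers (the ~7% and ~15% figures are from the post above; perfect scaling with core clock is a deliberately optimistic assumption, since memory clocks are identical):

    ```c
    #include <stdio.h>

    int main(void)
    {
        const double clock_gain = 1.07; /* ~7% higher boost clock (Gaming X Trio vs FE) */
        const double fps_gain   = 1.15; /* ~15% higher reported FPS */

        /* Even if FPS scaled perfectly with core clock, the clock bump
         * alone explains at most 7%.  Whatever is left over has to come
         * from somewhere else -- e.g. a different CPU / memory config. */
        double unexplained = fps_gain / clock_gain - 1.0;
        printf("unexplained gap: %.1f%%\n", unexplained * 100.0);
        return 0;
    }
    ```

    About 7.5% of the gain is left unexplained by the core clock alone, which supports the apples-to-oranges suspicion.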
     
  19. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,891
    Likes Received:
    4,539


    "The set was spotted by videocardz who posted this first. They also mention that this channel where all this was posted has already been caught using fake review samples and publishing a review earlier (they hid the name of the Ryzen processor). This time leakers do not show any review sample, so we cannot confirm if they actually tested the card or have done a bit of 'guestimation'".
    https://www.guru3d.com/news-story/first-alleged-benchmark-results-geforce-rtx-3080-surface.html
     
    #1399 pharma, Sep 9, 2020
    Last edited: Sep 9, 2020
    Lightman, PSman1700 and Cyan like this.
  20. Cyan

    Cyan orange
    Legend

    Joined:
    Apr 24, 2007
    Messages:
    9,734
    Likes Received:
    3,460
    PSman1700 and pharma like this.