Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

  1. PSman1700

    PSman1700 Legend

    Doubt it. That 10GB of GDDR6X is already heaps faster to begin with; aside from that, the 10GB of VRAM can be used efficiently too.
     
  2. troyan

    troyan Regular

    XBSX has only 10GB of fast memory. I think that is nVidia's reason behind the configuration.
     
    chris1515, PSman1700 and Cyan like this.
  3. LiXiangyang

    LiXiangyang Newcomer

    I have many in-house ML algorithms that can benefit greatly from higher FP32 CUDA core counts, but not so much from more tensor cores, so to me, if Nvidia's new FP32 CUDA "cores" are really just as capable as the old ones, then I am impressed with the product (I actually had the feeling of skipping this generation completely after I saw the disappointing Tesla A100's specs; what a waste of 54 billion transistors and TSMC's expensive 7nm process).

    Anyway, I am waiting for more benchmarks and hoping for a dual-slot Titan solution for GA102.
     
  4. Frenetic Pony

    Frenetic Pony Regular



    Not that I should even need to appeal to authority. The average user can see how RDR2 can use more RAM, just on the GPU, than consoles even have available altogether (the PS4 only has a bit over 4GB guaranteed), and part of that RAM is taken up by non-GPU assets. The new consoles have 13.5GB of RAM available; they can squeeze that non-high-speed RAM usage into the extra 3.5GB pool the Series X has without touching the 10GB pool.

    Speaking of which, while 10GB might be the minimum requirement in the near future, that means 8GB could easily be below it. Definitely don't buy a 3070 till the 16GB versions come out.
     
    Last edited: Sep 2, 2020
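[Editor's aside: the memory split described in the post above can be checked with quick arithmetic, using the publicly stated Xbox Series X memory configuration.]

```python
# Xbox Series X memory pools (publicly stated figures).
total_gb = 16.0        # total GDDR6 on the board
slow_pool_gb = 6.0     # "standard" pool at 336 GB/s (10GB "GPU optimal" pool runs at 560 GB/s)
os_reserved_gb = 2.5   # reserved for the OS, carved out of the slow pool

# What a game actually gets to work with.
game_available_gb = total_gb - os_reserved_gb      # 13.5 GB for games
slow_for_games_gb = slow_pool_gb - os_reserved_gb  # 3.5 GB of slow memory left for non-GPU data

print(game_available_gb, slow_for_games_gb)
```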
  5. Dictator

    Dictator Regular

    I am not sure if this is a joke post, but that Carmack post was very much about DX9-era hardware, OS, and API stuff.
     
    LeStoffer, chris1515, xpea and 2 others like this.
  6. pharma

    pharma Veteran

    The comments responding to Carmack's post are funny and revealing.
     
    PSman1700 likes this.
  7. bdmosky

    bdmosky Newcomer

    Right... A lot has changed since 2014 on the API front.
     
    chris1515 and BRiT like this.
  8. Jawed

    Jawed Legend

    PCI Express 4 looks like it will be relevant for those buying these new cards:



    EDIT: Oh, and I forgot to say: some of what was shown in the Digital Foundry "3080 performance preview" may have been slower specifically because it's a PCI Express 3 system...
     
    Last edited: Sep 2, 2020
    Lightman likes this.
  9. Davros

    Davros Legend

    The average user can also see it use less on the GPU than consoles have; it can run on a GTX 770 2GB.
     
    chris1515, PSman1700 and trinibwoy like this.
  10. techuse

    techuse Veteran

    Are people really denying that consoles are more efficient? It's pretty much a fact.
     
  11. pjbliverpool

    pjbliverpool B3D Scallywag Legend

    To get RDR2 anywhere near 8GB you need to be running at well above the console settings.

    I think you'd struggle to find a game on the PC today that can't match console settings with only 4GB VRAM, let alone 8GB.
     
  12. trinibwoy

    trinibwoy Meh Legend

    Meh, there's no hard evidence there to attribute the performance differences to PCIe bandwidth. We're talking about 2-3% here, which is well within the margin of error for PC benchmarking.
     
  13. pharma

    pharma Veteran

  14. iroboto

    iroboto Daft Funk Legend Subscriber

    Looking forward to seeing those CDN prices.
     
  15. gamervivek

    gamervivek Regular

    Take a step back from the $700 price tag, and the 3080 is as if Nvidia cut one more memory channel off a 1080 Ti, clocked it to the max, and then claimed their biggest generational leap ever, easily more than 2x the 980. The perf/W improvement, not the joke figure that Jensen presented, is also quite lackluster.



    The 3090 should be ~15% faster than the 3080 and looks like a poor substitute for the former Ti designations, which were far better, mostly being cut-downs of ~50% larger chips.

    Objectively, this is a far worse improvement than the last node change, the saving grace being that Nvidia didn't go all out with Pascal, using only ~450mm² for the biggest chip.
     
    Lightman, no-X, sonen and 4 others like this.
  16. Davros

    Davros Legend

    pharma, do you know how the iChill X3 differs from the iChill X4 (3070)? There's a £20 difference. Does one support PCIe 4 and the other PCIe 3?
    PS: at Overclockers, most of the 2070 Supers are the same price as or more expensive than the 3070.
     
    Last edited: Sep 2, 2020
  17. McHuj

    McHuj Veteran Subscriber

    I think this is where the cheaper Samsung 8nm process comes into play. I believe that if they had gone with TSMC 7nm, the PPW would be better, but at $799 instead of $699. Which would you choose?
     
  18. pharma

    pharma Veteran

    Not sure, but it could be anything from boost clocks to slightly different capacitors on the board. I imagine more information will be forthcoming around Sept. 17. They should all be PCIe 4.

    Edit: They just got a lot of new cards.
     
    PSman1700 and Davros like this.
  19. I think we should wait for reviews before assuming the typical clocks on the new cards. Nvidia tends to sandbag a bit when claiming those boost clocks.

    A 34% efficiency uplift from the 2080 to the 3080. Part of it comes from the TSMC 12N to Samsung 8N transition, and part comes from adopting GDDR6X (i.e., not needing a 384-bit or wider bus for higher bandwidth).
    It doesn't look like a lot of that 34% is coming from architectural improvements.
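[Editor's aside: the GDDR6X point can be checked with back-of-the-envelope arithmetic, since peak bandwidth is just the per-pin data rate times the bus width in bytes.]

```python
def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) x bus width, converted to bytes."""
    return pin_rate_gbps * bus_width_bits / 8

rtx_2080 = bandwidth_gb_s(14, 256)          # GDDR6,  256-bit -> 448 GB/s
rtx_3080 = bandwidth_gb_s(19, 320)          # GDDR6X, 320-bit -> 760 GB/s
wide_gddr6 = bandwidth_gb_s(14, 384)        # GDDR6 on a hypothetical 384-bit bus -> 672 GB/s

# GDDR6X on a 320-bit bus beats plain GDDR6 even on a 384-bit bus.
print(rtx_2080, rtx_3080, wide_gddr6)
```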
     
  20. ninelven

    ninelven PM Veteran

    Bruh, 3080 is the Ti 3090 is the Titan.
     
    Cuthalu and Scott_Arm like this.