AMD: Navi Speculation, Rumours and Discussion [2019]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

  1. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,756
    Likes Received:
    1,992
    Location:
    Germany
    They sure as hell are happy about the pricing.
     
  2. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    150
    Likes Received:
    123
Vega needs twice the bandwidth, ~1.35× the raw FLOPS, a full process node advantage and ~1.4× the TDP to perform "a bit worse than an RTX 2080". Sure, performance per transistor is a nice metric, but what about the others?
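The multipliers above can be sanity-checked with commonly cited board specs. The figures below are assumptions taken from public spec sheets (e.g. reference board power of 215 W for the RTX 2080), not numbers from this thread:

```python
# Radeon VII (Vega 20) vs RTX 2080 (TU104), using commonly cited board
# specs (assumed figures, not from the thread): FP32 TFLOPS, memory
# bandwidth in GB/s, reference board power in W.
radeon_vii = {"tflops": 13.8, "bandwidth": 1024, "power_w": 300}
rtx_2080   = {"tflops": 10.1, "bandwidth": 448,  "power_w": 215}

for metric in radeon_vii:
    ratio = radeon_vii[metric] / rtx_2080[metric]
    print(f"Radeon VII has {ratio:.2f}x the {metric} of the RTX 2080")
```

With those assumed specs the ratios come out to roughly 1.37× the FLOPS, 2.3× the bandwidth and 1.4× the board power, which is where the post's multipliers come from.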
     
    Cuthalu, del42sa, Cat Merc and 5 others like this.
  3. DmitryKo

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    607
    Likes Received:
    411
    Location:
    55°38′33″ N, 37°28′37″ E
The RTX 2080 still has about 1/4 fewer CUs and TFLOPS, and its comparable transistor count and die clocks are achieved on a 12 nm process node vs 7 nm.

    BTW did AMD ever publish INT8/INT4 throughput figures for Vega 20?
     
    #123 DmitryKo, Jan 22, 2019
    Last edited: Jan 23, 2019
    pharma and vipa899 like this.
  4. Azhrei

    Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    15
    Likes Received:
    7
    nVidia are also playing with a lot more money than RTG.
     
    vipa899 likes this.
  5. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    2,993
    Likes Received:
    104
I just think that if Nvidia had managed to increase efficiency (perf/W and perf/transistor) on the same process like they did with Kepler -> Maxwell, that would have been a much, much worse problem for AMD. As it is, Turing did nothing (for gaming, excluding the new stuff) for perf/W and is worse for perf/transistor - yes, it's got new features, but they came at a cost. Nvidia also increased perf per raw FLOP, but that didn't increase perf/transistor either (those new SMs definitely didn't come for free).
    I mean, if you compare gk104 (gtx 680) with gm206 (gtx 960), that was quite the efficiency improvement there - the latter has just over half the memory bandwidth, 75% the chip size, 2/3 the power draw, half the TMUs, ... yet in the end the latter is pretty much as fast, and it also had more features (sure they might not have been as flashy as RTX, but kepler actually had quite some limitations with its 11_0 feature set vs. 12_1).
None of that is true for Pascal -> Turing, except that it has new features. The chips are way bigger for the same performance (plus they get more memory bandwidth too), and perf/W is largely the same.
So if AMD feared Nvidia would pull off another Maxwell, they can certainly be relieved. (That said, the writing was on the wall back then for Maxwell, since at least some of Kepler's inefficiencies - like the too-high TMU count and the inefficient SMs, where it was mostly impossible to get high ALU utilization - were obvious. Since Paxwell didn't show any such obvious flaws, it would have been a miracle if Nvidia could have improved on it in a similarly spectacular fashion.)
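The GK104 vs GM206 comparison above can be put in rough numbers. The spec values below are commonly cited board figures (assumptions, not sourced from the thread):

```python
# GTX 680 (GK104) vs GTX 960 (GM206), commonly cited board figures
# (assumed, not from the thread): bandwidth in GB/s, die size in mm^2,
# board power in W, texture units.
gtx_680 = {"bandwidth": 192.2, "die_mm2": 294, "tdp_w": 195, "tmus": 128}
gtx_960 = {"bandwidth": 112.0, "die_mm2": 227, "tdp_w": 120, "tmus": 64}

for metric in gtx_680:
    frac = gtx_960[metric] / gtx_680[metric]
    print(f"GTX 960 has {frac:.0%} of the GTX 680's {metric}")
```

That works out to roughly 58% of the bandwidth, 77% of the die area, 62% of the power and 50% of the TMUs - the "just over half", "75%", "2/3" and "half" figures in the post - for about the same gaming performance.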
     
    Silent_Buddha, DmitryKo and vipa899 like this.
  6. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    348
    Location:
    Sweden
True, it's only for the better then.
     
  7. Cat Merc

    Newcomer

    Joined:
    May 14, 2017
    Messages:
    114
    Likes Received:
    97
    Except NVIDIA has a bunch of transistors spent on tensor cores and RT cores that Vega 20 doesn't.
     
    del42sa likes this.
  8. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,756
    Likes Received:
    1,992
    Location:
    Germany
And AMD has a bunch of transistors spent on a 4096-bit memory interface and half-rate FP64.

    I don't think it's as easy as pointing out where each has spent this and that many transistors. They both made design choices for various, varying reasons and taking different gambles and bets on whether or not, when and where they would pay off.
     
  9. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,251
    Likes Received:
    185
    Well, and Vega 20 has a bunch of transistors spent on features, which are not supported on TU104 (e.g. native 16:1 Int4, 8:1 Int8, 1:2 FP64, ECC cache, full HW support for virtualisation, PCIe 4.0, HBCC etc.)
     
    Heinrich4 and Lightman like this.
  10. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,251
    Likes Received:
    185
I mentioned that the energy efficiency isn't good. But why should I care about TFLOPS or the memory controller? It all fits within the same transistor budget.
     
  11. troyan

    Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    115
    Likes Received:
    176
Because transistor budget doesn't matter. GV100 has 60% more transistors than Vega 20 but delivers 3x more training performance in ResNet-50.

Most of the budget is used for features which don't translate into a direct performance increase. Build a game around Turing's features and an RTX 2080 will perform much better.
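The transistor-budget arithmetic in this post is simple enough to write down. The die transistor counts below are commonly cited figures (assumptions); the 3x ResNet-50 training figure is the one quoted in the post itself:

```python
# Perf-per-transistor arithmetic for GV100 vs Vega 20. Transistor
# counts are commonly cited die figures (assumed); the 3x training
# speedup is the ResNet-50 claim from the post above.
gv100_transistors = 21.1e9
vega20_transistors = 13.2e9
training_speedup = 3.0  # GV100 vs Vega 20, per the post

transistor_ratio = gv100_transistors / vega20_transistors  # ~1.6x
perf_per_transistor = training_speedup / transistor_ratio  # ~1.9x
print(f"{transistor_ratio:.2f}x transistors, "
      f"{perf_per_transistor:.2f}x training perf per transistor")
```

So under those assumed counts, GV100 gets nearly twice the training performance per transistor, which is the point being made against raw transistor-budget comparisons.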
     
  12. Rootax

    Regular Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    967
    Likes Received:
    445
    Location:
    France
You can make that argument for Vega too, or for any GPU: build a game around Vega's features/architecture and it will perform much better... Welcome to the PC space.
     
    #132 Rootax, Jan 23, 2019
    Last edited: Jan 23, 2019
    Lightman and no-X like this.
  13. troyan

    Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    115
    Likes Received:
    176
Turing supports a lot more features: RT? Around 50%+ faster. DL? Around 50%+ faster. Mesh shading? VR?

And don't forget: AMD is using 7 nm. The process allows for ~25% higher clocks. Instead of spending more transistors, AMD uses the process to increase performance.
     
    vipa899 likes this.
  14. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    150
    Likes Received:
    123
Exactly this. Using a single metric is useless without context.
     
    A1xLLcqAgt0qc2RyMz0y likes this.
  15. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,607
    Likes Received:
    1,161
    Is this an Nvidia vs AMD thread or "AMD: Navi Speculation, Rumours and Discussion [2019]" ? :sleeping:
     
  16. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    342
    Likes Received:
    195
Turing still supports IDP4A (4x int8 dot-product-accumulate, as well as 2x, 4x and 8x int8, int4 and binary/boolean ops on the tensor cores - http://on-demand.gputechconf.com/gtc-kr/2018/pdf/HPC_Minseok_Lee_NVIDIA.pdf ), and I'm not sure whether HBCC is any different from Volta/Turing's virtual memory implementation, nor have I seen any "full HW support for virtualisation" vs "Volta MPS" comparisons.
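For reference, the 4x int8 dot-product-accumulate being discussed (exposed in CUDA as the `__dp4a` intrinsic) has simple semantics. The sketch below is an illustrative Python model of that operation, not vendor code:

```python
import struct

def dp4a(a: int, b: int, c: int) -> int:
    """Software model of a 4-way int8 dot-product-accumulate
    (the semantics of CUDA's __dp4a intrinsic): each 32-bit
    input is treated as four packed signed int8 lanes; the
    lane-wise products are summed and added to accumulator c."""
    lanes_a = struct.unpack("<4b", struct.pack("<i", a))
    lanes_b = struct.unpack("<4b", struct.pack("<i", b))
    return c + sum(x * y for x, y in zip(lanes_a, lanes_b))

# Four lanes of 1 dotted with four lanes of 2, accumulated onto 10:
print(dp4a(0x01010101, 0x02020202, 10))
```

One such instruction replaces four multiplies, three adds and an accumulate, which is why int8 dot-product throughput figures look so high relative to FP32 FLOPS.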
     
    yuri likes this.
  17. del42sa

    Newcomer

    Joined:
    Jun 29, 2017
    Messages:
    62
    Likes Received:
    40
    #137 del42sa, Jan 26, 2019
    Last edited: Jan 26, 2019
    vipa899 likes this.
  18. SpaceBeer

    Newcomer

    Joined:
    Apr 15, 2017
    Messages:
    34
    Likes Received:
    14
    Location:
    The Balkans
1-2% of gamers buy $600+ GPUs, and 80-90% of them go for Nvidia cards. Regardless of unit and production cost, AMD will not make a huge profit in that market (and probably never will).
     
  19. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,241
    Likes Received:
    332
    Location:
    Australia
Love the link: it cost $80 for 4 GB, but we have no idea how much 4 GB costs.......
     
    Lightman likes this.
  20. BoMbY

    Newcomer

    Joined:
    Aug 31, 2017
    Messages:
    64
    Likes Received:
    29
    ToTTenTranz, Lightman and Kaotik like this.
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.