AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Discussion in 'Architecture and Products' started by ToTTenTranz, Sep 20, 2016.

  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,788
    Likes Received:
    2,593
    Maybe on average, but if you go into the details, several games lose 8, 9, even 10% of performance. Heck, Witcher 3 lost 14% @1080p. So it's very workload dependent; AMD wouldn't have pushed the clocks that high for a mere 4%.
    The lightest power-saver profile is 200W in that review, which is still much higher than GP104 (166W), and with 10% less performance.
    You can push Pascal and Volta clocks comfortably to the limit at ~2.1GHz without increasing voltage or sending power consumption through the roof, so once more it comes down to architecture, which is what the main argument is about. One arch (Vega) has a ceiling on clocks and thus scales badly power-wise once you push past a certain point, and the other one doesn't, because its threshold sits much higher, even with nearly double the transistor count, and even on an older node.
    I've heard of several. Even unstable cards. Once more, it's a lottery.
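    The clocks-versus-power point above can be sketched with the usual first-order dynamic-power model, P ≈ C·V²·f: a clock bump that fits inside the existing voltage budget costs roughly linear power, while one that needs extra voltage costs quadratically more. All numbers below are illustrative round figures, not measurements of any real card.

```python
# Toy model of dynamic power scaling: P ~ C * V^2 * f.
# Baseline wattage, clock bump, and voltage bump are hypothetical.

def dynamic_power(base_power, f_ratio, v_ratio):
    """Scale a baseline power figure by frequency and voltage ratios."""
    return base_power * f_ratio * v_ratio ** 2

# Below the efficiency "knee": +10% clock at the same voltage
# costs roughly +10% power.
below_knee = dynamic_power(180.0, 1.10, 1.00)

# Past the knee: the same +10% clock needs, say, +8% voltage,
# so power rises ~28% for only 10% more performance.
past_knee = dynamic_power(180.0, 1.10, 1.08)

print(f"below knee: {below_knee:.0f} W")  # ~198 W
print(f"past knee:  {past_knee:.0f} W")   # ~231 W
```

    This is why where the knee sits matters more than the raw clock ceiling: a chip shipped near its knee (the claim about Vega 10 here) pays the quadratic term for every extra MHz, while one shipped well below it (the claim about GP104) does not.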
     
  2. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,788
    Likes Received:
    2,593
    WCCFT reached out to NVIDIA about its ResNet-50 performance using Tensor Cores, and NVIDIA got back to them with its latest results for Turing (T4) and Volta (V100).

    https://wccftech.com/amd-radeon-mi60-resnet-benchmarks-v100-tensor-not-used/

    Official statement from NVIDIA:

    Official Statement from AMD:
     
    beyondtest, yuri, pharma and 3 others like this.
  3. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,032
    Likes Received:
    3,104
    Location:
    Pennsylvania
    Lots of dick-swinging as usual, boring to read. And not just by the companies but by members here as well holding their dicks for them.
     
    swaaye, lanek, DeF and 8 others like this.
  4. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,184
    Likes Received:
    1,841
    Location:
    Finland
    Of course, on average.
    GPU power != card power.
    I never said Vega matches Pascal in perf/watt; those points were just to illustrate how Vega 10 was pushed close to its limits and GP104 was not, contrary to what you claimed earlier ("GP104 is clocked to the max").

    I'm not sure how we got here but we both seem to be claiming the same thing but still 'arguing' about it o_O

    Guess there always have to be some bad cards, and yes, there's always some lottery involved no matter what you buy.
     
  5. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    1,566
    Likes Received:
    400
    Location:
    Earth
    The real swinging starts when Google's TPU2 and FPGAs are brought in. There is a lot of competition outside GPUs in DNN training/inference solutions.
     
    nnunn and jacozz like this.
  6. jacozz

    Newcomer

    Joined:
    Mar 23, 2012
    Messages:
    89
    Likes Received:
    18
    I don't know.
    But shouldn't dedicated hardware for a specific task always trump a jack-of-all-trades chip?
    GPUs should be about graphics, no?
    If you want a piece of the AI market, maybe it's better to develop specific hardware for that.
     
    BRiT likes this.
  7. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,735
    Likes Received:
    11,210
    Location:
    Under my bridge
    Compute says otherwise.
     
  8. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    1,566
    Likes Received:
    400
    Location:
    Earth
    It really depends on the use case what makes the most sense. If you are a giant company with service Y, it probably makes all the sense in the world to optimize a solution for Y. If you are renting compute time to a set of diverse customers, you likely want a flexible solution instead of multiple niche ones (maintenance, shifting demand, and volume all play to your advantage). If you are a researcher at a university, you might want something extra flexible and quite likely cheap: your budget is limited, you might want to work on your laptop/desktop, and you are possibly pushing boundaries with new algorithms, so anything too hardcoded doesn't cut it.

    This is an exciting time, as general AI is very much an unsolved problem. It's difficult to even imagine what solution, in both hardware and algorithms, would feasibly lead to general AI/singularity in the semi-near future. Computing needs are diverse enough, and growing fast enough, that there is room for many players to innovate and play today.

    One super interesting thing is the various types of neural networks, as they seem to be great at different kinds of graphics tasks. Maybe the nature of graphics rendering is about to be greatly enhanced. Maybe someone even dares to dream of a graphics engine outputting data structures that are very suitable for neural networks to enhance in such a way that pleasing visuals come out. Think of this as cel shading on steroids. "AI rendering" might sound crazy, and it probably is, but there are already demos of taking a neural network and teaching it the style of a specific artist. The network then takes ordinary pictures and changes their style to match that artist.
     
    snarfbot and nnunn like this.
  9. beyondtest

    Newcomer

    Joined:
    Jun 3, 2018
    Messages:
    58
    Likes Received:
    13
    That's huge, granted I know little about undervolting.
     
  10. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    157
    Likes Received:
    33
    The MI50 & MI60 scale much better than any other alternative. It doesn't matter how much power is in one chip; it matters how much power can be achieved with numerous chips.
     
  11. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    Mobile Vega 20 tested in the latest 15" MacBook Pro:

    He also claims the thermals are substantially improved. The i9 throttles substantially less now, while the whole system is quieter.

    Comparing Polaris 11/21 with Mobile Vega 20, we're seeing an 82% performance uplift within the same power envelope and on the same fab process.

    I have no idea how these scores compare to Windows ones, but it would be really interesting to compare the Mobile Vega 16/20 against the GP106 and GP107 mobile parts.
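    The "82%" figure is just the relative uplift between the two benchmark results; since the power envelope is the same, the perf/W improvement is the same factor. The scores below are hypothetical placeholders chosen only to show the arithmetic, not the actual Geekbench numbers from the source.

```python
# Hypothetical Geekbench-style scores (not the real results),
# chosen only to illustrate how an "82% uplift" is computed.
polaris_score = 44_000
vega_mobile_score = 80_000

uplift = (vega_mobile_score - polaris_score) / polaris_score
print(f"uplift: {uplift:.0%}")

# Same power envelope => perf/W improves by the same factor.
perf_per_watt_factor = 1 + uplift
print(f"perf/W factor: {perf_per_watt_factor:.2f}x")
```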
     
    AstuteCobra, Lightman, sonen and 2 others like this.
  12. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,555
    Likes Received:
    699
    If this is true, why are they pushing the RX 590 at all? Is Vega 20 exclusive to Apple even on desktop?
     
  13. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,298
    Likes Received:
    247
    It's more expensive (HBM2 + interposer), so the manufacturing-cost-to-performance ratio is worse. Its strong point, small size, isn't important on desktop.
     
  14. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    They're pushing Polaris 10 again because it's a whole lot cheaper to make than a new mid-range Vega chip, and contrary to the belief of many, power efficiency isn't all that important in the $200-300 price range for gaming video cards.
    Vega 20 may not be exclusive to Apple, and we could still see a design win for it in the PC laptop space, but the MacBook Pro gets away with enormous margins that are very rare elsewhere, and Vega 20 is probably a whole lot more expensive than e.g. a GTX 1060 Max-Q.
     
  15. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,179
    Likes Received:
    581
    Location:
    France
    Vega 20 vs Mobile Vega 20 is confusing...
     
  16. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    Well, technically the consumer doesn't even know of the existence of a 7nm Vega 20.
    All they'll see on AMD's website is the Radeon Instinct MI60 and MI50.
     
  17. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,555
    Likes Received:
    699
    True, forgot about the cost of HBM2.
     
  18. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,179
    Likes Received:
    581
    Location:
    France
    Yeah, I mean for "us" sometimes; even some tech sites use "Vega 20" without the mobile qualifier.
     
  19. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,015
    Likes Received:
    112
    Even in CPU-only benchmarks... so that would actually point more towards an improved cooling solution (or luck of the draw).

    There weren't any power consumption numbers, right? If the cooling solution is indeed better, it could easily draw a bit more power and still be quieter (although I don't doubt efficiency increased substantially).
     
    A1xLLcqAgt0qc2RyMz0y and sonen like this.
  20. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    332
    Likes Received:
    87
    Vega seems to be fantastic... for running at super low power and super low frequencies. However, that doesn't mean it's great in terms of silicon cost. It takes more silicon, and thus more money, to build an equivalently performing Vega chip than a Polaris one. So an RX 590 is what we get instead of a Vega 32.
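    The silicon-cost argument can be made concrete with a back-of-the-envelope dies-per-wafer estimate. The wafer price and the mid-range Vega die size below are hypothetical round numbers (only the ~232 mm² Polaris 10 area is a published figure), and yield is ignored for simplicity.

```python
import math

# Rough illustration of why a bigger die costs more per chip.
# Wafer cost and the "Vega 32" die area are hypothetical.
WAFER_COST = 6000.0   # USD per 300mm wafer (assumed)
WAFER_DIAMETER = 300  # mm

def dies_per_wafer(die_area_mm2):
    """Classic approximation: wafer area / die area, minus edge losses."""
    d = WAFER_DIAMETER
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_die(die_area_mm2):
    return WAFER_COST / dies_per_wafer(die_area_mm2)

print(f"Polaris-10-class (232 mm^2): ${cost_per_die(232):.2f}")
print(f"hypothetical Vega mid-range (360 mm^2): ${cost_per_die(360):.2f}")
```

    Even before yield (which hurts larger dies disproportionately), the bigger chip costs noticeably more per unit, which is the economic case for refreshing Polaris instead of cutting down Vega for the $200-300 segment.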
     
    ToTTenTranz likes this.