AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Discussion in 'Architecture and Products' started by ToTTenTranz, Sep 20, 2016.

  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,285
    Likes Received:
    1,437
    Maybe on average, but if you go into the details, several games lose 8, 9 and 10% of performance. Heck, Witcher 3 lost 14% @ 1080p. So it's very workload dependent; AMD wouldn't have pushed the clocks that high for a mere 4%.
    The lowest power-saver figure in that review is 200W, which is still much higher than GP104 (166W), and with 10% less performance (see the quick calculation below).
    You can push Pascal and Volta clocks comfortably to ~2.1GHz without increasing voltage or sending power consumption through the roof, so once more it comes down to architecture, which is what the main argument is about. One arch (Vega) has a ceiling on clocks and thus scales badly power-wise once you push past a certain point, and the other one doesn't, because its threshold point sits much higher, even with nearly double the transistor count, and even on an older node.
    I've heard of several, even unstable cards. Once more, it's a lottery.
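    To put the perf/watt gap in rough numbers, here is the back-of-the-envelope version of the comparison above; the 0.90 performance factor is just the "10% less performance" figure, and these are card-level numbers from that one review, not a general claim:

    ```python
    # Back-of-the-envelope perf/W using the figures quoted above (single review, illustrative).
    vega_power_w = 200.0   # Vega with the power-saver profile, per the review cited above
    gp104_power_w = 166.0  # GP104 card power from the same review
    rel_perf = 0.90        # Vega at ~10% less performance in that configuration

    perf_per_watt_ratio = rel_perf / (vega_power_w / gp104_power_w)
    print(f"Vega perf/W ~ {perf_per_watt_ratio:.2f}x GP104")  # ~ 0.75x
    ```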
     
  2. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,285
    Likes Received:
    1,437
    WCCFT reached out to NVIDIA about their ResNet-50 performance using Tensor Cores, and NVIDIA got back to them with their latest results for Turing (T4) and Volta (V100).

    [Image: NVIDIA's latest ResNet-50 results for Turing (T4) and Volta (V100)]
    https://wccftech.com/amd-radeon-mi60-resnet-benchmarks-v100-tensor-not-used/
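    For context on what "using Tensor Cores" means in these ResNet-50 numbers: on Volta and Turing the Tensor Core paths are normally engaged by running the network in FP16. A minimal sketch, assuming PyTorch and torchvision, purely to illustrate the kind of configuration being benchmarked (not the article's actual harness):

    ```python
    # Illustrative only: FP16 ResNet-50 inference so cuDNN can dispatch Tensor Core kernels.
    import torch
    import torchvision

    model = torchvision.models.resnet50().cuda().half().eval()  # FP16 weights, inference mode
    torch.backends.cudnn.benchmark = True                       # let cuDNN pick the fastest kernels

    batch = torch.randn(64, 3, 224, 224, device="cuda", dtype=torch.float16)
    with torch.no_grad():
        logits = model(batch)   # throughput (images/sec) over loops of this is what gets reported
    print(logits.shape)         # torch.Size([64, 1000])
    ```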

    Official statement from NVIDIA:

    Official statement from AMD:
     
    beyondtest, yuri, pharma and 3 others like this.
  3. Malo

    Malo YakTribe.games
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,464
    Likes Received:
    2,507
    Location:
    Pennsylvania
    Lots of dick-swinging as usual, boring to read. And not just by the companies but by members here as well holding their dicks for them.
     
  4. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    7,731
    Likes Received:
    1,457
    Location:
    Finland
    Of course on average.
    GPU power != card power
    I never said Vega matches Pascal perf/watt; those points were just to illustrate how Vega 10 was pushed close to its limits and GP104 was not, contrary to what you claimed earlier ("GP104 is clocked to the max").

    I'm not sure how we got here, but we both seem to be claiming the same thing and still 'arguing' about it o_O

    Guess there always have to be some bad cards, and yes, there's always some lottery involved no matter what you buy.
     
  5. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    1,364
    Likes Received:
    226
    Location:
    USA, CA
    The real swinging starts when Google TPU v2 and FPGAs are brought in. There is a lot of competition outside of GPUs on DNN training/inference solutions.
     
    nnunn and jacozz like this.
  6. jacozz

    Newcomer

    Joined:
    Mar 23, 2012
    Messages:
    89
    Likes Received:
    18
    I don't know.
    But shouldn't dedicated hardware for a specific task always trump a jack-of-all-trades chip?
    GPUs should be about graphics, no?
    If you want a piece of the AI market, maybe it's better to develop specific hardware for that.
     
    BRiT likes this.
  7. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,065
    Likes Received:
    8,915
    Location:
    Under my bridge
    Compute says otherwise.
     
  8. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    1,364
    Likes Received:
    226
    Location:
    USA, CA
    It really depends on the use case what makes the most sense. If you are a giant company with service Y, it probably makes all the sense in the world to optimize a solution for Y. If you are renting compute time to a diverse set of customers, you likely want a flexible solution instead of multiple niche solutions (maintenance, shifting demand, and volume all play to your advantage). If you are a researcher at a university, you might want something extra flexible and quite likely cheap: your budget is limited, you might want to work on your laptop/desktop, and you may be pushing boundaries with new algorithms, so anything too hardcoded doesn't cut it.

    This is an exciting time, as general AI is very much an unsolved problem. It's difficult to even imagine what solution, in hardware and in algorithms, would feasibly lead to general AI/the singularity in the semi-near future. Computing needs are diverse enough, and growing enough, that there is room for many players to innovate and play today.

    One super interesting thing is the various types of neural networks, as they seem to be great at different kinds of graphics tasks. Maybe the nature of graphics rendering is about to get greatly enhanced. Maybe someone even dares to dream of a graphics engine outputting data structures that are very well suited for neural networks to enhance in such a way that pleasing visuals come out. Think of this as a cel-shading-on-steroids kind of idea. "AI rendering" might sound crazy, and it probably is, but there are already demos of taking a neural network and teaching it the style of a specific artist; the network then takes ordinary pictures and changes their style to match that artist.
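    The demos being alluded to are neural style transfer, where the usual trick is to match second-order statistics (Gram matrices) of activations from a pretrained conv net. A minimal sketch of that style loss, assuming PyTorch (illustrative, not any specific demo):

    ```python
    import torch
    import torch.nn.functional as F

    def gram_matrix(feats):
        # feats: (batch, channels, H, W) activations from one conv layer
        b, c, h, w = feats.shape
        f = feats.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)   # normalized channel correlations

    def style_loss(generated_feats, style_feats):
        # Match the Gram matrices of the generated image to the artist's image, layer by layer;
        # optimizing the generated pixels against this (plus a content loss) transfers the style.
        return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
                   for g, s in zip(generated_feats, style_feats))
    ```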
     
    snarfbot and nnunn like this.
  9. beyondtest

    Newcomer

    Joined:
    Jun 3, 2018
    Messages:
    58
    Likes Received:
    13
    That's huge, though granted I know little about undervolting.
     
  10. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    56
    Likes Received:
    13
    The MI50 & MI60 scale much better than any other alternative. It doesn't matter how much power is in one chip; it matters how much can be achieved with numerous chips.
     
