AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Discussion in 'Architecture and Products' started by ToTTenTranz, Sep 20, 2016.

  1. mpg1

    Veteran Newcomer

    Joined:
    Mar 5, 2015
    Messages:
    1,526
    Likes Received:
    1,112
  2. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    5 mm², but anyway, that was just a random number.

    What counts for me as a customer is performance - not how useful the product I bought is for other markets. Maybe we have recently passed the point up to which the one-size-fits-all approach was still valid. In the high-end, you can get away with larger-than-absolutely-necessary die sizes more easily if your product is also the fastest around, because despite much criticism and marketing efforts to the contrary, Nvidia was able to command a price premium for its GP102 cards, and - if the latest pricing rumors are true - AMD will mostly have to compete with the 1080 on pricing.

    This of course is of no concern to the customer I was portraying above, but in the long run every dollar not earned is lost funding for coming generations, where the gap will eventually widen even more unless someone seriously missteps (think GeForce FX or R600).
     
    silent_guy likes this.
  3. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,059
    Likes Received:
    1,021
    This is true, and if it were the only mechanism at work, all such markets would evolve into monopolies. A fair number effectively do.
    There are other factors at work though.

    Graphics processors depend on design, process, volume (as it increasingly affects cost) and software. It is an increasingly expensive field to be in, and total volume is currently roughly 50 million discrete GPUs per year. Can it sustain even two players long term? If AMD's APU business is profitable, that reinforces their discrete GPU efforts - but only if it is sufficiently profitable.
     
    CarstenS likes this.
  4. Digidi

    Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    225
    Likes Received:
    97
    If you look at the Ryzen launch, AMD radically dropped the price for the same amount of performance. I think the same thing will happen with Vega: 1080 Ti performance at a much lower price, to get the name back into the GPU business.

    The deactivated tile-based rasterizer, the disabled power features and the strange behavior of the HBM memory all point to AMD hiding something really special from us.
     
  5. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    1080 Ti performance for $399 would revolutionize the market. That would be Ryzen².
     
    Cat Merc and CarstenS like this.
  6. Digidi

    Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    225
    Likes Received:
    97
    I think it's more like $599 for the liquid-cooled version - that's $100 cheaper than the 1080 Ti. If this card doesn't hit 1080 Ti performance, nobody will buy it; in this segment, $100 doesn't matter, so you'd just take the better performance.
     
  7. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    I hope you're right.
     
    Cat Merc likes this.
  8. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,965
    Likes Received:
    4,554
    The GTX1080 making a bigger profit doesn't mean that a $500 Vega would struggle to make money. Those are two very different things.
    As mentioned before, Fiji was selling for a similar price with twice the number of HBM stacks, together with a ~50% larger die and a much larger interposer. And it was definitely making money for AMD.

    Truth is (due to AMD's absence in the high-end for over a year), nvidia selling their 300mm^2 chip in $500-600 cards has probably been making them unprecedented amounts of profit-per-card.
    7 years ago, the similarly sized GF104 was introduced at $230 in its higher-end version, the GTX460.



    If Vega 56 reaches close to GTX 1080 and is priced at $400, it'll be great already (for the consumer) IMO.


    Even if AMD has been sandbagging Vega's true performance and the RX driver will bring a 50% gaming performance boost over FE's current driver, I think we can all agree on one thing:

    - All marketing and general communication to the consumer surrounding RX Vega has been the most terrible I've seen regarding any graphics card, ever.

    Even if they have an unlikely ace in their hand, it's like AMD is doing their best to get everyone to lose their patience and purchase something from the competition. Every single meaningless video, tweet, blind test, wine tasting and whatever else feels like they're trying to lose as much of their audience as they possibly can before the thing is out there.
    Heck, I have a Freesync monitor and I'm in the market for a new high-end graphics card (mainly because my R9 290Xes are ridiculously overpriced in the 2nd-hand market), and even I sometimes get the urge to buy a GTX1080 just to spite their marketing team.
    It's that terrible.
     
    RedVi, Cat Merc, MDolenc and 2 others like this.
  9. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,983
    Likes Received:
    1,496
    $400 1080 performance and I'm in. If it's better than a 1080 at $400 then day one
     
    Silent_Buddha and mpg1 like this.
  10. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Given the right price? For the consumer, it doesn't need to.

    But for a forum that's supposed to focus on the architecture of the GPU, comparative perf/mm2 and perf/W is one of the key points, IMO.

    If AMD can't get a Titan Xp-sized die with HBM to perform like a Titan Xp - while consuming quite a bit more power - there's something broken about their architecture. Especially after giving themselves an extra year to (supposedly) finally fix the issues that have plagued them.

    All of that still assumes that Vega will perform only a little slower than an Xp. If it performs like an FE, then Vega is simply one of their worst designs ever.
     
    kalelovil, yuri and pharma like this.
  11. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Not sure if you're being sarcastic?
     
  12. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,965
    Likes Received:
    4,554
    Or they're spending a large chunk of the die area to successfully achieve 24 TFLOPS of FP16 ops, and/or they're spending another substantial chunk of the die area on the HBCC that allows the GPU to address different kinds of storage - both of which the GP102 is unable to do.
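    For reference, the 24 TFLOPS figure follows from Vega 10's shader count, packed FP16 and its boost clock. A minimal sketch of that arithmetic - the 4096-ALU count and the ~1.5 GHz clock are assumptions here, not figures stated above:

    Code (Python):
        # Back-of-envelope check of Vega 10's packed-math FP16 throughput.
        # Assumed inputs: 4096 FP32 ALUs and a ~1.5 GHz boost clock.
        alus = 4096
        boost_clock_ghz = 1.5
        ops_per_fma = 2            # a fused multiply-add counts as two FLOPs
        fp16_per_fp32_lane = 2     # packed math: two FP16 ops per FP32 lane per cycle

        fp32_tflops = alus * ops_per_fma * boost_clock_ghz / 1000
        fp16_tflops = fp32_tflops * fp16_per_fp32_lane
        print(f"FP32: {fp32_tflops:.1f} TFLOPS, FP16: {fp16_tflops:.1f} TFLOPS")
        # -> FP32: 12.3 TFLOPS, FP16: 24.6 TFLOPS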

    Only if this tweet sounds like sarcasm to you.

     
    BacBeyond and Malo like this.
  13. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    HBCC: I'm still waiting for a good explanation of how this is different from virtual memory. And it's not the kind of thing that will cost a ton of area.

    FP16: See my post here: https://dev.beyond3d.com/threads/nvidia-volta-speculation-thread.53930/page-20#post-1984231
    Using 0.00266 mm² per FP16 core, the FP16 cost for GP102 would be ~20 mm², bringing a GP102 to 491 mm² vs. 484 mm² for Vega. Correct for the somewhat smaller area required by an HBM PHY, and a GP102-with-FP16 and Vega end up with the same die size.
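    A rough reconstruction of that arithmetic - the per-core area is the figure quoted above, while the 3840-core count, the two-FP16-cores-per-FP32-core assumption and the ~471 mm² GP102 die size are assumptions added for illustration:

    Code (Python):
        # Rough reconstruction of the area estimate above; inputs flagged as assumptions.
        fp16_core_area_mm2 = 0.00266       # per-FP16-core figure from the linked post
        gp102_fp32_cores = 3840            # published GP102 core count (assumption)
        fp16_cores = gp102_fp32_cores * 2  # double-rate FP16 -> two FP16 cores per FP32 core (assumption)
        gp102_die_mm2 = 471                # commonly cited GP102 die size (assumption)

        fp16_area = fp16_cores * fp16_core_area_mm2
        print(f"FP16 area: ~{fp16_area:.0f} mm2, GP102 with FP16: ~{gp102_die_mm2 + fp16_area:.0f} mm2")
        # -> FP16 area: ~20 mm2, GP102 with FP16: ~491 mm2 (vs 484 mm2 quoted for Vega 10)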

    Not unreasonable to expect the same performance, IMHO.
     
    yuri likes this.
  14. mpg1

    Veteran Newcomer

    Joined:
    Mar 5, 2015
    Messages:
    1,526
    Likes Received:
    1,112
    what are the chances the Vega 56 matches 1080 performance though..
     
  15. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    Hard to say, IMHO. In the past, scaling with CUs was less than stellar for many gaming workloads. If this is still the case and AMD is able to find some knobs in the driver to propel Vega FE/64 beyond the 1080, then a 12.5% decrease in theoretical FLOPS might not translate into 12.5% less performance in games - which can be both good and bad for AMD.
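    For context, the 12.5% figure simply reflects the CU counts of Vega 56 vs. Vega 64 at an assumed equal clock (the clock value below is a placeholder, not a spec):

    Code (Python):
        # Where the 12.5% theoretical-FLOPS deficit comes from: CU counts at equal clocks.
        vega64_cus, vega56_cus = 64, 56
        alus_per_cu = 64
        clock_ghz = 1.5                 # placeholder; identical clocks assumed for both parts

        def tflops(cus):
            # 2 FLOPs per ALU per cycle (fused multiply-add)
            return cus * alus_per_cu * 2 * clock_ghz / 1000

        deficit = 1 - tflops(vega56_cus) / tflops(vega64_cus)
        print(f"Vega 56 theoretical FLOPS deficit: {deficit:.1%}")   # -> 12.5%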

    If only those were not advertised as "packed math". As such, reusing existing FP32 resources, the die-size implications should be less than massive.
     
  16. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,965
    Likes Received:
    4,554
    Guess you'll have to wait another full 10 hours or so.


    And exactly how relevant is that "per-FP16-core" size estimate - made for a different architecture, from a different IHV, on a different process - to how much area Vega is spending on its version of RPM?
    For all you know, AMD could need 5x more area than nvidia for the same end result.



    If AMD prices it at $400-450 I'd say pretty big chances.
    AMD isn't exactly known for charging a premium over nvidia cards with similar performance, is it?
     
  17. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    Regarding pricing - sure, AMD can always resort to their SEPs, but the current overblown market pricing on basically every Polaris card is creating additional hurdles in this regard. Inside your own stack, you have to keep a bit of sanity - or not? Just imagine a Vega 56 at the same e-tail price point as an RX 580. That would blow some minds. Mine, at least.
     
  18. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,965
    Likes Received:
    4,554
    Pun intended?


    Regardless, AMD's current (official) stance is that RX580 could go back to lower than its MSRP within months/weeks/days/hours, so pricing on Vega cards won't be influenced by the current inflation on Polaris 10 cards.
    Besides, if they price Vega too high then people will simply buy the GTX1080 and 1080Ti, both of which are freely available because they have worse mining performance than cards using regular GDDR5 (Polaris and GTX1070).
     
    Cat Merc and CarstenS like this.
  19. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    Trading some performance for flexibility wouldn't be unreasonable. Having all the Tier3 features and programmability will cost some area and performance.

    My thinking was IO and Infinity Fabric eating space, if they aren't hiding performance: more PCIe lanes for SSG, extending the network on APUs, and possibly internal bandwidth. The chips end up larger without significantly harming yields, as Infinity Fabric may be less prone to defects or able to easily work around them. Somewhere around here there was a Nextplatform article on how the network handles data poisoning by dropping hot cores on CPUs. So while the chip is larger, yields may be higher because of that.

    Found it.
    https://www.nextplatform.com/2017/07/12/heart-amds-epyc-comeback-infinity-fabric/
     
    #3099 Anarchist4000, Jul 30, 2017
    Last edited: Jul 30, 2017
    BRiT likes this.
  20. sir doris

    Regular

    Joined:
    May 9, 2002
    Messages:
    651
    Likes Received:
    110
    Unfortunately, no matter the performance, even if AMD prices these cards low, the retailers will be charging ridiculous amounts of cash for them (well, here in the UK at least) :( The GTX 1070s are £450+ when they are available, and even the 1080s, which are rubbish at mining, are still £550+. So short-of-supply Vegas will get gouging prices.
     