Why is AMD losing the next gen race to Nvidia?

Discussion in 'Architecture and Products' started by gongo, Aug 18, 2016.

  1. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    No, he is not. CSI is saying you need to ensure the settings and games are the same, because otherwise the article's conclusion isn't justified: there is no direct correlation between the end percentages. That's a valid argument.

    Now, if Hardware Canucks had kept the same games and then run a separate part of the review with different games, we could see how things shifted. But without the original baseline (same games and settings across that time frame) you can't see the changes taking place. Furthermore, when you then add in the new games, the shift can be even more dramatic, which causes the margins of error to be exceeded.

    You might be able to draw a general conclusion, but by no means is that conclusion absolute, because it's like looking at a trend instead of actual results. A trend might be wrong, and the less data you have, the higher the chance it's wrong. That is exactly what you have here: a trend with limited data, compounded by the fact that you need to exclude some of the data points because, for the reasons above, there is nothing to compare them to.
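    The aggregation problem described above can be shown with a toy example (all numbers invented, purely illustrative): the headline percentage shifts when the game set changes, even if no individual result changed at all.

```python
# Toy illustration (all numbers invented): the aggregate performance ratio
# between two cards shifts when the game set changes between reviews,
# even though no individual per-game result changed at all.
review_2016 = {"GameA": 0.95, "GameB": 1.00, "GameC": 1.05}  # card X / card Y
review_2017 = {"GameA": 0.95, "GameB": 1.00, "GameD": 1.20}  # GameC dropped, GameD added

def aggregate(results):
    # Simple arithmetic mean of per-game ratios, as many reviews use.
    return sum(results.values()) / len(results)

print(f"2016 aggregate: {aggregate(review_2016):.3f}")  # 1.000
print(f"2017 aggregate: {aggregate(review_2017):.3f}")  # 1.050
# The 5% "improvement" is entirely an artifact of the changed game list.
```

    The same cards, the same drivers for the overlapping games, yet a different headline number: that is why the comparison needs a fixed game set to be meaningful.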
     
    #241 Razor1, Dec 7, 2016
    Last edited: Dec 7, 2016
    pharma, CSI PC, DavidGraham and 2 others like this.
  2. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Yep, spot on Razor.
    Just to add, it seems the chart description for Doom Vulkan was not correct; Michael did test it with TSAA plus Async Compute, not MSAA.
    Also in the forum, Michael says this about the focus of the project:
    And of course that includes revisiting settings, as games and drivers are sometimes updated in ways that either optimise more advanced settings better or provide further options. But this falls into the balancing act I mention above, not just for settings but also for which games are included (fully, partially, or with an API omitted) as part of the scope of such a project, and importantly the context of his conclusion. A few of us disagree with his approach to the conclusion, but it is how he wanted to do it.
    TBH and IMO you cannot go wrong with either a good 480 model or a good 1060 model; they trade blows.
    I was disappointed he has now removed AoTS, but it is a pig to get consistent results from, and I guess he may have been fed up with queries about why his PresentMon results don't match the game's internal benchmark and capture tool. It seems nearly everyone has now swept that game under the rug.
    Cheers
     
    #242 CSI PC, Dec 7, 2016
    Last edited: Dec 7, 2016
  3. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    FWIW, when we did our own RX 480 re-test after four months, the card proved to perform measurably better, the combined result of what seemed to be better power management in the driver and, of course, general driver improvements. What in our selection of tests used to be performance in the vicinity of the R9 290 turned out to be better than the 290X later on.

    Then, with the move to the new testing regimen two months ago and the inclusion of games like Doom under Vulkan, the card moved further up in the ranking to what I would call real-world parity with the 1060 6GB, both at advertised boost speeds.

    Another point to be considered when testing tightly power-managed cards is intra-series variation. While the first of our reference 480s (used for the re-test, of course) turned in an average of ~1190 MHz over our test parcours due to power throttling, especially at higher resolutions, another reference card that apparently drew less power sat at almost stock boost, averaging ~1250 MHz. This easily equated to 3% performance variation, almost as much again as the driver improvements over the first months. So be mindful of that too.
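    The intra-series figures quoted above can be sanity-checked with a quick calculation (the clock averages are CarstenS's numbers; the scaling factor is an inferred, illustrative quantity, not a measurement):

```python
# Sanity check of the intra-series variation figures quoted above: two
# reference RX 480s averaging different clocks under power throttling.
low_clock_mhz = 1190.0   # throttled reference card
high_clock_mhz = 1250.0  # second card, near stock boost

clock_delta = high_clock_mhz / low_clock_mhz - 1.0
print(f"Clock delta: {clock_delta:.1%}")  # ~5.0%

# GPU performance rarely scales 1:1 with core clock (memory bandwidth is
# unchanged); the observed ~3% performance gap implies roughly this
# clock-to-performance scaling factor:
observed_perf_delta = 0.03
print(f"Implied clock->perf scaling: {observed_perf_delta / clock_delta:.2f}")  # ~0.60
```

    In other words, a ~5% clock spread between nominally identical cards plausibly yields the ~3% performance spread observed, which is why card-to-card variation matters when judging small driver gains.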
     
    Kej, Razor1, Ike Turner and 3 others like this.
  4. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,907
    Likes Received:
    1,607
    "Advertised boost speeds", meaning reference clocks?
     
  5. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    Correct.
     
  6. gongo

    Regular

    Joined:
    Jan 26, 2008
    Messages:
    582
    Likes Received:
    12
    The 1080 Ti looks like a good enthusiast card... gamers on a 980 and below will want to jump in.
    AMD's gamble to stick with HBM2 is not paying dividends...
    RX Vega needs to be leaked now... and performance should be 20% faster.
    If not, why do they think gamers are willing to wait another 2-3 months for it, or that those on a $600 card now will want to 'upgrade'?
    Volta is also around the corner.
     
  7. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    Personally, I'm willing to wait because Nvidia does not support FreeSync and I refuse to buy into a G-Sync monitor.

    Another reason to wait would be the hope of a price decrease, at least if Volta and Vega are released within 1-3 months of each other.
     
    Sxotty likes this.
  8. ImSpartacus

    Regular Newcomer

    Joined:
    Jun 30, 2015
    Messages:
    252
    Likes Received:
    199
    The problem is that if Vega 10, as configured, could beat the 1080 Ti by 20%, then Nvidia probably wouldn't have configured the 1080 Ti like that.

    We all know that Nvidia is big on maintaining that halo GPU. And both Nvidia and AMD probably know what the other is doing, thanks to corporate espionage. If Nvidia needed to, they could have used a full GP102.

    So I think it's safe to assume that Vega 10 won't outperform the 1080 Ti by any meaningful margin.
     
  9. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Configured as what? It's a Titan XP with a small cut to the memory. It comes with faster clock speeds, and the benchmarks simply show that.
    On the other hand, don't forget custom cards will come with much higher clock speeds.

    The 1080 Ti seems to consume a bit more than the Titan XP (I need to check more reviews, as it differs completely from one review to another); it is slightly over 250W. At a certain point, they can't push GP102 further than it is on the Titan XP without ending up with a higher TDP.
    The Titan XP was already the maximum you can do within a 250W envelope.

    Now, I don't say that Vega will be faster, just that Nvidia can't push GP102 any further. The only new configuration possible will be Volta.
     
    #249 lanek, Mar 9, 2017
    Last edited: Mar 9, 2017
  10. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
    Nvidia could still make a 600mm² HBM2 behemoth that is roughly another 30% over GP102, but they won't. And they don't need to, considering they have six cards that are ahead of AMD's best.

    In hindsight, Koduri talking a big game last year was utterly comical, and the punchline came with the Vega™ T-shirts. :lol:
     
  11. ImSpartacus

    Regular Newcomer

    Joined:
    Jun 30, 2015
    Messages:
    252
    Likes Received:
    199
    If Nvidia needed to do so, you'd better believe they'd find a way to effectively cool a 300W GP102.

    Nvidia really cares about being "the best". If nothing else, they'd cherry-pick GP102 dies and release a limited-run 2017 Titan X with all of GP102's bells and whistles, just so they could say they had the best single GPU.

    But they did none of those things. So I'm doubtful that Vega 10 will blow us away. I'd love to be wrong, but it feels like Vega 10 really was meant to be released in late 2016. I doubt Volta will be kind to it.
     
  12. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    No, they just can't do that overnight. It's hard enough to increase the clocks of an existing SKU by, say, 100 MHz while maintaining decent availability. Anything more complex requires months of preparation, in excess of six...
     
  13. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    There are many factors besides cooling that come into question. First, going back to "AMD-like" power consumption from the 290X era is not necessarily a good idea for Nvidia today: cost will increase and can eat into margins. (Or rather, the policy in PC hardware now is to sell fewer units for more money, the inverse of AMD, who need to sell more at lower prices to grow their market share.) And you still have to feed the AIBs, who can raise the clocks with better cooling, eat into their own margins, and also end up with higher power consumption, justified by a +15-20% performance bonus... Now imagine Vega then arrives at 10% less performance for $100 less...
     
    #253 lanek, Mar 9, 2017
    Last edited: Mar 9, 2017
  14. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,797
    Likes Received:
    2,056
    Location:
    Germany
    Well, they certainly almost maxed out GP102's potential. Yes, sure, they can launch a full config, maybe tighten up binning a bit more, but it's not gonna be another 10% or more.

    But, and it's a big but: with GP102 having raised the performance bar and not so much the price, they still have headroom left in terms of die size (GP102 is supposedly ever so slightly smaller than Vega 10) for another chip going up to the established economical maximum at about 600 mm². And they still have untapped the potential that AMD has been advertising since Fiji: HBM(2), both in terms of power and in terms of die space. AMD said that the memory controllers on Fiji were smaller than those on Tahiti, although I am not sure if they meant normalized for process tech.

    So Vega 10 had better have some surprises up its sleeve apart from being a Fiji-sized Polaris with HBM2-level bandwidth. But since Vega 10 is not a mainstream product but an enthusiast-class GPU, I am quite confident there's more to be seen than monikers like HBCC.
     
    Anarchist4000 likes this.
  15. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    If you are speaking of a potential GP100-like chip, there are indeed already rumors about a refresh of it... a GeForce 2xxx or a new Titan? It all depends, perhaps, on whether Volta is due in early 2018 or at the end of 2018.

    Again, I don't say that Vega is faster; I was just responding that with GP102, they can't do much more.

    In fact, it is a bit hard to know where Vega stands...
     
    #255 lanek, Mar 9, 2017
    Last edited: Mar 9, 2017
  16. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,491
    Likes Received:
    909
    What I find surprising is just how large Vega 10 is. I mean, it's similar to Fiji in unit count and its memory interface is half the latter's size, yet it's almost as large in spite of the 14nm process.

    Granted, the front end is apparently much improved, and I guess it has fast DP, but still!

    Then again, maybe it has a lot more ROPs or something…
     
  17. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    I think it's better to look at Vega from Polaris when considering unit ratios. And from that point of view, its die size seems reasonable.
     
  18. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,491
    Likes Received:
    909
    Hmm, interesting point, but Polaris 10 is 232 mm² with 36 CUs, whereas Vega 10 is believed to have 64 (N)CUs. 64/36 × 232 = 412 mm², and when you factor in the area savings from HBM, Vega 10 should be under 400 mm². Yet some reports put it at 500 mm² or so. But then I guess the front end, the new features in the NCUs, and the DP rate could account for the difference.
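    The scaling estimate above works out as a quick back-of-the-envelope calculation (the CU counts and die areas are the rumored figures from this thread, not official numbers):

```python
# Back-of-the-envelope die-size scaling from Polaris 10 to Vega 10,
# using the rumored figures quoted above (not official numbers).
polaris_area_mm2 = 232.0   # Polaris 10 die size
polaris_cus = 36
vega_cus = 64              # rumored Vega 10 NCU count

# Naive linear scaling by CU count
scaled_area = polaris_area_mm2 * vega_cus / polaris_cus
print(f"CU-scaled area: {scaled_area:.0f} mm^2")  # ~412 mm^2

# Smaller HBM PHYs should pull the estimate under 400 mm^2, so a
# reported ~500 mm^2 die leaves roughly this much area unexplained
# (front end, NCU features, DP rate, HBCC):
reported_area_mm2 = 500.0
print(f"Unexplained area vs. report: {reported_area_mm2 - scaled_area:.0f} mm^2")
```

    The gap of roughly 90 mm² is what the front end, NCU changes, and DP hardware would have to account for.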
     
    Lightman likes this.
  19. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Yes... DP and its logic take up a lot of space, and since Vega seems to be designed around this new "storage controller" (the HBCC), all of that should take space too. Whether it's enough to account for the difference, as you said, I don't know.

    Vega seems really good as a professional solution (on paper it seems to compete broadly with GP100/GP102 Quadro and Tesla parts, if not sit a notch higher, outside the FP16 case). The question I ask myself is what the tradeoffs are for gaming...
     
    #259 lanek, Mar 9, 2017
    Last edited: Mar 9, 2017
  20. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    They are also rumored to have both a GDDR and an HBM bus, so they might not get the HBM bus-size savings.

    But yeah, I think the DP units and whatever tweaks the NCU brings will account for the rest of it.

    We might also see transistor density drop in exchange for higher clock speeds.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.