Why is AMD losing the next gen race to Nvidia?

Discussion in 'Architecture and Products' started by gongo, Aug 18, 2016.

  1. gongo

    Regular

    Joined:
    Jan 26, 2008
    Messages:
    582
    Likes Received:
    12
    So now Nvidia has released the 3GB 1060... this is in addition to full Pascal in laptop form (gasp!). Nvidia has launched its full range of next-gen 14/16nm gaming GPUs... whereas AMD is struggling with Polaris.

    What gives? I was expecting the move to 14/16nm to be a kind of reset. A new dawn, a new fight. Yet the most noteworthy rumor I've heard about AMD is... "AIBs are not happy that AMD has not told them about a high-performance part this year"...

    What is taking AMD so long?

    At a high level, Polaris does not seem much changed from the 28nm family... much like Pascal relative to Maxwell. Is AMD waiting for HBM2? Is AMD tied up with the PS4 Neo?

    I am sad that Nvidia is given free rein now, and prices are up. Normally, this kind of "delay, no news" turns out badly. I fear AMD's big GPU may be suffering from high leakage power... the RX 480 did not leave a good impression.
     
    PixResearch likes this.
  2. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,699
    Likes Received:
    117
    Should be fairly obvious.... high bandwidth is the enemy of energy efficiency. The more energy you spend moving data around, the less you have in your budget to actually do work. Maximum performance is determined by perf/watt given a fixed maximum power consumption. Currently, Nvidia has better compression techniques, which lets them save power in potentially three different ways (clocks, bus width, less data movement). For example, the better compression allows Nvidia to use a 192-bit memory interface on the 1060, which saves power (vs. the 256-bit on the RX 480). And they are also moving less data thanks to the compression, which saves more power. They can then choose to spend the power they saved on better performance, a lower TDP, or a mix of both.

    If AMD is/was counting on HBM2, then the problem is likely one of availability/expense. But compression is the gift that keeps on giving, as it will still save power even if one has an over-abundance of available bandwidth....
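
    To make the bus-width point concrete, here is a rough back-of-envelope sketch. The bus widths and 8 Gbps GDDR5 data rates are the stock GTX 1060 / RX 480 specs; the 1.25x "compression ratio" is purely an illustrative guess, not a measured figure.

    ```python
    # Raw memory bandwidth vs. effective bandwidth once compression shrinks traffic.
    # Bus widths and data rates are the stock GTX 1060 / RX 480 specs; the
    # compression ratio is an illustrative assumption, not a measured value.

    def raw_bandwidth_gbs(bus_width_bits, data_rate_gbps):
        """Peak DRAM bandwidth in GB/s: bus width (bits) * per-pin rate (Gbit/s) / 8."""
        return bus_width_bits * data_rate_gbps / 8

    def effective_bandwidth_gbs(bus_width_bits, data_rate_gbps, compression_ratio):
        """Bandwidth the GPU effectively 'sees' if traffic shrinks by compression_ratio."""
        return raw_bandwidth_gbs(bus_width_bits, data_rate_gbps) * compression_ratio

    gtx1060_raw = raw_bandwidth_gbs(192, 8)   # 192.0 GB/s
    rx480_raw = raw_bandwidth_gbs(256, 8)     # 256.0 GB/s

    # If compression lets the 1060 move ~25% less data on average, its narrower,
    # cheaper, lower-power bus delivers effective bandwidth close to the wider one.
    print(gtx1060_raw, effective_bandwidth_gbs(192, 8, 1.25), rx480_raw)
    ```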
     
    ImSpartacus and Grall like this.
  3. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,955
    Likes Received:
    3,038
    Location:
    Pennsylvania
    AMD is relying on DX12/Vulkan, and ports using those APIs, as their new dawn. They're obviously strong with the new APIs, but it's still going to be a while before we see wider adoption, considering game development timespans.
     
    Heinrich04 and milk like this.
  4. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    It's a question of balance (I'm not speaking about gaming, but we are not so far off): you need high bandwidth, high storage capacity, efficiency and high compute power, and it is a balance between those. I personally don't care if my compute GPUs for raytracing eat 600W if they can offer me the best possibilities on those three aspects. (With photogrammetry, I need 16K-32K textures, I need high-resolution HDRIs as lighting environment sources (let's say 16-32K too), and I need extremely high poly counts.) With only that, I can eat 32GB of VRAM, in a nutshell, for a single frame and a single scene. (Animations are rendered frame by frame anyway; the real problem is for VFX, where we move to CPUs, because it's impossible to have enough memory on GPUs.)
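
    That 32GB figure is easy to sanity-check with simple arithmetic. The sketch below assumes uncompressed 8-bit RGBA textures and a 32-bit-float HDRI; those formats are assumptions made for illustration, not a description of any particular renderer.

    ```python
    # Rough sanity check of the "32 GB of VRAM for a single frame" claim.
    # Assumes uncompressed 8-bit RGBA textures and a 32-bit float HDRI; real pipelines
    # may use block compression or mipmaps, so treat these as upper-bound estimates.

    def texture_bytes(width, height, bytes_per_texel):
        return width * height * bytes_per_texel

    tex_16k = texture_bytes(16384, 16384, 4) / 2**30    # ~1 GiB per 16K RGBA8 texture
    tex_32k = texture_bytes(32768, 32768, 4) / 2**30    # ~4 GiB per 32K RGBA8 texture
    hdri_32k = texture_bytes(32768, 16384, 16) / 2**30  # ~8 GiB for a 32K x 16K RGBA32F HDRI

    # A handful of 16K-32K photogrammetry textures plus a float HDRI and dense geometry
    # gets into the tens of gigabytes quickly, which is the point being made above.
    print(round(tex_16k, 2), round(tex_32k, 2), round(hdri_32k, 2))
    ```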

    I don't think they rely on them, but it is in their interest to build momentum behind them. For a while now, beyond that de facto advantage, some features of their architecture have been finding their way into those "new APIs" (as if it were finally a good idea), and they can easily give developers new features and code paths compatible with their architectures, without having to rely on drivers to fix the games their GPUs are running. That is already a good thing in its own right. It was absolutely funny to see how many DX11 games were simply not made to run on AMD GPUs, with absolutely no optimization or debugging for those GPUs... don't ask where the driver overhead came from in those cases. And when you look at the benchmarks, AMD was not so bad with their drivers.

    I would say it could change things, not that it will... I don't see Nvidia letting it go so easily.
     
    #4 lanek, Aug 18, 2016
    Last edited: Aug 18, 2016
    Razor1 and Heinrich04 like this.
  5. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    I think you're putting a little too much emphasis on the compression part. I'm sure it helps, but the low-level architecture of the SM itself is much more likely to be the biggest factor by far.
     
    Razor1, no-X and liolio like this.
  6. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Does lack of an R&D budget fit somewhere into this?
     
  7. kalelovil

    Regular

    Joined:
    Sep 8, 2011
    Messages:
    555
    Likes Received:
    93
    Perhaps AMD didn't expect Nvidia to release so many Pascal GPUs in a short space of time. AMD hoped they could capture the midrange while Nvidia concentrated on the high-end market.
    They might not have had much choice though, having to stretch a smaller R&D budget than Intel or Nvidia across CPUs, GPUs and SoCs.

    Alternative optimistic theory: There was a bigger third Polaris GPU planned. Something went wrong and it was delayed. The delay would have meant it came out only months before Vega, and Vega is somehow a much better chip, so it was cancelled.
     
    #7 kalelovil, Aug 19, 2016
    Last edited: Aug 19, 2016
  8. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    The answer is simple: Lack of resources.
     
    Heinrich04 and liolio like this.
  9. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,699
    Likes Received:
    117
    Hmmm.... I'm not so sure. Certainly, there are many contributing factors, such as special function units, L0, TBR, z-rate, etc... But I would say the compression is not chump change. The 1060 has 192GB/s of bandwidth. One could down-clock an RX 480's memory to 6Gbps and measure the performance lost....
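
    The logic of that experiment: at 6 Gbps, the RX 480's 256-bit bus has exactly the GTX 1060's raw bandwidth, so whatever performance the down-clocked card loses is a rough proxy for what better compression could buy back. A minimal sketch of the arithmetic:

    ```python
    # Iso-bandwidth check: 6 Gbps on a 256-bit bus equals 8 Gbps on a 192-bit bus.
    def bandwidth_gbs(bus_width_bits, data_rate_gbps):
        return bus_width_bits * data_rate_gbps / 8

    print(bandwidth_gbs(256, 6))   # 192.0 GB/s - RX 480 with memory down-clocked
    print(bandwidth_gbs(192, 8))   # 192.0 GB/s - stock GTX 1060
    ```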
     
  10. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,720
    Likes Received:
    2,460
    I seriously hope not; one doesn't need to be a wizard to know such a strategy is a failed one. First, it relies on an uncertain outcome happening at an uncertain time in the future; secondly, it assumes the competitor will stand still and do nothing about it, while both history and the present have shown the complete opposite.

    Being unprepared and under-qualified now, in the hope that things will change later, is a bad business practice which gets companies nowhere but bankruptcy. For example, if we look at AMD's marketing push towards VR with the RX 480/Fury, we see them getting pummeled in almost every VR release compared to NV. So what gives? Well, AMD is waiting for DX12 to chime in for VR games, a wait which shall be quite long indeed. So how will AMD convince buyers to purchase their hardware for VR now, when they can't provide adequate performance now? With empty words?
     
    Laurent06 and Heinrich04 like this.
  11. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Possibly, but having both main product lines failing to deliver, starting at different points in time, plus the ups and downs of one of those product lines, speaks to more than just R&D.

    The ATI buyout came at a time when AMD thought they were in a good position on the CPU side, but it was actually the start of that side's decline, and then R600 hit, which didn't do anything for them. Until RV770 they were kind of in murky waters; then, once they started working on GCN, they lost ground every generation, not to mention their CPU side went down the toilet.

    Which just speaks of taking wrong steps and not fixing them, mainly power consumption, which nV has been able to tout ever since Fermi; AMD only held the lead briefly, back around RV770.

    Lower power consumption translates into the ability to deliver greater performance in the end.

    The big question is: when ya got the money and ya don't do it, when can ya do it?
     
    #11 Razor1, Aug 19, 2016
    Last edited: Aug 19, 2016
    Heinrich04 likes this.
  12. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Let's look at it this way: when you need to save power, it pays to first optimize those units of which you have the most and/or those which run most often.

    In a GPU, the calculation cores would qualify for both: you have many of them and they calculate something every clock cycle.
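
    To put that weighting in rough numbers, here is a toy model where each block's contribution is instance count x activity x energy per op. Every figure in it is made up purely to illustrate the weighting, not real GPU data:

    ```python
    # Toy power-budget model for "optimize what you have most of and what runs most often":
    # a block's dynamic power contribution ~ instance count * activity factor * energy/op.
    # All numbers below are invented purely for illustration, not real GPU measurements.

    blocks = {
        #             (count, activity, energy_per_op_pJ)
        "ALU lanes": (2304, 0.9, 1.0),
        "SFUs":      (144,  0.1, 4.0),
        "ROPs":      (32,   0.5, 6.0),
        "Mem I/O":   (8,    0.7, 50.0),
    }

    total = sum(n * act * e for n, act, e in blocks.values())
    for name, (n, act, e) in blocks.items():
        # The numerous, nearly always-active ALU lanes dominate the budget.
        print(f"{name:9s} {n * act * e / total:6.1%} of dynamic power")
    ```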

    Nvidia promoted BW compression as a new feature for their second gen Maxwell, yet the GTX 750 Ti already had a major jump in perf/W. It's not much different from later Maxwells.

    We have a pretty good idea of what Nvidia did inside the Maxwell SMs compared to Kepler; those changes all improve perf/W, and GCN has none of them.

    I think the primary reason for compression is bandwidth amplification, and that the power savings are a nice second-order effect.
     
    Heinrich04 likes this.
  13. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,699
    Likes Received:
    117
    Well, yes and no.... The actual energy/op matters too. If you have 0.1% "bad" ops, but they consume orders of magnitude more energy than what you are normally doing, it pays to work on that too or obviate the need for them to exist in the first place.

    2% here, 3% there.... pretty soon you are up to 12%.... Just imagine if tomorrow AMD released a driver that increased graphics performance 12% across the board. Or better yet, if the RX 480 had simply launched with +12% performance... This thread probably wouldn't exist.
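
    A quick sketch of how a handful of small, independent wins stack up; the individual percentages are illustrative placeholders, not measured numbers:

    ```python
    # How several small, independent efficiency wins accumulate to a double-digit gain.
    # The per-feature percentages are made-up examples, not real measurements.
    from math import prod

    gains = [0.02, 0.03, 0.02, 0.04]                 # hypothetical per-feature improvements
    additive = sum(gains)                            # ~11% if you simply add them
    compounded = prod(1 + g for g in gains) - 1      # ~11.4% if they compound

    print(f"{additive:.1%} added up, {compounded:.1%} compounded")
    ```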
     
    Heinrich04 likes this.
  14. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    92
    This. Nvidia spends in excess of $100M more than AMD per quarter on R&D ($350M vs. $243M in the latest results).
     
    Heinrich04 likes this.
  15. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,723
    Likes Received:
    193
    Location:
    Stateless
    Indeed, the best answer around.
    There are many things that set AMD and NVIDIA GPUs apart, and for a while now AMD GPUs have suffered from limitations that AMD did not address.
    They lack money not just for the GPUs but also for the CPUs; they gave up on their most successful IP (the cat cores). They are also wasting money by often pushing their chips way past the point of diminishing returns. I would think that the management sucks too.
     
  16. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    I used "resources" instead of "money" for this reason. Its not just money AMD is lacking is also talent, the most crucial part. Raja said it already, no1 wants to work on AMD because of its "Dying image" so AMD only can hire what everyone else don't want. In my opinion AMD should have took its shot at ARM instead of trying to fight Intel which has like 20 times more money to fight that fight.
     
    Heinrich04 likes this.
  17. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,723
    Likes Received:
    193
    Location:
    Stateless
    They may indeed lose talent or fail to attract new people, though I think the bigger issue is that management and marketing are giving the wrong goals to the engineering teams. "More" seems to be the mantra (in a lot of companies) instead of "better", or better still, "do it right". The former is way easier to deliver, and it is easy to hide behind: "we were asked for more, we delivered it, it failed, but I held up my side of the deal." I suspect those who would advocate skipping some marketing checkboxes in favour of doing it best, or better, are shunted aside fast... It happens in lots of places; it is as if the corporate world has decayed a lot since the "sellers" took over from the "doers", but that is a more general issue.
     
    #17 liolio, Aug 19, 2016
    Last edited: Aug 19, 2016
  18. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    Yes, it had to be a grab bag of tons of little improvements. But first-gen Maxwell improved perf/W by around 80% without any known improvements in compression over Kepler. That's really all I'm saying.
     
  19. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    It seems likely a larger one was planned, but considering the release date and the rumors of Vega in October, maybe it was simply ditched because of that? It wouldn't really have been a delay, as the FinFET designs came from both IHVs around the same time. If Vega does show up in October, that would still have left the bigger chip only months ahead of its successor. All the roadmaps indicate Vega is HBM2, so it's entirely possible it is geared towards the high end and APUs, where it may sit on interposers. If they had simply taken Fiji and shrunk it down with FinFET, they should have had a competitor for GP104.
     
  20. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,699
    Likes Received:
    117
    I wasn't comparing Nvidia's compression to Nvidia's own earlier parts (they started it with Fermi, btw), nor is this thread about that, but whatever......

    Anyway, I did a rough estimate based on the data available, and I would guesstimate that Nvidia's compression advantage is responsible for ~1/5 to 1/4 of their perf/watt lead. So a sizable chunk, but by no means all or even a majority of it. One could call it a necessary but not sufficient condition for AMD to regain competitiveness.
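
    For what it's worth, the shape of that kind of estimate can be sketched from board powers alone. The TDPs below are the official ones (GTX 1060: 120 W, RX 480: 150 W); the equal-performance assumption and the 1/5-1/4 split are purely illustrative and are not the data or method behind the estimate above.

    ```python
    # Back-of-envelope illustration of splitting a perf/watt lead into contributions.
    # Official board TDPs; the equal average-performance assumption and the 1/5-1/4
    # attribution are illustrative only, not measured results.

    gtx1060_tdp, rx480_tdp = 120, 150
    perf_ratio = 1.0                                      # assume comparable performance
    pw_lead = perf_ratio * rx480_tdp / gtx1060_tdp - 1.0  # ~0.25, i.e. ~25% perf/watt lead

    for share in (1 / 5, 1 / 4):
        print(f"{share:.0%} share of a {pw_lead:.0%} lead ~ {share * pw_lead:.1%} in perf/watt terms")
    ```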
     