Why is AMD losing the next gen race to Nvidia?

Discussion in 'Architecture and Products' started by gongo, Aug 18, 2016.

  1. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,297
    Likes Received:
    465
    Which doesn't even begin to describe the disparity once spending specifically on graphics technology is compared.
     
  2. AnomalousEntity

    Newcomer

    Joined:
    Jun 6, 2016
    Messages:
    38
    Likes Received:
    25
    Location:
    Silicon Valley
    Because most game devs have NV GPUs in their dev machines, they don't really bother optimizing for AMD. If only you had the same codepaths for consoles and PCs.
     
  3. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,270
    Likes Received:
    1,038
    Location:
    still camping with a mauler
    NVIDIA cards are flat out way more efficient than their AMD counterparts, even in games/benches that are well optimized for both IHVs. AMD has just now sort of matched NVIDIA's 28nm Maxwell efficiency levels, which is a bad joke considering how AMD was hyping efficiency for so long with Polaris. They are not even close to Pascal in this measure.
     
  4. AnomalousEntity

    Newcomer

    Joined:
    Jun 6, 2016
    Messages:
    38
    Likes Received:
    25
    Location:
    Silicon Valley
    That's not true. I can write a benchmark that utilizes more compute, so AMD will naturally have the upper hand. Power efficiency doesn't really matter on desktops; throughput is where it's at.
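
    Something along these lines is what I mean. Just a toy CUDA sketch, purely illustrative (the kernel name, grid sizes and iteration count are invented, it's not any real benchmark): long chains of dependent FMAs on register data, so the score tracks raw shader FLOPS rather than ROPs, geometry or bandwidth.

    ```cuda
    #include <cuda_runtime.h>

    // Toy ALU-bound kernel: long chains of dependent FMAs on register data and
    // almost no memory traffic, so throughput is limited by shader ALUs alone.
    __global__ void fma_burn(float* out, int iters)
    {
        float a = threadIdx.x * 0.001f + 1.0f;
        float b = blockIdx.x * 0.002f + 2.0f;
        for (int i = 0; i < iters; ++i) {
            a = a * b + 0.5f;    // fused multiply-add, stays in registers
            b = b * a + 0.25f;
        }
        // One write per thread so the compiler can't optimise the loop away.
        out[blockIdx.x * blockDim.x + threadIdx.x] = a + b;
    }

    int main()
    {
        const int blocks = 1024, threads = 256;
        float* d_out = nullptr;
        cudaMalloc(&d_out, blocks * threads * sizeof(float));
        fma_burn<<<blocks, threads>>>(d_out, 1 << 20);   // ~4 million FLOPs per thread
        cudaDeviceSynchronize();
        cudaFree(d_out);
        return 0;
    }
    ```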
     
  5. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Efficiency as a metric depends on what you are running. If it's a game, every single part of the GPU is important, so if there is a bottleneck somewhere, efficiency goes down.
     
  6. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,270
    Likes Received:
    1,038
    Location:
    still camping with a mauler
    So the fact that you can write an esoteric benchmark that shows AMD winning is proof that NVIDIA cards are not more efficient? This is an absurd proposition. You could also write an integer-heavy CPU benchmark in which the FX-8350 would beat the i7-6700. Does that mean AMD CPUs are more efficient than Intel's?

    Also if AMD GPUs are so much better at compute, why are they not being put into supercomputers like Tesla? Right, because they aren't.
     
    Grall likes this.
  7. AnomalousEntity

    Newcomer

    Joined:
    Jun 6, 2016
    Messages:
    38
    Likes Received:
    25
    Location:
    Silicon Valley
    You keep throwing this term "efficiency" around. What do you mean by the efficiency of a GPU? Utilization? Throughput? Fewer bubbles in the pipeline?

    My point is, if games used more compute resources then AMD would be winning more. But they aren't, because games aren't being optimized for that kind of architecture.

    Regarding supercomputing, they tend to use whatever is more popular in the market and has the better software stack. I've worked in a supercomputing lab and, trust me, scientists don't know much about GPU architectures.
     
    ToTTenTranz likes this.
  8. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    What, consoles aren't holding anything back? lol. Why aren't console games using more compute? ;) Well, I guess because they can't..... limited hardware is what's being designed for. Maybe AMD should change their strategy to better align with their console parts, because game developers are stuck with the lowest common denominator, which in shader horsepower would be consoles......
     
  9. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    8,580
    Likes Received:
    662
    Location:
    WI, USA
    I think people have been saying AMD would be ahead if there were more compute utilization all the way back to the X1600 in 2005. Though back then it was graphics shader utilization vs. texturing/pixel fillrate.
     
    #29 swaaye, Aug 19, 2016
    Last edited: Aug 19, 2016
  10. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Yep, they did have a leg up with the X1900, though only for a short while; developers were pushing more shader ops nearer to the end of the 7900's life span, and then the G80 hit.
     
  11. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,311
    Likes Received:
    411
    Location:
    Australia
    My guess is that when 20nm got cancelled, NV backported to 28nm and AMD didn't; instead they split their strategy, pushed the current GCN uarch a bit too far (Fiji's lack of something), moved the smaller 20nm dies to 14nm (Polaris), and kept the bigger dies for the generation after that (Vega). So since Maxwell Gen 2, NV have been one generation ahead of AMD, and that won't balance out until Vega hits.

    The simple reason for AMD doing that is cash.

    The proof will be in the pudding when Vega is released and we can see the performance and perf per watt.


    edit: the interesting thing is in cryptocurrency mining: a 1070 has around the same hash rate as a 470 and around the same power consumption (both are memory-throughput limited; see the toy sketch below)................

    To me that says it could very well be largely ROP related (more the Dave Kanter theory than the uber-compression theory).
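
    To illustrate the "memory throughput limited" part, here's a toy CUDA sketch (not real Ethash; the buffer size, constants and kernel name are all made up). Each thread chases scattered reads through a buffer far bigger than the caches, so throughput is set by DRAM bandwidth and latency and barely moves with extra ALUs:

    ```cuda
    #include <cuda_runtime.h>
    #include <stdint.h>

    // Toy memory-bound kernel: each thread does pseudo-random 32-bit reads over a
    // large buffer, so performance is set by memory bandwidth/latency, not ALUs.
    // Same basic shape as an Ethash-style DAG lookup, though the real thing is
    // considerably more involved.
    __global__ void random_reads(const uint32_t* dag, uint32_t mask,
                                 uint32_t* out, int lookups)
    {
        uint32_t idx = (blockIdx.x * blockDim.x + threadIdx.x) * 2654435761u;
        uint32_t acc = 0;
        for (int i = 0; i < lookups; ++i) {
            idx = (idx ^ acc) * 2654435761u;   // cheap mix, negligible ALU cost
            acc += dag[idx & mask];            // scattered, cache-busting DRAM read
        }
        out[blockIdx.x * blockDim.x + threadIdx.x] = acc;
    }

    int main()
    {
        const size_t words = 1u << 26;   // 256 MiB buffer, far bigger than any cache
        uint32_t *d_dag = nullptr, *d_out = nullptr;
        cudaMalloc(&d_dag, words * sizeof(uint32_t));
        cudaMalloc(&d_out, 1024 * 256 * sizeof(uint32_t));
        cudaMemset(d_dag, 0x5a, words * sizeof(uint32_t));
        random_reads<<<1024, 256>>>(d_dag, (uint32_t)(words - 1), d_out, 64);
        cudaDeviceSynchronize();
        cudaFree(d_dag);
        cudaFree(d_out);
        return 0;
    }
    ```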
     
    Ike Turner likes this.
  12. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Wow, really... a 1070 only matches a 470? That's surprising, to me at least. Any other compute-heavy application where this is true?
     
  13. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,270
    Likes Received:
    1,038
    Location:
    still camping with a mauler
    Fury should be an absolute beast in this cryptocurrency stuff, right?
     
  14. firstminion

    Newcomer

    Joined:
    Aug 7, 2013
    Messages:
    217
    Likes Received:
    46
    Yes, this is cause for much grief.
     
  15. DeanoC

    DeanoC Trust me, I'm a renderer person!
    Veteran Subscriber

    Joined:
    Feb 6, 2003
    Messages:
    1,469
    Likes Received:
    185
    Location:
    Viking lands
    CUDA is a massive plus for NVIDIA in the workstation and server markets. Hats off to them for cultivating a vibrant ecosystem that means even when the pure numbers (compute) favour the other chips, CUDA means you go to NV hardware in those markets. But the danger for NV is that CUDA is ultimately very C++-like, and thus is amenable to tool translation.

    Hence HIP, which AMD has developed, and which allows the same code to run on AMD and NVIDIA with minor source modifications. On NVIDIA it passes through to NVCC as normal, whereas on AMD it goes to hcc for GCN code generation.
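
    Roughly what that looks like in practice, as a toy sketch (the SAXPY kernel below is invented for illustration rather than taken from any real port, and the HIP equivalents are noted in comments): the __global__ kernel body is untouched, hipify just rewrites the host-side runtime calls.

    ```cuda
    #include <cuda_runtime.h>   // after hipify: #include <hip/hip_runtime.h>

    // Trivial SAXPY kernel: the device code is unchanged by hipify, only the
    // host-side runtime calls are rewritten (cuda* -> hip*).
    __global__ void saxpy(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *x = nullptr, *y = nullptr;
        cudaMalloc(&x, n * sizeof(float));     // -> hipMalloc
        cudaMalloc(&y, n * sizeof(float));     // -> hipMalloc
        cudaMemset(x, 0, n * sizeof(float));   // -> hipMemset
        cudaMemset(y, 0, n * sizeof(float));   // -> hipMemset
        // -> hipLaunchKernelGGL(saxpy, ...) or, with hipcc, the same <<<>>> syntax
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();               // -> hipDeviceSynchronize
        cudaFree(x); cudaFree(y);              // -> hipFree
        return 0;
    }
    ```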
     
    Malo, Razor1, Grall and 2 others like this.
  16. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    205
    Likes Received:
    179
    AMD simply lacks the resources to push all their projects: 3-4 dGPU chips, 1-2 APU iGPUs and 2-? semi-custom console iGPUs. That's a lot.

    As a result they will slow down, and unless something miraculous happens they will become more and more irrelevant in the PC space.
     
  17. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,196
    Likes Received:
    1,184
    In our defense, unless you're futzing around with computation for its own sake, scientists have front line competence in some other field. The field where the actual problem is, as opposed to delving into the finer points of GPU architecture. Computation for us is a scientific tool. The more obstacles the tool introduces, and the more attention and energy its usage requires, the worse it is.

    Unless of course your area of interest is computation itself. In which case the value of what is actually computed tends to be zilch.
     
    Alexko, Razor1, firstminion and 5 others like this.
  18. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Yeah.
    A good example of that change is Doom with Vulkan, using the AMD low-level shader intrinsic functions/extensions that are part of GPUOpen; these gave a greater performance gain in Doom for AMD cards on PC than async compute did.
    GPUOpen brings some very useful libraries/extensions to PC, which align with consoles.
     
    Razor1 and Heinrich04 like this.
  19. DeanoC

    DeanoC Trust me, I'm a renderer person!
    Veteran Subscriber

    Joined:
    Feb 6, 2003
    Messages:
    1,469
    Likes Received:
    185
    Location:
    Viking lands
    Similar arguments could be made about NVIDIA; they have at least 3 different cutting-edge GPU chip designs: Big Pascal (P100), Consumer Pascal (P102+) and Embedded Pascal (PX etc.). Last I looked they have fingers in many CPU pies (2 internal ARM cores, POWER support, x86 support etc.), their own console projects (Shields), and rumours abound about at least one semi-custom iGPU.

    And TBH it's a great strategy; they have products for various markets, expanding who they can sell to.

    AMD is in a similar space, perhaps even a little behind NVIDIA in execution (but in fairness AMD are smaller and have an x64 CPU architecture to develop, which is a big development ask). No one will deny that so many products can make for 'challenging' schedules, but that is the business now, and why competition is a good thing.
     
    homerdog, Heinrich04, Razor1 and 2 others like this.
  20. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    205
    Likes Received:
    179
    Sure, they've got a lot of side projects besides the main GPU line. Maybe even more than AMD. However, they can afford it with their record revenue, brutal margins, market position, etc.
     
    I.S.T. and Heinrich04 like this.