Why is AMD losing the next-gen race to Nvidia?

This. Nvidia spends in excess of $100M more than AMD per quarter on R&D ($350M vs. $243M in the latest results).

Which doesn't even begin to describe the disparity once you compare spending on graphics technology specifically.
 
Because most game devs have NV GPUs in their dev machines, they don't really bother optimizing for AMD. If only you had the same codepaths for consoles and PCs.
 
NVIDIA cards are flat-out more efficient than their AMD counterparts, even in games/benches that are well optimized for both IHVs. AMD has only just sort of matched NVIDIA's 28nm Maxwell efficiency levels, which is a bad joke considering how long AMD hyped efficiency with Polaris. They are not even close to Pascal on this measure.
 
That's not true. I can write a benchmark that utilizes more compute, so AMD will naturally have the upper hand. Power efficiency doesn't really matter on desktops; throughput is where it's at.
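To make that concrete, here is a minimal sketch of the kind of arithmetic-bound microbenchmark meant here (HIP C++, so it runs on either vendor's card); the kernel name, iteration count, and launch dimensions are all made up for illustration. It issues tens of thousands of FMAs per float of memory traffic, so the score reflects raw ALU throughput rather than ROPs, geometry, or bandwidth.

```cpp
#include <hip/hip_runtime.h>

// Hypothetical compute-bound microbenchmark: ~64K fused multiply-adds per
// thread against a single float of memory traffic, so raw shader FLOP
// throughput dominates the result, not ROPs, geometry, or bandwidth.
__global__ void fma_burn(float* out, float seed) {
    float a = seed + threadIdx.x;
    #pragma unroll 16
    for (int i = 0; i < 65536; ++i)
        a = a * 1.000001f + 0.5f;                       // one FMA per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;     // defeat dead-code elimination
}

int main() {
    float* out;
    hipMalloc(&out, 1024 * 256 * sizeof(float));
    // Time this launch region to estimate sustained FLOPS.
    hipLaunchKernelGGL(fma_burn, dim3(1024), dim3(256), 0, 0, out, 1.0f);
    hipDeviceSynchronize();
    hipFree(out);
    return 0;
}
```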
 
Efficiency as a metric depends on what you are running. If it's a game, every single part of the GPU matters, so if there is a bottleneck somewhere, efficiency goes down.
 
That's not true. I can write a benchmark that utilizes more compute, so AMD will naturally have the upper hand. Power efficiency doesn't really matter on desktops; throughput is where it's at.
So the fact that you can write an esoteric benchmark that shows AMD winning is proof that NVIDIA cards are not more efficient? That is an absurd proposition. You could also write an integer-heavy CPU benchmark in which the FX-8350 would beat the i7-6700. Does that mean AMD CPUs are more efficient than Intel's?

Also, if AMD GPUs are so much better at compute, why are they not being put into supercomputers the way Tesla cards are? Right, because they aren't.
 
You keep throwing this term "efficiency" around. What do you mean by the efficiency of a GPU? Utilization? Throughput? Fewer bubbles in the pipeline?

My point is, if games used more compute resources, then AMD would be winning more. But they aren't, because games aren't being optimized for that kind of architecture.

Regarding supercomputing, they tend to use whatever is more popular in the market and has the better software stack. I've worked in a supercomputing lab, and trust me, scientists don't know much about GPU architectures.
 


What, consoles aren't holding anything back? Lol. Why aren't console games using more compute then? ;) Well, because I guess they can't... limited hardware is what's being designed for. Maybe AMD should change their strategy to better align with their console parts, because game developers are stuck with the lowest common denominator, which in shader horsepower would be the consoles.
 
I think people have been saying AMD would be ahead if there were more compute utilization all the way back to the X1600 in 2005. Though back then it was graphics shader utilization vs. texturing/pixel fillrate.
 
Yep, they did have a leg up with the X1900, though only for a short while; developers were pushing more shader ops nearer the end of the 7900's life span, and then the G80 hit.
 
My guess is that when 20nm got cancelled, NV backported to 28nm and AMD didn't. Instead they split their strategy: they pushed the current GCN uarch a bit too far (Fiji's lack of something), moved the smaller 20nm dies to 14nm (Polaris), and kept the bigger dies for the generation after that (Vega). So since second-gen Maxwell, NV have been one generation ahead of AMD, and that won't balance out until Vega hits.

The simple reason for AMD doing that is cash.

The proof will be in the pudding when Vega is released and we can see the performance and perf per watt.


Edit: the interesting thing is in cryptocurrency mining, a 1070 has around the same hash rate as a 470 and around the same power consumption (both are memory-throughput limited).

To me that says it could very well be largely ROP related (more the Dave Kanter theory than the uber-compression theory).
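A rough back-of-the-envelope check supports the throughput-limited reading, assuming the workload is Ethash (each hash touches roughly 64 DAG pages of 128 B, so about 8 KB of memory traffic per hash; the figures below are spec-sheet bandwidths):

RX 470:   ~211 GB/s ÷ 8192 B/hash ≈ 26 MH/s ceiling
GTX 1070: ~256 GB/s ÷ 8192 B/hash ≈ 31 MH/s ceiling

Both ceilings land close together despite very different shader and ROP configurations, which is exactly what you'd expect if memory throughput is the limiter.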
 
CUDA is a massive plus for NVIDIA in the workstation and server markets. Hats off to them for cultivating a vibrant ecosystem which means that even when the pure compute numbers favour the other chips, CUDA sends you to NV hardware in those markets. But the danger for NV is that CUDA is ultimately very C++-like, and that makes it amenable to tool translation.

Hence HIP, which AMD has developed: with minor source modifications, the same code runs on both AMD and NVIDIA. On NVIDIA it passes the code through to NVCC as normal, whereas on AMD it passes it to hcc for GCN code generation.
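To give a feel for how thin that layer is, here is a minimal HIP sketch; the SAXPY kernel and buffer sizes are just illustrative, but hipMalloc/hipMemcpy/hipLaunchKernelGGL are real HIP runtime entry points. The device code is character-for-character valid CUDA; only the host-side calls and the launch macro differ.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// SAXPY kernel: the device code is identical to its CUDA counterpart.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

    float *dx, *dy;
    hipMalloc(&dx, n * sizeof(float));   // mirrors cudaMalloc
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // hipLaunchKernelGGL replaces CUDA's <<<grid, block>>> launch syntax.
    hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                       n, 2.0f, dx, dy);
    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);        // expect 4.0
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```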
 
AMD simply lacks the resources to push all their projects: 3-4 dGPU chips, 1-2 APU iGPUs, and 2-? semi-custom console iGPUs. That's a lot.

As a result they will slow down, and unless something miraculous happens they will become more and more irrelevant in the PC space.
 
Yes, this is cause for much grief.
In our defense, unless you're futzing around with computation for its own sake, scientists have front-line competence in some other field, the field where the actual problem is, as opposed to the finer points of GPU architecture. Computation for us is a scientific tool. The more obstacles the tool introduces, and the more attention and energy its usage requires, the worse it is.

Unless of course your area of interest is computation itself. In which case the value of what is actually computed tends to be zilch.
 
What, consoles aren't holding anything back? Lol. Why aren't console games using more compute then? ;) Well, because I guess they can't... limited hardware is what's being designed for. Maybe AMD should change their strategy to better align with their console parts, because game developers are stuck with the lowest common denominator, which in shader horsepower would be the consoles.

Yeah.
A good example of that change is Doom with Vulkan, which uses the AMD low-level shader intrinsic functions/extensions that are part of GPUOpen; these gave a greater performance gain in Doom than async compute did for AMD cards on PC.
GPUOpen brings some very useful libraries/extensions to PC which align with the consoles.
 
AMD simply lacks the resources to push all their projects: 3-4 dGPU chips, 1-2 APU iGPUs, and 2-? semi-custom console iGPUs. That's a lot.

As a result they will slow down, and unless something miraculous happens they will become more and more irrelevant in the PC space.

Similar arguments could be made about NVIDIA; they have at least 3 different cutting-edge GPU chip designs: Big Pascal (P100), Consumer Pascal (P102+), and Embedded Pascal (PX etc.). Last I looked they have fingers in many CPU pies (2 internal ARM cores, POWER support, x86 support, etc.), their own console projects (the Shields), and rumours abound about at least one semi-custom iGPU.

And TBH it's a great strategy: they have products for various markets, expanding who they can sell to.

AMD is in a similar space, perhaps even a little behind NVIDIA in execution (but in fairness AMD is smaller and has an x64 CPU architecture to develop, which is a big ask). No one will deny that this many products can make for 'challenging' schedules, but that is the business now, and it's why competition is a good thing.
 