This. Nvidia spends in excess of $100M more than AMD per quarter on R&D ($350M vs. $243M in the latest results).
And that doesn't even begin to describe the disparity once you compare spending on graphics technology specifically.
> Because most game devs have NV GPUs in their dev machines, they don't really bother optimizing for AMD. If only you had the same codepaths for consoles and PCs.

NVIDIA cards are flat out way more efficient than their AMD counterparts, even in games/benches that are well optimized for both IHVs. AMD has just now sort of matched NVIDIA's 28nm Maxwell efficiency levels, which is a bad joke considering how long AMD was hyping efficiency with Polaris. They are not even close to Pascal by this measure.
> That's not true. I can write a benchmark which will utilize more compute, so AMD will naturally have the upper hand. Power efficiency doesn't really matter on desktops; throughput is where it's at.

So the fact that you can write an esoteric benchmark that shows AMD winning is proof that NVIDIA cards are not more efficient? This is an absurd proposition. You could also write an integer-heavy CPU benchmark and the FX-8350 would beat the i7-6700. Does that mean AMD CPUs are more efficient than Intel's?
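To make the "utilize more compute" claim concrete, here is a minimal sketch of the kind of benchmark being described. Everything in it (PyOpenCL, the kernel names, the 512-iteration FMA loop) is an illustrative assumption, not anyone's actual code; the point is just that arithmetic intensity is a knob a benchmark author can turn.

    # A sketch of turning the arithmetic-intensity knob: "copy" is
    # bandwidth-bound (one read, one write per element), "alu_heavy"
    # does ~1024 FLOPs per element and is compute-bound.
    # Assumes PyOpenCL and NumPy are installed; the device is whatever
    # create_some_context() picks up.
    import numpy as np
    import pyopencl as cl

    SRC = """
    __kernel void copy(__global const float *a, __global float *b) {
        int i = get_global_id(0);
        b[i] = a[i];
    }
    __kernel void alu_heavy(__global const float *a, __global float *b) {
        int i = get_global_id(0);
        float x = a[i];
        for (int k = 0; k < 512; ++k)   /* dependent FMA chain */
            x = fma(x, 1.000001f, 0.5f);
        b[i] = x;
    }
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(
        ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)
    prg = cl.Program(ctx, SRC).build()

    n = 1 << 24  # 16M floats (64 MiB per buffer)
    a = np.random.rand(n).astype(np.float32)
    mf = cl.mem_flags
    buf_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    buf_b = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    for name in ("copy", "alu_heavy"):
        evt = getattr(prg, name)(queue, (n,), None, buf_a, buf_b)
        evt.wait()
        ms = (evt.profile.end - evt.profile.start) * 1e-6
        print(f"{name}: {ms:.2f} ms")

Roughly speaking, cards cluster by memory bandwidth on the first kernel and by FP32 throughput on the second, which is where the wide GCN parts look best.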
You keep throwing this term "efficiency" around. What do you mean by the efficiency of a GPU? Utilization? Throughput? Fewer bubbles in the pipeline?
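For what it's worth, in these threads "efficiency" almost always means performance per watt. A back-of-envelope version from rounded public spec-sheet numbers (peak FP32 and board power; treat it as a sketch, since peak FLOPS flatter GCN for exactly the reason discussed below):

    # Back-of-envelope perf/W: peak FP32 throughput over board power.
    # Spec-sheet numbers, not measured gaming power draw.
    cards = {
        # name: (peak FP32 TFLOPS, board power in watts)
        "GTX 1070 (Pascal)": (6.5, 150),
        "RX 480 (Polaris)":  (5.8, 150),
        "GTX 980 (Maxwell)": (5.0, 165),
    }
    for name, (tflops, watts) in cards.items():
        print(f"{name}: {tflops / watts * 1000:.0f} GFLOPS/W")

Measured gaming perf/W looks worse for Polaris than this spec-sheet ratio suggests, because games rarely extract GCN's peak FLOPS.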
My point is, if games used more compute resources, then AMD would be winning more. But they aren't, because they aren't being optimized for that kind of architecture.
Regarding supercomputing, they tend to use whatever is more popular in the market and has the better software stack. I've worked in a supercomputing lab, and trust me, scientists don't know much about GPU architectures.
> The interesting thing is in cryptocurrency: a 1070 has around the same hash rate as a 470, and around the same power consumption.

Wow, really... a 1070 only matches a 470? That's surprising, to me at least. Any other compute-heavy applications where this is true?
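The parity is less surprising once you notice that Ethash-style mining is memory-hard: each hash reads 64 random 128-byte DAG pages, so hash rate is capped by memory bandwidth rather than shader throughput. A rough upper bound from spec bandwidths (256 GB/s for the 1070, 211 GB/s for the 470; real rates land somewhat lower):

    # Rough Ethash ceiling: bandwidth divided by DAG traffic per hash.
    BYTES_PER_HASH = 64 * 128  # 8 KiB of DAG reads per hash

    for name, gbps in [("GTX 1070", 256), ("RX 470", 211)]:
        mhs = gbps * 1e9 / BYTES_PER_HASH / 1e6
        print(f"{name}: ~{mhs:.0f} MH/s upper bound")

That works out to roughly 31 vs. 26 MH/s, i.e. the two cards sit much closer together than their FP32 throughput would suggest.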
> Regarding supercomputing, they tend to use whatever is more popular in the market and has the better software stack. I've worked in a supercomputing lab and trust me, scientists don't know much about GPU architectures.
In our defense, unless you're futzing around with computation for its own sake, scientists have front-line competence in some other field, the field where the actual problem is, as opposed to the finer points of GPU architecture. Computation, for us, is a scientific tool. The more obstacles the tool introduces, and the more attention and energy its usage requires, the worse it is.

Yes, this is cause for much grief.
What, consoles aren't holding anything back? Then why aren't console games using more compute? Because, I guess, they can't: the hardware being designed for is limited. Maybe AMD should change their strategy to better align with their console parts, because game developers are stuck with the lowest common denominator, which in shader horsepower means consoles.
AMD simply lacks the resources to push all their projects: 3-4 dGPU chips, 1-2 APU iGPUs, and 2-? semi-custom console iGPUs. That's a lot.
As a result they will slow down, and unless something miraculous happens, they will become more and more irrelevant in the PC space.
> Similar arguments could be made about NVIDIA, they have at least...

Sure, they've got a lot of side projects besides the main GPU line, maybe even more than AMD. However, they can afford it with their record revenue, brutal margins, market position, etc.