Why is AMD losing the next gen race to Nvidia?

As the dust settles and the 1050 is about to land, it's even truer for the RX 460: GP107 (the 1050?) should kill it.

I don't understand aspects of AMD's new cards. AMD's ROPs are supposed to be less efficient than Nvidia's, and yet AMD has fewer within the same performance segment (e.g. the 480 has 32, the 1060 has 48). It would appear that AMD does suffer from being ROP limited, and that's even when they have more bandwidth. Was AMD expecting much higher frequencies from Polaris? Can they only have 8 ROPs per memory channel?

The 460 has only 16 ROPs. Its fill-to-flop ratio is worse than the Xbox One S's and vastly worse than the PS4's; taking into account CPU contention on the PS4 (up to 40 GB/s lost) and that Polaris has highly efficient colour compression, it just seems odd to have such a low pixel fill rate. The GTX 1050 has 32 ROPs operating at higher frequencies than the 460.
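As a rough sanity check, the fill-to-flop gap can be put in numbers. A quick sketch using approximate public specs (the clocks and TFLOPS figures here are ballpark assumptions, not measurements):

```python
# Back-of-the-envelope fill-rate vs compute comparison.
# Specs are approximate public figures; the boost clocks especially
# are assumptions, not measurements.

def peak_fill_gpix(rops, clock_ghz):
    """Peak pixel fill rate in Gpixels/s (one pixel per ROP per clock)."""
    return rops * clock_ghz

chips = {
    # name: (ROPs, core clock in GHz, peak FP32 TFLOPS)
    "RX 460":   (16, 1.20, 2.2),
    "GTX 1050": (32, 1.45, 1.9),
    "PS4":      (32, 0.80, 1.84),
    "XB1 S":    (16, 0.914, 1.4),
}

for name, (rops, ghz, tflops) in chips.items():
    fill = peak_fill_gpix(rops, ghz)
    print(f"{name:9s} {fill:5.1f} Gpix/s, {fill / tflops:5.1f} Gpix/s per TFLOPS")
```

On these assumed numbers the 460 lands around 9 Gpix/s per TFLOPS against roughly 10 for the X1S and 14 for the PS4, which is the imbalance being described.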

Performance per watt from the 460 isn't good either. This very marginally overclocked 460 tops out at 97 W in-game (!!) and 105 W under a stress test:

https://www.techpowerup.com/reviews/ASUS/RX_460_STRIX_OC/22.html
 
It's an age-old fact: once the competition has the upper hand, changing direction and rethinking the architecture to make your chips competitive again takes quite a bit of time. It just doesn't happen overnight. nV did what they did with Maxwell and did it better with Pascal, but by doing so they might have given AMD enough time to catch up with Navi. With Vega I think they still didn't have the time to do what is necessary. With rumors of Volta being a complete redesign (if there is such a thing ;)), nV might run into issues that slow them down. We saw that with Fermi: a great performer but a power hog, which was remedied somewhat in later iterations with even more performance at similar thermal output.
 
If you think about it, ARM found a very cool way to stay relevant while being entirely disconnected from the actual 'realization' side, by largely selling only the designs, not the silicon.

Yes. I agree with you.
And that is also my point. You need to do some things differently from the competition to stay relevant. If you don't, you are left in a price/performance race to the death. AMD needs to focus on the areas they are great in. The traditional big desktop PCs are pretty much dead. OK, to be fair, hardware enthusiasts and PC gamers are the exception. But how big is this market really? What about in 5 years? Will folks still buy oversized big-tower PCs then? Of course I could be wrong. Maybe some folks like them oversized. I for sure don't.

This doesn't mean that AMD should give up on desktop PCs. But I think they need to be a lot more aggressive to stay competitive. In this case they have the technology to build a very sexy powerhouse in a small form factor. I would love to throw out my bulky ATX case for one of those.
 
The reality is it's all up to Zen in the server space: if they gain a foothold there and revenue starts to flow, some money can make it back into desktop discrete parts and product life cycles can be shortened. I wouldn't write off Vega yet anyway; if Zen has shown anything, it's that what is published in papers and patents AMD takes to the next level in the actual product (stacked cache, in-order/out-of-order switching ALUs, bridging FPU, etc.). Given the patents we have seen from AMD around GCN, I still think there is room for optimism.

Ignore people who present themselves as neutral but are actually biased in their analysis; you don't have to look far in this thread to find one :devilish:
 
The reality is it's all up to Zen in the server space: if they gain a foothold there and revenue starts to flow, some money can make it back into desktop discrete parts and product life cycles can be shortened. I wouldn't write off Vega yet anyway; if Zen has shown anything, it's that what is published in papers and patents AMD takes to the next level in the actual product (stacked cache, in-order/out-of-order switching ALUs, bridging FPU, etc.). Given the patents we have seen from AMD around GCN, I still think there is room for optimism.

Patents don't mean they are incorporating everything, though ;) We have seen it happen many times: the patents are there but don't show up in products until two or three generations down the road.

The reality is Zen has to be good, or at least decent enough to push Intel's buttons in the mainstream segment; if not, desktops and servers won't pick them up, let alone HPC.

Intel has what, 28-core Xeons for HPC now? Those are also buyable by the general populace, though Intel might have specific chips that only go to certain customers; I know they have done this in the past. If an 8-core Zen matches up with a 4-core i5, it's not going to cut it in the server market. It will still be better for AMD's bottom line, as right now their 8-core part matches up with a 2-core i3. But don't expect the server and HPC guys to go all crazy and start switching over.

If that is what comes to pass, an 8-core going against a 4-core Intel... If it's an 8-core vs a 6-core, that would be better; only if it's 8-core vs 8-core at parity will the HPC and server markets start using AMD.

In those high-end markets they need as much performance as they can get; the cost of the hardware is a very small part of the total cost of what they are doing.
 
This statement is why AMD doesn't have a real chance to compete. A lot of folks only hope for competition so that they can buy Nvidia cards for cheap. If Vega is good, why not just buy one?
Oh, if Big Vega is as fast as GP102 at a meaningfully lower price I'll jump all over it. If all AMD does is match NVIDIA then I'll go NVIDIA cause that's what I'm used to. I can swap the GPU out without even messing with drivers.

But with the way things are going, the best I'm hoping for is for Big Vega to match GP104 at a much higher power level and with much less OC headroom, in which case I would take advantage of the new, lower prices on GP104 (probably a GTX 1070, or a 1075 if Vega is good enough to make NVIDIA try). So in a sense you're completely right: AMD determines which NVIDIA card I get. For instance, at $400 and depending on how things shake out:

Sucks = Big Vega slower than GP104
Passable = Big Vega matching GP104 with much higher TDP
Good = Big Vega matching GP104 with about same TDP
Great = Big Vega matching cut-down GP102 with about same TDP

Good Price = at least 10% cheaper

Vega sucks -> I get cut-down GP104
Vega is passable at same price -> I get GP104
Vega is passable at good price -> I get Vega or GP104
Vega is good at same price -> I get GP104
Vega is good at good price -> I get Vega
Vega is great at same price -> I get cut-down GP102
Vega is great at good price -> I get Vega

There's a lot of room to play in between those options, but you get the idea. Look at how fast P10 is and do the math; you'll see how unlikely anything other than Passable is. Note that I don't even consider it a remote possibility that Big Vega matches full GP102; that is in the realm of science fiction.
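For what it's worth, the scenario table above boils down to a simple lookup (purely a toy restatement of the post, using its own made-up tier names; the "sucks" row is assumed to be price-independent):

```python
# Toy restatement of the buying logic above; every name here comes
# from the post itself, nothing is a real product recommendation.
choice = {
    ("sucks",    "same price"): "cut-down GP104",
    ("sucks",    "good price"): "cut-down GP104",
    ("passable", "same price"): "GP104",
    ("passable", "good price"): "Vega or GP104",
    ("good",     "same price"): "GP104",
    ("good",     "good price"): "Vega",
    ("great",    "same price"): "cut-down GP102",
    ("great",    "good price"): "Vega",
}

print(choice[("good", "good price")])  # Vega
```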

Also on a related note, NVIDIA isn't even trying. They now sell their 2nd tier card for $1K+. God help AMD cause NVIDIA ain't like Intel, they will drink AMD's blood. And God help us if/when NVIDIA goes for the kill. They are competing with AMD with cards that are a tier down in die size/TDP. They could price AMD into Armageddon at will.
 
If all AMD does is match NVIDIA then I'll go NVIDIA cause that's what I'm used to.
(...)
God help AMD cause NVIDIA ain't like Intel, they will drink AMD's blood. And God help us if/when NVIDIA goes for the kill.


Because fuck logic...
 
Because fuck logic...
You don't honestly expect a pity purchase from me, do you? I already explained why it is easier for me to go NV rather than AMD. I am IHV agnostic, so whichever makes the most sense to me is the one I get, regardless of their respective levels of corporate incompetence.

Please let AMD put out a 7950 style (near the end of its lifespan anyway) killer again and I'll go Red in a heartbeat :)
 
You acknowledge that nvidia killing off AMD would be bad for you and consumers in general.
Yet the only way you would buy an AMD card is if it's faster, consumes less power, and is cheaper, all at once. Basically you'll only abstain from buying an Nvidia card if that purchase would be objectively stupid on all counts.
Because apparently buying an AMD card that matches a nvidia equivalent on all factors would be a pity purchase.

And then you finish with a sentence claiming you're IHV-agnostic.
Yeah..
 
It seems strange to complain about the negative effects of a lack of competition, but on the other hand make decisions that one knows full well will reduce competition further and hurt you more and more in the long term.
 
As consumers we want the best for us when we purchase a product; if that product doesn't give us the best experience compared to what we could have gotten, it's our loss. At the end of it all, the companies competing for what's in our wallet (*sounds like the 'What's in your wallet?' ad by Capital One :)*) have to deliver; if they don't, they don't get our money, and it's the company's loss. Simple economics. Look at what happened with 3dfx: they couldn't deliver, and ATI and nV ate them up.

nV is operating an effective monopoly at the high end, yet they aren't screwing us in the performance segment yet; they are screwing us in the enthusiast segment for the time being (we'll have to see what the 1080 Ti offers, though), but it's not as bad as Intel.

This isn't a socialist system where we have to buy AMD because they are doing worse, equalizing market share by force-buying a less competitive product. That would let AMD (or any company, for that matter) laugh all the way to the bank with our hard-earned money for something second-rate.

Look at it this way: if nV wanted to kill AMD, they could drop their margins by 15% and it would be over. nV would still be in the 35-40% margin range, and it would hurt AMD worse than anything Intel has ever done. And anti-trust and all that couldn't do a damn thing, because nV would not be selling at a loss or manipulating the market in a manner that would raise eyebrows.
 
I don't understand aspects of AMD's new cards. AMD's ROPs are supposed to be less efficient than Nvidia's, and yet AMD has fewer within the same performance segment (e.g. the 480 has 32, the 1060 has 48). It would appear that AMD does suffer from being ROP limited, and that's even when they have more bandwidth. Was AMD expecting much higher frequencies from Polaris? Can they only have 8 ROPs per memory channel?

The 460 has only 16 ROPs. Its fill-to-flop ratio is worse than the Xbox One S's and vastly worse than the PS4's; taking into account CPU contention on the PS4 (up to 40 GB/s lost) and that Polaris has highly efficient colour compression, it just seems odd to have such a low pixel fill rate. The GTX 1050 has 32 ROPs operating at higher frequencies than the 460.

Performance per watt from the 460 isn't good either. This very marginally overclocked 460 tops out at 97 W in-game (!!) and 105 W under a stress test:

https://www.techpowerup.com/reviews/ASUS/RX_460_STRIX_OC/22.html

You'll have to forgive a humble layman for a potentially silly question, but do ROPs run at the core clock?

I suppose they need to run at some clock. I just never "mentally" lumped them in with other compute resources.
 
Yes, ROPs run at core clocks. The only thing that doesn't, to my knowledge, is the memory bus, which runs at the memory frequency (notwithstanding Tesla and Fermi, which had hot clocks for their shader units, if I remember right).
 
nVidia can't keep raising prices indefinitely even without competition from AMD. Just like Intel before them, they are also competing against their own older products. Eventually volume will drop and a better balance will need to be struck. Sure, progress could slow (as with Intel), but mobile is putting pressure back on Intel and will also put more pressure on nVidia if they cease innovating and pushing forward.
 
Well, unlike in the CPU market, a 10% performance increase in graphics cards means jack lol. At least for CPUs, businesses and large corporations have a steady uptake of computers, and most of those computers don't need graphics cards. For companies whose computers do need graphics horsepower, 10% isn't going to be enough to push them to upgrade.

Yeah, even in a monopoly there is an equilibrium of sorts; price vs. supply and demand still functions.
 
Because apparently buying an AMD card that matches a nvidia equivalent on all factors would be a pity purchase.

There's no such AMD card that can be equivalent on all factors as long as Nvidia has a much more aggressive stance on dev rel, proprietary tool sets, CUDA, and much greater market share to make those things especially relevant. These things are difficult to quantify and vary in importance from person to person, but they definitely carry some perceived value that has to be accounted for in the decision. For me personally it would actually require a great deal more than merely a one-off product generation that happens to benchmark slightly better to get me to switch back.
 
Polaris showed us that in spades; is another 6 months going to make that much of a difference?
Polaris is in most parts just a Volcanic Islands shrink. Same architectural flaws, so it just benefited from the node shrink.
I don't understand aspects of AMD's new cards. AMD's ROPs are supposed to be less efficient than Nvidia's, and yet AMD has fewer within the same performance segment (e.g. the 480 has 32, the 1060 has 48). It would appear that AMD does suffer from being ROP limited, and that's even when they have more bandwidth.
If you really want to know - Nvidia and AMD utilize the ROPs differently.

Nvidia, thanks to the geometry buckets / tiling approach featured in Maxwell and Pascal, needs a sufficient number of ROPs to hold a full tile inside the ROPs at a time. That way, any write to the ROPs is essentially guaranteed not to hit the RAM while the same tile is active.

The whole GCN family doesn't have such a tiling system yet, so the ROPs are mostly write-through with a comparatively low cache hit rate. This means there is always a high chance that any blend operation will actually cause a RAM access. Unfortunately, most ROP activity therefore also exercises the RAM system, which increases power consumption significantly. The increased memory bandwidth does get used despite the lower number of ROPs; the preferable solution would have been not to require that bandwidth during that part of the pipeline in the first place.
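A toy model makes the traffic difference concrete (the model and its numbers are purely illustrative assumptions, not measured hardware behaviour):

```python
# Toy model of DRAM traffic for alpha blending, comparing a mostly
# write-through ROP path (GCN-style, as described above) with an
# on-chip tile buffer (Maxwell/Pascal-style). Purely illustrative:
# real hardware has ROP caches, delta colour compression, etc.

def write_through_accesses(pixels, overdraw):
    # Every blend is a read-modify-write against DRAM:
    # one read of the destination plus one write back.
    return pixels * overdraw * 2

def tiled_accesses(pixels, overdraw):
    # All blending stays in the on-chip tile regardless of overdraw;
    # DRAM only sees the final flush, one write per covered pixel.
    return pixels

pixels, overdraw = 1920 * 1080, 3  # 1080p with 3x overdraw
print(write_through_accesses(pixels, overdraw))  # 12441600
print(tiled_accesses(pixels, overdraw))          # 2073600
```

Under these assumptions the write-through path touches DRAM six times as often, which is exactly the power and bandwidth cost the post describes.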
 
Also on a related note, NVIDIA isn't even trying. They now sell their 2nd tier card for $1K+. God help AMD cause NVIDIA ain't like Intel, they will drink AMD's blood. And God help us if/when NVIDIA goes for the kill. They are competing with AMD with cards that are a tier down in die size/TDP. They could price AMD into Armageddon at will.
The only thing stopping NV from doing that is their love of money. They'd have to cut prices to effectively kill AMD; as it is, they're still selling all the cards they can make from what it seems, so they're supply-capped anyhow. Killing off AMD wouldn't make them more money even if they raised prices back up afterwards...

It seems strange to complain about the negative effects of a lack of competition, but on the other hand make decisions that one knows full well will reduce competition further and hurt you more and more in the long term.
One person can't really do much for AMD. All of B3D buying all the cards they can't afford couldn't prop the company up, not when they're losing money to the tune of hundreds of millions per quarter.

You can't really fault a guy for not wanting to spend money on what is arguably sub-standard equipment...
 
That being the case, the 480 and 1060 might not be that far apart on perf/watt. Polaris should have had a mechanism to reduce power usage, but it seems like they just maxed the voltages to get a reliable supply.
I am not so sure about that reliable supply; I think the RX 480's claim of VR performance at least on par with the R9 290 was what forced AMD's hand. On its own, nobody would have cared about 98 instead of 108 full-HD fps in BioShock Infinite, or 45 instead of 49 fps in Crysis 3.

What I'm more concerned about is that the P10 GPUs apparently have massive individual leeway for working at decidedly lower voltages with not-so-large decreases in megahertz. Mine, for example, can work at 1065 MHz with 0.88/0.92 V for GPU/MC. Another one I have in the office (an AIB card) does basically the same, drastically cutting power (for the whole unoptimized (!) rig) while mining ETH.

Some numbers from the AIB card (whole-PC wattage) while mining ETH:

default (1338/2000 MHz | 1150/1000 mV): 245 W
mem @ 2200 MHz / 1000 mV: 260 W (the extra 15 W comes from higher utilization, not from the memory itself!)

GPU @ 1200 MHz / 985 mV & mem @ 2200 MHz / 925 mV: 204 W
GPU @ 1065 MHz / 910 mV (offset) & mem @ 2200 MHz / 925 mV: 185 W

Now, this is probably not representative of real-world gaming, but it gives a hint of how much more potential the Polaris architecture offers in terms of efficiency. I hope some of this will be leveraged in the mobile space, where power is of much more concern and the prices AMD can command for the GPU should be higher as well.
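Putting those whole-PC numbers into percentages (simple arithmetic on the figures above; keep in mind the 245 W baseline includes the rest of the rig, so the GPU-only saving is relatively larger):

```python
# Percentage savings vs the 245 W default whole-PC draw while mining ETH,
# using the undervolt figures quoted above.
default_w = 245
tuned = {
    "GPU 1200 MHz / 985 mV": 204,
    "GPU 1065 MHz / 910 mV": 185,
}

for cfg, watts in tuned.items():
    saving = 100 * (default_w - watts) / default_w
    print(f"{cfg}: {watts} W ({saving:.1f}% lower)")
```

That works out to roughly a 17% and a 24-25% reduction in total system draw, before any GPU-only accounting.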
 