We've also included some data we have from a factory-overclocked GTX 980 (MSI's Gaming 4G model). We're missing Witcher benches here, but that doesn't really matter - the fact is that the 980 just doesn't scale well at 4K.
There have been theories that AMD's slower DX11 driver may be to blame for the poor showings at lower resolutions. However, if that were the case, we would expect Fury and Fury X to perform at the same level at 1080p and 1440p, as the CPU rather than the GPU hardware would become the bottleneck. This does not happen: the top-tier card is still faster and, once again, overclocking brings us pretty close to overall parity. Curiously, we actually find that Far Cry 4 is faster on Fury than it was when we tested it on Fury X - substantially so, in fact. We must assume a fault of some kind in our test, or an optimisation in the 15.7 driver versus the 15.15 we had to use with the Fury X.
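The logic in that excerpt is easy to check against any set of bench numbers: if the CPU were the limit at low resolutions, the Fury X / Fury ratio should collapse towards 1.0 at 1080p and only open up at 4K. A minimal sketch of that check in Python - the FPS figures below are invented purely for illustration, not taken from the review:

```python
# Hypothetical per-resolution average FPS for Fury and Fury X.
# Substitute real bench data; only the ratio logic matters here.
fps = {
    "1080p": {"fury": 92.0, "fury_x": 101.0},
    "1440p": {"fury": 68.0, "fury_x": 75.0},
    "2160p": {"fury": 38.0, "fury_x": 43.0},
}

for res, scores in fps.items():
    ratio = scores["fury_x"] / scores["fury"]
    # A ratio near 1.0 at low resolutions would point at a CPU
    # bottleneck; a roughly constant ratio across resolutions
    # points back at the GPU itself.
    print(f"{res}: Fury X is {ratio:.2f}x Fury")
```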
Unlikely that you'd have seen Mantle / DX12 test gains if that were the case. So the front end seems to be a bigger culprit than a CPU bottleneck...
Even at 1080p a lot of titles are not going to be 100% CPU bound; there are going to be frames, or parts of frames, that are still GPU bound. This is why the old B3D fillrate graphs were so useful...
Techreport do use B3D tests, and I think hardware.fr are quite comprehensive with their test suite, but the question then would be how to go about distinguishing a CPU bottleneck from a bottleneck within the GPU.
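One way to make that distinction per frame is to compare CPU submit time against GPU render time (the sort of data a GPU timer query exposes). A rough sketch of the classification, using made-up timings purely for illustration:

```python
# Hypothetical per-frame timings in milliseconds: cpu = time the CPU
# spent building/submitting the frame, gpu = time the GPU spent
# rendering it. These numbers are invented for the example.
frames = [
    {"cpu": 9.8, "gpu": 6.1},
    {"cpu": 4.2, "gpu": 11.7},
    {"cpu": 8.9, "gpu": 9.1},
]

def classify(frame, margin=1.1):
    """Label a frame cpu-bound or gpu-bound when one side dominates
    by more than `margin`, otherwise call it balanced."""
    if frame["cpu"] > frame["gpu"] * margin:
        return "cpu-bound"
    if frame["gpu"] > frame["cpu"] * margin:
        return "gpu-bound"
    return "balanced"

for i, f in enumerate(frames):
    print(f"frame {i}: cpu={f['cpu']}ms gpu={f['gpu']}ms -> {classify(f)}")
```

This only says *which* side is the limit, not *where* inside the GPU (front end, fillrate, bandwidth) the stall sits - that's where synthetic tests like the old B3D fillrate graphs came in.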
Scott revised his final conclusion with a new view on the data that excludes Project Cars (I guess for those people who are unable to operate a spreadsheet or think for themselves).
http://techreport.com/blog/28624/reconsidering-the-overall-index-in-our-radeon-r9-fury-review
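The recomputation Scott describes amounts to taking the geometric-mean index with and without the outlier title. A small sketch of that, with invented relative-performance scores just to show how much one game can drag the index around:

```python
from math import prod

# Hypothetical relative-performance scores (Fury vs a baseline card),
# one per game. The Project Cars value is deliberately an outlier;
# none of these figures are from the actual review.
scores = {
    "Battlefield 4": 1.05,
    "Crysis 3": 1.02,
    "Far Cry 4": 0.98,
    "The Witcher 3": 0.95,
    "Project Cars": 0.70,
}

def geomean(values):
    return prod(values) ** (1.0 / len(values))

full = geomean(list(scores.values()))
trimmed = geomean([v for g, v in scores.items() if g != "Project Cars"])
print(f"index with Project Cars:    {full:.3f}")
print(f"index without Project Cars: {trimmed:.3f}")
```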
MSI R9 390X GAMING vs ASUS STRIX R9 Fury Review
http://www.hardocp.com/article/2015/07/13/msi_r9_390x_gaming_vs_asus_strix_fury_review
Speaking with AMD partners today, it’s become clear that we’ll be seeing companies like GIGABYTE, MSI, PowerColor and more offer variants of the R9 Fury. When? Well, that still remains a bit up in the air. While this isn’t really shocking news (we highly doubt AMD would ignore so many partners on its new “Fiji” based card), everything becomes more interesting when you start to talk to companies about why they’ve been ignored.
Yield rates, yield rates and yield rates are constantly mentioned. While neither ASUS nor Sapphire want to tell anyone how many cards they got, it’s safe to say that the quantity isn’t what they would hope for. This is painfully obvious as we see companies unhappy with R9 Fury X quantities. The biggest issue relating to how AMD has launched the Radeon R9 Fury is the alienation of other partners.
High Bandwidth Memory and yield rates of the “Fiji” GPU continue to be the company’s Achilles’ heel. AMD seems to be going down a path that could become very messy very quickly. We’ll be keeping our ear to the ground on this one as we try to figure out what’s going on with the two-partner launch.
I'm wondering if AMD is having a hard time optimizing some parts of its drivers, or has decided not to spend resources trying to match Nvidia's lower D3D11 overhead / better multithreading. Development resources are always limited, and right now maybe it's best to direct most of those resources towards the future, even if it means accepting that Nvidia will have the edge in some games for a while.
Btw, inside the R9 Fury review, I had a small extra fight at 1440p: Fiji XT/Pro vs GM200 400/310 at the same clock. It turns out that Fiji and GM200 have the same performance in half of the games I tested (Batman AO, Crysis 3, Evolve, Far Cry 4, Hitman Absolution and Tomb Raider), while GM200 has a 10-30% advantage in the other half (Anno 2070, Battlefield 4, Dying Light, GRID 2, Splinter Cell and The Witcher 3).
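That split falls out naturally if you bucket per-game GM200/Fiji ratios at matched clocks against a parity threshold. A sketch of the bucketing - the game split follows the post above, but the ratio values themselves are invented for illustration:

```python
# Illustrative GM200/Fiji performance ratios at the same clock.
# Only the grouping of titles comes from the post; the numbers
# are made up for the example.
ratios = {
    "Batman AO": 1.01, "Crysis 3": 0.99, "Evolve": 1.02,
    "Far Cry 4": 1.00, "Hitman Absolution": 0.98, "Tomb Raider": 1.01,
    "Anno 2070": 1.12, "Battlefield 4": 1.18, "Dying Light": 1.25,
    "GRID 2": 1.15, "Splinter Cell": 1.22, "The Witcher 3": 1.30,
}

# Anything within ~5% counts as parity; beyond that, GM200 leads.
parity = sorted(g for g, r in ratios.items() if r < 1.05)
gm200_ahead = sorted(g for g, r in ratios.items() if r >= 1.05)
print("parity:", parity)
print("GM200 ahead:", gm200_ahead)
```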