AMD Radeon R9 Fury X Reviews

@3dilettante said:

The only way I can explain Fiji's existence in its current form is by assuming that AMD got complacent because of Kepler's performance, which was more or less on par with GCN but already touching reticle limits with GK110, and was then massively blindsided by Maxwell. They probably thought that just touching the memory system would be sufficient to soar above whatever Nvidia could come up with, and left the efficiency of GCN where it was.
I'd like to think better of their designers, but the implication is more dire. I suspect they knew they needed to do more, but everything about Fiji's launch makes me think AMD simply can't.

The seams are really starting to show when a change in memory and a competent cooler made by somebody else just about exhaust their resources. Everything besides that is flaky, and given the serious limitations on tweaking a card that sold itself on being tweakable, I wonder if that water cooler is just there to keep the silicon from leaking itself to death, because right now the card can't even come close to the maximum dissipation of that solution.


That's an interesting graph. I think what we're looking at here is not worse color compression, but a different bottleneck coming into play. With color compression enabled, it's hitting some non-memory-bandwidth bottleneck somewhere (one that's very similar to Titan X's), while without compression it gets much better results than Titan X thanks to the extra bandwidth provided by HBM.
One of the other tests had the 290X slightly edging Fiji in fillrate.
Is there a description of what is being performed for that test, the format, or how the result is calculated?
Other tests show almost 64 Gpixels per second in throughput, so I am trying to get a gauge on how that relates to the GB/s figure in the compressed case.
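As a rough sanity check (my own arithmetic, not figures from the review): a 32-bit colour target written at ~64 Gpixels/s implies about 256 GB/s of raw write traffic, or roughly 512 GB/s if every pixel is also read back for blending, which lands right around HBM's quoted bandwidth. A minimal sketch of that conversion, assuming an RGBA8 target:

```python
# Back-of-the-envelope conversion from pixel fill rate to implied memory
# traffic. The 64 Gpixels/s figure is the one quoted above; the bytes-per-pixel
# values are my own assumptions for an RGBA8 render target.

def implied_bandwidth_gbs(gpixels_per_s: float, bytes_per_pixel: float) -> float:
    """Raw colour traffic in GB/s implied by a given fill rate."""
    return gpixels_per_s * bytes_per_pixel

fill = 64.0  # ~64 Gpixels/s from the other fillrate tests
print(implied_bandwidth_gbs(fill, 4))  # write-only RGBA8       -> 256 GB/s
print(implied_bandwidth_gbs(fill, 8))  # read + write (blending) -> 512 GB/s, about HBM peak
```

With compression in play the bytes actually moved per pixel shrink, so a bandwidth-derived figure no longer maps 1:1 onto pixel throughput, which is part of why I'd like to know exactly what that test measures.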
 
@3dilettante said:

That's interesting. I vaguely recall AMD cards benefiting more from DX12, so maybe they can still see more gains.

Anyone see any BF4 Mantle benchmarks out there?

TechReport tried it, but DX11 was faster so they ditched Mantle. This may not bode well for a future where optimizations depend on game developers caring about supporting their games after launch just for new hardware.
 
@eastmen said:

Eh, it all comes down to pricing. This would have been a better deal at 00... at 50 it would be a no-brainer. But pricing seems to be a huge problem across the board from AMD this product cycle; the 390Xs increased in price and were just a rebrand of the 290X. I think if they had just dropped prices 0 across the board on this product cycle they would have had much more favorable reviews.

Hopefully their move to the next node works out better for them, and hopefully we see a true GCN 2.0, as what they have now is three years old.
 
@entity279 said:

If I were buying a card now for that kind of money, Fury would be my choice hands down.

I'm half tempted even now, but it would be disappointing to buy such a bad card solely because it has an awesome cooler (I would be able to fold to death with it!).

Other thoughts:
- a quick glance at HardOCP pointed towards Kyle not getting over the 4GB thing (not that I'm with AMD on this one; their slides recommending 8GB on the 390X are unethical and cynical)
- kudos to Damien for introducing a second sample for the power consumption measurements. Hopefully everyone will take more steps in this direction
- there's no justification imaginable for a typical gamer to pick this card over the 980 TI at current prices
 
@eastmen said:

If I were buying a card now for that kind of money, Fury would be my choice hands down.

I'm half tempted even now, but it would be disappointing to buy such a bad card solely because it has an awesome cooler (I would be able to fold to death with it!).

Other thoughts:
- a quick glance at HardOCP pointed towards Kyle not getting over the 4GB thing (not that I'm with AMD on this one; their slides recommending 8GB on the 390X are unethical and cynical)
- kudos to Damien for introducing a second sample for the power consumption measurements. Hopefully everyone will take more steps in this direction
- there's no justification imaginable for a typical gamer to pick this card over the 980 TI at current prices
If you're looking for G-Sync/FreeSync, a Fury X plus monitor will come out 50ish cheaper than a 980 Ti with a G-Sync monitor.
 
@gongo said:

Dave, if you guys are going to follow Nvidia in setting up Fiji as AMD's halo product... with a fancy name, 'Fury X', priced up against the 980 Ti, but with a later release date and with missing features and hardware, then you had better make sure AMD has some surprises for us...

Fury X is a flop if this card was intended to reinvent your branding at a raised asking price.

What could have saved Fury X... the surprise I mentioned... is the overclocking possibilities. The build seemed set up for that with the CLC. But your GCN architecture still fails to reach high clocks.

Why did your CTO, Joe, flat-out lie about Fiji being 'an overclocker's dream' and claim we can 'overclock it like no tomorrow'?

You guys put too much into HBM and neglected improving GCN... I read TechReport's dissection of Fiji, and I shook my head at how much was left the same.

I have used three consecutive generations of AMD cards because they were good for the asking prices, but today I will get a 980 Ti G1. It craps all over your Fury X, and then some.
 
@AlNets said:

TechReport used DX11 for BF4 on the Fury X since Mantle was showing significantly worse performance.
http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/11

SemiAccurate, however, reported significant performance gains from Mantle (albeit at 1080p) in BF4, Civ: BE and Sniper Elite 3.
http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/

Memory limitation? For some reason I vaguely thought that Mantle used up more.
 
@pjbliverpool said:

Dave, if you guys are going to follow Nvidia in setting up Fiji as AMD's halo product... with a fancy name, 'Fury X', priced up against the 980 Ti, but with a later release date and with missing features and hardware, then you had better make sure AMD has some surprises for us...

Fury X is a flop if this card was intended to reinvent your branding at a raised asking price.

What could have saved Fury X... the surprise I mentioned... is the overclocking possibilities. The build seemed set up for that with the CLC. But your GCN architecture still fails to reach high clocks.

Why did your CTO, Joe, flat-out lie about Fiji being 'an overclocker's dream' and claim we can 'overclock it like no tomorrow'?

You guys put too much into HBM and neglected improving GCN... I read TechReport's dissection of Fiji, and I shook my head at how much was left the same.

I have used three consecutive generations of AMD cards because they were good for the asking prices, but today I will get a 980 Ti G1. It craps all over your Fury X, and then some.

TBH I think this range has been marketed and priced terribly and will sell accordingly (without price drops). Calling the top-end card Fury rather than the 390x has only served to raise expectations to unrealistic levels, which then leaves a sour taste in the mouths of enthusiasts and journalists. Better to have kept the previous naming scheme, priced everything more reasonably and left people with a feeling of getting great value. Oh, and the 8GB on Hawaii was just stupid in light of Fury's 4GB. I think they'd have been better off with something like this:

390x (Fury X) - $599
390 (Fury) - $529
380x with 4GB (390x with 8GB) - $379
380 with 4GB (390 with 8GB) - $299
370 (380) - $199
360 (370) - $129
350 (360) - $99

That would have given people a much better impression of value IMO and thus the increased sales would have more than made up for the lowered prices. To say nothing of improving AMD's overall image.
 
@CarstenS said:

With HBM and CCFC, there was no chance of pricing this competitively in a way that doesn't ruin the company in the long run, I guess. IMHLO, AMD really had high hopes for HBM but soon found that they could not scale GCN linearly enough within the current tech/power limits.
 
@silent_guy said:

They must have spent tons of resources finding just the right settings on a few select games to make it bench 8% faster than a 980 Ti in the reviewers guide. One wonders how much better Fiji could have been if they had used that effort on engineering instead. /s
 
@gongo said:

TBH I think this range has been marketed and priced terribly and will sell accordingly (without price drops). Calling the top-end card Fury rather than the 390x has only served to raise expectations to unrealistic levels, which then leaves a sour taste in the mouths of enthusiasts and journalists. Better to have kept the previous naming scheme, priced everything more reasonably and left people with a feeling of getting great value. Oh, and the 8GB on Hawaii was just stupid in light of Fury's 4GB. I think they'd have been better off with something like this:

390x (Fury X) - $599
390 (Fury) - $529
380x with 4GB (390x with 8GB) - $379
380 with 4GB (390 with 8GB) - $299
370 (380) - $199
360 (370) - $129
350 (360) - $99

That would have given people a much better impression of value IMO and thus the increased sales would have more than made up for the lowered prices. To say nothing of improving AMD's overall image.

I totally agree with this structure. So much better... so simple. But if their CTO can get confused between an overclocker's dream and dog poo, I am not surprised AMD's thinking was not clear enough to formulate this picture.
 
@RobertR1 said:

They must have spent tons of resources finding just the right settings on a few select games to make it bench 8% faster than a 980 Ti in the reviewers guide. One wonders how much better Fiji could have been if they had used that effort on engineering instead. /s

The problem with sending out these review guides is that they look really bad if your product doesn't pan out in regular gameplay once it's released.

First impressions mean the world. Even if drivers get better over time, you've already gained a reputation for being underpowered and overpriced compared to the competition.
 
@Dave Baumann said:

Memory limitation? For some reason I vaguely thought that Mantle used up more.
Mantle itself, no. Mantle probably has a lower overall footprint.

Like other lower-level APIs, Mantle gives more control to the developer, who has more ability to allocate buffers in specific places, whereas with higher-level APIs the driver may make those choices outside of the developer's control.
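To make that concrete with a toy model (not Mantle's actual API, which was never publicly documented; every name below is made up for illustration):

```python
# Toy contrast between the two models described above. With a high-level API
# the driver picks where an allocation lives from a usage hint; with a
# Mantle-style explicit API the application names the memory pool itself, and
# the resulting footprint is its own responsibility.

POOLS = ("device_local", "host_visible")  # roughly: VRAM vs system memory

def driver_managed_alloc(size_mb: int, usage_hint: str) -> str:
    # High-level model: the app only supplies a hint; the driver decides the
    # pool and may silently relocate the allocation under memory pressure.
    return "device_local" if usage_hint in ("render_target", "texture") else "host_visible"

def explicit_alloc(size_mb: int, pool: str) -> str:
    # Low-level model: the app states the pool outright; nothing second-guesses it.
    assert pool in POOLS
    return pool

print(driver_managed_alloc(256, "texture"))   # driver's choice
print(explicit_alloc(256, "device_local"))    # developer's choice
```

Which of the two ends up consuming more VRAM then depends entirely on what the developer chooses to keep resident, which is presumably where the BF4 numbers come from.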
 
@Alexko said:

TechReport used DX11 for BF4 on the Fury X since Mantle was showing significantly worse performance.
http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/11

SemiAccurate, however, reported significant performance gains from Mantle (albeit at 1080p) in BF4, Civ: BE and Sniper Elite 3.
http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/

This might have to do with memory, since the Mantle codepath seems to be a bit more memory-hungry than the D3D11 one. Presumably this is no problem at 1080p or even 1440p, but 4GB might not be enough at 4K (the Tech Report benched at 4K).

My guess is that DICE used Mantle to limit what they viewed as unnecessary transfers to and from VRAM by leaving as many assets as possible in it for quicker access, with higher memory consumption as a side-effect. At common resolutions with common memory capacities that's a good trade-off, but perhaps not so much at 4K with just 4GB. I suppose this sort of thing is to be expected with low-level APIs: you have more control, but you also have a greater responsibility to make sure that when you optimize something, it really is a net gain in all cases, or else you have to enable/disable it according to the situation; you can't rely on the driver to do this for you anymore.

Developers are used to low-level APIs on consoles where this is not a problem, so there may be an adjustment period.

This is all just speculation on my part, but after I started writing this, I read Dave's post above, and my speculation is consistent with it.
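A crude way to see the squeeze (every number below is an illustrative guess on my part, not a measurement from BF4):

```python
# Rough VRAM budget model for a 4GB card: a "keep assets resident" strategy
# trades memory for fewer transfers, and the room for it shrinks once render
# targets grow with resolution. All figures are guesses for illustration only.

VRAM_MB = 4096  # Fiji's pool

scenarios = [
    # (label, render-target MB, resident asset MB)
    ("1080p, assets kept resident", 400, 3000),
    ("4K,    assets kept resident", 1200, 3000),
    ("4K,    assets streamed",      1200, 1800),  # smaller working set, more copies
]
for label, rt_mb, asset_mb in scenarios:
    headroom = VRAM_MB - rt_mb - asset_mb
    flag = "  <-- over budget" if headroom < 0 else ""
    print(f"{label}: {headroom:+5d} MB headroom{flag}")
```

If something like the middle case is what the Mantle path runs into at 4K, falling back to D3D11's more conservative residency behaviour would explain the Tech Report result.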
 
@iroboto said:

That's interesting. I vaguely recall AMD cards benefiting more from DX12, so maybe they can still see more gains.

Anyone see any BF4 Mantle benchmarks out there?
I think ideally we wait for SW: Battlefront or Doom 4. BF4's Mantle path fits the description, but I'm unsure whether they took advantage of everything Mantle had to offer (given how long ago it was released). At least with SW:BF and Doom 4 we should be seeing a ton of compute, heavier async compute use, and possibly some async copy for streaming.
 
Were there details given about Doom 4?
@eastmen said:

TBH I think this range has been marketed and priced terribly and will sell accordingly (without price drops). Calling the top-end card Fury rather than the 390x has only served to raise expectations to unrealistic levels, which then leaves a sour taste in the mouths of enthusiasts and journalists. Better to have kept the previous naming scheme, priced everything more reasonably and left people with a feeling of getting great value. Oh, and the 8GB on Hawaii was just stupid in light of Fury's 4GB. I think they'd have been better off with something like this:

390x (Fury X) - $599
390 (Fury) - $529
380x with 4GB (390x with 8GB) - $379
380 with 4GB (390 with 8GB) - $299
370 (380) - $199
360 (370) - $129
350 (360) - $99

That would have given people a much better impression of value IMO and thus the increased sales would have more than made up for the lowered prices. To say nothing of improving AMD's overall image.

I still think the Hawaii rebrands would be too much. Hawaii X 8GB should have been $330 and the Hawaii 4GB $250.

The way it is now is way too expensive.
 