AMD Radeon R9 Fury reviews

Mantle question: Did any titles use it for anything other than speedups? I'm thinking any sort of graphical enhancement.
Civilization: Beyond Earth used it for a cleaner take on split frame rendering for multi-GPU setups. It doesn't really help the framerates, but it keeps the latency down.
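For anyone less familiar with the distinction, here is a minimal, purely illustrative sketch (not the game's actual code) of why SFR helps latency rather than frame rate, compared with the usual AFR approach:

// Illustrative only: how SFR and AFR typically split work across two GPUs.
// No real API here; the point is where the latency difference comes from.
#include <cstdio>

// Split-frame rendering: both GPUs work on the *same* frame (e.g. top and
// bottom halves), so the time from input to display stays about one frame.
void renderFrameSFR(int frame, int height) {
    std::printf("SFR frame %d: GPU0 draws rows [0,%d), GPU1 draws rows [%d,%d)\n",
                frame, height / 2, height / 2, height);
}

// Alternate-frame rendering: GPU (N % 2) renders the whole of frame N, with
// the next frame started on the other GPU before this one finishes.
// Throughput roughly doubles, but each displayed frame was started longer
// ago, so input-to-display latency grows.
void renderFrameAFR(int frame) {
    std::printf("AFR frame %d: GPU%d draws the whole frame\n", frame, frame % 2);
}

int main() {
    for (int f = 0; f < 3; ++f) {
        renderFrameSFR(f, 2160);
        renderFrameAFR(f);
    }
    return 0;
}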

Where is that line again between the driver domain and the developer's code?
There won't be any more of a line with DX12/Vulkan than there was with DX11/OpenGL. Developers are still going to find new and exciting ways to hang themselves, and driver teams are still going to work on game specific optimizations since it's in their best interest to deliver the best possible performance for their customers.
 
The real problem with this type of technology, as with TrueAudio, is that you need a lot of developers to adopt it, or it will take forever to get off the ground.
Say it had been part of DirectX or the PS4 API, developers had been given a year to start using it, and it had been sold as a big feature of the new console generation's sound output: adoption would have exploded. Then Nvidia would have supported it, and Intel would have built onboard audio that uses it.

Are there any games that support it on consoles right now?
I don't believe it's defined the way TrueAudio is in the PC space. There are specialized resources that share many hardware traits with the PC blocks, and the console platforms utilize them for their own needs.
However they were tweaked, how the IP is exposed, and all the use cases are not as openly discussed as TrueAudio has been.

Well then, enlighten us! ;)

Relative to DX11, Mantle exposed the existence of the specialized queues like DMA and compute. The actual engines for them would have been abstracted or not utilized. Asynchronous compute apparently runs into the hard ordering limits of DX11.
Indirect operations and GPU-side occlusion queries were documented in the Mantle guide.
DX12 changes things, but it reveals them to the world at large at a chronological disadvantage.
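For a concrete picture of what "exposing the queues" looks like, here is a minimal sketch in a current explicit API (Vulkan, not Mantle itself; the function and struct names are standard Vulkan, but the selection policy is just an assumption and error handling is omitted). It picks out a dedicated compute family (the ACEs) and a dedicated transfer family (the DMA engines) alongside the graphics queue:

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

struct QueuePick {
    uint32_t graphics = UINT32_MAX;
    uint32_t compute  = UINT32_MAX;
    uint32_t transfer = UINT32_MAX;
};

QueuePick pickQueueFamilies(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    QueuePick pick;
    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags f = families[i].queueFlags;
        if ((f & VK_QUEUE_GRAPHICS_BIT) && pick.graphics == UINT32_MAX) {
            pick.graphics = i;                          // main 3D queue
        } else if ((f & VK_QUEUE_COMPUTE_BIT) && !(f & VK_QUEUE_GRAPHICS_BIT)
                   && pick.compute == UINT32_MAX) {
            pick.compute = i;                           // async compute queue
        } else if ((f & VK_QUEUE_TRANSFER_BIT) && !(f & VK_QUEUE_GRAPHICS_BIT)
                   && !(f & VK_QUEUE_COMPUTE_BIT) && pick.transfer == UINT32_MAX) {
            pick.transfer = i;                          // DMA/copy queue
        }
    }
    return pick;  // submissions to each family can then overlap on the GPU
}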
 
Certainly more on the app side than in the past.


Yeah, it did (at least, in the context of DX).


But the drawback of that exposure, of giving more control to the developer, is that more things can go wrong, and drivers might have to be tweaked more than before. It all depends on the API's programming rules.

This has always been the case with low-level programming, and it is the reason high-level programming was introduced. I'm not saying it's a bad thing; I think it's good that the programmer has this kind of control. But with two different IHVs, programmers might develop an affinity for a programming style that suits one of them better, which is actually already the case today where shaders are concerned.
 
Indeed. Watching some of the AMD fans celebrate it as removing Nvidia's advantage in driver optimizations, gained through its closer developer relations, is very amusing.
 
Apparently the disabled shaders can be partially unlocked on Fury cards. Complete unlocking is a no-go since some of the shaders are defective and artifacts result.

http://cdn.overclock.net/a/a6/500x1000px-LL-a68f6bee_updated.jpeg

From the thread creator himself,

OK, the BIOS unlock works. In Windows 7, at least with the 15.7.1 drivers, there were no troubles with the signature.
Fancy pic: everything but one failed core is enabled. Still, zero performance rise compared with 3840 shaders, because SE symmetry was lost.
http://cdn.overclock.net/2/21/500x1000px-LL-213b8a8b_fury_pro8.gif
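For context on the "SE symmetry" remark: Fiji has 4 shader engines (SEs) of 16 CUs each, at 64 shaders per CU. A quick back-of-envelope check (just arithmetic; the performance conclusion is the poster's) shows why 4032 shaders can't be split evenly:

#include <cstdio>

int main() {
    const int shaders_per_cu = 64;
    const int shader_engines = 4;
    // Fury stock, symmetric unlock, "all but one CU" unlock, full Fiji
    const int configs[] = {3584, 3840, 4032, 4096};

    for (int shaders : configs) {
        int cus = shaders / shaders_per_cu;
        std::printf("%4d shaders = %2d CUs -> %2d per SE%s\n",
                    shaders, cus, cus / shader_engines,
                    (cus % shader_engines) ? " + leftovers (asymmetric)"
                                           : " (symmetric)");
    }
    return 0;
}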
 
What is the life expectancy of these being highly overvolted and overclocked?

Voltage was bumped from 1.35 V to 1.55 V for the HBM, which, considering the LN2 cooling, shouldn't be a problem even after several runs. I would be more concerned with the effects of cooling the stacked dies and heating them up again. It's a bit of uncharted territory, with the first signs being very promising for extreme cooling of HBM. I was worried the memory would just cold-bug below certain temperatures.

Wonder what you could do with that 1TB/s memory if not for other architectural bottlenecks in Fiji!
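As a back-of-envelope check on where that figure comes from (assuming Fiji's stock 4096-bit, 500 MHz, double-data-rate HBM interface), a doubled memory clock is exactly what lands at ~1 TB/s:

#include <cstdio>

int main() {
    const double bus_bits      = 4096.0;  // Fiji HBM interface width
    const double clock_hz      = 500e6;   // stock HBM clock
    const double transfers_clk = 2.0;     // double data rate

    const double stock_gbs = bus_bits / 8.0 * clock_hz * transfers_clk / 1e9;
    const double oc_gbs    = stock_gbs * 2.0;  // memory clock doubled

    std::printf("stock: %.0f GB/s, doubled clock: %.0f GB/s\n",
                stock_gbs, oc_gbs);  // 512 GB/s -> 1024 GB/s
    return 0;
}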
 
An Asus R9 Fury Strix (unlocked to Fury X) achieves a 40% (45% non-unlocked) core and 100% memory overclock, using LN2 extreme cooling and physical voltage mods.
That's a 48% higher 3DMark Fire Strike Extreme result.
http://forum.hwbot.org/showthread.php?t=142320
Interesting, but a 40-45% core bump on LN2 is still pretty disappointing IMO, especially when GM204/GM200 approach a 100% core OC under the same cooling (BTW: that 3DMark 10K score is almost as high as the best GTX 980 result). Granted, the GPU+HBM package in all likelihood makes overclocking trickier, but the LN2 results seem of more academic interest, since I doubt the Fury/Fury X will be grabbing any world-record headline marketing gold. I'd assume the OC-friendly 980 Ti results (Kingpin, Lightning, Matrix Platinum, HOF LN2) will continue to rise as competition heats up.
 