GDDR5 implemented a CRC check to allow it to reach high bus speeds while maintaining an acceptable error rate. HBM has dropped that feature, since it has also dropped the very high bus speeds. lol!
Will HBM be like GDDR5 with its error detection and management, where if you overclock the memory too far, the graphics still run but you get worse performance?
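The GDDR5 behaviour being discussed can be sketched in a few lines. GDDR5's EDC is commonly described as a CRC-8 using the ATM HEC polynomial (x^8 + x^2 + x + 1); on a mismatch the controller retries the burst, so errors cost throughput instead of corrupting data. This is a minimal illustrative sketch, not the actual hardware algorithm; the retry model is an assumption:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over a data burst (0x07 = x^8 + x^2 + x + 1,
    the ATM HEC polynomial GDDR5's EDC is usually described with)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

def effective_throughput(raw_rate_gbs: float, error_prob: float) -> float:
    """Assumed retry model: each errored burst is resent until clean,
    so only a (1 - p) fraction of bursts carry new data."""
    return raw_rate_gbs * (1.0 - error_prob)
```

This is why an over-overclocked GDDR5 card keeps rendering but gets slower: rising `error_prob` eats into effective bandwidth instead of producing artifacts.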
Oops. I meant: 2GB vs 3GB. GTX680 comes with 2GB as well. Maybe you're thinking about the older GTX580, but it wasn't meant to compete with Tahiti anyway.
Everybody who expected FuryX to perform worse at 4K with today's games didn't pay attention to benchmarks for other GPUs. Even with the GTX970, it wasn't trivial to come up with convincing examples where having 512MB less made a major difference.
As long as the working set fits in memory, it doesn't matter how many more GB are on board.
The argument against the FuryX wrt memory is the same as the one that was made against the GTX680 with its 1.5GB vs the 2GB of the 7970. Or against the 3GB of the 780Ti vs the 4GB of the R9 290X.
It may be a problem in the future, but right now, you have to look hard to find any issues with it. The only difference is that the 4GB is a structural limitation of HBM1, whereas it was purely a cost issue for the GTX680/780Ti in the past.
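The "structural limitation" is just stack arithmetic: first-generation HBM stacks four 2 Gb DRAM dies per stack (1 GB each), and Fiji's interposer carries four stacks. A quick sanity check:

```python
def hbm1_capacity_gb(stacks: int = 4, dies_per_stack: int = 4,
                     die_gbit: int = 2) -> float:
    """Total capacity in GB: dies x 2 Gb each, 8 Gb per GB.
    Defaults match Fiji: four 4-high stacks of 2 Gb dies."""
    return stacks * dies_per_stack * die_gbit / 8

print(hbm1_capacity_gb())  # 4.0 -> the Fury X's 4 GB ceiling
```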
I noticed that in MSI Afterburner's settings menu, after I enabled "extend official overclocking limits" and rebooted, I was able to OC the memory.
How did this lesser-known site manage to overclock the HBM while the rest couldn't? A GPU-Z error?
The core clock was also overclocked higher, as was the CPU clock... so they are not a like-for-like comparison?
For this test, we briefly checked the fluidity of each game. First, as we have already observed several times with dual-GPU setups at 4K, the Radeons generally have a small subjective advantage over the GeForces. We assume that either the SLI link is reaching its limits, or that Nvidia's frame-pacing algorithms are not fully functional at very high resolutions.
However, we did not encounter any real fluidity problems in any game on the GeForce GTX 980 Ti SLI, unlike the Radeon R9 Fury X CFX, which suffered in 3 games:
- Dying Light max: very heavy stutter, unplayable
- Evolve medium: occasional stutter, mainly early in the game
- Evolve very high: occasional stutter, mainly early in the game
- The Witcher 3 medium: regular small stutters
- The Witcher 3 max without HairWorks: regular small stutters
- The Witcher 3 max with HairWorks: big stutters, very unpleasant
Note that in all these games the Radeon R9 295X2 suffers from the same problems, exacerbated by its lower level of performance.
...
Then, with Evolve and especially Dying Light, we could observe realistic and playable situations where it seems obvious that 4 GB of memory per GPU is insufficient. The only way to work around the problem is to reduce the texture detail level, and it remains to be seen whether AMD can improve the behavior of the Fury X with future drivers.
For example, we can already see that the stuttering in Evolve subsides after a short time in-game, a sign of a gradual reorganization of the data remaining in memory. That is not the case in Dying Light, which keeps adding new textures to an already saturated memory and causes big stutters. AMD said that its priority is to work with developers to make their games' memory usage more efficient. An ideal approach, but one that has its limits if not everyone is willing to make the effort.
Far Cry 4 now runs smooth as butter on the 295X2 in TR's recent review of the 980 Ti, while it ran pretty badly at the Titan X's launch. AMD won't get all developers to play ball like that, and of course it doesn't matter much if a game is fixed months after everybody has had their fill of it.
Another HBM overclock and a decent performance increase at 1080p/Win7 in a Kombustor test: from 49 to 57fps just from HBM overclocking. 105MHz on the core only gives a 1fps increase in comparison.
http://www.overclock.net/t/1547314/...nano-x-x2-fiji-owners-club/1720#post_24106947
The newest version of GPU-Z also shows the correct bandwidth value.
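The bandwidth figure GPU-Z reports is just bus width x clock x data rate. A sketch using the Fury X's stock values (4096-bit bus, 500 MHz HBM, double data rate); the 545 MHz overclock shown is a hypothetical number for illustration:

```python
def peak_bandwidth_gbs(bus_bits: int, clock_mhz: float,
                       data_rate: int = 2) -> float:
    """Peak memory bandwidth in GB/s (decimal GB):
    bytes per transfer x transfers per clock x clocks per second."""
    return bus_bits / 8 * data_rate * clock_mhz * 1e6 / 1e9

print(peak_bandwidth_gbs(4096, 500))  # 512.0 GB/s at stock
print(peak_bandwidth_gbs(4096, 545))  # ~558 GB/s with a hypothetical 545 MHz OC
```

A 9% HBM clock bump is a 9% bandwidth bump, which is why the bandwidth-bound Kombustor numbers above scale with the memory overclock rather than the core overclock.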
Conclusion: No Difference
As shown in the charts above, all differences are within margin of error. There is effectively no difference from one driver revision to the next when considering only the existing Fury X options. For these purposes, reviews which were conducted using press drivers – assuming no other test error – can be considered effectively as accurate as reviews conducted using official launch drivers.
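The "within margin of error" call above can be made mechanical. A minimal sketch with made-up fps runs, treating two result sets as indistinguishable when the gap between their means is under k pooled run-to-run standard deviations (k=2 is an arbitrary choice here, not the reviewers' method):

```python
from statistics import mean, stdev

def within_margin(run_a, run_b, k=2.0):
    """True when the mean-fps gap is smaller than k pooled
    run-to-run standard deviations."""
    pooled = ((stdev(run_a) ** 2 + stdev(run_b) ** 2) / 2) ** 0.5
    return abs(mean(run_a) - mean(run_b)) <= k * pooled

print(within_margin([59.8, 60.3, 60.1], [60.0, 60.4, 59.9]))  # True: noise
print(within_margin([59.8, 60.3, 60.1], [66.0, 66.4, 65.9]))  # False: real gap
```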
I'm not (yet) willing to accept that: if the memory timings (in cycles) don't change with increased clocks, the gain could be due to reduced latency as well. I find it amazing that even with HBM + colour compression, this GPU is STILL bandwidth limited!
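The latency point follows from simple arithmetic: absolute latency is cycles divided by clock, so a timing that stays constant in cycles gets faster in nanoseconds as the clock rises. A sketch with a hypothetical 14-cycle timing (the cycle count is made up for illustration):

```python
def latency_ns(cycles: int, clock_mhz: float) -> float:
    """Absolute latency in nanoseconds: cycles x (1000 / MHz)."""
    return cycles * 1e3 / clock_mhz

print(latency_ns(14, 500))  # 28.0 ns at a 500 MHz stock clock
print(latency_ns(14, 560))  # 25.0 ns at +12% clock, same cycle count
```

So an HBM overclock could be helping through lower absolute latency, not just higher peak bandwidth, which is exactly the ambiguity the post is raising.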