GamersNexus outlined the "tricks" used by AMD in benchmarking Ryzen before the launch:
AMD inflated their numbers by doing a few things:
In the Sniper Elite demo, AMD frequently looked at the skybox when reloading, and often kept more of the skybox in the frustum than on the side-by-side Intel processor. A skybox has no geometry, which is what loads a CPU with draw calls, and so it’ll inflate the framerate by nature of testing with chaotically conducted methodology.

As for the Battlefield 1 benchmarks, AMD also conducted its testing using chaotic methods wherein the AMD CPU would zoom / look at different intervals than the Intel CPU, making it effectively impossible to compare the two head-to-head.
And, most importantly, all of these demos were run at 4K resolution. That creates a GPU bottleneck, meaning we are no longer observing true CPU performance. The analog would be to benchmark all GPUs at 720p, then declare they are equal (by way of tester-created CPU bottlenecks). There’s an argument to be made that low-end performance doesn’t matter if you’re stuck on the GPU, but that’s a bad argument: You don’t buy a worse-performing product for more money, especially when GPU upgrades will eventually out those limitations as bottlenecks external to the CPU vanish.
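To make the GPU-bottleneck point concrete, here is a toy model with made-up numbers (an illustration of the argument, not GN's data or methodology): each frame takes roughly as long as the slower of the CPU and GPU stages, so once the GPU dominates at 4K, a fast and a slow CPU report the same framerate.

```python
# Toy model: frame time is dominated by whichever stage (CPU or GPU) is slower.
def observed_fps(cpu_ms, gpu_ms):
    """Approximate FPS when CPU and GPU work largely in parallel per frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical frame times: a faster and a slower CPU, GPU cost rising with resolution.
fast_cpu_ms, slow_cpu_ms = 6.0, 10.0   # CPU work per frame (game logic, draw call submission)
gpu_1080p_ms, gpu_4k_ms = 7.0, 18.0    # GPU work per frame at 1080p vs 4K

print("1080p:", observed_fps(fast_cpu_ms, gpu_1080p_ms), "vs", observed_fps(slow_cpu_ms, gpu_1080p_ms))
# ~142.9 fps vs 100.0 fps -- the CPU difference is visible
print("4K:  ", observed_fps(fast_cpu_ms, gpu_4k_ms), "vs", observed_fps(slow_cpu_ms, gpu_4k_ms))
# ~55.6 fps vs ~55.6 fps -- both CPUs look identical because the GPU is the bottleneck
```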
As for Blender benchmarking, AMD’s demonstrated Blender benchmarks used different settings than what we would recommend. The values were deltas, so the presentation of data is sort of OK, but we prefer a more real-world render. In its Blender testing, AMD executes renders using just 150 samples per pixel, or what we consider to be “preview” quality (GN employs a 3D animator), and AMD runs slightly unoptimized 32x32 tile sizes, rendering out at 800x800. In our benchmark, we render using 400 samples per pixel for release candidate quality, 16x16 tiles, which is much faster for CPU rendering, and a 4K resolution. This means that our benchmarks are not comparable to AMD’s, but they are comparable against all the other CPUs we’ve tested. We also believe firmly that our benchmarks are a better representation of the real world. AMD still holds a lead in price-to-performance in our Blender benchmark, even when considering Intel’s significant overclocking capabilities (which do put the 6900K ahead, but don’t change its price).
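For reference, the two sets of render settings described above map onto Blender's Python API roughly like this (a sketch against the 2.7x-era bpy API; the exact scene contents and the 3840x2160 reading of "a 4K resolution" are assumptions, and GN did not publish a script):

```python
import bpy

# Apply GN-style Cycles settings to the currently open scene (scene contents assumed).
scene = bpy.context.scene
scene.render.engine = 'CYCLES'

scene.cycles.samples = 400          # GN: 400 samples/px ("release candidate"); AMD used 150 ("preview")
scene.render.tile_x = 16            # GN: 16x16 tiles, much faster for CPU rendering; AMD used 32x32
scene.render.tile_y = 16
scene.render.resolution_x = 3840    # GN renders at 4K; AMD rendered out at 800x800
scene.render.resolution_y = 2160
scene.render.resolution_percentage = 100

bpy.ops.render.render(write_still=True)  # writes the result to scene.render.filepath
```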
As for Cinebench, AMD ran those tests with the 6900K platform using memory in dual-channel, rather than its full quad-channel capabilities. That’s not to say that the results would drastically change, but it’s also not representative of how anyone would use an X99 platform.
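For context on the dual- versus quad-channel point, the theoretical peak bandwidth gap is simple arithmetic (DDR4-2400 is assumed here purely for illustration; GN does not state the memory speed AMD used):

```python
# Peak DRAM bandwidth: channels x 8 bytes per 64-bit transfer x transfer rate (MT/s).
def peak_bandwidth_gb_s(channels, mt_per_s):
    return channels * 8 * mt_per_s / 1000

print("Dual-channel DDR4-2400:", peak_bandwidth_gb_s(2, 2400), "GB/s")  # 38.4 GB/s
print("Quad-channel DDR4-2400:", peak_bandwidth_gb_s(4, 2400), "GB/s")  # 76.8 GB/s
```

Whether that extra headroom would actually move a Cinebench score is a separate question, which is GN's point: the results likely wouldn't change drastically, but dual-channel isn't how anyone would run an X99 platform.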
Conclusion:
Regardless, Cinebench isn’t everything, and neither is core count. As software developers move to support more threads, if they ever do, perhaps AMD will pick up some steam – but the 1800X is not a good buy for gaming in today’s market, and is arguable in production workloads where the GPU is faster. Our Premiere benchmarks complete approximately 3x faster when pushed to a GPU, even when compared against the $1000 Intel 6900K. If you’re doing something truly software accelerated and cannot push to the GPU, then AMD is better at the price versus its Intel competition. AMD has done well with its 1800X strictly in this regard. You’ll just have to determine if you ever use software rendering, considering the workhorse that a modern GPU is when OpenCL/CUDA are present. If you know specific instances where CPU acceleration is beneficial to your workflow or pipeline, consider the 1800X.
For gaming, it’s a hard pass. We absolutely do not recommend the 1800X for gaming-focused users or builds, given i5-level performance at two times the price. An R7 1700 might make more sense, and we’ll soon be testing that.
http://www.gamersnexus.net/hwreview...review-premiere-blender-fps-benchmarks/page-8