First of all, they didn't reveal anything. They just rehashed what AnandTech and I did back in July for the Note 3.
their revelation that only benchmarks and no other apps have access to that level of performance this time
If that is what you understood from that article, then it just proves my point about its worthlessness. The whole "otherwise unreachable performance levels" argument is null and void due to the very nature of the mechanism: it doesn't expose any kind of higher performance level. Period. What Samsung claimed back in July was related to the GPU frequencies of the 5410 on the i9500. The two cases have literally nothing in common in how they work or in what their effect is.
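To make that concrete: the frequency ceiling an app can reach is identical with or without the boost, and anyone can verify this from the standard Linux cpufreq sysfs nodes. A minimal sketch (the paths are the stock cpufreq interface, nothing vendor-specific):

```python
# Minimal sketch: check that the benchmark "boost" exposes no hidden
# frequency state. Uses only the standard Linux cpufreq sysfs nodes.
def read_khz(path):
    with open(path) as f:
        return int(f.read().strip())

base = "/sys/devices/system/cpu/cpu0/cpufreq/"
hw_max     = read_khz(base + "cpuinfo_max_freq")   # maximum the driver offers
policy_max = read_khz(base + "scaling_max_freq")   # ceiling any app may reach
current    = read_khz(base + "scaling_cur_freq")   # frequency right now

# If an "otherwise unreachable performance level" existed, hw_max would have
# to sit above policy_max for normal apps. It doesn't: the boost merely pins
# the *minimum* frequency to this same ceiling, it never raises the ceiling.
print(hw_max, policy_max, current)
```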
we can confidently say that Samsung appears to be artificially boosting the US Note 3's benchmark scores with a special, high-power CPU mode that kicks in when the device runs a large number of popular benchmarking apps. Samsung did something similar with the international Galaxy S 4's GPU, but this is the first time we've seen the boost on a US device.
The above quote demonstrates that the writer fundamentally did not understand the technicalities of the two mechanisms at hand here.
Between the inclusion of that and the suspicious "frame rate adjustment" string, it's clear that Samsung is doing something to the GPU as well, though those clock speeds are more difficult to access than the CPU speeds (a method used by AnandTech on the international S 4 no longer works on the Note 3).
The bolded part is pure bullshit, as there has been zero evidence of this up to today, yet they're able to make such a statement. They further prove the writer(s)' technical ineptitude in that last sentence (of course a class path for PowerVR GPUs won't work on the Adreno, you have to use a different one /facepalm), and I've already stated that the "suspicious" refresh rate adjustment (which actually never happens) has been part of Samsung firmwares for several generations now, for a perfectly valid use-case.
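To spell that last point out: the GPU clock is exposed through vendor-specific sysfs nodes, so a path that exists on one GPU simply isn't there on the other. A rough sketch (the kgsl node is the commonly used Adreno one; the PowerVR node below is a placeholder assumption, as the exact path varies by kernel):

```python
# Illustrative sketch: GPU clock sysfs nodes differ per GPU vendor, so a
# PowerVR path naturally returns nothing on an Adreno device and vice versa.
import os

CANDIDATES = [
    "/sys/class/kgsl/kgsl-3d0/gpuclk",              # Adreno (Qualcomm kgsl driver)
    "/sys/module/pvrsrvkm/parameters/sgx_gpu_clk",  # PowerVR (hypothetical path, kernel-dependent)
]

for path in CANDIDATES:
    if os.path.exists(path):
        with open(path) as f:
            print(path, "->", f.read().strip())
    else:
        print(path, "-> not present on this device")
```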
and Ars measuring the difference in performance when the app detect has been circumvented.
we're seeing artificial benchmark increases across the board of about 20 percent;
This has already been proven to be bullshit by testing (see AnandTech). Secondly, here they claim the increase happens "across the board", while the very title of the article says the (much more correct) "up to 20%" [again, on GeekBench3 _only_].
Linpack showed a boosted variance of about 50 percent.
They use Linpack, a 3-year-old benchmark whose run-time is nowadays <250ms, as a tool to measure variance? If you don't get my point, here's a sequence of scores I just randomly benched: 569 412 443 366 581. Hey look, I got a ~60% max variance just from that.
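For the curious, that "max variance" is nothing more than the spread between the best and worst run. A trivial sketch with the numbers above:

```python
# Trivial sketch: the "max variance" of a short, noisy benchmark is just
# the spread between the best and the worst run.
scores = [569, 412, 443, 366, 581]
spread = (max(scores) - min(scores)) / min(scores)
print(f"{spread:.0%}")  # -> 59%, i.e. ~60% "variance" from pure run-to-run noise
```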
And last but not least, their whole editorial is based on a comparison to the G2, which they disastrously fail to analyse properly, never detecting that it has the very same "cheats" they so proudly denounce on the Note 3; that failure is actually what prompted AnandTech to post the follow-up article to correct the whole perspective on the story and post the proper facts.
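For context on what those "cheats" actually are: in both vendors' firmwares the pattern boils down to matching the foreground package name against a hardcoded whitelist and pinning frequencies. A schematic sketch (the package names are real benchmark apps; the hooks are illustrative stubs, not either vendor's actual code):

```python
# Schematic sketch of the pattern both vendors use: match the foreground
# package name against a hardcoded whitelist and pin CPU frequencies.
BENCHMARK_PACKAGES = {
    "com.aurorasoftworks.quadrant.ui.standard",
    "com.antutu.ABenchMark",
    "com.glbenchmark.glbenchmark27",
}

def pin_min_frequency_to_max():
    print("boost: scaling_min_freq := scaling_max_freq, all cores online")

def restore_defaults():
    print("boost off: default governor behaviour restored")

def on_foreground_app_changed(package_name):
    if package_name in BENCHMARK_PACKAGES:
        pin_min_frequency_to_max()
    else:
        restore_defaults()

on_foreground_app_changed("com.antutu.ABenchMark")  # -> boost kicks in
```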
The article fails at everything it attempts: it fails on the technical parts, it fails to represent the real effect this has on benchmarking due to its asinine methodology, and it fails in the journalistic/editorial conclusion it tried to draw.
So please let's drop that ArsTechnica article as any kind of valid reference point.
And I'm sorry for ranting on about this in here again; let's get back on topic.