Samsung Galaxy S series rumours.....

As with the LTE Advanced Galaxy S4s, some sites have seen scores as high as 26 and 68 fps in GfxBench's T-Rex and Egypt HD, respectively, so that would seem to peg it as the MSM8974AB.

I have seen a site report a much lower score with it, though, when they tested it at the launch event. That might have been thermal throttling, however.
 

AnandTech is stating that the SoC in the Snapdragon Galaxy Note 3 is not an MSM8974AB with a 550 MHz GPU, but a regular MSM8974 with a 450 MHz GPU. This is eyebrow-raising at the very least, given its performance advantage over other Snapdragon 800 devices in GLBenchmark.

One possibility is drivers. However, even though the Nexus 5 result is on newer v@53.0 drivers than the Snapdragon 800-equipped GT-I9506 (which ran Android 4.2.2 and v@44.0 drivers when tested), it still scored considerably lower: 23 vs 26 FPS in T-Rex (offscreen). So while the Galaxy Note 3 is probably on the latest drivers, that alone cannot account for the performance advantage.

Unlike many reviewers, Brian Klug runs the entire GLBenchmark test suite in one go rather than cherry-picking one or two tests. Because of this, AnandTech's GLB FPS figures are often slightly lower than in other reviews, since running all the tests serially is more likely to trigger thermal throttling; living in Arizona can't help either! I was therefore surprised that the AnandTech results were as high as the other Samsung results posted on GLB's website. It is almost as if Samsung has dramatically raised the GPU's thermal limits just for this benchmark.


http://www.anandtech.com/show/7376/samsung-galaxy-note-3-review/4
 

http://arstechnica.com/gadgets/2013...rking-adjustments-inflate-scores-by-up-to-20/

They call it out pretty plainly.

It gets scores up to 20% better (and Linpack 50% better) by raising clocks and easing thermal limits. They even found the file that detects the various benchmarks, and bypassed it to measure the differences.
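For illustration, the sort of name-based detection Ars describes bypassing could look something like the sketch below. Everything here, the package names, the clock figures, the thermal limits, is a hypothetical stand-in, not Samsung's actual code or numbers:

```python
# Hypothetical sketch of name-based benchmark boosting: a device-side
# policy checks the foreground app's package name against a whitelist
# and, on a match, raises the GPU clock cap and thermal limit.
# All names and numbers below are illustrative placeholders.

BENCHMARK_WHITELIST = {
    "com.glbenchmark.glbenchmark27",
    "com.antutu.ABenchMark",
    "com.greenecomputing.linpack",
}

NORMAL_POLICY = {"max_gpu_mhz": 450, "thermal_limit_c": 70}
BOOSTED_POLICY = {"max_gpu_mhz": 533, "thermal_limit_c": 85}

def select_dvfs_policy(foreground_package: str) -> dict:
    """Return the boosted policy only when a whitelisted benchmark runs."""
    if foreground_package in BENCHMARK_WHITELIST:
        return BOOSTED_POLICY
    return NORMAL_POLICY
```

Renaming the benchmark's package, as Ars did, falls through to the normal policy; the score delta between the two paths is the inflation being measured.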
 

This has put the final nail in the coffin of me getting the GN3... I'm surprised by the non-AB SoC... but it's pretty shoddy that Samsung is blatantly trying to fool people.
NAND/IO performance is incredible, though.
 
Then I suppose none of the devices that have yet made it into the benchmark databases are equipped with an AB-variant 8974. Perhaps the Mi3 will be the first to launch with it.

With the benchmark boosting done on the Galaxy S4 and the Exynos 5410, the benchmarks could at least represent the performance that was available to common, processor-intensive apps like the Samsung browser, camera, and gallery. The situation here is simply misleading.

Without the same competitive pressure Samsung faces to stand out in the Android manufacturer landscape, Apple has less reason to manipulate benchmarks like this, though that certainly doesn't mean they aren't. Their design decisions don't suggest they're pushing for performance at almost any cost; they seem to achieve top performance through a smarter balance. It's hard to know one way or the other, however, without the kind of investigative work into the device's app-specific power-management profiles that Ars did.

I await the results of someone benchmarking these things on an extended loop until these devices either throttle and show more representative averaged results or simply brick themselves through meltdown.
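A toy harness along those lines (assuming a stand-in CPU workload in place of a real benchmark run) might just average back-to-back passes, which gets closer to throttled steady-state performance than a single cold run:

```python
import time

def run_workload() -> float:
    """Stand-in for one benchmark pass; returns a crude score
    (work per second). A real harness would launch the benchmark
    app here instead."""
    t0 = time.perf_counter()
    sum(i * i for i in range(200_000))  # fixed amount of work
    return 1.0 / (time.perf_counter() - t0)

def sustained_average(runs: int = 10) -> float:
    """Run the workload back to back and average the scores; on a
    throttling device the later runs drag the average down toward
    the sustainable, representative figure."""
    scores = [run_workload() for _ in range(runs)]
    return sum(scores) / len(scores)
```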
 
I see this 'issue' in one of two ways.

1) Samsung is being conservative with its thermal limits, and this boost could safely be made available to all applications via a semi-hidden option in the settings menu, accepting that battery life would suffer.

2) Raising the thermal limits actually would negatively affect reliability if done on a regular basis. That scenario would be downright unethical.

Samsung had already worked to boost its scores in GLBenchmark 2.5, according to the older AnandTech article. Sadly, for once Samsung is ahead of its rivals in terms of software, and now results from the only good cross-platform benchmark, GLB 2.7, must be taken with a degree of suspicion.

This just shows that Samsung's senior execs live in their own world and are immune to criticism of their software decisions; maybe they just assume (rightly, perhaps) that 99% of consumers won't care either way.
 
That Ars article is rubbish.

Manufacturers do this commonly. HTC does it, Samsung does it, and surprise surprise, LG does it too, even on the G2, which rather invalidates the whole point of the article. It just happens that LG doesn't do it in Geekbench 3. I call that tunnel-vision investigation.
 
The Ars article didn't claim the practice was exclusive to or more pronounced with Samsung. But yeah, Anand Lal Shimpi said it was fairly widespread and named HTC specifically as another.

If the boosting is done such that benchmark results don't represent the performance available to anything of significance to the user, the practice would be serving little purpose other than to mislead.
 
The article uncovers some behind-the-scenes information that most people aren't aware of -> it's rubbish?

I find this article informative, and I hope that Ars keeps on publishing this kind of rubbish.
 
They compare it to the G2; however, they totally failed to discover that the G2 does exactly the same thing; it's only by luck that it doesn't in Geekbench 3. This invalidates the whole point they are trying to make in the comparison. The rest of the article is just a copy of what I revealed in June and what AnandTech expanded on in July.

So yes, it's rubbish.
 
This article is an investigation into this particular example of benchmark boosting, complete with revelations on the specifics and amount of the boost. It doesn't attempt to generalize into any conclusions that Samsung is more guilty or LG is less guilty beyond this specific example.

This example is also noteworthy for being purposefully different than the Galaxy S4 Exynos 5410 example from before. App detection for the sake of customizing the power profile of the device to better match the performance needed by specific apps is a smart practice; doing it for nothing of relevance (in this case, only for the sake of the benchmarks themselves) is quite a different thing and defeats most any usefulness of the benchmark results.

In continually chasing marketing figures such as MHz and core counts that outpace the advance of battery tech for their SoCs, these semiconductor makers are trading away how long, and on how many cores, such high levels can be sustained. A downside of this can be extra latency when scaling up and down through power states. So arguing that benchmark boosting is necessary because some benchmarks don't properly warm up their targeted processors is a convenient attempt to side-step the costs of their own SoC power-management design decisions. None of the real apps on the device get that treatment, nor would it be desirable to exclude them from power-saving measures, unlike the temptation with benchmarks.
 
I disagree that it's rubbish.

Your excellent work highlighted the issue in the 5410. I'm not sure whether, at the time, you indicated or suggested that others were doing something similar.

The Ars Technica piece confirms that this was not a one-off, done perhaps to combat issues Samsung perceived with that particular SoC or for some other reason. Rather, it confirms that they are doing it with third-party SoCs, and the Note 10.1 review also implies they are doing the same with the Exynos 5420.
 
Assuming Samsung was truthful that the special power profile/higher clock rate on the 5410 was also available to at least the Samsung browser, camera, and gallery applications, the whitelisted benchmarks in that case could at least produce results representative of the performance available to those common, processor-intensive applications (results a buyer might be interested in knowing).

This time, the benchmark results appear to represent only the performance available to the benchmarks themselves.

... okay, I need to stop repeating myself...
 
If the article were only a comparison with the G2, that'd be one thing. But it's not: to me it's more a detailed comparison of Samsung with and without the hack. It's about the Note 3 in this case instead of an S4; about Samsung's claim, made after the Anand article, that they don't do this only for benchmarks; and about CPU core enablement behavior in different circumstances.

If you were expecting this article to be the final word on performance differences between LG and Samsung, I can see why you'd be left wanting more, but that's an aspect I wasn't particularly interested in: I have no allegiances either way.

If you expect every article on the web to be encyclopedic in everything it covers, you're setting yourself up for perennial disappointment.
 
Rather it confirms that they are doing it with 3rd party Socs,
AnandTech already covered this back in July; the Qualcomm-based S4s have also done this since release. Ars basically copied that whole article piece by piece and re-applied it to the Note 3. And as I said, their failure to discover that the G2 does the same thing, while extensively highlighting the G2 in the very same piece, is the most startling part.

In any case, I rest my case.
 
Nebu is 100% right. The impression I got from that Ars piece is that the LG G2 was the primary comparison, and they didn't do much to check whether LG did the same thing.

I hope Futuremark takes a stand on this as they did in the past when ATI and NVIDIA were engaging in these tactics. As the developer of these benchmarks, it's quite easy for them to circumvent detection algorithms, whether naive ones based on executable names or more sophisticated ones.
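A minimal sketch of that idea, assuming hypothetical package names: randomize the identifier that a naive name-based detector keys on, much as Ars's renamed build did:

```python
import random
import string

# Hypothetical whitelist a naive, name-based boost detector might use.
NAIVE_WHITELIST = {"com.futuremark.bench"}

def randomized_package_name(base: str = "com.futuremark.bench") -> str:
    """Append a random suffix per build so exact name matching fails."""
    suffix = "".join(random.choices(string.ascii_lowercase, k=8))
    return base + "." + suffix

def detector_boosts(package: str) -> bool:
    """Naive detector: boost only on an exact whitelist match."""
    return package in NAIVE_WHITELIST
```

Defeating the more sophisticated, workload-pattern-based schemes is harder; there the benchmark vendor would instead have to make its workload indistinguishable from a real app's.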
 
Without knowing whether the boost from any of the other implicated OEMs is also applied to other apps of significance on those devices, or only to the benchmarks, AnandTech's data can't determine whether those OEMs are legitimately optimizing or cheating. It misses the point.

Anand's article did, though, confirm my suspicions about who likely wouldn't be involved in recent times: Apple and Motorola. I feel some of that mentality is reflected in other areas of their hardware-design approach.

edit:
AnandTech's article also mentions that Anand expects OEMs to start implementing performance boosts based on workload-pattern detection in the future, so that renaming a benchmark won't prevent the boosts. Err... that kind of workload optimization is already employed to some degree by various software and hardware mechanisms, and it is a very welcome one that helps promote efficiency (just like app detection, as long as it doesn't focus exclusively on benchmarks).
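For contrast, here is a toy sketch of what legitimate workload-pattern boosting could look like (the window size, threshold, and clock caps are made-up numbers): it raises the frequency cap on sustained high load regardless of which app produced it:

```python
from collections import deque

class LoadBoostGovernor:
    """Toy pattern-based booster: track a moving average of recent
    load samples and raise the clock cap only while the average is
    high, with no knowledge of app or package names."""

    def __init__(self, window: int = 5, threshold: float = 0.8):
        self.samples = deque(maxlen=window)  # recent load samples, 0.0-1.0
        self.threshold = threshold

    def update(self, load: float) -> int:
        """Feed one load sample; return the current clock cap in MHz."""
        self.samples.append(load)
        avg = sum(self.samples) / len(self.samples)
        return 533 if avg >= self.threshold else 450
```

Because the trigger is the load pattern itself, a renamed benchmark still gets the boost, but so does any demanding real app, which is the difference between an optimization and a cheat.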
 