Linley has put up a piece arguing that the A10 throttles a lot more than previous Ax chips, citing the increased GPU frequency as the fundamental reason.
Apple Turbocharges PowerVR GPU
http://www.linleygroup.com/newsletters/newsletter_detail.php?num=5619
To me it looks poorly researched.
They assume the A10 uses fundamentally similar (if customised) graphics IP to the A9, and that most of the GPU performance gain comes from a clock increase. But Apple hasn't historically kept the same GPU IP across Ax generations.
They cite the weak gain in Futuremark's physics test, along with some selectively chosen GFXBench tests, as evidence of poor GPU performance and thermal throttling.
But the physics test is a CPU test, and Ax chips have always struggled with it. Futuremark put out a press release several years ago explaining why the iPhone 5s didn't improve much over the iPhone 5. It isn't a GPU issue.
https://www.futuremark.com/pressrel...results-from-the-apple-iphone-5s-and-ipad-air
"In the Physics test, which measures CPU performance, there is little difference between the iPhone 5s and the iPhone 5"
Finally, although they do cite some GFXBench tests, they don't mention the tests in that suite that would actually expose throttling, i.e. the sustained-performance and battery tests. According to the data in the Anandtech review, the A10 in the iPhone 7 does drop from 60fps to 50fps after 5 minutes, but then sustains around 50fps until the battery dies. Its terminal fps is 50% higher than the iPhone 6s's.
The slightly bigger battery also lasts slightly longer. Assuming those last two things roughly cancel out, the overall package appears to be getting significantly more performance in that test from the same input power. Hardly indicative of the A10 having thermal issues relative to previous Ax generations.
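As a rough back-of-envelope check on that (a sketch, not Anandtech's actual methodology; the sustained fps and runtime figures below are assumptions, with the iPhone 6s at ~33fps backed out from the iPhone 7's terminal fps being 50% higher):

```python
# Rough frames-per-charge comparison for GFXBench's battery test.
# All inputs are assumed/approximate values, not exact review data.

def frames_per_charge(sustained_fps, runtime_min):
    """Total frames rendered over one full battery run."""
    return sustained_fps * runtime_min * 60  # fps x seconds

# Assumed values: 6s terminal fps inferred from the "50% higher" claim;
# runtimes are placeholders for "slightly longer" on the iPhone 7.
iphone6s = frames_per_charge(33, 180)
iphone7 = frames_per_charge(50, 190)

print(f"iPhone 6s: {iphone6s/1e6:.1f}M frames per charge")
print(f"iPhone 7:  {iphone7/1e6:.1f}M frames per charge")
print(f"Ratio: {iphone7/iphone6s:.2f}x total work per charge")
```

With the battery capacities being close, roughly 1.6x the rendered frames per charge points to better GPU efficiency, not worse thermals.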
I guess that ultimately, if the chip has higher CPU performance and higher GPU performance, it has the potential to generate more heat, and in fundamentally the same package some throttling of that higher performance has to happen. But Linley's argument is that much of the theoretical improvement isn't being seen, and they blame that on the GPU frequency increase. And throwing Futuremark's physics test into a GPU discussion doesn't seem relevant.
Thoughts?