Will OpenGL ES 3.0 prove to be a short-lived standard?

Looks like sustained performance, and not just the short-term peak, can now be tested.

https://kishonti.net/news_single.jsp?id=18910880

From what I've seen in the application's internal explanations, it logs the battery percentage at application startup, runs a stressful benchmark 30 times, re-checks the battery percentage afterwards, and from that calculates how long the device should last.
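If that's right, the extrapolation is presumably just linear scaling of the observed drain. A minimal sketch in Python (the function name, the example figures, and the linear-drain assumption are mine, not Kishonti's):

Code:
# Hypothetical reconstruction of the battery-life estimate; assumes
# drain is linear over charge level, which real batteries aren't.
def estimate_runtime_minutes(start_pct, end_pct, test_minutes):
    drained = start_pct - end_pct          # battery points consumed by the 30 runs
    if drained <= 0:
        raise ValueError("no measurable drain (device charging?)")
    return 100.0 * test_minutes / drained  # minutes from full charge to empty

# e.g. 12 points drained over ~33 minutes of T-Rex runs -> ~275 minutes
print(estimate_runtime_minutes(100, 88, 33))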

1. Does it run T-Rex or something else? (at least they're displaying a screenshot of T-Rex in the test information)
2. I assume it runs offscreen 1080p?
3. Without adjusting brightness across the devices being compared, aren't the results questionable again?

The application's estimate of how long a device and its battery would last sounds dubious to me; however, the performance result net of throttling could prove to be quite a useful tool for reviewers.

Whether onscreen or offscreen hardly matters here, since the Note 3 runs at a native 1080p resolution; either way, the T628 throttling down by 42% (22.5 fps to 13.0 fps) in those early results is quite a tough cookie to swallow. Quo vadis, Samsung?

***edit:

For question (2), it seems it runs onscreen; neither the iPad Air nor the Moto X throttles one bit across 30 consecutive T-Rex runs (or at least that's what it looks like from the results so far).
 
Why is there an iOS 7.2 in the OS Build list for the iPhone 5S?

Good observation; it's listed in both the GLB2.7 and the GLB3.0 results.

On a side note, does anyone have a clue why only the A7 is so obscenely far ahead of the rest of the pack in the offscreen driver-overhead test (if you look at earlier Apple SoC results, they level nicely with the rest of the pack)? Neither the description inside the benchmark nor the test itself suggests anything that would help me explain it.
 
GfxBench 3.0 just became available on iOS's App Store (it was only available on Android's Play Store when the benchmark first went live), so scores should creep up marginally as more people get to run it, and run it multiple times once their devices have been "warmed up" with more optimal memory accesses.

As mentioned before, too many uncontrolled variables in power consumption and the lack of comparability between devices may keep the battery test from producing meaningful results; but the performance stability test, hopefully doing a good job of accounting for the performance degradation from thermal throttling, should really be a meaningful test by which to compare.
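If you wanted to boil that endurance run down to one number, a sustained-to-peak ratio over the 30 iterations is the obvious candidate (my own formulation, not necessarily what GfxBench actually reports):

Code:
# Hypothetical stability metric over N consecutive runs; fps_per_run
# would be the per-iteration T-Rex results.
def stability(fps_per_run):
    return min(fps_per_run) / max(fps_per_run)  # sustained fraction of peak

# Note 3 endpoints from earlier in the thread (22.5 fps falling to
# 13.0 fps); the intermediate values are invented for illustration.
print(stability([22.5, 21.0, 18.4, 15.2, 13.0]))  # ~0.58, i.e. ~42% throttle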

Funny to see the 550 MHz Adreno 330, "AB"-binned MSM8974 version of the Xiaomi MI3 finally show up as the new benchmark hits. It scores really well, too; quite a lot higher than the earlier Tegra 4 version.
 
Funny also that in all this time, no one has benchmarked a new iPad mini with its A7 (on either GfxBench 2.7 or 3.0); I'm curious to see just how it ranks against the Air and the 5S, to see the effects of the different power profile for its form factor vs. clock speed vs. the slight variations between the tablet "version" of iOS 7 and the phone "version".

The state of mobile graphics benchmarking has been steadily improving when the benchmark suites are considered collectively. Basemark is nice because it's built on a real game/graphics engine (Unity) and includes enough detail to be both forward-looking and a realistic workload. 3DMark doesn't isolate graphics as much, and its CPU tests represent unrealistic multi-core scaling, but it at least provides another mix of workloads to consider.
 
I'm running the latest public version of iOS, 7.0.4, and GLBench reports the driver as 27.11.4.

The latest GLBench listing includes drivers 27.14, 27.15, and 27.19, all three more recent than the 27.11.4 on iOS 7.0.4. Seems strange.

So the A7 is getting nearly 2.5x the score of the T628, and the T628 is in a phablet/tablet form factor.
 
Considering the results of the performance endurance tests run on the 5S by AnandTech and on the Nexus 5 by Ars Technica, I'd guess the gain on the 5S from running in the fridge would be relatively small.

What I find most notable is that the total performance over time of the Nexus 5's S800 is actually lower than that of the later revisions of the Nexus 4's S600. The push to higher peaks with the latest Kraits and Adrenos has crossed the critical threshold where the consequent throttling eats away at the total work performed faster than the speed-up adds to it. That kind of narrowing of the performance applicability band can become noticeable if pushed too far.
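To make the total-work point concrete, here's a toy comparison (all fps figures invented for illustration, not measured from either Nexus): a chip that peaks higher but throttles early can render fewer total frames over a long session than a slower chip that holds steady:

Code:
# Toy illustration of the peak-vs-sustained trade-off; numbers invented.
def total_frames(phases):
    # phases: list of (fps, minutes) the device spends at each level
    return sum(fps * minutes * 60 for fps, minutes in phases)

fast_but_throttled = [(30.0, 5), (18.0, 25)]  # high peak, then heavy throttling
slow_but_steady    = [(24.0, 30)]             # lower peak, no throttling

print(total_frames(fast_but_throttled))  # 36000 frames over 30 minutes
print(total_frames(slow_but_steady))     # 43200 frames over the same 30 minutes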

Though Kepler is a different architecture altogether, I wonder how long 800+ MHz clocks would withstand endurance testing.
 
If you need a fridge at roughly 450 MHz to gain something as boring as 18%, what do you need at more than twice the frequency? A full month's vacation at the North Pole?
Required voltages depend on the architecture, ALU pipelining, the VLSI schematics, the fab's process characteristics, etc. I don't think it's clever to extrapolate the A7's characteristics to other SoCs. For example, here http://images.anandtech.com/doci/7622/Screen Shot 2014-01-06 at 6.19.21 AM.png the A7 consumes more than the 550 MHz Adreno 330 despite having 2x the ALU horsepower - http://gfxbench.com/result.jsp?benc...e&arch-unknown=true&arch-x86=true&base=device
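For what it's worth, this is also why frequency extrapolation is so treacherous: dynamic power scales roughly with f*V^2, and V itself has to rise to keep a higher f stable, so power grows much faster than linearly with clock (the voltage ratio below is purely illustrative):

Code:
# Rough dynamic-power scaling, P ~ C * f * V^2; the switched capacitance C
# cancels when comparing the same chip at two operating points.
def relative_dynamic_power(f_ratio, v_ratio):
    return f_ratio * v_ratio ** 2

# e.g. doubling the clock while needing ~25% more voltage (made-up figure)
print(relative_dynamic_power(2.0, 1.25))  # ~3.1x the dynamic power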
 
I'm not sure how much trust I would put in those NVIDIA slides. Then again, it is very hard to find accurate power consumption figures anywhere for these chips.
 