Apple A8 and A8X

The A8X onscreen results look normal to me. Typically there is a ~30% performance drop when moving from the 1080p [1920x1080] offscreen resolution to the 2K [2048x1536] onscreen resolution.
 
The first result is offscreen, the second onscreen (2048x1536).

iPad Air 2: 32.4/24.6fps
Shield tablet: 31.0/29.7fps

First K1 onscreen results in Manhattan were below 20 fps.
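
To unpack the resolution argument with the numbers just above (a rough sanity check of my own, not anything from the benchmark itself):

[code]
/* Rough sanity check of the offscreen-vs-onscreen resolution argument.
 * The fps figures are the iPad Air 2 numbers quoted above; everything else
 * is simple pixel arithmetic. */
#include <stdio.h>

int main(void) {
    double offscreen_px = 1920.0 * 1080.0;   /* 1080p offscreen render target */
    double onscreen_px  = 2048.0 * 1536.0;   /* iPad Air 2 native resolution  */

    /* If the test were purely pixel-bound, fps would scale inversely with
     * pixel count. */
    printf("pixel ratio:           %.2fx\n", onscreen_px / offscreen_px);       /* ~1.52x */
    printf("implied onscreen fps:  %.1f\n", 32.4 * offscreen_px / onscreen_px); /* ~21.4  */
    printf("reported onscreen fps: %.1f\n", 24.6);
    /* The real workload is not purely pixel-bound, so the observed drop
     * (32.4 -> 24.6, ~24%) is smaller than the ~34% that pixel scaling
     * alone would predict -- in the same ballpark as the ~30% rule of thumb. */
    return 0;
}
[/code]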

For additional reference, the iPad Air scored (13.0/8.8) (off/on), so those numbers are encouraging. It's a shame that Engadget doesn't have a link to the full results.
 
That 2.5x improvement in GLBench offscreen matches Apple's presented 2.5x so nicely that one might suggest that's where the figure came from.
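
For what it's worth, the arithmetic behind that, using the Manhattan offscreen numbers quoted in this thread:

[code]
/* Where the ~2.5x figure lines up: iPad Air 2 vs. iPad Air, Manhattan offscreen. */
#include <stdio.h>

int main(void) {
    double a8x_offscreen = 32.4;  /* iPad Air 2 */
    double a7_offscreen  = 13.0;  /* iPad Air   */
    printf("A8X vs A7, Manhattan offscreen: %.2fx\n",
           a8x_offscreen / a7_offscreen);   /* ~2.49x */
    return 0;
}
[/code]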
 
For additional reference, the iPad Air scored (13.0/8.8) (off/on), so those numbers are encouraging. It's a shame that Engadget doesn't have a link to the full results.

They'll appear for sure in the Kishonti database around the time Anandtech has its review ready :LOL:
 
Compared to the iPhone 6/6 Plus's scores, the iPad Air 2 does 20-25% better in 3DMark Ice Storm Unlimited. That scaling is much worse than in the other benchmarks and is bizarrely anemic for a chip that physically has 50% more of everything along with slightly higher clocks.
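
Back-of-the-envelope framing of that gap (the clock bump is my assumption, since the exact A8X clocks weren't public at this point):

[code]
/* What "50% more of everything" plus slightly higher clocks would give in a
 * perfectly scaling, non-bottlenecked workload, vs. what 3DMark reports. */
#include <stdio.h>

int main(void) {
    double extra_units = 1.50;   /* 50% more of everything (per the post above) */
    double clock_bump  = 1.05;   /* "slightly higher clocks" -- assumed ~5%     */
    printf("ideal scaling: ~%.1f%%\n",
           (extra_units * clock_bump - 1.0) * 100.0);   /* ~57.5% */
    printf("3DMark Ice Storm Unlimited: 20-25%%\n");
    return 0;
}
[/code]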

I wish that Futuremark would make a post somewhere stating "this is what we do, how we do it, and we are 100% right". Right now, though, I'm quite willing to assume that 3DMark's code has a bottleneck that doesn't apply to any other game/benchmark.
http://www.futuremark.com/pressrele...results-from-the-apple-iphone-5s-and-ipad-air

They've tried to explain why 3DMark Physics seems to be a poor match for Cyclone before. Basically, Cyclone's memory controller has much improved in-order data reads, but is no better than Swift at out-of-order data reads, which the Bullet library mainly uses. Bullet's processing is also structured in a way that can't take full advantage of Cyclone's additional out-of-order execution resources. Still, that can't be the full explanation, since even when they tried to mitigate those issues they only saw a 17% performance improvement. They point out that many developers use Bullet, so what 3DMark reports is valid and relevant, but there's still a gap where many developers see much better CPU scaling than what 3DMark can achieve. But I suppose the physics in the 3DMark test may be more complex than in other games/benchmarks.
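
A toy illustration of the access-pattern distinction being made there (this is not Bullet or 3DMark code, just the general shape of "streaming" vs. data-dependent reads):

[code]
/* "In-order" reads: the next address is predictable, so a prefetcher or an
 * improved memory controller can hide most of the latency.
 * "Out-of-order"/random reads: the next address is only known once the
 * current load completes, so memory latency is fully exposed -- roughly the
 * pattern a physics library hits when it walks scattered objects and
 * constraint structures. */
#include <stdio.h>
#include <stdlib.h>

#define N (1u << 20)   /* 1M elements -- well past the L2 of these chips */

struct body { struct body *next; double state[7]; };   /* 64-byte "object" */

int main(void) {
    /* Streaming read: addresses increase monotonically. */
    double *arr = malloc(N * sizeof *arr);
    double sum = 0.0;
    for (size_t i = 0; i < N; i++) arr[i] = (double)i;
    for (size_t i = 0; i < N; i++) sum += arr[i];

    /* Dependent pointer chase in a scrambled order (affine map with full
     * period mod a power of two, so the chase visits every element). */
    struct body *objs = malloc(N * sizeof *objs);
    for (size_t i = 0; i < N; i++)
        objs[i].next = &objs[(size_t)((i * 2654435761ull + 1) % N)];
    struct body *p = &objs[0];
    for (size_t i = 0; i < N; i++) p = p->next;

    printf("%f %p\n", sum, (void *)p);   /* keep both loops observable */
    free(arr);
    free(objs);
    return 0;
}
[/code]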
 
Compared to the iPhone 6/6 Plus's scores, the iPad Air 2 does 20-25% better in 3DMark Ice Storm Unlimited. That scaling is much worse than in the other benchmarks and is bizarrely anemic for a chip that physically has 50% more of everything along with slightly higher clocks.

I wish that Futuremark would make a post somewhere stating "this is what we do, how we do it, and we are 100% right". Right now, though, I'm quite willing to assume that 3DMark's code has a bottleneck that doesn't apply to any other game/benchmark.
They do disclose that to reviewers and partners. Imho their approach is correct and just demonstrates a real-world bottleneck.
 
http://www.futuremark.com/pressrele...results-from-the-apple-iphone-5s-and-ipad-air

They've tried to explain why 3DMark Physics seems to be a poor match for Cyclone before. Basically, Cyclone's memory controller has much improved in-order data reads, but is no better than Swift at out-of-order data reads, which the Bullet library mainly uses. Bullet's processing is also structured in a way that can't take full advantage of Cyclone's additional out-of-order execution resources.

Do none of the Geekbench memory tests test random reads, or does out-of-order mean something different in this context? Feel free to educate someone who has no knowledge of this!
 
I don't know what test they are referring to (iOS MemMark?), but DailyTech has a chart showing memory performance 12% worse than the iPad Air?
http://www.dailytech.com/Apples+iPa...ore+A8X+Processor+2GB+of+RAM/article36756.htm

I see the benchmarking company has an updated version of that chart, now showing the Air 2 with the same score as the Air.
http://www.iphonebenchmark.net/memmark_chart.html

But on that chart the iPhone 6/6+ has a lower score than the iPhone 5s, so perhaps this test isn't any use at all?
 
They do disclose that to reviewers and partners. Imho their approach is correct and just demonstrates a real-world bottleneck.

Serious question: is that "real-world" bottleneck something that developers are struggling with or is it something game engines have already worked around without much fuss?

iOS is a self-contained platform. Things that A* chips can't do become things that game developers won't do, so at what point do pathological, worst-case scenarios become ivory-tower trivia unrelated to the actual market?

I mean, let's not forget that 3DMark's results are now consistently contrary to other CPU and GPU benchmarks' results.
 
Do none of the Geekbench memory tests test random reads, or does out-of-order mean something different in this context? Feel free to educate someone who has no knowledge of this!

What you are looking for is pretty much memory latency. The only test I've seen so far is by Anandtech, where the A8 shows improved latency from L3 and main memory.

Geekbench doesn't typically show this; it consists of small snippets of code working on data that is mostly L1-resident (core tests, basically), complemented by main-memory tests. (Note that the increased L2 in the A8X doesn't show up much vs. the A8 either.) It's not a worse benchmark than anything else out there, better than most actually, but it fails to take into account most improvements in the memory subsystem from L2 outwards, apart from main-memory bandwidth.
I suspect the A8X will have a very healthy performance advantage over the A7 regardless of third-core usage, on account of the doubled L2, lower latency to L3 (size?) and main memory, and improved main-memory bandwidth. Even without knowing the amount of L3, Apple beefed up the memory-subsystem performance of the A8X vs. the A7 considerably. In the real world, this is likely to be more important than it appears in benchmarks.
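
For anyone curious what such a measurement looks like, here is a minimal sketch of a pointer-chasing latency test over growing working sets (my own illustration; it makes no assumptions about how Geekbench or AnandTech actually implement theirs, and the sizes are arbitrary):

[code]
/* Chase a pointer through a shuffled cycle of a given size; as the working
 * set outgrows L1, L2, (L3) and finally fits only in DRAM, the ns-per-load
 * figure steps up at each level. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void chase(size_t bytes) {
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof *buf);
    size_t *order = malloc(n * sizeof *order);
    if (!buf || !order) { free(buf); free(order); return; }

    /* Link every slot into one cycle, visited in shuffled order, so the
     * prefetcher cannot guess the next address. */
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[order[i]] = &buf[order[(i + 1) % n]];
    free(order);

    const size_t loads = 20u * 1000u * 1000u;
    void **p = (void **)buf[0];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < loads; i++) p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%8zu KiB: %6.2f ns/load\n", bytes / 1024, ns / (double)loads);
    if (!p) puts("unreachable");   /* keep the chase from being optimized away */
    free(buf);
}

int main(void) {
    for (size_t kib = 16; kib <= 64 * 1024; kib *= 4)   /* 16 KiB .. 64 MiB */
        chase(kib * 1024);
    return 0;
}
[/code]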

I could write dissertations about why memory subsystems are undervalued in benchmarking generally. There are a number of reasons. It is an industry-wide problem, really, and has become particularly obvious in the PC space since the introduction of multi-core CPUs, but it was an inherent problem long before that.
 
Serious question: is that "real-world" bottleneck something that developers are struggling with or is it something game engines have already worked around without much fuss?

iOS is a self-contained platform. Things that A* chips can't do become things that game developers won't do, so at what point do pathological, worst-case scenarios become ivory-tower trivia unrelated to the actual market?

I mean, let's not forget that 3DMark's results are now consistently contrary to other CPU and GPU benchmarks' results.

Without wanting to touch the specific case of the 3DMark physics test, you make a good point and put your finger on a fundamental problem of cross-platform benchmarking: it cannot model well the performance of code targeted to a specific platform unless the architectures of the platforms are very similar. A well-known example is comparing the performance of the PlayStation 3 CPU with an x86 processor using cross-platform benchmarks.

Stuff like Metal could conceivably cause some additional thorny issues in terms of mobile graphics benchmark relevance.

Cross-platform benchmarking is difficult, and any results should be taken with liberal heaps of salt. Making comparisons within an architectural family is a lot safer, but even there you can be tripped up, as the 3DMark physics test shows.
 
Closeup of A8X from iFixit:

[image: wX0nLnB.jpg]
 
Without wanting to touch the specific case of the 3DMark physics test, you make a good point and put your finger on a fundamental problem of cross-platform benchmarking: it cannot model well the performance of code targeted to a specific platform unless the architectures of the platforms are very similar. A well-known example is comparing the performance of the PlayStation 3 CPU with an x86 processor using cross-platform benchmarks.

Stuff like Metal could conceivably cause some additional thorny issues in terms of mobile graphics benchmark relevance.

Cross-platform benchmarking is difficult, and any results should be taken with liberal heaps of salt. Making comparisons within an architectural family is a lot safer, but even there you can be tripped up, as the 3DMark physics test shows.
These benchmarks are meant to mimic real-world games and go as far as to use tools also used by game developers (Unity, that physics engine, etc.). I don't think the point is to benchmark optimized performance on each platform; for that we have more synthetic suites such as GFXBench.
 
I could write dissertations about why memory subsystems are undervalued in benchmarking generally. There are a number of reasons.
In no small part due to the difficulty of scaling DRAM performance, no doubt? CPU performance goes up 1000-fold in a decade, more or less, while DRAM performance increases maybe 10x, if that much...
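
Taking those round numbers at face value (they're the post's figures, not measurements), the per-year rates look like this:

[code]
/* Compound annual growth implied by "1000x CPU vs ~10x DRAM in a decade". */
#include <stdio.h>
#include <math.h>

int main(void) {
    double years = 10.0;
    printf("CPU:  1000x over %.0f years = %.2fx per year\n",
           years, pow(1000.0, 1.0 / years));   /* ~2.00x */
    printf("DRAM:   10x over %.0f years = %.2fx per year\n",
           years, pow(10.0, 1.0 / years));     /* ~1.26x */
    /* The per-year gap looks small, but it compounds into a steadily widening
     * distance between core throughput and memory performance. */
    return 0;
}
[/code]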
 