Apple A8 and A8X

Considering what needs to be done to perform an FP multiply, for instance, that makes perfect sense. And I don't think the FP16 capabilities of the GX6650 are a fluke either; the changes were presumably made because licensees find them functional and desirable. I would assume that FP16 is primarily used for graphics, and FP32 for "other codes", whatever the heck it may be that uses the FP32 capabilities of the GPU on these systems.

App developers want their products to look good and perform well, and they will use the available resources accordingly. Benchmarks, however, aren't under the same pressure, so this is a case where benchmarks conceivably don't give a good prediction of real-world app performance. The same goes for other comparative material. One occasion where the issue of FP formats was raised here was when it was noted that Anandtech was only quoting FP32 FLOPs in their comparison tables; when the question was raised as to why, the answer was simply that that was what they used in their desktop GPU tables. No attempt to corroborate with actual use had been made.

My layman's speculative math for Gfxbench3.0 is only because:

1. It's one of the more accurate GPU synthetic benchmarks out there.
2. Manhattan in particular is heavily ALU bound.

For example, when QCOM stated before the Adreno 420 launch that it would have 40% more arithmetic performance than their Adreno 330, it was fairly easy to predict more or less where the 420 would land in Manhattan.
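That kind of back-of-the-envelope prediction is just linear scaling of an ALU-bound score. A minimal sketch (the Adreno 330 Manhattan figure below is a placeholder for illustration, not a measured result):

```python
def predict_score(baseline_fps: float, claimed_uplift: float) -> float:
    """Scale an ALU-bound benchmark score by a vendor's claimed ALU uplift."""
    return baseline_fps * (1.0 + claimed_uplift)

# Hypothetical Adreno 330 Manhattan offscreen score (illustrative only),
# scaled by QCOM's claimed +40% arithmetic performance for the 420.
adreno330_fps = 10.0
print(round(predict_score(adreno330_fps, 0.40), 1))  # 14.0
```

This only works as well as the "heavily ALU bound" assumption holds; bandwidth- or fill-limited tests won't scale this way.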

Now the 6650 in Manhattan will most likely perform quite a bit higher than "typical" 150 GFLOPs FP32 GPUs, but on the other hand not even remotely close to its 300 GFLOPs FP16 peak value either.

While I agree with your points above - and unless a game is heavily tailored to work well on Rogues only - I'd think that it could have the efficiency of a 200-230 (FP32) GFLOP GPU at best.
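For reference, the peak figures quoted above fall out of a simple throughput formula. The cluster count, ALU width, and ~390 MHz clock below are assumptions about one plausible GX6650 configuration, not confirmed numbers:

```python
def peak_gflops(clusters: int, alus_per_cluster: int, flops_per_alu: int,
                clock_ghz: float) -> float:
    """Theoretical peak GFLOPs = total ALUs x FLOPs per ALU per clock x clock."""
    return clusters * alus_per_cluster * flops_per_alu * clock_ghz

# Assumed GX6650 configuration: 6 clusters, 32 FP32 ALUs per cluster,
# 2 FLOPs per ALU per clock (FMA), ~390 MHz clock.
fp32 = peak_gflops(6, 32, 2, 0.390)  # ~150 GFLOPs FP32
fp16 = fp32 * 2                      # dedicated FP16 path doubles throughput
print(round(fp32), round(fp16))      # 150 300
```

The gap between those two peaks is exactly why a mixed FP16/FP32 workload lands somewhere between the 150 and 300 GFLOPs marks.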

It would be interesting to know (if the IMG folks can spare another dime) if and up to which degree the GPU's resources allow FP32 and FP16 to run in parallel.
 
Hum.. so this means that the Snapdragon 805 and Exynos 5433 are close to Apple A8 in 3D performance, with Tegra K1 keeping a distant leadership above all three.
 
Hum.. so this means that the Snapdragon 805 and Exynos 5433 are close to Apple A8 in 3D performance, with Tegra K1 keeping a distant leadership above all three.

Well, the proof of any pudding is in the eating, so we'll see what gives once devices are out and can be compared directly, with due consideration to power draw, which is the gating parameter in this market segment. (That also makes meaningful comparisons between SoCs much more difficult than in desktop space, since power draw is tricky to get a sufficiently good handle on.)
Which implementation of "A8" (iP6, iP6+, future iPads) vs. which "K1"?
 
But relative to iPhone 5S, the iPhone 6 Plus has 50% more graphics performance yet 185% more pixels.

Something has to give.
 
But relative to iPhone 5S, the iPhone 6 Plus has 50% more graphics performance yet 185% more pixels.

Something has to give.

The iPad Air has another 50% more on top of that, but has the same GPU as the 5S. It would be an issue if they were expected to run the exact same games, but the relative difference in performance now has some precedent.
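A quick worked version of that trade-off, using the panel resolutions behind the "185% more pixels" figure (note the 6 Plus actually renders at 2208x1242 before downsampling, so the per-pixel budget is even tighter than this):

```python
# Per-pixel GPU budget: +50% graphics performance spread over +185% pixels.
iphone5s_px = 1136 * 640      # 727,040 pixels
iphone6plus_px = 1920 * 1080  # 2,073,600 pixels (~185% more)

pixel_ratio = iphone6plus_px / iphone5s_px   # ~2.85x
perf_ratio = 1.50                            # +50% GPU performance vs 5S

per_pixel_budget = perf_ratio / pixel_ratio  # ~0.53x per pixel
print(f"{pixel_ratio:.2f}x pixels, {per_pixel_budget:.2f}x per-pixel budget")
```

In other words, at native resolution the 6 Plus has roughly half the 5S's GPU throughput per pixel, which is the "something has to give".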
 
Clearly, if they had released it two years ago it might have passed, but now that everyone has done two or three smartwatches (going from rectangular to round, etc.)... well, it's terrible how common and ugly it looks (the gold-plated one looks like my grandma's watch).
You're from Switzerland, so maybe a bit biased :)
Anyway, I went on a pro-Apple website forum and even there the feelings toward the iWatch were generally very negative: "perhaps iWatch 3 will be ok, but this..."

- battery lasts only a day
- must be near an iphone
- not waterproof
- ugly

OK, perhaps the first three they couldn't fix, but they could have at least done something about its looks. I have the feeling it would have been better if they hadn't even shown it; now we all know Apple sans Jobs can't make a decent iWatch.
 
Not waterproof? Sheer madness, that, if true.

Surely it wouldn't be too difficult to waterproof such a small(ish) device?
 
It would be interesting to know (if the IMG folks can spare another dime) if and up to which degree the GPU's resources allow FP32 and FP16 to run in parallel.

We don't run the two datapaths in parallel. They share certain pipeline front-end resources, so they can't be fed and scheduled together.
 
Surely it wouldn't be too difficult to waterproof such a small(ish) device?
Well, there's the microphone and speaker. You'd need perforations of some sort for sound to get through, which means water could get in too, and then stay logged inside the watch for extended periods. You would probably get microbial growth in there as well... NICE! :) So it'd be a challenge, for sure.

Looking at Apple's presentation btw, when they show a view of the iP6 internals there's an oblong object near the bottom right corner (on their screen; lower left of the phone) which looks a lot like the haptic feedback device from the iWatch. Now, I don't know if that's correct or not. Maybe the part leaks have already established the iP6 having one of those horrid rotary buzzers like almost all previous phones, but if true that would be incredibly appealing to me. Linear actuators have so far only been used in the iP4S, and I am super disappointed that subsequent models did not continue with it. A resurgence might actually get me to upgrade a year early, maybe...

If correct, I wouldn't expect the same tappy stuff as the iWatch - they didn't say anything about it in the presentation, after all - and you don't wear your phone on your hand like you do a watch on your wrist, so it'd be a bit silly. It probably functions like a traditional buzzer in most usage cases. However, games could put a linear actuator to really good use, if access to it for custom vibrations was opened up to developers.

Speculation is fun, isn't it? :D
 
Clear enough; thank you.

Given that FP16 is assumed to be preferred for graphics, and Apple has historically created their factor of speedup on pure raw FLOP theoreticals, we could indeed just be looking at a GX6450. I can see the temptation for GX6650 at reduced clocks though, given we're trying to figure out how they spent an extra 1bn transistors...
 
Given that FP16 is assumed to be preferred for graphics, and Apple has historically created their factor of speedup on pure raw FLOP theoreticals, we could indeed just be looking at a GX6450. I can see the temptation for GX6650 at reduced clocks though, given we're trying to figure out how they spent an extra 1bn transistors...

Yes, I think it's entirely possible it's still a 4-cluster unit.
http://forum.beyond3d.com/showpost.php?p=1872959&postcount=42
 
Given that FP16 is assumed to be preferred for graphics, and Apple has historically created their factor of speedup on pure raw FLOP theoreticals, we could indeed just be looking at a GX6450. I can see the temptation for GX6650 at reduced clocks though, given we're trying to figure out how they spent an extra 1bn transistors...

Careful with that supposed "extra 1bn"; they state >1bn, whereby ">" stands for "more than", not "a tad more than". Why not 1.2 or even 1.3bn for the A7?

TSMC's own marketing claims a 1.9x transistor increase for 20SoC compared to 28nm, which I consider a best-case scenario. Now, I don't know how Samsung's 28nm compares to TSMC's 28nm, let alone to the latter's 20SoC, but considering that Apple isn't usually aggressive with these kinds of things, I wouldn't suggest they went for a 1.9x difference.

What we have here in all likelihood is something in the 1.1-1.3bn ballpark for the A7, and the A8 probably in the almost-2bn region, since they actually state ~2bn.
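Sanity-checking that against TSMC's claimed best-case density gain (the A7 range here is the estimate above, not an official figure): a same-area A8 at the full 1.9x would land above the ~2bn Apple quotes, consistent with them not hitting the best case.

```python
# If the A7 is ~1.1-1.3bn transistors and 20SoC offers up to 1.9x the
# transistor density of 28nm, a same-area A8 at full scaling lands here:
density_gain = 1.9  # TSMC's best-case 28nm -> 20SoC marketing claim
for a7_bn in (1.1, 1.2, 1.3):
    print(f"A7 {a7_bn}bn -> same-area A8 at 1.9x: ~{a7_bn * density_gain:.1f}bn")
```

All three cases (~2.1-2.5bn) sit at or above the stated ~2bn, so Apple's numbers fit a less-than-best-case shrink, as argued above.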

All that aside, I don't think a 6650 means 50% more die area than a 6450 either; it may have 50% more clusters, but not everything scales with cluster count, otherwise they would have stayed with core scaling. There is obviously some overhead for ASTC compared to the 6430 GPU in the A7, but that's probably the same amount whether for a 6450 or a 6650.

Besides, frankly, if they're aiming for >12" tablets next year as rumors suggest, a 6650 is far more handy because of its 12 TMUs. Short story: no one can exclude a 6450 at this stage.
 
Well, I suppose we can't look forward to Anand's rundown of what the A8 SoC consists of.

But yeah, 700 million to 1 billion transistors is still a pretty hefty chunk of change. It can't all be increased L3 cache, seeing as the increased clock speed only accounts for an 8% improvement out of the 25% they mention.
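Spelling out that 8%-of-25% split (the A7/A8 clocks below are widely reported figures, not official ones):

```python
# Of the ~25% CPU uplift Apple quotes, how much is clock alone?
# Assumed clocks: A7 ~1.3 GHz, A8 ~1.4 GHz (reported, not official).
clock_uplift = 1.4 / 1.3 - 1.0       # ~7.7% from frequency
residual = 1.25 / (1.4 / 1.3) - 1.0  # ~16% left for architecture/cache
print(f"clock: {clock_uplift:.1%}, architecture/cache: {residual:.1%}")
```

So roughly two thirds of the claimed uplift has to come from somewhere other than clock, which is where those extra transistors would earn their keep.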
 