Apple A8 and A8X

Yes, but as he also points out, that stuff and more is included in, and takes up the lesser part of, 50mm2 SoCs on 28nm. The unaccounted-for area on the A8 has more gates than those SoCs do in total.


Honest questions:

* Aren't the proportions of the labeled blocks in the two die shots roughly comparable between the two SoCs, or is it just me? (GPU, CPU blocks, SRAM...)

* Is transistor density the same across all blocks of a SoC or is that something that varies depending on block?

I'm just trying to catch up with you guys, since I can't follow so far :oops:

By the way:

http://www.anandtech.com/show/8562/chipworks-a8

On A8 and its 20nm process this measures at 19.1mm2, versus A7’s 22.1mm2 G6430. As a result Apple is saving some die space compared to A7, but this is being partially offset by the greater complexity of GX6450 and possibly additional SRAM for larger caches on the GPU. Meanwhile looking at the symmetry of the block, it’s interesting that the blocks of texturing resources that every pair of GPU cores share is so visible and so large. With these resources being so big relative to the GPU cores themselves, you can see why Imagination would want to share them as opposed to building them 1:1 with the GPU cores.

That could mean an almost 50% difference in transistor count between the G6430 in A7 and the GX6450. If transistor density is the same across the entire SoC, then the G6430 in A7 could account for something around 280M transistors and the GX6450 in A8 around 420M transistors. How much exactly is ASTC devouring to get integrated? :confused:
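For what it's worth, a minimal sketch of that kind of back-of-envelope estimate, assuming Apple's stated ~2B transistor count for A8, the ~89mm2 Chipworks die measurement and AnandTech's 19.1mm2 GPU area; the A7 figure depends on which transistor count you assume for that die:

```python
# Back-of-envelope GPU transistor estimate, assuming uniform density across the die.
# Inputs are rounded public figures / assumptions, not numbers from this thread.
a8_transistors = 2.0e9   # Apple's stated "2 billion", approximate
a8_die_area    = 89.0    # mm^2, Chipworks measurement
gx6450_area    = 19.1    # mm^2, AnandTech's estimate for the GPU block

density = a8_transistors / a8_die_area      # ~22.5M transistors per mm^2
gx6450_estimate = density * gx6450_area     # ~430M transistors

print(f"A8 density ~{density / 1e6:.1f}M/mm^2, GX6450 ~{gx6450_estimate / 1e6:.0f}M transistors")
```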
 
Not sure it really matters but I haven’t seen anyone ask how Apple got the Series 6XT GPU out in a product so fast. Didn’t the Series 6XT only get announced January 6 2014? That gives it barely a 9 month cycle time from finished IP to product? How has Apple done this?

So what do we think: does Apple have some amazing cycle times, cutting the typical 18-month cycle in half to 9 months, or is it something else, like Apple being given the IP to work with before everyone else? Or did IMG finish the 6XT IP and license it to companies, but not announce it until much later?
 
The GPU is a validated portable synthesizable IP. How is a 4 cluster part more conservative than a 6 cluster one?

Even though this is Apple, they still need to hit certain targets with respect to die size, yield, and power consumption, especially as fab process node shrinks become fewer and farther between. And let's not forget that historically, for two full generations before A7, the iPhone had a more conservative GPU and mem. controller design compared to iPad. My guess is that we will see a six cluster GX6650 in a 12" iPad "Pro" within the next three to six months.
 
Not sure it really matters but I haven’t seen anyone ask how Apple got the Series 6XT GPU out in a product so fast. Didn’t the Series 6XT only get announced January 6 2014? That gives it barely a 9 month cycle time from finished IP to product? How has Apple done this?

So what do we think: does Apple have some amazing cycle times, cutting the typical 18-month cycle in half to 9 months, or is it something else, like Apple being given the IP to work with before everyone else? Or did IMG finish the 6XT IP and license it to companies, but not announce it until much later?

Well that is a good question, and may explain in part why Intel has chosen to go exclusively with in-house GPU designs in future high-end SoCs. Essentially there has been no one that has come even close to Apple when it comes to the speed at which they are able to integrate new ImgTech GPU IP in an SoC that targets the high end of the market.
 
Not sure it really matters but I haven’t seen anyone ask how Apple got the Series 6XT GPU out in a product so fast. Didn’t the Series 6XT only get announced January 6 2014? That gives it barely a 9 month cycle time from finished IP to product? How has Apple done this?

So what do we think: does Apple have some amazing cycle times, cutting the typical 18-month cycle in half to 9 months, or is it something else, like Apple being given the IP to work with before everyone else? Or did IMG finish the 6XT IP and license it to companies, but not announce it until much later?

Didn't Apple hire part of the infamous former AMD Orlando team, which was supposed to be a sign that Apple had started developing its own GPUs? My theory since then has been that Apple hired as much engineering talent as they could to speed up the integration process as much as possible. However, since the timelines are still too tight, I would NOT suggest that Apple gets zero lead time between receiving a new GPU IP and its formal announcement ;)

Well that is a good question, and may explain in part why Intel has chosen to go exclusively with in-house GPU designs in future high-end SoCs. Essentially there has been no one that has come even close to Apple when it comes to the speed at which they are able to integrate new ImgTech GPU IP in an SoC that targets the high end of the market.

I'd suggest that the explanation is far simpler than that. Intel's GPU hw and drivers have made big steps in the past few years, and it was only a matter of time before they could integrate those GPUs into ULP SoCs. Eventually they'll have one and the same architecture, and one and the same driver to work on, for all the markets where they use GenX GPU technology.

In the past, their primary reason for licensing GPU IP from IMG was probably power considerations; Intel's hw has also advanced quite a lot in that regard.
 
Imagination keep lead partners for new cores updated on work-in-progress and early deliverables as they finalize the IP design to give them a jump on integration. 6XT is largely similar to the base Series 6. That 6XT core obviously would've been ready before the show at which Imagination chose to publicly debut it.

Even so, Apple's delivery time is impressive. Their roadmap for the release of the SoC and its associated end products was likely, at least partially, based upon the availability of a major core like the GPU (and vice versa, certainly, as completion of the GPU IP core would've been targeted for the needs of customers like Apple). With the effort, Apple once again shows the importance they place on graphics, even if they're not pushing the limits of the kind of GPU configuration they're capable of delivering.
 
Not sure it really matters but I haven’t seen anyone ask how Apple got the Series 6XT GPU out in a product so fast. Didn’t the Series 6XT only get announced January 6 2014? That gives it barely a 9 month cycle time from finished IP to product? How has Apple done this?

IMG's major announcements these days tend to tie into big trade shows, and bear little if any relation to when the IP actually becomes available to license. Series 6XT was "announced" at CES 2014.

For example, if IMG are ready to announce Series 7 and decide to do so at CES 2015 or MWC 2015, do you think that's when Apple or whoever first gets access to it? It could easily be 6 months earlier.

That's not even considering that Apple is basically IMG's default lead partner for graphics. One assumes the design process involves Apple at an extremely early stage anyway.

IMG have also said that the cadence for new series will be accelerating; we will know about both Series 7 and Series 8 in the next 12 months or so. How much of a new series each one really is, is another matter.
 
Even though this is Apple, they still need to hit certain targets with respect to die size, yield, and power consumption, especially as fab process node shrinks become fewer and farther between. And let's not forget that historically, for two full generations before A7, the iPhone had a more conservative GPU and mem. controller design compared to iPad. My guess is that we will see a six cluster GX6650 in a 12" iPad "Pro" within the next three to six months.

But do they have the unit volumes to justify another SoC? I guess that depends on whether they target an "X" SoC at the entire refreshed line of iPads.

I don't see them targeting it at just a "pro" as the volume wouldn't be there to recoup the design cost.
 
In the past, their primary reason for licensing GPU IP from IMG was probably power considerations; Intel's hw has also advanced quite a lot in that regard.

I also wonder whether, by the time you get to 14nm, an IMG GPU that saves you 20% power compared to an Intel GPU for a particular task translates into a less significant absolute power difference than it did on 45nm or 22nm, for example.
 
I also wonder whether, by the time you get to 14nm, an IMG GPU that saves you 20% power compared to an Intel GPU for a particular task translates into a less significant absolute power difference than it did on 45nm or 22nm, for example.

I didn't imply that current Intel GPUs are more power efficient; in all fairness we don't have an equal metric yet either, since there aren't any DX11 Rogues available yet for a true apples-to-apples comparison.
 
With TSMC's 16nm Finfets using a 20nm back end of line (BEOL), I can't see Apple having much of a density increase to work with for their A9 SoC. So unless they are willing to create a much larger chip, and suffer the attendant increase in max power consumption, I wonder how much extra performance the finfets will deliver over the A8.
 
With TSMC's 16nm Finfets using a 20nm back end of line (BEOL), I can't see Apple having much of a density increase to work with for their A9 SoC. So unless they are willing to create a much larger chip, and suffer the attendant increase in max power consumption, I wonder how much extra performance the finfets will deliver over the A8.

I've been told that 20SoC doesn't actually deliver the power savings one would expect from it compared to 28nm.

I obviously don't know Apple's plans, but why a hypothetical A9 on 16FF@TSMC and not just an A8X instead, with the A9 laid out for Samsung/GloFo 14nm? That kind of scenario would actually explain why Apple would want to remain at TSMC for another round early next year.
 
I've been told that 20SoC doesn't actually deliver the power savings one would expect from it compared to 28nm.

I obviously don't know Apple's plans, but why a hypothetical A9 on 16FF@TSMC and not just an A8X instead, with the A9 laid out for Samsung/GloFo 14nm? That kind of scenario would actually explain why Apple would want to remain at TSMC for another round early next year.

Interesting. It'd be weird if the lower-than-expected power savings of the new node forced Apple to remain with only 1 GB of RAM!
 
Honest questions:

* Aren't the proportions of the labeled blocks in the two die shots roughly comparable between the two SoCs, or is it just me? (GPU, CPU blocks, SRAM...)
My eyeballs say roughly the same thing as yours. The parts that AnandTech hasn't put square blocks around may have grown somewhat in proportion to the whole; at least they haven't shrunk.

* Is transistor density the same across all blocks of a SoC or is that something that varies depending on block?
The die uses the same process across its entire area. That doesn't necessarily mean that either the density, or how the density changes between nodes, is the same everywhere. (The most common example is the circuitry that drives the off-chip IO.) Now, that said, common IP blocks should scale "normally", so it's a reasonable way of looking at it if you say that the undescribed parts have probably grown by roughly a factor of two in number of transistors compared to the A7. It may be that there is functionality there which we haven't got a handle on yet, or that this initial interpretation of the chip isn't perfectly accurate. But I don't think it makes sense to simply ignore what constitutes easily half the die! Personally, I just don't see what the heck could have changed between the A7 and A8 to justify that. Arouses curiosity, it does.
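A minimal sketch of that "roughly a factor of two" reasoning, assuming Apple's stated transistor counts (~1B for A7, ~2B for A8), the published die sizes (~102mm2 and ~89mm2) and a roughly constant ~50% share of the die for the undescribed parts:

```python
# If the undescribed region keeps roughly the same share of the die, its transistor
# count scales with overall density times area. All inputs are rounded assumptions.
a7_transistors, a7_area = 1.0e9, 102.0   # A7: ~1B transistors, ~102 mm^2
a8_transistors, a8_area = 2.0e9, 89.0    # A8: ~2B transistors, ~89 mm^2
undescribed_share = 0.5                  # "easily half the die", per the post above

a7_density = a7_transistors / a7_area    # ~9.8M transistors per mm^2
a8_density = a8_transistors / a8_area    # ~22.5M transistors per mm^2

a7_undescribed = a7_density * (a7_area * undescribed_share)   # ~500M transistors
a8_undescribed = a8_density * (a8_area * undescribed_share)   # ~1000M transistors
print(f"growth: ~{a8_undescribed / a7_undescribed:.1f}x")     # ~2.0x
```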
 
I just don't see what the heck could have changed between the A7 and A8 to justify that. Arouses curiosity, it does.

The only three things that I remember being called out were the H.265 video enc/dec, the "secure element", and the "desktop class scaler". The first replaces earlier-generation video IP blocks, the second would be brand new, and I guess there is already some sort of scaler in the A7...
 
Decoding at which max resolution?

The specs on apple.com don't say. Also, it appears it's only for FaceTime, or else they failed to update the video playback spec, as there is no mention of H.265 in the video playback list.

And isn't there also VoLTE IP now in A8?
Yes, forgot about that, but I don't know if that is mostly hardware or predominantly software IP?
 
Now that Anandtech has confirmed that Apple decided to go with the GX6450 rather than the GX6650, can anyone speculate on why they might have made that decision? (I'm trying to teach myself more of the technical details as to why this decision may have been made, but it's a lot to take in - please forgive my lack of expertise here..)

I find it odd that Apple has traditionally pushed their GPU technology to the limit, using the top of the line chips available to them, in their previous mobile chips (A5, A6, A7..), but stuck with a 4 core GPU here - especially when considering how many more pixels these phones, especially the 6 Plus, are required to handle. I've downloaded the Epic Zen Garden demo, for example, and have experienced less than perfect performance on my 6 Plus, despite use of the Metal API and being optimized for the 6 and 6 Plus.
When following Apple's design cadence, it's consistent that the CPU/GPU improvements in the A8 are more modest; after all, it's the "s" iPhone refreshes that focus on speed. The iPhone 3G used the same SoC as the original iPhone, with no performance improvements. The iPhone 4 A4 used the same CPU/GPU architecture as the iPhone 3GS and was only up to 50% faster despite having 4x the pixels, so it's probably the most comparable. The iPhone 5 A6 is the exception with a 2x faster CPU and GPU, but the GPU was the same PowerVR Series 5XT architecture as the iPhone 4s A5, so no new features, just faster. The A8 does move to a different architecture, the Series 6XT with new features, a transition that previously only occurred in iPhone "s" refreshes, but the raw performance increase is a more modest 50%.

As to why Apple has this cadence, Apple seems to keep the overall BOM cost of the iPhone roughly the same from generation to generation. So if they are allocating a bigger budget to the chassis and other design elements in one year, they need to be more conservative on the SoC to keep overall costs in line and maintain profit margins.

The peak power consumption of Apple SoCs has also been climbing from generation to generation with Apple lowering idle power consumption and keeping average power consumption roughly steady to compensate. It's a common refrain that Apple should not be so focused on thinning down the phone every redesign and instead put a bigger battery to improve battery life. In the iPhone 6, battery life is improved despite the bigger screen even though the battery capacity hasn't increased much. I wouldn't be surprised if the CPU/GPU were more conservative this generation to decrease both peak and average power consumption to improve battery life.

With TSMC's 16nm Finfets using a 20nm back end of line (BEOL), I can't see Apple having much of a density increase to work with for their A9 SoC. So unless they are willing to create a much larger chip, and suffer the attendant increase in max power consumption, I wonder how much extra performance the finfets will deliver over the A8.
http://www.cadence.com/Community/bl...-ahead-for-16nm-finfet-plus-10nm-and-7nm.aspx

TSMC 16 nm FinFET doesn't seem to provide any density improvement over 20 nm, but provides 40% more performance at the same power consumption. To address area concerns, TSMC is now bringing out a 16 nm FinFET Plus process that has a 15% area improvement and a 15% performance improvement. It looks like FinFET Plus won't enter volume production until the end of next year, so it's probably too late for Apple. Apple has shown they don't mind working with larger chips: the original 45 nm A5, which was broadly used, was 122 mm2, so they do have room to grow in the A9 from the 89 mm2 A8. They probably won't use the full 40% transistor performance improvement, instead taking the power consumption savings to offset the increased transistor count.
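As a rough sketch of that area-headroom point, using the figures quoted above (and treating the 16FF+ 15% area gain as hypothetical, since it is likely too late for an A9):

```python
# Die-size headroom if Apple were willing to go back to A5-class die sizes,
# plus the quoted (hypothetical for A9) 15% area scaling of 16FF+.
a8_area = 89.0     # mm^2, A8 on 20nm
a5_area = 122.0    # mm^2, original 45nm A5

size_headroom    = a5_area / a8_area       # ~1.37x more area to play with
finfet_plus_gain = 1.0 / (1.0 - 0.15)      # ~1.18x more transistors per mm^2 from a 15% area gain

print(f"~{size_headroom:.2f}x from die size alone, "
      f"~{size_headroom * finfet_plus_gain:.2f}x if 16FF+ scaling were available")
```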
 
It would take a lot of effort. Different libraries for the processes. Different design rules, etc. Unless they consolidated them into one rule subset somehow, potentially taking lowest common denominator performance wise. Biggest issue is doing full custom designs and having two different libraries and validation rules to work against.
Is it possible that all A8s for the iPhone are manufactured by TSMC and the A8s claimed to be made by Samsung are actually for the rumored iPad Air update later this year, and are somewhat different from the iPhone A8s? 40% seems too much for an iPad-only chip though.

Re: 4 core GPU in the A8:

The A7 also used a 4-core GPU so I'm not surprised that the A8 also does. Yes I did expect a G6630 but that was under the assumption that Apple wouldn't jump to 6XT this soon.

Even though this is Apple, they still need to hit certain targets with respect to die size, yield, and power consumption, especially as fab process node shrinks become fewer and farther between. And let's not forget that historically, for two full generations before A7, the iPhone had a more conservative GPU and mem. controller design compared to iPad. My guess is that we will see a six cluster GX6650 in a 12" iPad "Pro" within the next three to six months.
An iPad "Pro" with the same pixel density as the iPad Air would have nearly 2x the total pixel count of the iPad Air, which, even considering the increase in resolution of the iPhone 6 over the iPhone 5s, expands the range of pixel counts wider than ever. I can't say that Apple won't use just one chip to cover all those products, but it's probably more of a reason now than last year for Apple to use two chips across the iPhone and iPad lineups.

Given what others have said regarding volume considerations, maybe the iPad Air and Pro will have the A8X, but with the former featuring lower clock speeds for power consumption reasons. If there is no iPad Pro until the A9 then I don't expect an A8X at all.
 
Could one make the statement that, outside of resolution and memory available to the GPU, in several ways the X360's Xenos GPU still outperforms the iPhone 6 GPU? 240 GFLOPS vs 178 GFLOPS?
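For reference, a sketch of where those two numbers can come from; the Xenos figure is the commonly published one, while the A8 GPU clock is not published, so the ~700 MHz used here is an assumption that happens to reproduce ~178 GFLOPS:

```python
# Peak FP32 throughput: ALU lanes x flops per lane per clock x clock.
def gflops(lanes, flops_per_lane_per_clock, clock_ghz):
    return lanes * flops_per_lane_per_clock * clock_ghz

# Xenos: 48 unified shaders, each 5-wide (vec4 + scalar), MADD = 2 flops, 500 MHz
xenos = gflops(48 * 5, 2, 0.5)     # = 240 GFLOPS
# GX6450: 4 clusters x 32 FP32 lanes, FMA = 2 flops, assumed ~0.7 GHz clock
gx6450 = gflops(4 * 32, 2, 0.7)    # ~179 GFLOPS
print(f"Xenos ~{xenos:.0f} GFLOPS vs GX6450 ~{gx6450:.0f} GFLOPS")
```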
 