Apple A10 SoC

Discussion in 'Mobile Devices and SoCs' started by iMacmatician, Sep 7, 2016.

  1. iMacmatician

    Regular

    Joined:
    Jul 24, 2010
    Messages:
    757
    Likes Received:
    195
    Apple announced the A10 Fusion at its special event today.

    [Images: A10 Fusion announcement slides, not preserved]

    (Pictures from AnandTech.) I'd like to know whether the little cores are Apple-designed or Cortex-A series. (I assume the big cores are an evolution of Twister—not that they aren't also interesting.) The 40% faster performance of the big cores aligns with some clock-for-clock improvements and the rumor of a 2.4-2.45 GHz clock speed.

    It was mentioned during the event that the GPU has 6 cores.
     
    #1 iMacmatician, Sep 7, 2016
    Last edited: Sep 7, 2016
    London-boy and BRiT like this.
  2. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    7,982
    Likes Received:
    2,427
    Location:
    Well within 3d
    Curious about the hardware controller, and what its function set is.
     
  3. Pressure

    Veteran Regular

    Joined:
    Mar 30, 2004
    Messages:
    1,270
    Likes Received:
    220
    You can be certain they are not run-of-the-mill cores.
     
  4. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    949
    Likes Received:
    98
    Location:
    Luxembourg
    Reposting what I said on AT forums:

    So my bet: because they likely stayed on 16FF+ again this generation, achieving the higher clocks required a physical implementation with faster, higher-leakage transistors. That hurts power at low frequency, which opens up the need for low-leakage, low-power cores for low-performance and idle scenarios.
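    The leakage trade-off above can be sketched with a toy CMOS power model. All constants below are illustrative assumptions, not measured A10 figures: dynamic power scales roughly with C·V²·f, while leakage is frequency-independent, so a fast, leaky cell library loses disproportionately at low clocks.

```python
# Toy power model: P = alpha * C * V^2 * f  +  V * I_leak
# All numbers are invented for illustration, not A10 measurements.

def core_power_w(f_ghz, v, c_nf=1.0, i_leak_a=0.05, alpha=0.2):
    """Return (dynamic_W, leakage_W) for one operating point."""
    dynamic_w = alpha * c_nf * v ** 2 * f_ghz  # nF * V^2 * GHz -> W
    leak_w = v * i_leak_a                      # static, independent of f
    return dynamic_w, leak_w

# Two hypothetical cell libraries for the same core design:
fast_leaky = dict(i_leak_a=0.20)  # high-speed transistors, high leakage
low_power  = dict(i_leak_a=0.02)  # low-leakage transistors, slower

for f, v in [(0.4, 0.7), (2.4, 1.1)]:  # low-perf vs. peak operating points
    d_f, l_f = core_power_w(f, v, **fast_leaky)
    d_l, l_l = core_power_w(f, v, **low_power)
    print(f"{f} GHz: fast-lib leakage share {l_f/(d_f+l_f):.0%}, "
          f"LP-lib leakage share {l_l/(d_l+l_l):.0%}")
```

    With these made-up numbers the fast library's power is mostly leakage at the low operating point but mostly dynamic at peak clocks, which is the argument for adding separate low-leakage little cores.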

    Because they mention the controller being a hardware governor, it should mean that the kernel/OS only ever sees two cores, and the switching between big and little pairs is transparent.
     
  5. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    425
    Location:
    Cleveland, OH
    It's interesting that not very long ago many people were saying that big.LITTLE was a failed concept because no one other than ARM was designing CPU cores for phones this way. Since then Samsung and Qualcomm both started doing it with their custom cores, Apple now appears to be doing it, and Intel has given up on getting into phones altogether. I think we can safely say that it wasn't a dumb idea.
     
    ToTTenTranz likes this.
  6. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    7,982
    Likes Received:
    2,427
    Location:
    Well within 3d
    To confirm in case I'm reading too much into the marketing blurb, is the Apple-designed performance controller a piece of actual hardware?

    This is potentially an evolution beyond ARM's existing concept, which has scheduling handled at the software level.

    In theory, a hardware control block could read DVFS activity counters and instruction-retirement counts to get a very rapidly updated picture of power consumed vs. progress made, along with reference values for long-latency events or the fraction of a time slice in which a thread is hitting resources hard.
    Context like latency hints, or whether the FPU is needed, might also feed into it.
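    A policy like that could be sketched as follows. This is purely speculative: the counter names, thresholds, and heuristics are invented for illustration and are not based on anything Apple has disclosed about the A10's controller.

```python
# Speculative sketch of a big/little "performance controller" policy.
# Counters, thresholds, and heuristics are all invented for illustration.
from dataclasses import dataclass

@dataclass
class CounterSample:
    cycles: int          # cycles elapsed in the sampling window
    insts_retired: int   # instructions retired in the window
    stall_cycles: int    # cycles stalled on long-latency events (e.g. DRAM)

def pick_core(sample: CounterSample, on_big: bool) -> str:
    """Choose 'big' or 'little' for the next window from recent counters."""
    ipc = sample.insts_retired / max(sample.cycles, 1)
    stall_ratio = sample.stall_cycles / max(sample.cycles, 1)
    # Memory-bound or low-IPC work gains little from the big core.
    if stall_ratio > 0.6 or ipc < 0.5:
        return "little"
    # Sustained high IPC suggests the thread can use the big core's width.
    if ipc > 1.5:
        return "big"
    return "big" if on_big else "little"  # hysteresis: stay where we are

print(pick_core(CounterSample(1_000_000, 200_000, 700_000), on_big=True))    # little
print(pick_core(CounterSample(1_000_000, 1_800_000, 50_000), on_big=False))  # big
```

    The point of doing this in hardware rather than in the OS scheduler would be the update rate: a block like this could re-evaluate every few microseconds instead of every scheduler tick.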
     
    Entropy and Grall like this.
  7. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,462
    Likes Received:
    724
    They had to follow suit. Big, hot cores win performance benchmarks; low-power cores win battery-life benchmarks.

    Cheers
     
  8. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,366
    Likes Received:
    218
    Location:
    NY
    For the record, I don't recall calling big.LITTLE "dumb". :razz: Just not convinced it wasn't an effort by ARM to sell more cores! :-D Kidding aside, it's still not clear to me that big.LITTLE is the optimal solution (ignoring marketing benefits).

    I will say, though, that Qualcomm doesn't count. @Rys can correct me, but I believe their "big.LITTLE" cores are all exactly the same. I don't interpret that as a ringing endorsement of big.LITTLE. And let's be honest, Intel leaving had nothing to do with a lack of big.LITTLE designs. But you're right, Apple did switch! Hopefully for technical reasons! :-D
     
  9. AlBran

    AlBran Just Monika
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    19,787
    Likes Received:
    4,723
    Location:
    ಠ_ಠ
    Shrunk Cyclone/Swift? :p
     
    #9 AlBran, Sep 8, 2016
    Last edited: Sep 8, 2016
  10. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    7,982
    Likes Received:
    2,427
    Location:
    Well within 3d
    Maybe?
    There are lower-cost ways of accomplishing that goal than what Apple has done.
    A software driver or explicit .exe detection has been an option that has been utilized in this space to get those benchmark wins. It would seem as if Apple wants something beyond that, and that the effort provides them some avenue for value-add or product differentiation.
    Given the complexity, I think this decision would have been made earlier, maybe before knowing Qualcomm's and Samsung's latest configurations.

    Qualcomm's Kryo core differentiation is modest; Samsung pairs a bigger custom core with standard little ones, which Apple may or may not have done.
    We might need to wait for Samsung's next custom core effort. If the narrative is that Mongoose is their equivalent of Swift, a more distinctive core could result and it's not clear if Samsung would opt for the same little cores.

    Has either added dedicated hardware for the transfer, or as a governor?
    Depending on which elements that controller plugs into and what it's responsible for, it may not be Samsung's or Qualcomm's playbook that Apple was cribbing notes from.
     
  11. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,302
    Likes Received:
    3,958
    How plausible is it that Apple is just using a pair of A35s or A53s as the LITTLE cores but simply won't ever mention it?
     
    ImSpartacus likes this.
  12. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,894
    Likes Received:
    745
    There is more than a little uncomfortable truth in this.
    For whatever reason, Apple seems to have wanted to increase clocks substantially. Switching to InFO packaging is probably part of that, but switching to higher-speed, higher-leakage circuit options for the main cores is probably another contributor. And if so, power draw in the lower power states gets worse, all other things being equal, which strengthens the case for big.LITTLE.
    I would guess software only ever sees two cores, but it will be interesting to get a little more flesh on the bones here. I'm both impressed and surprised by the effort, though, given that this SoC will be superseded by a 10nm product within a year.
     
  13. Wynix

    Veteran Regular

    Joined:
    Feb 23, 2013
    Messages:
    1,041
    Likes Received:
    57
    I think it's unlikely that they are using off-the-shelf parts, though they could be very similar.
     
  14. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    7,982
    Likes Received:
    2,427
    Location:
    Well within 3d
    Vendor openness is spotty at best, and Apple sees those sorts of details as a point of product differentiation.
    I'd like to see more details.
    In theory, a custom little core would be more amenable to any custom hooks Apple might want, possibly for the hardware controller to use, or whatever internal tweaks Apple may have and won't talk about.

    They have a good architectural team. If there were to be a benefit to doing so Apple would be positioned well to achieve it.

    Another set of benefits I can think of: a simpler core could be used as a pathfinder for more experimental changes without risking problems in the big cores, and the two cores could be optimized with the knowledge that neither needs the sort of dynamic range that Intel's big cores do.
    One possible difference is that there's still demand for standard little cores as the primary cores in high-core-count cheap SoCs, so ARM's designs make some provision for that scenario. Apple could in theory skip that and optimize further.
     
  15. tangey

    Veteran

    Joined:
    Jul 28, 2006
    Messages:
    1,403
    Likes Received:
    148
    Location:
    0x5FF6BC
    I didn't notice/ don't remember them talking about a new M co-processor. Did I just miss that bit ?

    Scratch that, I see it mentioned on the iphone 7 specs.
     
  16. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    949
    Likes Received:
    98
    Location:
    Luxembourg
    Impossible. The small cores are said to sit in the same L2 cache hierarchy as the big cores, which makes it micro-architecturally impossible for them to be anything from ARM.

    Plus, they said the small cores run at 1/5th the power of the big cores. If the big cores are still in the same power envelope as the A9's, that puts the small cores well above the power envelopes of ARM's small cores, and I suspect they're also much better performing.
     
    ToTTenTranz likes this.
  17. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,202
    Likes Received:
    304
    What if the small cores are just the dual-core design from the S2 in the Apple Watch? Do we know if watchOS 3 is 64-bit? Or would Apple possibly run 64-bit cores in a compatibility mode for watchOS?
     
  18. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    425
    Location:
    Cleveland, OH
    Hopefully Anandtech or someone else will do another dive on the S2 to get some idea of what the CPU characteristics are. They did one on the S1 and found that it had timings in accordance with Cortex-A7's, 32KB L1 dcache/256KB L2 cache, and a 520MHz clock speed. So it probably was a pretty standard Cortex-A7 implementation. There have been some rumors that S2 is using Cortex-A32, which is pretty suitable as ARM claims it's more efficient than Cortex-A7. These CPUs represent a pretty substantially different perf vs. efficiency design point than any of the custom CPUs Apple has done, even their first "Swift" microarchitecture. So it'd seem to be in their best interests to not have to divert resources away from their main uarch design to support a very low power core, but who knows how all of the trade-offs play out.

    It is possible to configure some ARM cores without any L2 cache at all, which is almost certainly the case with A32. You could technically pair such a configuration with some other external L2, but it'd be a pretty inefficient design.
     
  19. tangey

    Veteran

    Joined:
    Jul 28, 2006
    Messages:
    1,403
    Likes Received:
    148
    Location:
    0x5FF6BC
    The GFXBench comparison with the A9 in the 6s is very interesting.
    https://gfxbench.com/compare.jsp?be...S&api2=metal&hwtype2=GPU&hwname2=Apple+A9+GPU

    Manhattan 3.1 offscreen: A10=1631, A9=1521 (+7%)
    Manhattan offscreen: A10=3036, A9=2184 (+39%)
    T-Rex offscreen: A10=4973, A9=4139 (+18%)

    Texturing offscreen: A10=8473, A9=5972 (+42%)
    ALU offscreen: A10=5406, A9=3823 (+52%)

    Long-term performance, Manhattan: A10=3553, A9=1804 (+97%)

    Big variations in improvement. Manhattan 3.1 barely shows any improvement at all, and T-Rex less than 20%, but ALU and texturing are close to the 50% stated in Apple's presentation.
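    The per-test gains can be recomputed from the raw scores as a quick sanity check (note that a couple of the quoted percentages come out slightly differently when recomputed this way):

```python
# Recompute the A10-vs-A9 gains from the raw GFXBench scores quoted above.
scores = {
    "Manhattan 3.1 offscreen": (1631, 1521),
    "Manhattan offscreen":     (3036, 2184),
    "T-Rex offscreen":         (4973, 4139),
    "Texturing offscreen":     (8473, 5972),
    "ALU offscreen":           (5406, 3823),
    "Long-term Manhattan":     (3553, 1804),
}

def pct_gain(a10: float, a9: float) -> int:
    """Percentage improvement of the A10 score over the A9 score."""
    return round((a10 / a9 - 1) * 100)

for name, (a10, a9) in scores.items():
    print(f"{name}: +{pct_gain(a10, a9)}%")
```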
     
  20. TomK

    Newcomer

    Joined:
    Oct 15, 2014
    Messages:
    9
    Likes Received:
    0
    It could be a result of a memory bandwidth limit.
     