Apple A10X SoC

Discussion in 'Mobile Devices and SoCs' started by iMacmatician, Jun 5, 2017.

  1. iMacmatician

    Regular

    Joined:
    Jul 24, 2010
    Messages:
    759
    Likes Received:
    198
    tangey, Entropy, BRiT and 1 other person like this.
  2. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    970
    Likes Received:
    135
    Location:
    Luxembourg
    That does look like a Buttload(TM) of cache on the CPUs.
     
  3. tangey

    Veteran

    Joined:
    Jul 28, 2006
    Messages:
    1,406
    Likes Received:
    149
    Location:
    0x5FF6BC
    Hats off to kabraham2 for the best first post I've ever seen on here: being only 1.5 mm² out based solely on a crappy photo, and all the more impressive for going against the flow of assumptions.
     
    Entropy, Ryan Smith and iMacmatician like this.
  4. kabraham2

    Joined:
    Jun 25, 2017
    Messages:
    7
    Likes Received:
    11
    Thanks for the clarification, I should have assumed some kind of arbitration on a shared cache...
    But still, I guess what matters in this case is the (worst-case) distance between cache and arbitration, the arbitration delay, and the distance to the execution units. Partitioning the cache like that does at least reduce the first part, though I have no idea how relevant that is here.

    Which brings me to an interesting point. I attempted to identify some of the easy blocks and noticed the cache is actually offset toward the bottom two cores. You can clearly see the cores based on the L1 caches, and the top core begins at the point where the cache arbitration ends.
    Now I don't think the top core has a higher cache latency, and with a mobile chip on 10nm, wire resistance may matter more to Apple than wire delay, so this might be a case of Apple optimizing the common case, i.e. moving the bottom two cores closer to the arbitration unit.
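    The distance argument can be sketched with the distributed-RC (Elmore) approximation, where wire delay grows with the square of wire length. The per-micron resistance and capacitance below are illustrative placeholders, not real 10nm process data:

```python
# Rough sketch of why distance to the L2 arbitration unit matters: a
# distributed RC wire's Elmore delay grows with the square of its length.
# The per-micron R and C values are assumed for illustration only.

def wire_delay_ps(length_um, r_per_um=2.0, c_per_um=0.2):
    """Distributed-RC (Elmore) delay estimate in picoseconds.

    r_per_um: wire resistance in ohms per micron (assumed)
    c_per_um: wire capacitance in femtofarads per micron (assumed)
    """
    r_total = r_per_um * length_um          # ohms
    c_total = c_per_um * length_um * 1e-15  # farads
    return 0.5 * r_total * c_total * 1e12   # ps (0.5 factor: distributed RC)

near = wire_delay_ps(500)   # hypothetical core adjacent to the arbitration unit
far = wire_delay_ps(1000)   # hypothetical core twice as far away
print(f"near: {near:.1f} ps, far: {far:.1f} ps, ratio: {far/near:.1f}x")
# near: 50.0 ps, far: 200.0 ps, ratio: 4.0x
```

    Doubling the distance quadruples the unrepeated-wire delay, which is why moving the cores most likely to be active closer to the arbitration unit is a plausible optimization.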

    Screen Shot 2017-07-01 at 6.16.08 PM.png

    Some other interesting observations:
    - the 4M cache blocks are ~2.1 mm² each; both of them together are still smaller than the 4.8 mm² 4M cache on the A8
    - the cores are ~2.7 mm² each
    - I can't see any separate L1 for the little cores, although this might be due to image quality. I assume the dark areas near the cache are the little cores.
    - with all of these differences, and the Geekbench regression in some sub-tests, I am pretty sure this isn't a 10nm Hurricane.
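    Taking the quoted areas at face value (they are eyeballed from a die photo, and each A10X slice is assumed to be 4 MB as stated), the implied cache-density gain over the A8 works out roughly as:

```python
# Back-of-the-envelope cache-density comparison using the areas quoted above.
# All figures come from the post (estimated off a die photo), so treat the
# result as a rough illustration, not a measurement.

a8_cache_mb, a8_area_mm2 = 4, 4.8          # A8 L2: 4 MB in 4.8 mm^2
a10x_cache_mb, a10x_area_mm2 = 8, 2 * 2.1  # two 4 MB slices at ~2.1 mm^2 each

a8_density = a8_cache_mb / a8_area_mm2       # MB per mm^2
a10x_density = a10x_cache_mb / a10x_area_mm2

print(f"A8:   {a8_density:.2f} MB/mm^2")    # A8:   0.83 MB/mm^2
print(f"A10X: {a10x_density:.2f} MB/mm^2")  # A10X: 1.90 MB/mm^2
print(f"density gain: {a10x_density / a8_density:.1f}x")  # density gain: 2.3x
```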
     
    Lodix likes this.
  5. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    970
    Likes Received:
    135
    Location:
    Luxembourg
    The L1D for both the small and big cores sits next to the L2 slices, as that has to be their positioning next to the L2 arbitration. The cache you marked as L1 is the new Hurricane cache next to the front-end.

    The small cores are towards the outer edge of the L2, but the picture isn't high enough resolution to delineate them clearly.

     
    #45 Nebuchadnezzar, Jul 2, 2017
    Last edited by a moderator: Jul 2, 2017
    kabraham2 and BRiT like this.
  6. kabraham2

    Joined:
    Jun 25, 2017
    Messages:
    7
    Likes Received:
    11
    After comparing with your A10 depictions, I agree; in your image you can even see the L shape of the L1D. And comparing with the A10, I'll also agree on the little cores and the new cache.
    Would really like better images, though.

    Any idea what that 'new cache' is good for? Some kind of decoded µop cache? Maybe even caching dependencies for energy efficiency? It seems awfully large.
     
  7. wco81

    Legend

    Joined:
    Mar 20, 2004
    Messages:
    6,173
    Likes Received:
    257
    Location:
    West Coast
    I think I saw a Digitimes report that the A11 is in production on the TSMC 10nm process.

    Meanwhile, Intel apparently just announced its roadmap for 10 nm next year:

    https://arstechnica.com/gadgets/201...ip-plans-ice-lake-and-a-slow-10nm-transition/

    Assuming no other problems, that is, since 10nm was originally due in 2016.

    Has Intel's fab prowess been permanently surpassed? Maybe they didn't have the resources to keep their 14nm and 10nm on schedule, while the volumes and money from mobile have allowed TSMC (and presumably Samsung) to get to 10nm before Intel?
     
  8. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,491
    Likes Received:
    5,595
    Location:
    ಠ_ಠ
    My impression was that everyone lagged Intel by a node, and that the naming was all PR/marketing fluff.

    i.e.

    22nm Intel ~= "16nm/14nmFF" (20nm w/FF)
    14nm Intel ~= "10nmFF" GF/SS/TSMC
    10nm Intel ~= "7nmFF"
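    Part of why the names read as marketing fluff: if node names were literal feature sizes, each name step would imply a square-law area shrink, which the actual processes don't deliver. Nominal scaling for illustration:

```python
# If node names tracked real feature sizes, area would scale with the square
# of the name. The point above is that foundry names run roughly one node
# "ahead" of Intel's for comparable density, so take names as labels only.

def nominal_area_scale(old_nm, new_nm):
    """Ideal area shrink if node names were literal linear feature sizes."""
    return (old_nm / new_nm) ** 2

print(f"14nm -> 10nm nominal shrink: {nominal_area_scale(14, 10):.2f}x")
# 14nm -> 10nm nominal shrink: 1.96x
print(f"14nm -> 7nm  nominal shrink: {nominal_area_scale(14, 7):.2f}x")
# 14nm -> 7nm  nominal shrink: 4.00x
```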
     
  9. wco81

    Legend

    Joined:
    Mar 20, 2004
    Messages:
    6,173
    Likes Received:
    257
    Location:
    West Coast
    Hmm, I thought the part of Samsung that fabs chips was already making more money than Intel.

    So if they haven't passed Intel yet in process, they will soon enough?
     
  10. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,735
    Likes Received:
    100
    Location:
    Taiwan
    A large part of Samsung's fab profits are from memory chips (DRAM and NAND) though. They are very different from logic chips.
    However, it probably matters much less now, as the advantages from a node shrink are not as much as they used to be (for logic chips, memory chips still love density). Everyone's hitting a brick wall and it's just a matter of who hits the wall sooner.
     
  11. rekator

    Regular

    Joined:
    Dec 21, 2006
    Messages:
    779
    Likes Received:
    20
    Location:
    France
    The more important question is: "Who hits the wall first?" :lol2::wink4:
     
  12. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,988
    Likes Received:
    878
    Well, yes and no.
    As issues with current approaches mount, focus shifts to packaging, alternative materials, the tool chains... There are ways to squeeze more into the envelope even if we can't shrink what we stuff into it much more, as well as ways to make the process more accessible and thus migrate more of the industry to finer lithography, even when the bleeding edge can't move that fast.
    But yes, I'll very probably live to see the practical end of geometrical scaling. It's been a ride! :)
     
  13. wco81

    Legend

    Joined:
    Mar 20, 2004
    Messages:
    6,173
    Likes Received:
    257
    Location:
    West Coast
    So one of the rumors about the event tomorrow is that the new Apple TV which will support 4K and HDR will be powered by an A10X.

    That would be quite a jump from the A8 in the Apple TV 4th generation.

    Also, the price point starts at $149.

    Would they really put the same SoC as the one in their current iPad Pro, which starts at $649? Maybe they could raise the price of the Apple TV, but it's already priced high relative to competing streaming set-top boxes.
     
  14. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,170
    Location:
    La-la land
    Sure, why not. No fancypants wide color gamut LCD multitouch panel, no battery, quad speaker system and so on. More forgiving form factor too, probably making it easier and cheaper to build.

    And then there's the console factor as well. You want your box in as many homes as possible to maximize software (and movie) sales. And streaming subscription fee cuts and all that other nickel-and-dime bullshit that Apple and other box manufacturers are up to these days. That means an attractive price point. Besides, at 150 bux you can bet they're taking home big margins anyway.
     
  15. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,491
    Likes Received:
    5,595
    Location:
    ಠ_ಠ
    <100mm^2 chip should help mitigate costs per chip, even at 10nmFF. Potentially, "worse" TDP chips may still be ok for the Apple TV bin since cooling can be less of an issue.

    4GB LPDDR4 may not be too expensive?
     
  16. Pressure

    Veteran Regular

    Joined:
    Mar 30, 2004
    Messages:
    1,298
    Likes Received:
    232
    I see no reason why they wouldn't stay with 3GB LPDDR3.
     
  17. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,170
    Location:
    La-la land
    There's 4GB in the iPad Pros using the same processor. Cutting to 3GB would increase hardware fragmentation. Besides, with the Apple TV's partial focus on gaming, the more memory you offer the better. They've had 3GB for a while now; tech tends to move on, not stay the same. :p

    But, then again, Apple is Apple. They'll do whatever the hell they want. Maybe you're right.
     
    BRiT and PawKRK like this.
  18. Pressure

    Veteran Regular

    Joined:
    Mar 30, 2004
    Messages:
    1,298
    Likes Received:
    232
    The iPad Pro is also marketed for productivity, not just media consumption, whereas the Apple TV will only be about the latter. Sure, I would like 4GB of LPDDR4 RAM, but it's always better to expect less (like with Nintendo, but that is a whole other hornet's nest). Besides, the Apple TV has always gotten a last-generation SoC.

    They could be moving it away from hobby status though... we will find out soon enough :)
     
    Grall likes this.
  19. davygee

    Newcomer

    Joined:
    Nov 7, 2008
    Messages:
    115
    Likes Received:
    29
    I know it was some time ago... actually 2 years ago that the last Apple TV was released... and don't forget that the 4th-gen Apple TV was announced with an A8 the same year the A9 was announced. I remember being surprised that they put what was effectively an iPhone 6 chip in the Apple TV that year.

    2 years on... yes, the A10X was only just announced a few months ago, but with the A11 in the new iPhone, I see no problem or surprise with the A10X being in the new Apple TV.

    Yes, it's very powerful for what is effectively a set-top box... but it should keep it current for quite some time... and hopefully they start getting some proper games on it now.
     
    Grall likes this.
  20. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,988
    Likes Received:
    878
    I'll submit that if they do elect to go with the A10X as opposed to the iPhone chips, then it is probably due to the additional graphics performance.
    And if so, it wouldn't surprise me if gaming were pointed out during the presentation. Apple hamstrung the gaming market by initially insisting that all games be playable with the stock TV remote. That limitation is no longer there, so they may make an effort to at least call attention to the gaming capabilities. The A10X has very respectable graphics capabilities; downports from the XB1 or upports from the Switch would be well within the realm of the possible, although publishers already active within the iOS ecosystem are probably more likely to produce apps for the new Apple TV.
    If it indeed has an A10X. We'll know in five hours.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.