Intel Skylake Platform

Discussion in 'PC Industry' started by DSC, Jul 4, 2013.

  1. Kaarlisk

    Regular Newcomer Subscriber

    Joined:
    Mar 22, 2010
    Messages:
    293
    Likes Received:
    49
    Cost per mm² is not always proportional to cost per transistor when comparing across processes.
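    A toy illustration of that point (all numbers here are hypothetical, not real foundry pricing): a newer node can cost more per mm² and still be cheaper per transistor, if the density gain outpaces the price increase.

    ```c
    #include <stdio.h>

    int main(void) {
        /* Hypothetical figures for illustration only -- not real pricing data. */
        double old_cost_per_mm2 = 10.0;   /* $ per mm^2, older node */
        double new_cost_per_mm2 = 15.0;   /* $ per mm^2, newer node (50% pricier) */
        double old_density = 1.0e7;       /* transistors per mm^2, older node */
        double new_density = 2.2e7;       /* transistors per mm^2, newer node */

        double old_cost_per_xtor = old_cost_per_mm2 / old_density;
        double new_cost_per_xtor = new_cost_per_mm2 / new_density;

        printf("old node: %.3e $/transistor\n", old_cost_per_xtor);
        printf("new node: %.3e $/transistor\n", new_cost_per_xtor);
        /* Despite the higher $/mm^2, the newer node is cheaper per
         * transistor here -- the two metrics diverge across processes. */
        return 0;
    }
    ```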
     
  2. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    3,845
    Likes Received:
    329
    Location:
    35.1415,-90.056
    Realistically, the cost-per-mm² measurement ignores all the complexity involved in standing up the lithography process, masking the chip, laying out and designing the chip, and all the qualification steps between paper and product. Final pricing is based on actual die size about as much as it is on the cost of the PCB substrate underneath. Said another way, the final price of each device is multiple orders of magnitude higher than the component cost of the part itself.

    I assumed Vox's statement was more tongue-in-cheek, as a polite and well-humored "haha, the profit on this chip has to be good for you, Intel." The actual measurement is effectively meaningless as soon as you allow the conversation to stretch to "upper echelon" parts. I think he knows that, too.
     
    BRiT and Kaarlisk like this.
  3. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    All valid points.
    But you have to wonder what the driving force is these days to move to smaller processes. Going from 32 nm Sandy Bridge to 14 nm Skylake didn't bring much in performance or power consumption for the mainstream performance desktop processor; the performance increase is mostly due to architectural improvements. (The GPU I don't care about.)
     
  4. Kaarlisk

    Regular Newcomer Subscriber

    Joined:
    Mar 22, 2010
    Messages:
    293
    Likes Received:
    49
    That's the problem. Your requirements do not align with the requirements of the market.
    Going from 32nm Sandy to 22nm Haswell has given me a laptop with a usable GPU and a 50% faster CPU within the same thermal envelope (with Sandy Bridge, one wanted a 35W CPU; with Haswell+, one only wants the 28W TDP CPU if one cares about GPU performance).
     
  5. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    708
    Likes Received:
    279
    Didn't I write "in the mainstream performance desktop processor"?
     
  6. Rodéric

    Rodéric a.k.a. Ingenu
    Moderator Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,976
    Likes Received:
    837
    Location:
    Planet Earth.
    Has anyone mentioned it's faster than Haswell/Haswell-E? It also seems to be lower power, which is nice...
    And it has AVX2, although using SSE/AVX properly isn't as easy as one would think... :(
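    As a sketch of why hand-vectorizing isn't trivial: even a plain SSE2 array sum (SSE2 is baseline on x86-64, so no special compiler flags are assumed here) already forces you to pick an unaligned-load strategy, write a multi-step horizontal reduction, and handle a scalar tail for lengths that aren't a multiple of the vector width. AVX/AVX2 adds its own wrinkles (wider lanes, lane-crossing shuffles) on top of this.

    ```c
    #include <emmintrin.h>  /* SSE2 intrinsics, baseline on x86-64 */
    #include <stdio.h>

    /* Sum a float array with SSE2: 4 lanes at a time, then reduce. */
    static float sse2_sum(const float *a, int n) {
        __m128 acc = _mm_setzero_ps();
        int i = 0;
        for (; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(a + i)); /* unaligned load: safe, may be slower */

        /* Horizontal reduction: no single SSE2 instruction does this,
         * so shuffle-and-add twice to fold 4 lanes into lane 0. */
        __m128 shuf = _mm_shuffle_ps(acc, acc, _MM_SHUFFLE(1, 0, 3, 2));
        acc = _mm_add_ps(acc, shuf);
        shuf = _mm_shuffle_ps(acc, acc, _MM_SHUFFLE(2, 3, 0, 1));
        acc = _mm_add_ps(acc, shuf);
        float sum = _mm_cvtss_f32(acc);

        for (; i < n; ++i)  /* scalar tail for n not divisible by 4 */
            sum += a[i];
        return sum;
    }

    int main(void) {
        float a[7] = {1, 2, 3, 4, 5, 6, 7};
        printf("%.1f\n", sse2_sum(a, 7));  /* 28.0 */
        return 0;
    }
    ```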
     
  7. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,478
    Likes Received:
    383
    Location:
    Varna, Bulgaria
    Skylake seems to have more robust sustained loads from the L2, which helps the two FMA pipes. Those, in turn, have reduced instruction latency as well (and the FDIV unit's throughput is again doubled). The L3 improvements are more nebulous -- most likely improved prefetching.
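    FMA latency matters in practice because a single dependent accumulator chain can only issue one FMA per latency period, leaving both pipes mostly idle. The standard workaround (a generic sketch, not anything Skylake-specific) is to unroll with several independent accumulators so back-to-back multiply-adds can overlap in flight:

    ```c
    #include <stdio.h>

    /* Dot product with 4 independent accumulators. Each sN only depends
     * on its own previous value, so the four multiply-add chains can
     * execute in parallel across the FMA pipes instead of serializing.
     * (With -O2 -ffast-math a compiler may fuse these into real FMA
     * instructions; the dependency-breaking idea is the same either way.) */
    static double dot4(const double *a, const double *b, int n) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i + 0] * b[i + 0];
            s1 += a[i + 1] * b[i + 1];
            s2 += a[i + 2] * b[i + 2];
            s3 += a[i + 3] * b[i + 3];
        }
        double s = (s0 + s1) + (s2 + s3);
        for (; i < n; ++i)  /* scalar tail */
            s += a[i] * b[i];
        return s;
    }

    int main(void) {
        double a[6] = {1, 2, 3, 4, 5, 6};
        double b[6] = {6, 5, 4, 3, 2, 1};
        printf("%.1f\n", dot4(a, b, 6));  /* 6+10+12+12+10+6 = 56.0 */
        return 0;
    }
    ```

    The shorter the FMA latency, the fewer accumulators are needed to saturate the pipes, which is one reason the latency reduction is worth having even when peak throughput is unchanged.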
     
    Grall likes this.
  8. Raqia

    Regular

    Joined:
    Oct 31, 2003
    Messages:
    508
    Likes Received:
    18
  9. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,478
    Likes Received:
    383
    Location:
    Varna, Bulgaria
    Very interesting layout for the LLC and the CPU cores. A bit like IBM's POWER7 scattered array design.
     
  10. Raqia

    Regular

    Joined:
    Oct 31, 2003
    Messages:
    508
    Likes Received:
    18
    The shared L3 seems to take up much less die space relatively speaking as well. Looking forward to more details at the IDF.
     
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,095
    Likes Received:
    2,814
    Location:
    Well within 3d
    Is the L3 size estimate including what appears to be storage arrays flanking each core?
    The area labeled L3 looks consistent with the ring stop and interface logic, but does not resemble the storage component.
     
    Kaarlisk likes this.
  12. Raqia

    Regular

    Joined:
    Oct 31, 2003
    Messages:
    508
    Likes Received:
    18
    Fair point; it's interesting that they labeled that "CPU"! The SRAM cell blocks look much more entangled with the rest of the labeled CPU part than in prior dies that used the ring bus. (Shorter wires, better latency?) Prior cores seem to have a clean separation, with all the cells arranged in a regular grid. From homologous die shots of other Intel CPUs (page 3, top-left core), I assume the top-left rectangle is the FPU area and the square cells near the middle left are the L2.
     
  13. Kaarlisk

    Regular Newcomer Subscriber

    Joined:
    Mar 22, 2010
    Messages:
    293
    Likes Received:
    49
    One would expect that, as L3 cache size did not change.
     
  14. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,478
    Likes Received:
    383
    Location:
    Varna, Bulgaria
    The L2 is located in the lower left corner (referencing the top row of cores) and definitely sports an altered layout of the SRAM banks compared to the previous generations.
    The "square cells" array is in fact the L1 d-cache.
     
    Grall likes this.
  15. Raqia

    Regular

    Joined:
    Oct 31, 2003
    Messages:
    508
    Likes Received:
    18
    So do you think that in the bottom region (page 3, top-left core) the two yellowish SRAM banks are L2 and the bluish ones are L3? I had thought all the cells there would be L3, but this would imply more mingling of the banks. (It would make sense to reduce latency for wires going back and forth between L3 and L2.)
     
  16. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,583
    Likes Received:
    703
    Location:
    Guess...
  17. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,095
    Likes Received:
    2,814
    Location:
    Well within 3d
    The wires would be going between the core and the interface/ring stop that is part of the green box labelled as the L3. That in turn hooks into the local L3 slice and the ring bus. I don't think the L3 is in the blue region.
     
  18. Raqia

    Regular

    Joined:
    Oct 31, 2003
    Messages:
    508
    Likes Received:
    18
    Gotcha, so the bright borders of the cores are where the L3 SRAM cells are now located. Very interesting.
     
  19. Raqia

    Regular

    Joined:
    Oct 31, 2003
    Messages:
    508
    Likes Received:
    18
  20. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,095
    Likes Received:
    2,814
    Location:
    Well within 3d
    Another possible consideration for the changed CPU arrangement is packing efficiency. The small dark blue rectangles at the bottom of the die, past the edges of the arrays, could be dead space.
    Keeping to the original row of CPUs with the height of the GPU and System Agent would add length to the die and leave more silicon below unused.

    If the GPU and System Agent were adjusted so that they could match the height of the CPU/cache section in order to eliminate the dead space, the die would be even more rectangular, which might have implications for how flexibly the GPU could scale if it stretched even further, and for the blank area at the top of the die past the end of the memory interface.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.