Intel Broadwell (Gen8)

Discussion in 'Architecture and Products' started by Paran, Aug 16, 2014.

  1. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    Hmm. Are we actually excusing Intel on the basis of "lack of resources"? :eek:
     
  2. Kaarlisk

    Regular Subscriber

    Joined:
    Mar 22, 2010
    Messages:
    293
    Likes Received:
    49
    Not Intel. Intel's driver development team.
    Intel simply didn't have its GPUs as a priority.
     
  3. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    I did notice the context of the initial statement.
    I simply expressed my amazement; indeed, the problem would be one of priority (i.e. management/strategy/vision) rather than resources.
     
  4. Kaarlisk

    Regular Subscriber

    Joined:
    Mar 22, 2010
    Messages:
    293
    Likes Received:
    49
    Just had a Broadwell Pentium (3805U) laptop in my hands.
    I was surprised to find that it only supports OpenCL 1.2, while the i3 variants support 2.0.
    I wonder whether that's due to segmentation, or because GT1 is on a different die, or something like that.
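
    If anyone wants to check this on their own machine, here's a minimal sketch that just asks each GPU device which version string it reports (assuming an OpenCL runtime/ICD is installed; link with -lOpenCL; error checking omitted for brevity):

    ```c
    /* Print the OpenCL version string of every GPU device on every platform. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);

        for (cl_uint p = 0; p < nplat; ++p) {
            cl_device_id devices[8];
            cl_uint ndev = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &ndev);

            for (cl_uint d = 0; d < ndev; ++d) {
                char name[256], version[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof version, version, NULL);
                /* e.g. "OpenCL 1.2 ..." on the Pentium, "OpenCL 2.0 ..." on the i3 parts */
                printf("%s: %s\n", name, version);
            }
        }
        return 0;
    }
    ```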
     
  5. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,678
  6. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    9,044
    Likes Received:
    1,116
    Location:
    WI, USA
    They really pushed forward with GPU performance and the eDRAM does some nice things for CPU performance as well. It's going to be interesting to see what this does to NV in the notebook market.
     
  7. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,678
    I'm really looking forward to Intel Skylake. GT4e should have 72 EUs, on top of some significant architectural changes. This Iris Pro 6200 looks pretty good though. That's a pretty power-friendly integrated GPU with decent performance. Should look even better with DX12 on Windows 10.
     
  8. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
  9. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    Yeah, I don't understand why Apple won't put eDRAM chips in their 13" line; the screen resolution is only marginally lower than the 15" MacBook's, so it could really use the extra graphics performance. Besides, CPU performance and battery life would both receive a boost from it.
     
  10. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,678
    I'm interested in a mini PC with a quad-core i5 and an Iris Pro GPU in the Skylake timeframe. Ideally, a Surface Pro 4 would be great, but I don't think Iris Pro will ever make its way into that device because of power.
     
  11. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Not sure if it's a problem with our setup and/or our individual CPU, but with the Gaming6 board and the UEFI from June 1st, the GT cores get decidedly more power than would fit within the 65-watt thermal envelope of the whole processor package. Initial Iris Pro Graphics 6200 test results might therefore come out higher than normal.
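
    If anyone wants to sanity-check actual package power on their own board, here's a minimal sketch, assuming Linux with the intel_rapl powercap driver and that intel-rapl:0 is the package domain (the domain index can differ per system). It samples the package energy counter over one second; the counter is in microjoules and wraps eventually, so treat it as a rough probe only:

    ```c
    /* Estimate average package power from the RAPL energy counter. */
    #include <stdio.h>
    #include <unistd.h>

    static long long read_energy_uj(void) {
        /* Assumption: intel-rapl:0 is the package domain on this system. */
        FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
        long long uj = -1;
        if (f) { fscanf(f, "%lld", &uj); fclose(f); }
        return uj;
    }

    int main(void) {
        long long e0 = read_energy_uj();
        sleep(1);                        /* one-second sampling window */
        long long e1 = read_energy_uj();
        /* microjoules per second == microwatts; divide by 1e6 for watts */
        printf("Package power: %.1f W\n", (e1 - e0) / 1e6);
        return 0;
    }
    ```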
     
  12. Rurouni

    Veteran

    Joined:
    Sep 30, 2008
    Messages:
    1,101
    Likes Received:
    432
    Btw, is there a way to tell how much of the Iris Pro advantage comes from the eDRAM versus the improved GPU itself?
     
  13. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    The Z97A Gaming6 has a UEFI option to disable the eDRAM multiplier, but the eDRAM continues to function anyway. So no, apparently not - at least not until Intel releases a SKU with Iris Graphics 6100 and a 65 watt TDP.
     
  14. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    Faster GPU + more bandwidth (eDRAM) is a good combo. Iris 6100 (without eDRAM) shows much smaller gains in benchmarks (this was expected, since the Haswell GPU was already bandwidth starved and the memory speed remains the same).
     
  15. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Problem is, even with 14 nm and the apparent power savings, Iris Pro Graphics seems to be power limited in 65 watt parts.
     
  16. Kaarlisk

    Regular Subscriber

    Joined:
    Mar 22, 2010
    Messages:
    293
    Likes Received:
    49
    Was it possible to raise the TDP limit and test just how much of a difference it makes?
     
  17. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Yeah, you can do that in the UEFI. I haven't tested whether there's an upper limit, though.
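
    For reference, on Linux the long-term package limit (PL1), which should correspond to what the UEFI programs, is also visible from the OS through the powercap interface. A minimal sketch, assuming the intel_rapl driver is loaded and intel-rapl:0 is the package domain:

    ```c
    /* Read the PL1 (long-term) package power limit via powercap sysfs. */
    #include <stdio.h>

    int main(void) {
        const char *path =
            "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw";
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }

        long limit_uw = 0;               /* limit is reported in microwatts */
        fscanf(f, "%ld", &limit_uw);
        fclose(f);

        printf("PL1 package power limit: %.1f W\n", limit_uw / 1e6);
        return 0;
    }
    ```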
     
  18. Kaarlisk

    Regular Subscriber

    Joined:
    Mar 22, 2010
    Messages:
    293
    Likes Received:
    49
    It seems I did not express myself clearly.
    What I mean is:
    Did you test Iris Pro at an increased TDP?
     
  19. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Ah, I see. No, not comprehensively yet. I've seen an unconfirmed (read: not thoroughly reproduced) gain of about 14 percent in Luxmark 2.0 Sala from a (way) higher TDP limit.
    But of course it strongly profits from higher memory clocks as well as from a higher power budget.
     
    Kaarlisk likes this.
  20. Andrew Lauritzen

    Andrew Lauritzen Moderator
    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,629
    Likes Received:
    1,227
    Location:
    British Columbia, Canada
    Are you seeing it drop GPU clocks when running a workload with the CPU ~idle? I've actually seen that far less on BDW than HSW (HSW's turbo clocks were a bit higher to start with). Obviously the CPU can easily eat the entire TDP and more if asked to, but in GPU-only workloads my BDW GT3e 65W tends to stay pegged to max turbo clock much more than my HSW machines did.
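
    A minimal way to watch for that on Linux (assumption: i915 driver and card0 is the iGPU; gt_act_freq_mhz is the measured clock, gt_cur_freq_mhz the requested one) is to poll the GT frequency nodes while the workload runs:

    ```c
    /* Poll actual vs. max GT frequency once per second; stop with Ctrl-C. */
    #include <stdio.h>
    #include <unistd.h>

    static long read_mhz(const char *path) {
        FILE *f = fopen(path, "r");
        long mhz = -1;
        if (f) { fscanf(f, "%ld", &mhz); fclose(f); }
        return mhz;
    }

    int main(void) {
        for (;;) {
            long act = read_mhz("/sys/class/drm/card0/gt_act_freq_mhz");
            long max = read_mhz("/sys/class/drm/card0/gt_max_freq_mhz");
            printf("GT clock: %ld / %ld MHz\n", act, max);
            sleep(1);
        }
    }
    ```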

    Haven't played with configurable TDP at all though so curious what you're seeing.

    In terms of turning off the eLLC, no, I don't believe there is any consumer way to do that. Intel has published some numbers on how much it helps, though, and the effect is obviously quite large. If you read the Ars review of the i7 BDW NUC, you can see that even the Iris 6100 is memory bandwidth limited - it scales almost perfectly across the board from DDR3-1600 to DDR3-1866. That's obviously why we have the eLLC there in the first place :)
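
    A quick back-of-envelope on why that scaling is so clean, assuming the NUC's dual-channel, 64-bit-per-channel memory configuration:

    ```latex
    % Peak DRAM bandwidth = channels x bytes per transfer x transfer rate
    \[ BW_{1600} = 2 \times 8\,\mathrm{B} \times 1600\,\mathrm{MT/s} = 25.6\,\mathrm{GB/s} \]
    \[ BW_{1866} = 2 \times 8\,\mathrm{B} \times 1866.67\,\mathrm{MT/s} \approx 29.9\,\mathrm{GB/s} \]
    % A fully bandwidth-bound GPU therefore gains about 29.9/25.6 - 1 = 17%.
    ```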
     