Haswell vs Kaveri

Discussion in 'Architecture and Products' started by AnarchX, Feb 8, 2012.

  1. msxyz

    Newcomer

    Joined:
    May 5, 2006
    Messages:
    122
    Likes Received:
    54
    Yes it does :smile:

    Thanks for the clarification. From what I've seen, battery life seems to be okay under Windows 7, although not as high as under MacOS (but this is just my impression from using it for a few days). I agree that Bootcamp support is terrible. The drivers are outdated and, after I used the Bootcamp Assistant to create a USB installer, I had to visit Intel's site to get the latest chipset and CPU drivers. If you don't do this before trying to install Windows, the keyboard, touchpad and other peripherals are not recognized! Way to go Apple: you've been selling these machines since November and you still haven't updated the Bootcamp driver package to support them!

    The CPU runs very cool when it's not doing some intensive computational tasks. Btw, I use Throttlestop to check the temperature, TDP and voltage. I find this utility very useful, especially if you intend to do a lot of gaming on your laptop.
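    For the curious, the temperature readout that utilities like ThrottleStop display comes straight from a pair of model-specific registers documented in Intel's Software Developer's Manual. Here's a minimal sketch of that mechanism, assuming Linux's msr driver rather than ThrottleStop's Windows kernel driver (needs root and `modprobe msr`); the register numbers are real, everything else is illustrative:

```python
import struct

IA32_THERM_STATUS = 0x19C        # bits 22:16 = degrees below TjMax
MSR_TEMPERATURE_TARGET = 0x1A2   # bits 23:16 = TjMax in degrees C

def read_msr(reg, cpu=0):
    """Read a 64-bit MSR via the Linux msr driver (root required)."""
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

def decode_temp(tjmax_msr, therm_msr):
    """Pure decode step: TjMax minus the digital thermal readout."""
    tjmax = (tjmax_msr >> 16) & 0xFF
    below = (therm_msr >> 16) & 0x7F
    return tjmax - below

def package_temp_c(cpu=0):
    return decode_temp(read_msr(MSR_TEMPERATURE_TARGET, cpu),
                       read_msr(IA32_THERM_STATUS, cpu))
```

    The sensor reports degrees below TjMax rather than an absolute temperature, hence the subtraction.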

    I'll do some proper benchmarking in the near future but, so far, I'm really impressed by the advances Intel has made on the GPU front. I think AMD has a lot of reasons to be worried. Not only do their CPUs seem less competitive in single-threaded and FPU performance, but now the advantage they had in integrated graphics is being eroded pretty fast.

    I don't know if the situation would be different if Kaveri had employed GDDR5 instead of DDR3. The impression I have, both with my old Llano APU and this Haswell, is that they're already bandwidth limited: most of the time, disabling AA is a better remedy for low frame rates than turning down details, viewing distance or the overall graphics quality.
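    A quick back-of-the-envelope sketch supports that impression. The numbers below are illustrative assumptions (32-bit colour and depth, 2x overdraw, no framebuffer compression or caching), not measurements, but they show how fast MSAA eats into the roughly 25 GB/s that dual-channel DDR3-1600 offers in theory:

```python
def fb_bandwidth_gb_s(width, height, fps, msaa=1, bytes_color=4,
                      bytes_depth=4, overdraw=2.0):
    """Rough framebuffer-only traffic estimate, ignoring texture fetches."""
    samples = width * height * msaa
    # per sample: depth read + depth write + colour write, per overdrawn layer
    bytes_per_frame = samples * (bytes_color + 2 * bytes_depth) * overdraw
    return bytes_per_frame * fps / 1e9

no_aa = fb_bandwidth_gb_s(1920, 1080, 60)          # ~3.0 GB/s
msaa4 = fb_bandwidth_gb_s(1920, 1080, 60, msaa=4)  # ~11.9 GB/s
```

    About 3 GB/s of pure framebuffer traffic without AA becomes about 12 GB/s at 4x MSAA, before a single texel is fetched.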
     
  2. Andrew Lauritzen

    Andrew Lauritzen Moderator
    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,632
    Likes Received:
    1,251
    Location:
    British Columbia, Canada
    They seem to completely lack any motivation to do anything here, which is sad. If they were to enable EFI boot you could at least enumerate and use both GPUs in Windows, even if there wasn't any special driver magic for switching display output between the two on the fly.

    The Intel Extreme Tuning Utility is pretty reasonable too I find. You can not only monitor a lot of useful power and frequency metrics (for both CPU and GPU) but also screw around with TDPs and frequencies if you're feeling adventurous :)

    Haswell GT3e (Iris Pro) is usually not bandwidth limited, but MSAA on Haswell kind of sucks. Absolutely never use 2x on Haswell (there is no native support for it, so it's no faster than 4x), and even 4x takes a really big performance hit. As you note, MSAA is often not very usable on these chips anyway, but in Haswell's case it's an architectural issue, not a bandwidth one.

    Agreed that it would be interesting to see how much difference faster memory would make to Kaveri. Hopefully someone will at least do some tests with varying DIMM frequencies (say 1600 - 2400 or something).
     
  3. Ryan Smith

    Regular

    Joined:
    Mar 26, 2010
    Messages:
    629
    Likes Received:
    1,131
    Location:
    PCIe x16_1
    It's a mix of databases and hardware queries. If you know what registers to poke, it's pretty easy to get a GPU to give up its shader count. But it means you have to already know something about the GPU.
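    As a toy illustration of the "database" half of that, you can resolve a PCI vendor/device ID pair to a known shader count before ever touching the hardware. The two entries below are the commonly reported figures for the parts in this thread, but treat the table as a hypothetical subset, not anything authoritative; real tools carry far larger databases and fall back to register queries when the ID is unknown:

```python
# (vendor_id, device_id) -> (marketing name, shader/EU count)
GPU_DB = {
    (0x8086, 0x0D26): ("Intel Iris Pro 5200 (Haswell GT3e)", 40),   # 40 EUs
    (0x1002, 0x130F): ("AMD Radeon R7 (Kaveri A10-7850K)", 512),    # 512 SPs
}

def lookup_shader_count(vendor_id, device_id):
    """Database half of the query: fast, but only for known devices."""
    entry = GPU_DB.get((vendor_id, device_id))
    if entry is None:
        raise KeyError("unknown device: fall back to poking hardware registers")
    return entry
```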
     
  4. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    746
    Likes Received:
    41
    Location:
    Copenhagen
  5. kalelovil

    Regular

    Joined:
    Sep 8, 2011
    Messages:
    568
    Likes Received:
    104
    And:
    http://www.corsair.com/us/blog/cat/tech/post/kaveri-ddr-part1/


    Although the gains when moving to DDR3-2400 memory are very limited, this does not appear to mean the GPU bandwidth bottleneck has gone away.
    Rather, according to hardware.fr and Corsair's AIDA64 tests, Kaveri's memory controller is not capable of realising anywhere near the theoretical bandwidth gains when moving above DDR3-1866.
    Haswell's memory controller is significantly more capable by comparison: http://www.hardware.fr/articles/909-2/latence-bande-passante-memoire.html

    Or AIDA64 is not a good tool for measuring the bandwidth available to the integrated GPU in an AMD HSA/Garlic/Onion APU setup, in which case the above information is useless.
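    For anyone wanting a sanity check without AIDA64, a crude CPU-side copy test is easy to sketch. This deliberately simple version (no SIMD streaming stores, no thread pinning, so it will undershoot a tuned benchmark) still shows whether achieved bandwidth scales as you change the DIMM frequency in the BIOS:

```python
import time

def copy_bandwidth_gb_s(size_mb=256, reps=5):
    """Best-of-N copy bandwidth on a buffer much larger than the caches."""
    src = bytearray(size_mb * 1024 * 1024)
    best = 0.0
    for _ in range(reps):
        t0 = time.perf_counter()
        dst = bytes(src)            # one read + one write of the buffer
        dt = time.perf_counter() - t0
        best = max(best, 2 * len(src) / dt / 1e9)
    return best
```

    Run it at each memory multiplier and see whether the number keeps moving; per the results above, on Kaveri it apparently stops scaling past DDR3-1866.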
     
    #1246 kalelovil, Jan 29, 2014
    Last edited by a moderator: Jan 29, 2014
  6. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
  7. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
    I don't quite understand the difference between two single-rank modules and one dual-rank module, though. Shouldn't those appear more or less the same to the memory controller? But apparently that's not true.
    And it's probably worth noting that while dual-rank looks to be quite a big win, it is harder to reach the higher frequencies with it: officially, Kaveri only supports one single-rank DIMM per channel at DDR3-2133, or one dual-rank DIMM at DDR3-1866. With two DIMMs per channel, both configurations drop one speed grade: DDR3-1866 for two single-rank DIMMs, DDR3-1600 for two dual-rank DIMMs. Those are the 1.5V figures; the low-voltage options are rated lower. (This information is per the BKDG.)
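    Those BKDG limits, restated as a lookup table (just transcribing the figures above; 1.5V ratings only):

```python
# Kaveri's official DDR3 speed grades per the BKDG, at 1.5V.
# Key: (DIMMs per channel, ranks per DIMM) -> max supported MT/s
KAVERI_MAX_DDR3 = {
    (1, 1): 2133,  # one single-rank DIMM per channel
    (1, 2): 1866,  # one dual-rank DIMM per channel
    (2, 1): 1866,  # two single-rank DIMMs per channel
    (2, 2): 1600,  # two dual-rank DIMMs per channel
}
```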
     
  8. pMax

    Regular

    Joined:
    May 14, 2013
    Messages:
    327
    Likes Received:
    22
    Location:
    out of the games
    Dual rank means the controller can issue commands to one rank while the other rank's data transfer is still in flight, hiding command latency and thus increasing the usable bandwidth.
    See also http://en.wikipedia.org/wiki/Memory_rank
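    A deliberately crude timeline model of that effect (made-up cycle counts, and real controllers also hide latency by interleaving banks within a single rank, so the single-rank case below is a worst case, not DDR3-accurate timing):

```python
def utilization(ranks, t_cmd=10, t_burst=4, bursts=100):
    """Fraction of cycles the data bus carries data, in the toy model."""
    if ranks >= 2:
        # alternating ranks: after the first command, each burst's command
        # issues while the other rank's data is still on the bus
        busy = t_cmd + bursts * t_burst
    else:
        # worst case: command latency fully exposed before every burst
        busy = bursts * (t_cmd + t_burst)
    return bursts * t_burst / busy
```

    With the default numbers the data bus goes from under 30% busy to over 97% busy; that overstates the real-world gap, but it shows the mechanism.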
     
  9. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
    Even the article there mentions there is a near-zero difference between one dual-rank DIMM and two single-rank ones (other than the obvious: two PCBs). Maybe the difference mentioned there is indeed responsible for this, though I'd think a memory controller optimized for it wouldn't suffer from it.
     
  10. Andrew Lauritzen

    Andrew Lauritzen Moderator
    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,632
    Likes Received:
    1,251
    Location:
    British Columbia, Canada
    Interesting links, thanks!

    That gets me thinking... has anyone done a similar test on Haswell GPUs (both w/ and w/o EDRAM ideally)? In the past I seem to recall it has had less of an effect due to both narrower/slower GPUs and large LLCs (that the GPU can use), but I'm curious if that has changed at all recently. My guess is that main memory bandwidth is much less important but it would be interesting to see, especially in a game that might hammer it pretty hard on the CPU to start with.

    Thanks!
     
  11. Paran

    Regular

    Joined:
    Sep 15, 2011
    Messages:
    251
    Likes Received:
    14
  12. revan

    Newcomer

    Joined:
    Nov 9, 2007
    Messages:
    55
    Likes Received:
    18
    Location:
    look in the sunrise ..will find me
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Is it usually the case that employees run their LinkedIn disclosures past corporate first, or is this someone AMD already let go?

    Is it common for product plans to be revealed on LinkedIn at other manufacturers?
    I know at one point a few things about shrinks of the previous-gen consoles came out of IBM by the same means.

    Is AMD just more likely to be sniped by forum goers, or is it something else?
     
  14. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,382
    I think it's just stupidity. I've seen employees of very secretive startups post more than enough information for anyone to know very well what they were doing… It's the natural progression of disclosing in detail what you're doing during a job interview with a competitor. :wink:
     
  15. Wynix

    Veteran

    Joined:
    Feb 23, 2013
    Messages:
    1,052
    Likes Received:
    57
  16. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471
    20nm wouldn't solve the current issues, neither the bandwidth limitation nor the CPU clocks. I think the more interesting part is the confirmation that Carrizo is an SoC.
     
  17. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    The very same document calls Kaveri an SoC too, so I wouldn't read too much into that.
     
  18. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,455
    Likes Received:
    471