Trinity vs Ivy Bridge

Discussion in 'Architecture and Products' started by rpg.314, Jun 29, 2011.

  1. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Looked too much like FAIL? :p
     
  2. Andrew Lauritzen

    Andrew Lauritzen Moderator
    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,629
    Likes Received:
    1,227
    Location:
    British Columbia, Canada
    Right, but even if everything else is ignored, as long as they share a power budget, the two are irreducibly coupled. The reality is that the allocation of power (and area) to the CPU or GPU portion is fundamentally going to affect any comparison between the architectures, and since to my knowledge no one has attempted to measure that, or even understand the power policy of the two chips, it's impossible to make general comments about the architectural efficiency of *portions* of the chips relative to one another. To play devil's advocate, it could be that Trinity is giving 95% of its 17W/45W/whatever budget to the GPU portion, or the same for Ivy. Without that information, it's impossible to compare the architectures.

    Not sure about Fusion, but Ivy definitely shares a budget and allocates frequency differently (turbo) to the different portions of the chip depending on what is going on. And there's a power policy there that certainly affects things. On desktop class power budgets it's a minor detail, but when you're comparing power-constrained parts like the 17W SKUs it is critically relevant.
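
    The point above can be made concrete with a toy calculation. Everything here is hypothetical (the score, the power splits, and the helper names are mine, not measurements): it just shows that a per-watt "efficiency" verdict for the GPU portion swings wildly with the assumed power split.

    ```python
    # Toy illustration (not measured data): one hypothetical 17 W APU whose
    # GPU portion posts a fixed benchmark score. Without knowing the chip's
    # internal power split, the GPU's perf-per-watt is unknowable.

    TDP_W = 17.0  # shared package budget

    def gpu_perf_per_watt(gpu_score, gpu_power_share):
        """Efficiency of the GPU portion under an ASSUMED share of the budget."""
        return gpu_score / (TDP_W * gpu_power_share)

    score = 850.0  # arbitrary GPU benchmark score

    eff_if_half = gpu_perf_per_watt(score, 0.50)  # GPU assumed to get 8.5 W
    eff_if_most = gpu_perf_per_watt(score, 0.95)  # GPU assumed to get ~16.2 W

    # The "efficiency" conclusion changes by 1.9x purely from the assumed split:
    print(eff_if_half / eff_if_most)  # 0.95 / 0.50 = 1.9
    ```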
     
  3. EduardoS

    Newcomer

    Joined:
    Nov 8, 2008
    Messages:
    131
    Likes Received:
    0
    No, someone else already had the Fusion trademark registered.
     
  4. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
  5. Ernestds

    Newcomer

    Joined:
    Dec 10, 2011
    Messages:
    19
    Likes Received:
    0
  6. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    It should help both. AMD's mem hierarchy really blows.
     
  7. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Are AMD's CPU cores still sitting behind a too narrow crossbar?
    The GPU was explicitly singled out as having a quite generous bus connection to the memory controllers, while AMD has in the past throttled its CPUs with its slower northbridge and constricted data paths. Later Phenoms with higher-speed memory support wound up getting no scaling from it, because once on-die the data was being sucked through a straw.

    Historically, AMD's memory controller efficiency has lagged Intel, but the apparent gulf this time around seems almost like there's something that constrains CPU side metrics more than it does the GPU.
     
  8. itsmydamnation

    Veteran

    Joined:
    Apr 29, 2007
    Messages:
    1,349
    Likes Received:
    470
    Location:
    Australia
    Clocking the uncore up along with the memory on Phenom IIs saw additional performance improvements beyond the combined gains of raising each one alone. That should be a pretty simple test on Bulldozer/Piledriver.


    Given that there was such a strong focus on throughput, could it be that latency just took a back seat?


    Also, the way RPG always posts the same one-liner over and over and nothing more: yay for pointing out the obvious. He should become a politician.
     
    #628 itsmydamnation, May 27, 2012
    Last edited by a moderator: May 27, 2012
  9. yuri

    Regular

    Joined:
    Jun 2, 2010
    Messages:
    283
    Likes Received:
    296
    I remember various users stating that, unlike 'K10', BD's NB+L3/uncore overclocking has an almost negligible effect on real-world performance.

    3200MHz vs 2200MHz with 4.5GHz BD 8 thread CPU
    http://www.madshrimps.be
     
  10. Paran

    Regular

    Joined:
    Sep 15, 2011
    Messages:
    251
    Likes Received:
    14

    It was an i7-3517U. Here is another 17W test with an i5-3427U: http://www.anandtech.com/show/5872/intel-dual-core-ivy-bridge-launch-and-ultrabook-review/1
     
  11. denev2004

    Newcomer

    Joined:
    Apr 28, 2010
    Messages:
    143
    Likes Received:
    0
    Location:
    China
    I have seen reports like this, but it doesn't make sense to me... Still, according to AIDA64's cache test, BD performs badly.

    Or have they got another bottleneck?
     
  12. AnarchX

    Veteran

    Joined:
    Apr 19, 2007
    Messages:
    1,559
    Likes Received:
    34
  13. EduardoS

    Newcomer

    Joined:
    Nov 8, 2008
    Messages:
    131
    Likes Received:
    0
    Or a testing artifact?

    CPU workloads usually don't need that much bandwidth, so the memory controller is organized (and optimized) to minimize latency, not to maximize bandwidth. The testing software will check CPUID and branch to a path where the code is written and memory is organized in a way supposed to be optimal on the given CPU, which is not always the case...

    On Trinity, bandwidth is important for the graphics portion, so why not write a shader program and measure the bandwidth available to the IGP?
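
    For reference, the number such a shader test would be chasing is easy to bound on paper. A minimal sketch (the function names are mine, and the figures are the standard back-of-the-envelope arithmetic, not measurements): peak DRAM bandwidth is transfers per second times bus width times channel count, while any real test just reports bytes moved over wall time.

    ```python
    # Back-of-the-envelope bounds for the dual-channel DDR3 setups discussed
    # here, plus the effective-bandwidth formula a shader test would report.

    def ddr3_peak_gbs(mt_per_s, channels=2, bus_bytes=8):
        """Theoretical peak in GB/s: MT/s x 64-bit bus width x channel count."""
        return mt_per_s * 1e6 * bus_bytes * channels / 1e9

    def effective_gbs(bytes_moved, seconds):
        """What a bandwidth test actually measures: bytes moved / wall time."""
        return bytes_moved / seconds / 1e9

    print(ddr3_peak_gbs(1600))  # 25.6 GB/s peak for dual-channel DDR3-1600
    print(ddr3_peak_gbs(1333))  # ~21.3 GB/s peak for dual-channel DDR3-1333
    ```

    A fill-rate test on the IGP would typically report some fraction of the 25.6 GB/s figure; how large that fraction is would be the interesting result.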
     
  14. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,235
    Likes Received:
    4,259
    Location:
    Guess...
    Then add another 100 GFLOPS for the CPU.
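
    That ~100 GFLOPS ballpark falls out of the usual peak-FLOPS arithmetic. A rough sketch under stated assumptions (a Piledriver-class Trinity with 2 modules, two 128-bit FMAC units per module, FMA counted as 2 ops, and an assumed ~3.8 GHz clock; the exact figure depends on SKU and turbo behavior):

    ```python
    # Rough single-precision peak-FLOPS arithmetic (assumptions, not a spec
    # sheet): Piledriver-class Trinity, 2 modules, each with two 128-bit FMAC
    # units (4 SP lanes each), FMA counted as multiply + add, ~3.8 GHz.

    modules       = 2
    fmacs_per_mod = 2
    sp_lanes      = 4    # 128-bit FMAC / 32-bit floats
    ops_per_fma   = 2    # fused multiply-add counts as 2 ops
    clock_ghz     = 3.8  # assumed top-SKU clock

    peak_gflops = modules * fmacs_per_mod * sp_lanes * ops_per_fma * clock_ghz
    print(peak_gflops)  # 121.6 -> the "~100 GFLOPS" ballpark
    ```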
     
  15. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
    You don't need to write one; it already exists and is called 3DMark (take any version) color fill...
    None of the reviews I've seen published a score for it, though. In fact, I can't remember having seen scores for that for Llano either; you'd expect it to be in the neighborhood of a 128-bit DDR3-based HD 5570 when equipped with DDR3-1600.
     
  16. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
  17. fellix

    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,552
    Likes Received:
    514
    Location:
    Varna, Bulgaria
    eDRAM with tight integration can't come soon enough. With IGP solutions marching boldly to new performance heights faster and faster, even DDR4 won't relieve the mounting bandwidth disparity.
     
  18. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    Wow, according to that data, AvP and Borderlands 2 are ~60% bandwidth limited with DDR3-1600.

    Seems like they did a good job with the graphics. I made a post somewhere in this thread comparing SB to IVB to see how much Intel is benefiting from 22nm vs 32nm, and how big HD 4000 would be at 32nm. AMD seems to have an architectural advantage, but of course it's overwhelmed by Intel's process advantage.
     
  19. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    That's pretty much it. AMD has a huge amount to gain from eDRAM and a better (stronger single-thread) CPU core, which we hope will arrive with Steamroller. Intel is pretty much at the limit of its shaders and will be forced to spend even more die area on the IGP. Haswell will get them close for a few months before Kaveri, but as you mentioned, a 32nm Haswell would actually be larger than this Trinity.

    AMD is bandwidth limited and Intel is shader limited, and how they each fix these issues will decide the "winner" in the end.
     