HSAIL. The F got changed to H, from Fusion to Heterogeneous
Looked too much like FAIL?
Right, but even if everything else is ignored, as long as they share a power budget the two are irreducibly coupled. The reality is that the allocation of power (and area) to the CPU or GPU portions is fundamentally going to affect any comparison between the architectures, and since, as far as I know, no one has attempted to measure that or even understand the power policy of the two chips, it's impossible to make general comments about the architectural efficiency of *portions* of the chips relative to one another. To play devil's advocate, it could be that Trinity is giving 95% of its 17W/45W/whatever budget to the GPU portion, or the same for Ivy. Without that information, it's impossible to compare the architectures.

That's putting things further along the integration curve than they are. It's not a simple cut-and-paste job, but nothing disclosed about the Trinity core is fundamental to running on-chip with a GPU. The two sides are hooked onto an interconnect, power management unit, and uncore that insulates each side from the particulars of the other.
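To make the power-budget argument above concrete, here is a minimal sketch with made-up numbers (the 17W figure, the benchmark score, and the splits are all hypothetical) showing how the implied efficiency of the CPU portion swings with the assumed split:

/* Hypothetical illustration: the implied perf/W of the CPU portion depends
 * entirely on what share of the package budget you assume it gets.
 * All numbers below are made up. */
#include <stdio.h>

int main(void)
{
    const double package_tdp_w = 17.0;   /* assumed 17W ULV-class budget */
    const double cpu_score     = 100.0;  /* hypothetical CPU benchmark score */
    const double splits[]      = { 0.25, 0.50, 0.75, 0.95 };

    for (int i = 0; i < 4; i++) {
        double cpu_watts = package_tdp_w * splits[i];
        printf("CPU gets %2.0f%% of the budget (%5.2f W) -> %5.1f points/W\n",
               splits[i] * 100.0, cpu_watts, cpu_score / cpu_watts);
    }
    return 0;
}

The same score looks roughly four times more efficient at a 25% split than at 95%, which is the whole point: without knowing the split, the per-portion comparison is unconstrained.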
I remember various users stating that, unlike 'K10', BD's NB+L3/uncore overclocking has an almost negligible effect on real-world performance.

Overclocking the uncore along with the memory saw additional performance improvements beyond the combined gains of doing just one at a time on Phenom IIs. That should be a pretty simple test on Bulldozer/Piledriver.
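A minimal sketch of such a test, assuming the NB/uncore and memory clocks are fixed in the BIOS before each run; the array size and single triad pass are arbitrary choices, just large enough to spill the caches:

#define _POSIX_C_SOURCE 199309L
/* STREAM-triad-style memory test: run once per NB/uncore or memory clock
 * setting and compare the reported bandwidth. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (32u * 1024 * 1024)   /* 32M doubles = 256 MB per array */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];            /* 2 reads + 1 write per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = 3.0 * N * sizeof(double);
    printf("triad: %.2f GB/s\n", bytes / secs / 1e9);

    free(a); free(b); free(c);
    return 0;
}

If the reported bandwidth barely moves between uncore settings, that would match the "negligible effect" reports; if it scales, the bottleneck is elsewhere in the real-world workloads.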
The first performance numbers for a dual-core ULV Ivy Bridge are out (along with the review of the Asus Zenbook Prime):
http://www.anandtech.com/show/5843/asus-zenbook-prime-ux21a-review/6
They won't say which model this is, other than that it's a dual-core Ivy Bridge with a 17W TDP. Intel is still keeping the dual-core Ivy Bridge parts under NDA.
I have seen reports like this, but it doesn't make sense to me...

Still, according to AIDA64's cache test, BD performs badly.

I remember various users stating that, unlike 'K10', BD's NB+L3/uncore overclocking has an almost negligible effect on real-world performance.
3200MHz vs 2200MHz with 4.5GHz BD 8 thread CPU
http://www.madshrimps.be
Or a testing artifact? Or have they got another bottleneck?
You don't need to write one; it already exists and is called 3DMark (take any version) color fill...

On Trinity, bandwidth is important for the graphics portion, so why not write a shader program and measure the bandwidth available to the IGP?
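For what it's worth, here is a rough sketch of that kind of measurement using OpenCL buffer copies instead of a graphics shader (my substitution, not what either post above used); on an APU the "device" memory is system DRAM, so copy throughput gives a crude view of the bandwidth the IGP sees. Error handling is omitted, and the buffer size and iteration count are arbitrary.

#define _POSIX_C_SOURCE 199309L
/* Crude IGP bandwidth probe via OpenCL device-to-device buffer copies. */
#include <stdio.h>
#include <time.h>
#include <CL/cl.h>

int main(void)
{
    const size_t bytes = 128u * 1024 * 1024;   /* 128 MB buffers (arbitrary) */
    const int    iters = 50;

    cl_platform_id plat;
    cl_device_id   dev;
    cl_int         err;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context       ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q   = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_mem src = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);
    cl_mem dst = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);

    clEnqueueCopyBuffer(q, src, dst, 0, 0, bytes, 0, NULL, NULL);  /* warm-up */
    clFinish(q);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        clEnqueueCopyBuffer(q, src, dst, 0, 0, bytes, 0, NULL, NULL);
    clFinish(q);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("copy bandwidth: %.1f GB/s\n",      /* each copy reads + writes */
           2.0 * bytes * iters / secs / 1e9);

    clReleaseMemObject(src);
    clReleaseMemObject(dst);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}

A color-fill test like 3DMark's stresses the ROPs and write path specifically, so the two numbers won't match exactly, but both end up bounded by the same DDR3 interface on these chips.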
eDRAM with tight integration couldn't come fast enough. With IGP solutions marching boldly to new performance heights faster and faster, even DDR4 won't relieve the mounting BW disparity.

Seems to be even more bandwidth limited than Llano was, with 1866 probably being the new sweet spot.
Wow, according to that data, AvP and Borderlands 2 are ~60% bandwidth limited with DDR3-1600.
http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics_8.html
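As a sanity check on that kind of figure, here is one way to back out a "memory-bound fraction" from FPS at two memory speeds using a simple Amdahl-style model; the model and the FPS inputs are my own assumptions for illustration, not xbitlabs' methodology or data.

/* Infer the memory-bound fraction f from FPS at two memory speeds, assuming
 * normalized frame time = (1 - f) + f/s when bandwidth scales by s.
 * Model and example inputs are assumptions, not xbitlabs data. */
#include <stdio.h>

int main(void)
{
    double fps_base = 30.0;            /* hypothetical, e.g. DDR3-1600 */
    double fps_fast = 32.8;            /* hypothetical, e.g. DDR3-1866 */
    double s = 1866.0 / 1600.0;        /* bandwidth scaling factor */

    double r = fps_base / fps_fast;            /* new frame time / old */
    double f = (1.0 - r) / (1.0 - 1.0 / s);    /* solve (1-f) + f/s = r for f */
    printf("memory-bound fraction ~= %.0f%%\n", f * 100.0);
    return 0;
}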
Seems like they did a good job with the graphics. I made a post somewhere in this thread comparing SB to IVB to see how much Intel is benefiting from 22nm vs. 32nm, and how big HD4000 would be at 32nm. AMD seems to have an architectural advantage, but of course it's overwhelmed by Intel's process advantage.
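For the "how big would HD4000 be at 32nm" part, the arithmetic is just ideal area scaling; the 22nm area below is a placeholder rather than a measured die figure, and real layouts rarely scale perfectly:

/* Naive area estimate for porting a 22nm block back to 32nm, assuming ideal
 * (32/22)^2 scaling. The input area is a placeholder, not a measured value. */
#include <stdio.h>

int main(void)
{
    double gpu_area_22nm_mm2 = 30.0;                 /* placeholder */
    double scale = (32.0 / 22.0) * (32.0 / 22.0);    /* ~2.12x */
    printf("estimated 32nm area: %.1f mm^2 (%.2fx larger)\n",
           gpu_area_22nm_mm2 * scale, scale);
    return 0;
}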