I Can Hazwell?

So while I can't say for sure that Intel can't do it, I do know they can't do it as well as someone else can; whether or not that is worth the cost overhead, I also couldn't say.
I just find it extremely unlikely that Intel, of all companies, would go to an outside source to manufacture an integral, key part of its upcoming flagship product. I just don't see it happening, for any number of reasons. One being that Intel is the semiconductor manufacturing master of this planet; reason two would be intellectual property issues (would they want to hand over complete specs and blueprints of their best and most recent tech to another, outside entity?)... And so on.

It is eDRAM if it embeds the DRAM controller, or anything else for that matter.
All DDR RAM devices contain a certain amount of logic alongside the DRAM arrays, and that might well be true for older RAM tech as well. GDDR5 is particularly heavy on the logic overhead due to the high-speed transceivers needed and things like ECC checking and resend capability, and while I'm not a semiconductor tech-head, I'm quite sure these chips are NOT all built like eDRAM devices... :)

eDRAM is used when mixing traditional logic together with DRAM; a DRAM device would use logic tailored for a DRAM process.
 
Thanks for the information tunafish.
np.

I was actually a bit unsure about that DTI thing, and showed it to an actual practicing EE, and he said that I was a little wrong about it. The plates in the DTI cap are not separate in the hole, but instead the larger one envelops the smaller one. As in, first the hole is dug, it's coated with a conductor, then that is coated with a dielectric, and then the hole is filled with the conductor. The first plate is the outer conductor, and the second plate is the plug that fills the hole. Which I should have gotten right, because it makes a lot more sense that way. The point about it being hard to manufacture is still valid -- if the plates touch anywhere the cap is bust. That's gotta be fun to manufacture for a micrometer-long shaft through a 50-nanometer hole.

Do you have any guess as to whether the separate die is eDRAM or not?
There are precious few manufacturers that ship eDRAM in any large quantity, and most of them are bitter rivals of Intel. Also, given that the module is tied right next to a fast logic chip, there is little need to ship any fast logic in it. I expect it to be a more traditional chip.

It sounds like both traditional DRAM and eDRAM need specialized capability and aren't necessarily something that a traditional logic fab can automatically provide.

Yeah, DRAM production has really gone off the deep end in recent years. Back when I was in school, DRAM was still made using standard tools and it really was just a planar macro that anyone could use. Then it got hit by quantum effects pretty hard -- DRAM can't be made that way anymore because at modern feature sizes the capacitor would hold very close to (potentially less than) just one electron, which is arguably the worst kind of quantum effect you can have...

So all the DRAM manufacturers went for all kinds of exotic tech and materials. For the future, as I understand it, they can keep making the buried trench-style DRAM caps proportionally longer, so we should see it scale well at least as long as everything else does.

I just find it extremely unlikely that Intel, of all companies, would go to an outside source to manufacture an integral, key part of its upcoming flagship product. I just don't see it happening, for any number of reasons. One being that Intel is the semiconductor manufacturing master of this planet; reason two would be intellectual property issues (would they want to hand over complete specs and blueprints of their best and most recent tech to another, outside entity?)... And so on.

Like they did with their chipsets?

Actually, I can totally see Intel handing this over to someone else. DRAM is commodity stuff. They don't need to do any special design for it, they can just order a bunch of chips with a certain kind of interface from any of the manufacturers. And while Intel is the very best at making logic, they are not a DRAM manufacturer anymore, and haven't been for a while. Given how low the margins are in the market, I can't see them wanting to try.

Also, I think it's particularly likely because they had an "escape hatch". If something went wrong, shipping a lineup without GT3e would not have been catastrophic.
 
Anand talks about the integrated VRM for Haswell; that's the first time I've personally seen "official" confirmation that it's actually implemented in the final processors. Of course, you could infer that it pretty much has to be there from the new power-saving mode Intel talked up last year, but still... It's real.

Makes me wonder how well it will work with overclocking: what are the limits on the wattage/current it's able to supply? While multipliers apparently go up to 80, will we ever actually get near that, even with LN2 cooling, before the VRM blows up...?

Questions, questions...
 
http://diybbs.zol.com.cn/11/11_106368.html

[Attached benchmark screenshots: 20130415_115723_395.png, 20130415_115723_439.png, 20130415_115723_497.png, 20130415_115723_533.png]
 
If it's legit, ~5% more performance using 10% less energy seems "ok". The gains in encoding performance were expected IMO, but I'm wondering what that DX11 title is...
 
If it's legit, ~5% more performance using 10% less energy seems "ok". The gains in encoding performance were expected IMO, but I'm wondering what that DX11 title is...

I assume the first number is power consumption while idle and the second under load. That makes power consumption about 41W vs 27W, or over 50% higher.
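(Quick arithmetic check on that reading: (41 - 27) / 27 ≈ 0.52, so roughly 52% more.)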

But this is just an engineering sample, I don't put much stock in that representing production level power consumption.
 
I assume the first number is power consumption while idle and the second under load. That makes power consumption about 41W vs 27W, or over 50% higher.

But this is just an engineering sample, I don't put much stock in that representing production level power consumption.

You know, you're absolutely right. I didn't read it that way on my first pass, but your interpretation makes more sense than mine.
 
If those numbers are legit, then the long-standing notion that Haswell would usher in a new era in laptop gaming/graphics is dead. Hell, at one point Haswell was going to be 2x the Intel HD 4000 silicon...
 
If those numbers are legit, then the long-standing notion that Haswell would usher in a new era in laptop gaming/graphics is dead. Hell, at one point Haswell was going to be 2x the Intel HD 4000 silicon...

I'm reasonably sure we're not comparing integrated graphics here... It looks like a discrete card was being used in those benchmarks. Look at the CSAA and MSAA values they were using -- and not getting a slide-show.
 
I'm reasonably sure we're not comparing integrated graphics here... It looks like a discrete card was being used in those benchmarks. Look at the CSAA and MSAA values they were using -- and not getting a slide-show.

Ah shit, you're totally right...it even says GTX 680! Doh!
 
Ah shit, you're totally right...it even says GTX 680! Doh!

To be fair, I don't exactly understand why we're comparing framerates from a discrete card when using options like 8xMSAA or 32xCSAA or the like at 1080p resolutions. That really isn't going to show us much for CPU changes...

Like you, I would rather have seen the IGP performance, even if it was a slideshow.
 
Like you, I would rather have seen the IGP performance, even if it was a slideshow.

Definitely IGP. Even ensuring the benchmarks aren't GPU-bound by running at 640x480 with no AA or AF isn't all that interesting. Haswell's biggest promise (at least from my vantage point) is a game-worthy IGP for laptops and ultrabooks.
 
In the link, there's an integrated GPU test. Apparently it's ~25% faster in 3DMark Vantage, and 30%~40% faster in 3DMark 11. It's also 11%~26% faster in the four games they tested. Since it's just GT2, GT3 and GT3e should be even faster.
 
Well, for a GT2-level part, those scores seem promising. Would still rather see the GT3e of course :D
 
Memory copy being nearly a third faster is pretty interesting. Wonder what happened there.
 
Possibly also due to being able to execute two loads and a store per cycle instead of just two loads or a load plus a store.

I doubt it - for memcopy, you need one store for every load. No use having two loads per store, since you have to read each word before you can store it.

Besides, memcopy is memory-bandwidth bound, not to mention uncacheable, since the data is only read once. This would mean that the memory clocks and bus width would be the important factor. There's a rumor about Haswell supporting DDR4 -- could this be it?
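Just to put the ratio in concrete terms, here's a minimal sketch in C (not whatever copy routine the benchmark actually uses):

#include <stddef.h>
#include <stdint.h>

/* Plain word-by-word copy: every iteration does exactly one load and one
   store, so extra load throughput alone can't speed it up, and on buffers
   much larger than the caches the data streams through DRAM once -- which
   is why a loop like this is normally limited by memory bandwidth. */
void copy_words(uint64_t *dst, const uint64_t *src, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        dst[i] = src[i];    /* one load, one store */
}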
 
I doubt it - for memcopy, you need one store for every load. No use having two loads per store, since you have to read each word before you can store it.

Besides, memcopy is memory-bandwidth bound, not to mention uncacheable, since the data is only read once. This would mean that the memory clocks and bus width would be the important factor. There's a rumor about Haswell supporting DDR4 -- could this be it?

There's a rumor Haswell-E will support DDR4, but that's way off in the future. If the standard line supported it, I think we'd know by now, and if there were a big memory bandwidth improvement, I think we'd see it manifest more blatantly than this...

Fundamentally I agree with you: these low-level execution port and cache changes shouldn't make a difference for a test that should be purely bandwidth bound, but there can be some variation based on how the test is implemented. If for whatever reason 100% of the latency can't be hidden all the time, these things can sometimes make some difference... I've seen the numbers pretty much scale with CPU clock speed instead of memory speed (these are a lot lower than ones I've seen with normally clocked IBs), so I think something is up beyond just saturating the DDR3...

If the test uses the x86 string instructions then it could be down to improvements in the implementation there.
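For reference, a copy routine built on the x86 string instructions would look something like this hypothetical sketch (whether the benchmark actually does this is pure guesswork on my part). Ivy Bridge and Haswell advertise enhanced REP MOVSB microcode, so a memcpy written this way can speed up from one generation to the next without the benchmark code changing at all:

#include <stddef.h>

/* x86-64 + GCC/Clang only: copy n bytes with REP MOVSB.
   RDI = destination, RSI = source, RCX = count; the instruction updates
   all three registers, hence the "+" read-write constraints. */
void copy_rep_movsb(void *dst, const void *src, size_t n)
{
    __asm__ volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(n)
                     :
                     : "memory");
}

If the test's copy goes through something like this (or through a library memcpy that does), a gain like that would show up as an implementation improvement rather than a bandwidth one.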
 