Sandy Bridge

Funny, isn't it, that Intel should come out with a hybrid CPU+GPU before AMD, while it is AMD that has been talking up Fusion all over the place.

Not when Intel can basically throw loads of money at an idea when others can't. This seems to be Intel's strategy as of late. Frankly, though, I don't see the point of this quad-core with an IGP other than killing off NVIDIA's chipset business, unless it can do some GPGPU stuff.
 
Not when Intel can basically throw loads of money at an idea when others can't. This seems to be Intel's strategy as of late.
Care to elaborate?
Frankly, though, I don't see the point of this quad-core with an IGP other than killing off NVIDIA's chipset business, unless it can do some GPGPU stuff.
It's Weitek all over again.
 
You people are a tad optimistic. The 45nm IGP in the Clarkdale MCM has a die size similar to NVIDIA's GT215 on 40nm, aka the GTS 260M with 96 SPs/32 TMUs (>120mm²). Yet I don't think anyone is seriously expecting it to be within striking distance of GT215, or even GT216. Furthermore, given the die size of the 32nm IGP on this CPU, it's pretty likely they're aiming at a similar performance target. Which, in theory, would still be slower than Ion2 arriving one year earlier...
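To get a feel for what the 45nm-to-32nm shrink buys, here's a minimal back-of-the-envelope sketch assuming ideal area scaling; the 114mm² starting figure is my assumption for the 45nm IGP die, not a confirmed number:

```python
# Ideal die-area scaling from 45nm to 32nm: linear dimensions scale with
# the node, so area scales with its square. Inputs are illustrative.

def scaled_area(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Area the same design would occupy after an ideal shrink."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

clarkdale_igp_45nm = 114.0  # mm^2, assumed size of Clarkdale's 45nm IGP die
print(f"Same design at 32nm: ~{scaled_area(clarkdale_igp_45nm, 45, 32):.0f} mm^2")
# ~58 mm^2: an on-die 32nm IGP of similar area buys roughly the same logic
# budget, which is why a similar performance target seems likely.
```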

Either way it's certainly quite obvious that NVIDIA's chipset business is just going to get locked out in the long term. Fighting for a DMI license is frankly ridiculous; all it has done is hurt their business because of the legal disputes. It wouldn't even improve their prospects if they won, given what Intel is doing.

The only opposing dynamic is that, frankly, Intel's business has plenty of problems of its own and I think several very large factors will contribute to rapidly deteriorating their ASPs (and profits) in 2010 & 2011. I don't have the time to go into those now (and won't for some time), so you'll just have to believe me. Back to lurking I go...
 
Odds are quite good that it'll be an evolution of the current architecture, with a few extra EUs thrown in and some internal shuffling done to include what's required for DX11 compliance (if it is actually a DX11 part).

Now you'll have to tell me what's so great about the current Intel IGP architecture :rolleyes:
 
You're right, I should have been clearer on that one; it had better not be an evolution of the current architecture but the biggest possible revolution.
 
I fail to see how any IGP will ever be exciting simply because they can't give it remotely enough bandwidth compared to discrete vid cards.
Depends on which cards you compare it to. Certainly performance discrete parts have more memory bandwidth. But it compares quite favorably to low-end parts (which are stuck with a 64-bit memory interface) and even to things like the HD 4650, which has a 128-bit memory interface but memory clocked at only 400(800) MHz - that is, the same memory bandwidth a G45 IGP would have with dual-channel DDR2-800. Granted, it's shared, but that shouldn't matter all that much. I guess, though, that the lack of any buffer compression (plus the relative lack of execution resources) hurts the current G45 more than any lack of pure memory bandwidth.
So this thing here will probably be coupled with something like dual-channel DDR3-1333 at minimum. Sure, discrete parts will also use faster memory by then, but I'd certainly expect low-end chips to still be stuck with 64-bit interfaces (if they even still exist by then...). Also note the "L3-connected" part - not sure how much that could help compensate for a lack of memory bandwidth, but it's certainly way more cache than any low-to-mid-end graphics chip of that time will have.
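To put rough numbers on the bandwidth comparison, a minimal sketch of the peak-bandwidth arithmetic; the clocks and bus widths are the ones mentioned above, treated as approximations:

```python
# Peak memory bandwidth: bus width (bits) x transfer rate (MT/s) x channels.

def bandwidth_gbs(bus_bits: int, transfers_mts: float, channels: int = 1) -> float:
    return bus_bits / 8 * transfers_mts * channels / 1000

print(f"G45 + dual-channel DDR2-800  : {bandwidth_gbs(64, 800, channels=2):.1f} GB/s")
print(f"HD 4650, 128-bit @ 400(800)  : {bandwidth_gbs(128, 800):.1f} GB/s")  # same peak
print(f"Low-end 64-bit @ 800 MT/s    : {bandwidth_gbs(64, 800):.1f} GB/s")
print(f"IGP + dual-channel DDR3-1333 : {bandwidth_gbs(64, 1333, channels=2):.1f} GB/s")
```

So the G45 setup and the HD 4650 land on the same 12.8 GB/s peak, while dual-channel DDR3-1333 would push the IGP past 21 GB/s.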
 
Also note the "L3-connected" part - not sure how much that could help compensate for a lack of memory bandwidth, but it's certainly way more cache than any low-to-mid-end graphics chip of that time will have.
Considering that apps use tens of MBs of data per frame in the form of texture, render target, depth and vertex data, I can see the L3 cache getting trampled by the GPU constantly, meaning its usefulness to the CPU would be decreased. What about your desktop? Would RAMDAC reads also go through the L3? If so, then just refreshing your display will pollute the L3.
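To make the trampling concern concrete, a rough sketch of the per-frame numbers; the resolution and the 8 MB L3 size are illustrative assumptions, not known specs:

```python
# How much frame data would compete for a shared L3. All figures assumed.

L3_MB = 8                      # assumed shared L3 size
width, height = 1680, 1050     # a common desktop resolution of the era
color = width * height * 4     # 32-bit color buffer, bytes
depth = width * height * 4     # 32-bit depth/stencil, bytes
scanout_per_sec = color * 60   # display refresh at 60 Hz

print(f"Color buffer : {color / 2**20:.1f} MB")
print(f"Color + depth: {(color + depth) / 2**20:.1f} MB")
print(f"Scanout/sec  : {scanout_per_sec / 2**20:.0f} MB/s vs an {L3_MB} MB L3")
# Color + depth alone (~13.5 MB) already exceed an 8 MB cache, before any
# texture or vertex traffic: one frame is enough to evict the CPU's data.
```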
 
Cut off NVIDIA and ATI from the huge IGP market.
Ah yes, the marketing reason. I should have said I didn't see the technological reason.

Although frankly I think even the 945G still does that well enough. It runs Vista Aero and it can play videos; that's all the vast majority really care about. They could stick a simple little IGP in there instead of this monster and have room for more CPU to annihilate AMD at the low end, where this thing is going to go.
 
Considering that apps use tens of MBs of data per frame in the form of texture, render target, depth and vertex data, I can see the L3 cache getting trampled by the GPU constantly, meaning its usefulness to the CPU would be decreased. What about your desktop? Would RAMDAC reads also go through the L3? If so, then just refreshing your display will pollute the L3.
I'd expect this to be configurable in some ways. There's no point for the display controller to go through the L3, and probably not for texture requests either (I certainly still expect the GPU itself to have small, local caches). But it could be useful for, say, a compressed Z buffer.
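Purely as a hypothetical illustration of what "configurable" could mean here, a sketch of a per-traffic-class allocation policy; none of the names or choices below reflect Intel's actual design:

```python
# Hypothetical per-traffic-class L3 policy, following the reasoning above.
from enum import Enum

class L3Policy(Enum):
    BYPASS = "bypass L3, go straight to memory"
    ALLOCATE = "cacheable in the shared L3"

gpu_traffic_policy = {
    "display_scanout": L3Policy.BYPASS,    # streamed once per refresh, no reuse
    "texture_fetch":   L3Policy.BYPASS,    # served by the GPU's own small caches
    "compressed_z":    L3Policy.ALLOCATE,  # compact and reused every frame
}

for traffic, policy in gpu_traffic_policy.items():
    print(f"{traffic:16s} -> {policy.value}")
```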
 
Care to elaborate?
Well, it seems pretty apparent to me that after getting a bloody nose in the K8 era, Intel has sought to keep AMD from ever surprising them again by throwing their vastly more abundant engineering resources at every idea AMD comes up with or any direction AMD wants to go. So it doesn't surprise me at all that they get there first.
 
The only opposing dynamic is that, frankly, Intel's business has plenty of problems of its own and I think several very large factors will contribute to rapidly deteriorating their ASPs (and profits) in 2010 & 2011. I don't have the time to go into those now (and won't for some time), so you'll just have to believe me. Back to lurking I go...

That's intriguing. Is it the bad economy? Commoditization? Quad-core ARM invading the desktop? Or an old rival reviving? At least give some hints before going back to lurking ;)
 
Well, it seems pretty apparent to me that after getting a bloody nose in the K8 era, Intel has sought to keep AMD from ever surprising them again by throwing their vastly more abundant engineering resources at every idea AMD comes up with or any direction AMD wants to go. So it doesn't surprise me at all that they get there first.
I knew it was coming.
 
The only opposing dynamic is that, frankly, Intel's business has plenty of problems of its own and I think several very large factors will contribute to rapidly deteriorating their ASPs (and profits) in 2010 & 2011. I don't have the time to go into those now (and won't for some time), so you'll just have to believe me. Back to lurking I go...

Lemme guess: a larger and larger fraction of the CPUs being sold having low profits, like Atom or related SoCs :?:
 
What's an ASP?

That's intriguing. Is it the bad economy? Commoditization? Quad-core ARM invading the desktop? Or an old rival reviving? At least give some hints before going back to lurking ;)

I'd bet on the quad-core and eight-core communist 64-bit MIPS with low-level x86 emulation :).
Though they're intended for servers, low-power multi-user computers and supercomputers rather than high-end desktops.
 