Should integrated graphics be in the CPU or the chipset?

iwod

Newcomer
I think for now it should stay in the chipset. As dies shrink and we move the memory controller and PCI Express into the CPU, we have more die space left in the chipset.

Moving graphics into the CPU would only make sense once we could fit half of Larrabee inside it; until then, the graphics part should keep sitting inside the chipset.
 
For the foreseeable future we'll have both options, but at some point it will probably disappear onto the CPU. After all, both Intel and AMD want to increase CPU ASPs and lock 3rd party chipset vendors out of the market, so it is the next logical step in integration.
 
If the chipset is pad limited, then it makes 100% sense to use that otherwise wasted silicon.
 
I think it depends on the CPU really.
Currently we can fit 4 cores onto a CPU, but most CPUs are still dual-core.
So instead of 2 more cores they could fit a GPU in the extra space on the CPU.
4 cores + GPU seems a bit much.

I wouldn't mind seeing the chipset go away altogether, or at least become more compact. On current Intel boards, you generally have 2 large chips on the board, with large heatsinks and heatpipe constructions and all. I could do without the load of copper on my mainboard, which is just in the way.
 
From a user's POV, I think that if I had a PC with integrated graphics, I'd be more inclined to upgrade the CPU if I got a GPU upgrade at the same time.

A CPU with integrated graphics will also allow graphics upgrades for devices with no expansion slots and custom motherboards. This will make such PCs more attractive.
 
If the chipset is pad limited, then it makes 100% sense to use that otherwise wasted silicon.
But last time I looked at the pin numbers, one of the main reasons why southbridges are pad limited is that they still support PCI and PATA, perhaps along with other legacy fluff. Of course, there's also the fact that all the modern southbridge functions are very small yet take a non-negligible amount of power, so that certainly doesn't make things any better.

I would be very curious to see if an eWLB package (as announced recently by Infineon) could do the trick for a small, very cost-efficient southbridge. Let's say 12xUSB/6xSATA/3xPCIe/1xGigE and practically nothing else. My guess is yes, but I obviously don't have the raw data. If so, that would change the economics quite a bit.
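
To put a rough number on "pad limited", here's a back-of-envelope count in C for that hypothetical part. Every per-interface pin figure below is my own assumption (differential pairs counted as 2 pins each, no datasheet consulted), so treat this as an illustration of the reasoning rather than real numbers:

/* Rough signal-pin estimate for a hypothetical
 * 12xUSB / 6xSATA / 3xPCIe / 1xGigE southbridge.
 * All per-interface figures are assumptions, not datasheet values. */
#include <stdio.h>

int main(void)
{
    int usb2 = 12 * 2;  /* USB 2.0: one D+/D- pair per port              */
    int sata = 6 * 4;   /* SATA: one TX pair + one RX pair per port      */
    int pcie = 3 * 4;   /* PCIe x1: TX pair + RX pair (refclk in misc)   */
    int gige = 1 * 4;   /* GigE via SGMII: TX pair + RX pair (assumed)   */
    int misc = 30;      /* clocks, straps, SPI flash, SMBus, etc. (guess) */

    int signal = usb2 + sata + pcie + gige + misc;
    /* Power/ground typically adds roughly as many balls again (assumption). */
    int total = signal * 2;

    printf("signal pins: %d, ballpark package total: %d\n", signal, total);
    return 0;
}

With those guesses you land somewhere under 200 balls total, which is exactly the kind of count where a wafer-level package like eWLB could plausibly work out.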
 
Next year both AMD and Intel will have integrated graphics on-package and a single-chip chipset, so there's not much to debate. It's coming soon whether you like it or not. ;)

I'm just wondering what Nvidia is going to do then. Obviously these solutions won't compete with their high-end stuff, but most of their money comes from cards like the 9500GT or 8500GT, and IGPs are almost at that level now. Most of their product line will become redundant as people get a decently working GPU with their nice-price CPU anyway.
 
In-CPU seems to be the way to go. On the retail side, it seems that chipmakers would rather sell CPUs with higher ASPs and a cheaper platform to go with them. You can't budge much on mobo prices, but you certainly can on CPUs you supply yourself.


I'm skeptical of near-future mobile implementations, though. Two separate hotspots are seemingly easier to cool than two under one IHS. Nvidia's low end might just live on through that niche; it all depends on how they're going to improve their current lineup (on the same arch?) by about... twofold first. (And that's from a general perspective, not just relative to this discussion.)
 
Next year both AMD and Intel will have integrated graphics on-package and a single-chip chipset, so there's not much to debate. It's coming soon whether you like it or not. ;)

I'm just wondering what Nvidia is going to do then. Obviously these solutions won't compete with their high-end stuff, but most of their money comes from cards like the 9500GT or 8500GT, and IGPs are almost at that level now. Most of their product line will become redundant as people get a decently working GPU with their nice-price CPU anyway.

No IGP is at 9500GT level. A 9500GT is basically a die-shrunk 8600GT.
 
I still wonder about the performance of having the GPU inside the CPU. Will the GPU share any of the CPU's cache? Will one memory controller serve both the GPU and the CPU, or will there be separate memory controllers? How about having dedicated external memory for the GPU? And will the CPU and GPU work together in different ways than they do now?
 
I think that there will be no changes at least in the short term (1-2 years).
The IGP will be moved from the chipset to the CPU package, but will still use the CPU's memory controller, and will still communicate with the CPU via the same bus.
It will probably only make hardware slightly smaller, cheaper and more economical to run, but have no effect on performance.
 
Communicate through the same bus when the GPU is inside the CPU? GPU cache separate from CPU cache? External GPU memory support? I would think the number of pins on a GPU+CPU would limit what can be done; all the outputs from the GPU will be needed, and maybe some inputs. Very interesting for me to think about, since I haven't a clue whether a new socket will be needed to support this.
 
Basically both Intel and AMD will just copy-paste an IGP core onto the CPU die or package.
Nothing to see here, move along.
 
OK, what about clock speeds for a GPU on the CPU? I disagree that it will be just an IGP pasted onto the core. It just seems interesting to me, and I'm wondering whether any great advantages can be obtained by being in the same chunk of silicon. It seems a waste if it's just what you say.
 
It's not a waste, as I said:
It will probably only make hardware slightly smaller, cheaper and more economical to run, but have no effect on performance.
 
For the foreseeable future we'll have both options, but at some point it will probably disappear onto the CPU. After all, both Intel and AMD want to increase CPU ASPs and lock 3rd party chipset vendors out of the market, so it is the next logical step in integration.

Also, if you want to reduce power consumption, you really want the GPU and the memory controller on the same chip - IO eats a lot of power.
 
Every CPU so equipped becomes a throughput monster courtesy of OpenCL, which would be pretty interesting in certain spaces. But that's a long way off.
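
For what it's worth, here's what that would look like from the host side: a minimal C sketch against the OpenCL API (assuming an OpenCL runtime is installed; nothing vendor-specific, and error handling is trimmed). The point is that an IGP on the CPU die would just enumerate as another compute device:

/* Minimal sketch: how an on-die GPU would show up to OpenCL host code. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    /* Grab the first platform and ask it for a GPU device; an integrated
     * GPU would be enumerated here just like a discrete one. */
    clGetPlatformIDs(1, &platform, NULL);
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        printf("no GPU device exposed\n");
        return 1;
    }
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("throughput device: %s\n", name);
    return 0;
}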

Jawed
 
Why no effect on performance? That does not make sense at all, especially now that the MC is inside the CPU in both AMD's and Intel's architectures.

Would it be much different than the performance boost achieved by integrating the FPU into the CPU back in the 386/486 days?
 
Why should there be a performance effect in the first place?
I don't see why people expect that.
In AMD's case, it doesn't matter much; the IGP could already access the MC via the HT link.
For Intel there might theoretically be something to win, but I doubt their IGPs are even fast enough that it would make a difference.
 
If the chipset is pad limited, then it makes 100% sense to use that otherwise wasted silicon.
That logic only makes sense if the IGP doesn't add any more pad requirements. Future standalone low-end graphics chips will certainly be pad limited, and merging two pad-limited chips doesn't buy you anything.

With the MC on the CPU, there's no need for a high BW link to the chipset (except for PCIe 2.0 x32, but we're talking IGP systems here), thus reducing the pincount needed on the chipset.
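
For scale, the x32 case really is the only high-bandwidth one. These are just the standard PCIe 2.0 spec figures, worked out in C:

/* PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding,
 * i.e. 500 MB/s per lane per direction. */
#include <stdio.h>

int main(void)
{
    double gtps = 5.0;                    /* PCIe 2.0 transfer rate, GT/s  */
    double per_lane = gtps * 8 / 10 / 8;  /* GB/s per lane per direction   */

    printf("x1:  %.1f GB/s per direction\n", per_lane);       /* 0.5  */
    printf("x32: %.1f GB/s per direction\n", 32 * per_lane);  /* 16.0 */
    return 0;
}

Everything else a southbridge carries (USB, SATA, GigE) fits comfortably in a far narrower link.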
 