Should integrated graphics be in the CPU or the chipset?

So if we did have a GPU on the CPU, would we have a GPU core running at 3 GHz?
I wonder what a current IGP would perform like running at that speed.
PS: what speed do current IGPs run at?
 
So if we did have a GPU on the CPU, would we have a GPU core running at 3 GHz?
I wonder what a current IGP would perform like running at that speed.
PS: what speed do current IGPs run at?

No, it would run at the same speed it does now (lower than regular graphics card chips). Not every component on a CPU runs at the same speed; if they all did, the level of integration seen today would be impossible.
 
PS: what speed do current IGPs run at?

The chipset speed :)
Basically, it can vary between roughly 266 and 667 MHz.
Clock speed isn't the real performance problem, though; bandwidth is the main bottleneck.
You could throw more clock speed or more pipelines at an IGP, but I doubt it would go any faster in most cases.
This problem won't go away by integrating it on the CPU. A dedicated card will have a huge performance advantage for years to come, because it has its own high-bandwidth memory.
But still, an IGP is better than nothing... probably also for GPGPU tasks.
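To make the bandwidth point concrete, here's a back-of-the-envelope sketch. The figures are illustrative assumptions, not measured specs: an IGP sharing dual-channel DDR2-800 system memory with the CPU, versus a discrete card with its own 256-bit GDDR3.

```python
def peak_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth in GB/s: bus width (bytes) * effective transfer rate."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# IGP: two 64-bit DDR2-800 channels, and the CPU takes a cut of this too
igp = peak_bandwidth_gbs(128, 800)        # ~12.8 GB/s, shared with the CPU
# Discrete card: 256-bit GDDR3 at 2000 MT/s effective, all for the GPU
discrete = peak_bandwidth_gbs(256, 2000)  # ~64 GB/s

print(f"IGP: {igp:.1f} GB/s shared, discrete: {discrete:.1f} GB/s dedicated")
```

Even before the CPU steals its share, the IGP has a fraction of the bandwidth of a mid-range discrete card, which is why extra clock speed or pipelines alone wouldn't help much.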
 
Integrating a GPU into the CPU could mean more memory bandwidth and less latency for the IGP. But the main reason why Intel and AMD are doing this is obvious - to get rid of nVidia in the IGP chipset market.
GPGPU was also mentioned here. Perhaps sometime in the future software will be able to take advantage of it, although I think the first generation will be long obsolete before that happens. Seriously, did anyone ever run 64-bit apps on a Socket 754 Clawhammer?
As dies shrink and we move the memory controller and PCI Express into the CPU, we have more die space left in the chipset.
Once we move the PCI Express controller into the CPU, we won't need a northbridge anymore. But you sure could still glue a low-end GPU onto a motherboard and add some dedicated GDDR memory.
 
Integrating a GPU into the CPU could mean more memory bandwidth and less latency for the IGP. But the main reason why Intel and AMD are doing this is obvious - to get rid of nVidia in the IGP chipset market.

When large semiconductor manufacturers make such major architectural changes, it means they've detected clear advantages that outweigh any possible disadvantages. As Kristof used to say, the CPU always sits on the wrong side of the bus compared to the GPU (and you're implying something very similar above). After dealing with SoCs for over two generations, Intel has probably seen that it's not a bad idea after all. Firms like INTEL/AMD making such a move just to "piss off" NVIDIA sounds silly, to say the least; especially for Intel, which has consistently dominated that market for years.

Last but not least, I severely doubt that INTEL started the Larrabee project years ago without any relevant prospects of integrating those cores into their CPUs in the foreseeable future. In the meantime I wouldn't be surprised if they use Imagination IP for their first steps. And before anyone says it, there's a clear reason why INTEL picked IMG's and not NV's IP for the PDA/mobile market: IMG's IP is superior in most aspects compared to what NV has.

GPGPU was also mentioned here. Perhaps sometime in the future software will be able to take advantage of it, although I think the first generation will be long obsolete before that happens.

http://arstechnica.com/journals/app...joins-working-group-to-hammer-out-opencl-spec

If OpenCL is getting its start from firms that mostly deal with the PDA/mobile space, why wouldn't it make sense for the lowest-end PC space too? As I said above, it might even be that Intel uses IMG IP for their first moves, and in that case SGX should be close to ideal for anything GPGPU.
 
Firms like INTEL/AMD making such a move just to "piss off" NVIDIA sounds silly, to say the least;
I don't think so. Engineers don't make the decisions, managers do. And if the managers think it would be nice to screw nVidia out of the market, then that's what happens. It's roughly the same as nVidia blocking SLI on other chipsets so they sell an nForce with every SLI setup. Of course it's mainly about higher margins, but if AMD and Intel can screw nVidia at the same time, so much the better for them.
Ailuros said:
Last but not least, I severely doubt that INTEL started the Larrabee project years ago without any relevant prospects of integrating those cores into their CPUs in the foreseeable future. In the meantime I wouldn't be surprised if they use Imagination IP for their first steps.
Larrabee is not a GPU built from scratch using another company's know-how. It's more like a many-core CPU with vector computing units, some simple texturing-related logic, and display output logic. So I'm not sure whether they used Imagination's technology for that; I always thought they were going to use it for IGPs only.
As for integrating a Larrabee into the CPU, that sure as hell will happen, but we're not talking Fusion or Havendale here; we're talking CPUs similar to what Larrabee is. Oh, did I mention Larrabee uses P54C-based x86 cores?
Ailuros said:
If OpenCL is getting its start from firms that mostly deal with the PDA/mobile space, why wouldn't it make sense for the lowest-end PC space too? As I said above, it might even be that Intel uses IMG IP for their first moves, and in that case SGX should be close to ideal for anything GPGPU.
Using GPUs for general computing would make a lot of sense, but it won't happen until there's a unified API. Right now, if you want to write something and have it run on a GPU, you need separate code for CUDA, the Stream SDK, and Toshiba's SpursEngine, plus a separate CPU path. If everybody agreed on one API (be it OpenCL, CUDA, or whatever), you'd only need to write things twice: a GPU path and a CPU path. And the future is only having to write it once and have the compiler/runtime layer decide what runs on the CPU and what on the GPU. The runtime-layer approach would be especially great, as it could take into account how powerful your CPU and GPU are and what their actual load is.
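That "write once, dispatch at runtime" idea can be sketched in a few lines. Everything here is hypothetical for illustration: the backend check, the function names, and the fallback logic are made up, and a real runtime (like what OpenCL specifies) would query actual devices and their load.

```python
def gpu_available():
    """Stand-in for a runtime device query; pretend no GPU is present."""
    return False

def saxpy_cpu(a, x, y):
    # Plain CPU path: y = a*x + y, element by element.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_gpu(a, x, y):
    # Placeholder for a GPU kernel launch (CUDA/Stream/SpursEngine would go here).
    raise NotImplementedError("no GPU backend in this sketch")

def saxpy(a, x, y):
    # The runtime layer: the application calls this once, and the
    # dispatcher picks whichever device is actually available.
    if gpu_available():
        return saxpy_gpu(a, x, y)
    return saxpy_cpu(a, x, y)

print(saxpy(2.0, [1.0, 2.0], [3.0, 4.0]))  # falls back to the CPU path
```

The application only ever calls `saxpy`; whether it lands on the CPU or GPU is the runtime's problem, which is exactly the single-source model described above.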
 
Since both the GPU and the CPU do computation, can't redundant tasks be combined? As in, FPU or math functions being done by the GPU and left off the CPU? Maybe a much more integrated GPCPU than what will come out initially, but then the cache could be shared. I can't help but think clock speed could be greatly increased as well, with x86 getting a GPU extension of tasks. Now take a GPCPU with multiple cores, as in a quad GPCPU: four CPU cores with four GPU cores sharing a wider memory bus than what is available now. Just interesting. I don't think any of this will be in the first or second generation, but later on, why not?

The next idea to poke at is discrete cards with GPCPU ability and dedicated wide, fast memory, hmmmmm?
 