This may be a possibility, but not for some time to come. Bear in mind how long it takes Intel to come out with a new architecture, and CPUs are vastly simpler in their processing requirements than GPUs (they're only more complex in some ways because companies like Intel and AMD want them to run a legacy instruction set at high performance and at extremely high clockspeeds).

Dave B(TotalVR) said:
Well, it looks to me like if you have Eurasia, you have vertex processing. The reason I think it is ideal is its small silicon area, which should make a huge impact on chip yields for Intel. On top of that, given Intel's excellent ability to produce highly optimized logic blocks for their processors, we could envisage the Eurasia core running at the full speed of the processor.
That's a scary thought, having a GPU running in the ~3 GHz region. Couple this with an integrated memory controller, let's say dual-channel DDR400 as a minimum, probably higher; that's 6.4 GB/s. 1600x1200x32 at 60 fps requires about 440 MB/s for framebuffer writes. The question is, how much memory bandwidth will the texturing and scene composition require? Anybody's guess, but clearly this is a bandwidth-restricted system, a system where PowerVR would shine above its competitors.
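For what it's worth, the bandwidth figures quoted above check out; a quick back-of-the-envelope sketch (assuming 4 bytes per 32-bit pixel and the standard DDR400 dual-channel peak of 400 MT/s x 8 bytes x 2 channels):

```python
# Framebuffer write traffic: 1600x1200, 32-bit color, 60 fps
width, height = 1600, 1200
bytes_per_pixel = 4
fps = 60
framebuffer_mb_s = width * height * bytes_per_pixel * fps / 2**20
print(f"Framebuffer writes: ~{framebuffer_mb_s:.0f} MB/s")  # ~439 MB/s

# Dual-channel DDR400 peak: 400 MT/s * 8 bytes/transfer * 2 channels
ddr400_gb_s = 400e6 * 8 * 2 / 1e9
print(f"Dual-channel DDR400 peak: {ddr400_gb_s:.1f} GB/s")  # 6.4 GB/s
```

So framebuffer writes alone eat only about 7% of the theoretical peak; it's the texturing and scene traffic on top of that which makes the system bandwidth-restricted.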
That said, I do expect GPUs to catch up to CPUs in clock speed in the coming years. Basically, I expect the featureset of GPUs to solidify over the next 2-4 years, so that the majority of improvements come from clockspeed, parallelism, and efficiency rather than new features. After those 2-4 years, ATI and nVidia will have had the time required to really push the clockspeed of their parts through the roof.
I don't believe that Intel will ever compete with ATI or nVidia in the high-end GPU market.