Intel, Integrated video and Eurasia

Dave B(TotalVR) said:
Well it looks to me like if you have Eurasia you have vertex processing. The reason I think it is ideal is because of its small silicon area. Given this it should make a huge impact on chip yields for Intel. On top of that, given their excellent ability to produce highly optimized logic blocks for making their processors would could envisage the Eurasia core running at the full speed of the processor.

That's a scary thought, having a GPU running in the ~3GHz region. Couple this with an integrated memory controller - let's say dual-channel DDR400 as a minimum, probably higher. That's 6.4 GB/s. 1600x1200x32 at 60 fps requires about 440 MB/s for framebuffer writes. The question is, how much memory bandwidth will the texturing and scene composition require? Anybody's guess, but clearly this is a bandwidth-restricted system, a system where PowerVR would shine above its competitors.
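A quick sanity check of the figures quoted above (a minimal sketch; it assumes 32-bit colour, one framebuffer write per pixel per frame, and ignores Z, texture and composition traffic entirely):

```python
# Back-of-envelope check of the bandwidth numbers in the quote above.
# Assumptions: 4 bytes per pixel (32-bit colour), one framebuffer write
# per pixel per frame, no Z/texture/composition traffic counted.

width, height, bytes_per_pixel, fps = 1600, 1200, 4, 60
fb_writes = width * height * bytes_per_pixel * fps           # bytes per second
print(f"Framebuffer writes: {fb_writes / 2**20:.0f} MiB/s")  # ~440 MiB/s

# Dual-channel DDR400: 400 MT/s * 8 bytes per transfer * 2 channels
ddr400_dual = 400e6 * 8 * 2
print(f"DDR400 dual channel: {ddr400_dual / 1e9:.1f} GB/s")  # 6.4 GB/s

print(f"Bus fraction taken by framebuffer writes: {fb_writes / ddr400_dual:.1%}")  # ~7%
```

So the plain framebuffer writes only eat a small slice of the bus; the open question, as the quote says, is how much headroom the texturing and scene composition on top of that would leave.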
This may be a possibility, but not for some time to come. Bear in mind how long it takes Intel to come out with a new architecture, and CPUs are vastly simpler in their processing requirements than GPUs (they're only more complex in some ways because companies like Intel and AMD want them to run a legacy instruction set at high performance and at extremely high clockspeeds).

That said, I do expect GPUs to catch up to CPUs in clock speed in the coming years. Basically, I expect the featureset of GPUs to solidify over the next 2-4 years, so that the majority of improvements are in clockspeed, parallelism, and efficiency, rather than in featureset. After about 2-4 years, ATI and nVidia will have the time required to really push the clockspeed of their parts through the roof.

I don't believe that Intel will ever compete with ATI or nVidia in the high-end GPU market.
 
Ailuros said:
I've lost you here....

Hmm, I was in a hurry and did not check that post before I posted it.

It should have read more like:

Well it looks to me like if you have Eurasia you have vertex processing. The reason I think it is ideal is because of its small silicon area. Given this it should **NOT** make a huge impact on chip yields for Intel. On top of that, given their excellent ability to produce highly optimized logic blocks for making their processors one could envisage the Eurasia core running at the full speed of the processor.
 
Mariner said:
Eurasia offers a unified shader similar to Xenos, doesn't it?

I think Dave's saying that this ought to lead to a smaller silicon area as no additional vertex shaders need to be added. Whether or not this is the case I don't know (would separate pixel & vertex shaders take up more space?), but I can certainly see the logic behind it.


Eurasia is clearly a unified shader architecture, much like the Xbox 360's graphics chip - but without the funky smart DRAM (it doesn't need it).
 
Ailuros said:
I don't even know what target market he is exactly referring to, because 3GHz for a graphics core is just wishful dreaming, and will be for years to come.

The Intel SoCs into which the 2700G (MBX Lite) has been integrated so far pair CPUs at over 600MHz with a graphics unit running at merely 75MHz.

Since the corresponding Eurasia/SGX announcements mention things like procedural geometry (and that the specifications exceed DX9.0), it doesn't take a wizard to assume that it's able to handle at least as complex geometry calls as required in WGF2.0, but that's not my question mark.


With a 3GHz graphics core, you will not need so many processing elements to achieve the same performance. If anyone could get a graphics core running at 3GHz, Intel could - they have the time and R&D. You have a very good point regarding the MBX, however.
 
Yes, Chalnoth, but not up to 3GHz all that soon. I don't expect GPUs to exceed the 2GHz range in the coming decade, but that's just me.

***edit:

I don't believe that Intel will ever compete with ATI or nVidia in the high-end GPU market.

It wouldn't make much sense for Intel's business scheme as it is, either. I think what some here are trying to say is that Intel might not have had enough time to design a graphics core for WGF2.0 compliance for their future IGPs. Nothing is guaranteed yet, though, until we know what exactly Intel plans to use Eurasia for.
 
Ailuros said:
Yes, Chalnoth, but not up to 3GHz all that soon. I don't expect GPUs to exceed the 2GHz range in the coming decade, but that's just me.
Do you mean by 2010, or in the next ten years?

A decade is an exceedingly long time in the world of processors. I expect clock speeds of GPUs to increase rapidly after it becomes clear that there is little need for much more in the way of features, and even more so as the increase in transistor densities slows. Unlike current CPUs, GPUs still have a lot of clockspeed headroom, despite the dramatically larger number of logic transistors.
 
Chalnoth said:
Unlike current CPUs, GPUs still have a lot of clockspeed headroom, despite the dramatically larger number of logic transistors.

As geometries shrink, I gather wire delay is becoming an increasingly important factor in diminishing returns for clockspeeds. In this regard, with their numerous wide internal busses and crossbars, wouldn't GPUs be more clock-limited than CPUs for a given process?
 
Chalnoth said:
Do you mean by 2010, or in the next ten years?

A decade is an exceedingly long time in the world of processors. I expect clock speeds of GPUs to increase rapidly after it becomes clear that there is little need for much more in the way of features, and even more so as the increase in transistor densities slows. Unlike current CPUs, GPUs still have a lot of clockspeed headroom, despite the dramatically larger number of logic transistors.

I'm taking various industry predictions into account; up to 2014 the estimate is lower than 2GHz for GPUs. That's already a huge increase in frequency, but not as high as 3GHz.
 
Even if you used more sophisticated techniques to design higher clocked GPUs you'd just end up thermally limited (wire delay is not an issue, the GPU already deals with large latencies ... it can take a little more pipelining of cross die busses).
 
Ailuros said:
I'm taking various industry predictions into account; up to 2014 the estimate is lower than 2GHz for GPUs. That's already a huge increase in frequency, but not as high as 3GHz.

10 years ago we were using GPUs with clock speeds of ~50MHz; now they are ~500MHz. In another 10 years I think it's going to increase to more than just 2GHz.
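Purely as an illustration of what that trend implies if it simply continues (the 50MHz and 500MHz figures are the rough ones from the paragraph above, nothing more):

```python
# Rough extrapolation of the quoted GPU clock trend: ~50 MHz around 1995
# to ~500 MHz around 2005 is 10x per decade, or about 26% per year compounded.
start_mhz, end_mhz, years = 50, 500, 10
annual = (end_mhz / start_mhz) ** (1 / years)
print(f"Implied annual growth: {annual - 1:.0%}")                  # ~26%

# Naively continuing the same compound growth for another decade:
print(f"Ten years out: {end_mhz * annual**years / 1000:.1f} GHz")  # 5.0 GHz
```

Whether that growth rate actually holds is, of course, exactly what is being argued here.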


Besides, the main reason that GPUs are clocked so much lower than CPUs is time to market. Companies like Nvidia and ATi just don't have the time and resources to create their own silicon logic that is optimised for maximum clock speeds; instead they have to use standard blocks, because getting the timings right takes years. Intel and AMD can do this because they use basically the same core over a number of years, just increasing its clock speed (through optimisation and process shrinks) and adding little bits here and there, like extra cache or SSE2 or whatever.
 
MfA said:
Even if you used more sophisticated techniques to design higher clocked GPUs you'd just end up thermally limited (wire delay is not an issue, the GPU already deals with large latencies ... it can take a little more pipelining of cross die busses).


Another good point. I think there is a bit more room for heat management in GPUs at the moment, though.
 
A couple of years ago, I'd have expected a rising interest in reaching higher clockspeeds simply due to the extra die size needed to add ever more execution resources. However, as both ATi and nVidia seem to be fine with producing 250+ mm² dies for the high end, I wonder whether we'll see that sea change for a while.
 
Dave B(TotalVR) said:
10 years ago we were using GPUs with clock speeds of ~50MHz; now they are ~500MHz. In another 10 years I think it's going to increase to more than just 2GHz.

Then kindly count the number of transistors on GPUs in 1995 and compare it to today's 302M on G70 (though I'd prefer to count only real 3D accelerators, which came after '98).


Besides, the main reason that GPUs are clocked so much lower than CPUs is time to market. Companies like Nvidia and ATi just don't have the time and resources to create their own silicon logic that is optimised for maximum clock speeds; instead they have to use standard blocks, because getting the timings right takes years. Intel and AMD can do this because they use basically the same core over a number of years, just increasing its clock speed (through optimisation and process shrinks) and adding little bits here and there, like extra cache or SSE2 or whatever.

If ATI/NVIDIA don't have the time, then who does? I don't recall any massively different clockspeeds from Intel's IGPs so far, nor from anyone else.

The future lies more in the direction of having different clock domains, or "smart" clockspeeds if you prefer, to keep power consumption and heat dissipation as much under control as possible. Here's an estimate from NVIDIA itself, from a GPU Gems 2 whitepaper:

[Image: estimate.jpg - NVIDIA's projected GPU clockspeeds, transistor counts, and power consumption]


Take it only as a prediction, but have a very close look at the estimated power consumption, transistor counts and capability estimates, and only then at the clockspeeds. As a bundle it makes way more sense than what you're estimating there.

What time to market anyway? A new generation sits in development from chalkboard to mass production for an estimated 2.5 years these days, and you can bet that either ATI or NVIDIA dedicates more resources to each development cycle of graphics accelerators than Intel ever has or ever will. Besides, Intel designs IGPs at best, not high-end GPUs.

Of course anything is possible, but I'd say you'd want to keep power consumption and heat dissipation under control.
 
Ailuros said:
If ATI/NVIDIA don't have the time, then who does? I don't recall any massively different clockspeeds from Intel's IGPs so far, nor from anyone else.
They don't have the time now. They will have the time when the featureset of GPUs solidifies.
 
Chalnoth said:
They don't have the time now. They will have the time when the featureset of GPUs solidifies.

I think we still have a very long road ahead when it comes to feature sets.
 
The main problem with designing GPUs for very high clock speeds, other than design effort/time, is that to support the higher clock speeds, each pixel has to pass through a larger number of pipeline steps, causing the power consumption per rendered pixel to increase almost linearly with the targeted clock speed of the chip. If you were to, e.g., replace the G70 with, say, an 8-pipeline part running at 1.4 GHz (assuming the same process etc.), you would get similar performance, a substantially smaller chip, and about 2-3X the power consumption, which sounds like it would be quite expensive to cool properly.
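A minimal sketch of that argument, taking the "power per pixel scales roughly linearly with target clock" rule of thumb at face value (the 24-pipeline/430MHz G70 configuration is real; the 8-pipeline/1.4GHz alternative and the linear scaling itself are just the assumptions from the paragraph above):

```python
# Toy model of the pipelines-vs-clock tradeoff described above.
# Assumption (from the post): energy per rendered pixel grows roughly
# linearly with the target clock, because of the deeper pipelining (and
# higher voltage) needed to reach that clock on the same process.

def relative_power(pipelines, clock_mhz, baseline_clock_mhz=430):
    fill_rate = pipelines * clock_mhz                   # pixels/s, arbitrary units
    energy_per_pixel = clock_mhz / baseline_clock_mhz   # relative to the baseline part
    return fill_rate * energy_per_pixel

g70 = relative_power(pipelines=24, clock_mhz=430)       # baseline
alt = relative_power(pipelines=8, clock_mhz=1400)       # hypothetical narrow, fast part

print(f"Fill rate ratio: {(8 * 1400) / (24 * 430):.2f}")  # ~1.09, i.e. similar performance
print(f"Power ratio:     {alt / g70:.1f}x")               # ~3.5x, in the 2-3x+ ballpark
```

Real numbers would also depend on voltage scaling and leakage, but the direction of the tradeoff is the same.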

When designing a GPU for a given performance level, scaling the target clock speed up and down should allow the GPU designer to trade off the size/cost of the die itself (higher clock speed->smaller die) versus the size/cost of the infrastructure needed to power/cool the die (higher clock speed->more power); as such, just pushing the clock speed as far as possible into the multi-GHz range is unlikely to be an optimal tradeoff for many process generations to come.

As for how big chips can become: AFAIK, most processes are limited to somewhere between 450 and 650 mm²; the biggest actual chip I've heard about is the Intel Montecito at 594 mm².
 
arjan de lumens said:
When designing a GPU for a given performance level, scaling the target clock speed up and down should allow the GPU designer to trade off the size/cost of the die itself (higher clock speed->smaller die) versus the size/cost of the infrastructure needed to power/cool the die (higher clock speed->more power); as such, just pushing the clock speed as far as possible into the multi-GHz range is unlikely to be an optimal tradeoff for many process generations to come.
Pushing clockspeed is a good thing to do up to a point (Intel has taken it too far, for example). The reality is that while the same chip run at a higher clockspeed will draw a lot more power, there are things one can do to reduce the power consumption of a chip designed to run at a high clockspeed. The simple fact that we have CPUs that run at 2-3GHz without power consumption exploding is proof of this.
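The usual way to frame this is the dynamic-power rule of thumb P ≈ C·V²·f: the power cost of a higher clock depends heavily on whether the supply voltage has to rise with it. A purely illustrative comparison (the 3x frequency ratio and the voltage behaviour are assumptions, not measurements):

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.
# If the voltage has to track the frequency (brute-force overclocking),
# power grows roughly with f^3; if circuit/pipeline design lets you hit
# the higher clock at the same voltage, power only grows roughly with f.
# Numbers below are purely illustrative.

def relative_power(freq_ratio, voltage_ratio):
    return voltage_ratio**2 * freq_ratio

freq_ratio = 3.0  # e.g. taking a 500 MHz design to 1.5 GHz

print(f"Voltage tracks frequency: {relative_power(freq_ratio, freq_ratio):.0f}x power")  # 27x
print(f"Voltage held constant:    {relative_power(freq_ratio, 1.0):.0f}x power")         # 3x
```

Which of those two cases you end up closer to is essentially what the extra design effort buys you.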
 
Comparing CPUs to GPUs doesn't really make a lot of sense. A CPU has a much lower number of execution units, and a much lower average utilization of those execution units, than what a GPU has - if you set up a CPU and a GPU on a similar process, with similar die area and clock speed, the GPU will likely draw about 3-10x more power.
 
arjan de lumens said:
Comparing CPUs to GPUs doesn't really make a lot of sense. A CPU has a much lower number of execution units, and a much lower average utilization of those execution units, than what a GPU has - if you set up a CPU and a GPU on a similar process, with similar die area and clock speed, the GPU will likely draw about 3-10x more power.
Ah, but CPUs (at least Intel's) have hit a veritable wall in clock speeds. I fully expect CPUs to hover right around 3-4GHz max for a long time. As the GPU manufacturers get to spend more time on each architecture, they'll start to push towards a similar maximum clockspeed, set roughly by the size of the chip (obviously it'll be somewhat lower, as high-end GPUs are typically larger).

And while CPUs may not have many functional units, they have a whole lot of other logic, such as instruction decode and branch prediction. What they do have as a benefit to power consumption is much more cache per unit of die area.
 