[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

2) I doubt they can fab Larrabee on an N-1 process and have it be competitive. This means taking up prime fab space from far higher-margin CPUs to make it.

Just so we're clear, Intel's N-1 process tends to be the same as other companies' N process...

Also be careful with those graphs: while CPUs do make up the majority of the revenue, even 2.5% of that graph is a BILLION $ in revenue! Also keep in mind that the wafer start slide isn't process normalized either.

Aaron Spink
speaking for myself inc.
 
Not only do they not have other sources of substantial revenue, but the sources they do have are even more laughable in terms of gross profit. I'm pretty sure chipsets have substantially lower gross margins than CPUs despite lower fab amortization costs, and let's not even talk about the utter disaster that is flash.

A lot of that is because there just aren't really that many areas in the semi biz that come anywhere near the ASP or volume of CPUs. For instance, all of Nvidia generates less revenue than the other/other silicon categories in that graph!

Aaron Spink
speaking for myself inc.
 
Another thing is... GPUs, especially the way Intel is going to implement them, are a lot closer to Intel's core CPU business than chipsets, network devices or flash memory.
Depending on how successful nVidia is at pushing Cuda/GPGPU into the mainstream, this (GP)GPU-thing actually could *become* part of Intel's core business.
 
Just so we're clear, Intel's N-1 process tends to be the same as other companies' N process...
Theory is one thing, practice is another. In the Larrabee timeframe, Intel's N-1 process is 45nm and TSMC's N-1 process is 40nm. Furthermore, TSMC's 40nm SRAM cell size is 0.242μm² (or, more precisely, from 0.202μm² to 0.324μm² depending on the performance/power/area trade-offs). Intel's SRAM cell size on 45nm is 0.346μm². On 32nm, TSMC's SRAM cell is ~0.15μm² and Intel's is ~0.18μm².
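For a rough sense of what those cell sizes mean as raw bit density, here's a quick back-of-the-envelope conversion in Python, using only the numbers above (array/peripheral overhead is ignored, so real macro density will be lower):

# Convert SRAM cell size (um^2 per bit) into raw bit density (Mbit per mm^2).
# Cell sizes are the ones cited in this post, not vendor-normalized figures.
cells_um2 = {
    "TSMC 40nm (dense cell)": 0.242,
    "Intel 45nm":             0.346,
    "TSMC 32nm (approx.)":    0.15,
    "Intel 32nm (approx.)":   0.18,
}
for name, cell in cells_um2.items():
    mbit_per_mm2 = (1e6 / cell) / 1e6   # 1 mm^2 = 1e6 um^2
    print(f"{name}: {mbit_per_mm2:.1f} Mbit/mm^2")
# TSMC 40nm vs Intel 45nm: 0.346 / 0.242 ~= 1.4x smaller cell for TSMC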

In terms of gate density, Intel's process is believed to have more restrictive design rules, especially on 45nm, to compensate for the lack of immersion lithography. Relative data between TSMC and Intel is obviously nonexistent there too, but my expectation would be that TSMC will also have a substantial gate density advantage on 45nm compared to Intel. As for performance, yes, Intel is obviously going to be better there - but you would expect that to be more than compensated for by their higher wafer costs.

I said it before and I'll say it again: Larrabee being on 45nm would be a disastrous decision. It *needs* to be on 32nm. Heck, if it's a high-end product, it won't have huge volumes anyway, so that shouldn't be a big concern. I sincerely hope that's part of the reason why it was apparently delayed slightly from the original schedule (i.e. a few quarters ago they denied there was any chance it would come out in 2010, and now Otellini has said publicly that's possible).

Also be careful with those graphs: while CPUs do make up the majority of the revenue, even 2.5% of that graph is a BILLION $ in revenue! Also keep in mind that the wafer start slide isn't process normalized either.
Agreed, it's still big business and it'd be very unwise to forget about that. However I'm not sure being process normalized really matters, unless we're talking about 200mm wafers for the old stuff, which is not the case with chipsets AFAIK, even southbridges. The fact that the wafer cost is lower in older fabs is mostly (but not exclusively of course) an accounting issue...
 
Another thing is... GPUs, especially the way Intel is going to implement them, are a lot closer to Intel's core CPU business than chipsets, network devices or flash memory.
Depending on how successful nVidia is at pushing Cuda/GPGPU into the mainstream, this (GP)GPU-thing actually could *become* part of Intel's core business.

The funny thing is that Intel was so uninterested in this. Sure, they have had a graphics business, but it pretty much seems that it was only there out of chipset necessity. Out of all the businesses they bought and kept pouring money into in order to become leaders, GPUs were just not one of them, because clearly the CPU was the only answer to the future. When you think about the potential uses for GPUs now, and see how CPU scaling is crap, this couldn't be more ironic.

If it were anything else, then clearly they would have been working on high-end GPUs years ago. It is now desperation time because CPU scaling pales in comparison to CUDA-like devices.

Anyhow, there are many reasons why this is nothing like what Intel does, some of which are technical and some of which are economic. GPUs are much bigger than CPUs and generate much lower revenues. If Intel could magically cut its fab costs in half, they would still have trouble matching NVIDIA's economics. The idea that they will all of a sudden outperform GeForce because of a process advantage is highly dubious.
 
nVidia G80: ~686 million transistors per die (480 mm²)
Intel Montecito: ~1.7 billion transistors per die (596 mm²)

Why are you comparing a consumer-level chip to one that is targeted towards servers?
 
Why are you comparing a consumer-level chip to one that is targeted towards servers?

What? They don't count?
The point is that to Intel these GPUs are still very small, not to mention outdated.
Both chips were introduced in 2006, and obviously Intel had a much higher transistor density (even though Montecito didn't even leverage their new 65 nm process; it's 90 nm).
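A quick sanity check on that, using the die sizes and transistor counts quoted earlier in the thread (rough arithmetic, nothing more):

# Transistor density from the figures quoted above:
# G80: ~686M transistors on ~480 mm^2 (TSMC 90nm)
# Montecito: ~1.72B transistors on ~596 mm^2 (Intel 90nm)
g80_density = 686e6 / 480         # ~1.4M transistors per mm^2
montecito_density = 1.72e9 / 596  # ~2.9M transistors per mm^2
print(f"G80:       {g80_density / 1e6:.2f} M/mm^2")
print(f"Montecito: {montecito_density / 1e6:.2f} M/mm^2")
print(f"Ratio:     {montecito_density / g80_density:.1f}x")  # roughly 2x in Intel's favour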
 
Look at future trends in the fab industry, though.
If Intel manages to push for 450mm wafers by 2012 (might not happen as equipment manufacturers are still recouping costs from the move to 300mm), moves to 32nm, and goes to immersion or somesuch to fix the throughput issues with its current process, there will be a lot of spare silicon that the CPU market will not absorb.

Utilization charges start to matter if Intel upgrades a lot of fabs. Even if GPUs pull in less money per die area, the cost of idling capacity at a leading-edge fab is huge. Just ask AMD.
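For a sense of how much extra silicon a 450mm transition frees up, simple geometry (ignoring edge exclusion and yield) already tells the story:

import math

# Usable wafer area scales with the square of the diameter.
area_300mm = math.pi * (300 / 2) ** 2   # ~70,700 mm^2
area_450mm = math.pi * (450 / 2) ** 2   # ~159,000 mm^2
print(f"450mm vs 300mm area ratio: {area_450mm / area_300mm:.2f}x")  # 2.25x per wafer start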
 
What? They don't count?
The point is that to Intel these GPUs are still very small, not to mention outdated.
Both chips were introduced in 2006, and obviously Intel had a much higher transistor density (even though Montecito didn't even leverage their new 65 nm process; it's 90 nm).

It's not that they don't exist, it's that the comparison isn't even a remotely valid one to make in the first place.

Montecito is a low-volume (read: niche), incredibly high ASP/margin MPU whose transistor count (and, consequently, die size) is due overwhelmingly to the use of comparatively large amounts of cache across multiple levels.

G80 has sold orders of magnitude more units than any specific member of the Itanium Product Family, costs an order of magnitude less, is targeted at an entirely different segment of the computing market, and most of its transistor budget has been allocated to logic, not cache.

Unfortunately, the transistor allocation for G80 is not known outside of NV; however, this information is readily available for Montecito. Out of the 1.7202 billion transistors in Montecito, only 63.5 million are used for logic. That's a meager 3.7%!

Also, when contemplating the die size and transistor count of G80 it is important to include NVIO in these totals, as G80 is not a functional graphics processor without NVIO to display the contents of the frame buffer.
 
Look at future trends in the fab industry, though.
If Intel manages to push for 450mm wafers by 2012 (might not happen as equipment manufacturers are still recouping costs from the move to 300mm),

So what about that Intel+TSMC+Samsung consortium working towards 450mm wafer rollout in 2012?
 
Yeah, but none of the companies that actually make the fab equipment needed to do this has signed on as of yet.
They are the ones that are still recovering the costs from the last transition.
 
Yeah, but none of the companies that actually make the fab equipment needed to do this has signed on as of yet.
They are the ones that are still recovering the costs from the last transition.

Ah, gotcha. When did that 200->300mm switchover take place anyway?
 
Theory is one thing, practice is another. In the Larrabee timeframe, Intel's N-1 process is 45nm and TSMC's N-1 process is 40nm. Furthermore, TSMC's 40nm SRAM cell size is 0.242μm² (or, more precisely, from 0.202μm² to 0.324μm² depending on the performance/power/area trade-offs). Intel's SRAM cell size on 45nm is 0.346μm². On 32nm, TSMC's SRAM cell is ~0.15μm² and Intel's is ~0.18μm².

As always I reserve all judgment of TSMC's leading edge process capabilities until they've actually shipped volume on a given process node. But even the available data points to a significant performance/power differential with TSMC on the downside. And it doesn't matter if your process is 100x more dense if it burns 100x more power...



Agreed, it's still big business and it'd be very unwise to forget about that. However I'm not sure being process normalized really matters, unless we're talking about 200mm wafers for the old stuff, which is not the case with chipsets AFAIK, even southbridges. The fact that the wafer cost is lower in older fabs is mostly (but not exclusively of course) an accounting issue...

Process normalization does matter, especially when looking at wafer starts and fully amortized fab costs. You are looking at wafer start differentials as a bad thing, when the reality is that a 90nm wafer start in a depreciated fab is pretty much free. You may call it an accounting issue, but the only reason a company like Samsung has gotten where it is today is by paying attention to those "accounting issue(s)"! That attention has allowed Samsung to become the number 2 semiconductor manufacturer in the world, from nothing, in an extremely short time.

So the accounting issues matter and complaining about using more wafers without taking process normalization into account is foolish.
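To make the depreciation point concrete, here's a toy per-wafer cost model (all figures invented purely for illustration; they are not real Intel, TSMC, or Samsung numbers):

# Crude model: per-wafer cost = remaining depreciation spread over yearly
# wafer starts, plus variable cost (materials, labour, etc.).
def wafer_cost(capex_remaining, wafers_per_year, variable_cost):
    return capex_remaining / wafers_per_year + variable_cost

# Leading-edge fab still being depreciated (hypothetical numbers):
print(wafer_cost(capex_remaining=1_000_000_000, wafers_per_year=500_000,
                 variable_cost=800))   # 2800.0 per wafer
# The same fab a few years later, fully written off:
print(wafer_cost(capex_remaining=0, wafers_per_year=500_000,
                 variable_cost=800))   # 800.0 per wafer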

Aaron Spink
speaking for myself inc.
 
It's not that they don't exist, it's that the comparison isn't even a remotely valid one to make in the first place.

I'm just putting things in perspective.
It's something like:
Core2 ---- GPU --------------------------------------------------- Itanium

So I think the way GPUs were portrayed as MUCH larger is a complete exaggeration.
I think Intel can quite comfortably compete with nVidia, even though it may not be as lucrative as the Core2 currently is... Then again, Core2 has virtually no competition whatsoever at this point, so it's not a good indication anyway.
 
I'm just putting things in perspective.
It's something like:
Core2 ---- GPU --------------------------------------------------- Itanium

So I think the way GPUs were portrayed as MUCH larger is a complete exaggeration.
I think Intel can quite comfortably compete with nVidia, even though it may not be as lucrative as the Core2 currently is... Then again, Core2 has virtually no competition whatsoever at this point, so it's not a good indication anyway.

It is not an exaggeration. Compare like price points and you can see that the average die size of a GPU at a given price is greater than that of a CPU costing the same amount.
 