[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

It is not an exaggeration. Compare like price points and you can see that the average die size of a GPU at a given cost is greater than that of a CPU costing the same amount.

Yes, and how is that relevant? As I say, Core2 has no competition; nVidia and ATi don't have that luxury, so they need to keep prices low. With the Pentium 4/Pentium D the cost per die size was completely different, back when it was competing with K8.
Even so I still think the difference is exaggerated.
Aside from that, the discrete GPU market is still MUCH lower volume than the CPU market.
 
Ah, gotcha. When did that 200->300mm switchover take place anyway?

The switch happened, or at least started, in 2001.
AMD only stopped using its 200mm equipment completely last year, so it obviously wasn't a universal transition.

The trend was that such increases would happen every ten years.
200mm was done in 1991, and 300mm was done in 2001.

Equipment manufacturers are unhappy because they still have to make back the expense of the last transition when the fabs want to push for another increase.
Because 300mm research was expensive, and not everyone made the switch, the break-even point was pushed way back.

Even fewer manufacturers could make a 450mm transition, and the equipment and R&D aren't getting cheaper, so break-even would be very far out (and probably not until after Intel or someone else pushes for another increase to line its own pockets).
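For a rough sense of why fabs keep pushing for bigger wafers despite the equipment cost: usable area (and hence dies per wafer) grows with the square of the diameter, so each step is a 2.25x jump. A back-of-envelope sketch in Python (idealized: it ignores edge exclusion, scribe lines and yield, and the 300mm^2 die size is just an arbitrary example):

```python
import math

def wafer_area_mm2(diameter_mm):
    """Usable wafer area in mm^2, ignoring edge exclusion."""
    return math.pi * (diameter_mm / 2) ** 2

def gross_dies(diameter_mm, die_area_mm2):
    """Very rough gross die count: wafer area / die area (no edge loss, no defects)."""
    return wafer_area_mm2(diameter_mm) / die_area_mm2

for d in (200, 300, 450):
    print(f"{d}mm wafer: {wafer_area_mm2(d):8.0f} mm^2, "
          f"~{gross_dies(d, 300):.0f} gross 300mm^2 dies")

# Each diameter step multiplies the area (and thus dies per wafer) by 2.25x:
print(wafer_area_mm2(300) / wafer_area_mm2(200))  # 2.25
print(wafer_area_mm2(450) / wafer_area_mm2(300))  # 2.25
```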
 
Yes, and how is that relevant? As I say, Core2 has no competition; nVidia and ATi don't have that luxury, so they need to keep prices low. With the Pentium 4/Pentium D the cost per die size was completely different, back when it was competing with K8.
Even so I still think the difference is exaggerated.
Aside from that, the discrete GPU market is still MUCH lower volume than the CPU market.

Here are relevant statistics on the largest CONSUMER-GRADE microprocessor for sale from each respective company:

G92:
Cost: $200 (256MB 8800 GT) to $600 (9800 GX2)
330mm^2 (single die)

Yorkfield:
Cost: $266 (Q9300) to $1500 (QX9775)
214mm^2 (dual die)

Discussion = over.
 
The switch happened, or at least started, in 2001.
AMD only stopped using its 200mm equipment completely last year, so it obviously wasn't a universal transition.

The trend was that such increases would happen every ten years.
200mm was done in 1991, and 300mm was done in 2001.

Equipment manufacturers are unhappy because they still have to make back the expense of the last transition when the fabs want to push for another increase.
Because 300mm research was expensive, and not everyone made the switch, the break-even point was pushed way back.

Even fewer manufacturers could make a 450mm transition, and the equipment and R&D aren't getting cheaper, so break-even would be very far out (and probably not until after Intel or someone else pushes for another increase to line its own pockets).

Thanks for the history lesson and the explanation of current events. It all makes sense now.
 
But even the available data points to a significant performance/power differential with TSMC on the downside. And it doesn't matter if your process is 100x more dense if it burns 100x more power...
Absolutely agreed - however, I don't have a lot of data on that point. Is there any public source or indirect evidence you might point me to there?
ShaidarHaran said:
G92:
Cost: $200 (256MB 8800 GT) to $600 (9800 GX2)
330mm^2 (single die)
Uhm, that's not quite extreme enough, the overall BoM has little to do with chip revenue... :)

G80
Price: $125 (Average Chip ASP)
Die Size: 480mm^2
 
Uhm, that's not quite extreme enough, the overall BoM has little to do with chip revenue... :)

You are of course correct, but you're also strengthening my argument. GPUs aren't directly comparable to CPUs in the manner by which Scali is attempting to compare them. The fact that GPUs are not sold separately as CPUs are only illustrates this point further.

G80
Price: $125 (Average Chip ASP)
Die Size: 480mm^2

Thanks for the info. This makes a direct comparison somewhat more feasible, although a bit odd since a consumer would be hard-pressed to buy a bare Geforce graphics core from anyone, unlike a CPU.

So $125 buys 480mm^2 of silicon in the GPU world, and over in CPU land we've got 214mm^2 chips selling for more than twice that amount. This supports my counter-argument against Scali's supposition.

I know die area isn't a direct measure of performance, but it's about the only direct comparison that can be drawn between CPUs and GPUs.
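To spell the per-area arithmetic out, here is the same comparison using only the figures quoted in this thread (a naive sketch; it divides a chip ASP by die area for the G80 and a retail box price by total die area for the Q9300, so the two numbers aren't strictly like-for-like):

```python
def cost_per_mm2(price_usd, die_area_mm2):
    """Naive price per square millimetre of silicon."""
    return price_usd / die_area_mm2

g80 = cost_per_mm2(125, 480)     # G80: average chip ASP over single-die area
q9300 = cost_per_mm2(266, 214)   # Q9300: retail price over dual-die total area

print(f"G80:   ${g80:.2f}/mm^2")
print(f"Q9300: ${q9300:.2f}/mm^2 (~{q9300 / g80:.0f}x the G80 figure)")
```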
 
Here are relevant statistics on the largest CONSUMER-GRADE microprocessor for sale from each respective company:

G92:
Cost: $200 (256MB 8800 GT) to $600 (9800 GX2)
330mm^2 (single die)

Yorkfield:
Cost: $266 (Q9300) to $1500 (QX9775)
214mm^2 (dual die)

Discussion = over.

No, those are retail prices, which have little to do with production costs.
Yorkfield is more expensive because it has less competition, not because TSMC can produce chips cheaper than Intel can.
In fact, the Q6600 is currently cheaper than the 45 nm quadcores. Does that mean Q6600 is cheaper to produce? Unlikely.
So yes, GPUs are (currently) cheaper to consumers per die size, but your conclusion that nVidia produces its GPUs more economically than Intel could is just wrong.
In fact, even comparing 65 nm chip die sizes against 45 nm chips is quite strange... You are holding it against Intel that they have superior technology that allows them to make smaller chips with better performance per mm^2?
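On that last point, an idealized illustration of why cross-node die-size comparisons are lopsided: to first order, area scales with the square of the feature size, so a 65nm design shrunk to 45nm would occupy roughly half the area. Real shrinks never scale this cleanly (SRAM, I/O and analog blocks shrink less than ideally), so treat this as a textbook approximation only:

```python
# Idealized area scaling between process nodes: area ~ (feature size)^2
ideal_scale = (45 / 65) ** 2      # ~0.48
g92_area_65nm = 330               # mm^2, the G92 figure quoted earlier in the thread

print(f"Ideal scale factor 65nm -> 45nm: {ideal_scale:.2f}")
print(f"A 330mm^2 65nm die would become roughly "
      f"{g92_area_65nm * ideal_scale:.0f} mm^2 at 45nm")
```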
 
Indeed, as I already tried to explain, production costs have little to do with retail price.
Just because a chip is expensive to a consumer doesn't mean it was expensive to produce it.
In fact, the actual cost of producing a CPU from raw materials to the finished end product is ridiculously low.
The real cost lies in the investments in R&D and manufacturing facilities that the company had to make in order to produce the chips.
Prices of the same chip change massively during its lifetime... Take the Q6600, for example. I believe it was introduced at about $850, but now it's less than $200, for the exact same chip. So the price has little to do with production cost, and much more with the investments and the strategy to recoup them.
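To put that in arithmetic terms, here is a minimal amortization sketch. Every number below is invented purely for illustration; the point is only that the marginal cost per chip stays flat while the sustainable price falls as the fixed investment is recouped, which is what the Q6600's slide from ~$850 to under $200 reflects:

```python
def break_even_asp(fixed_investment, units_sold, marginal_cost):
    """Average selling price needed to recoup fixed costs after `units_sold` chips."""
    return fixed_investment / units_sold + marginal_cost

fixed = 2_000_000_000    # hypothetical R&D + fab investment attributed to the product line
marginal = 40            # hypothetical per-chip wafer, test and packaging cost

for units in (3_000_000, 15_000_000, 60_000_000):
    print(f"{units:>11,} units sold -> break-even ASP "
          f"${break_even_asp(fixed, units, marginal):,.0f}")
# The marginal cost never moves, yet the break-even price falls from ~$700 to ~$70.
```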
In fact, I'm a bit shocked that people on this forum don't seem to be aware of this, and instead just pick random CPUs and GPUs to 'prove' their claims. As Jawed demonstrates, pick a different model GPU (Quadro and Tesla are basically still just G80/G92 designs) and the tables are turned.

I see no reason why Intel couldn't compete with nVidia/TSMC. In fact, doesn't Intel already compete with TSMC, because AMD outsources some of its production there?
 
Feel free to continue disregarding basic arithmetic; I just hope for Intel's sake that Otellini takes that stuff more seriously than you do, despite his non-engineering background. I have provided the exact same ideas myself in a wide variety of threads with different and more 'best-case' (for Intel) examples; the *cost* difference isn't as massive as implied above, but it's still very big. Oh, and AMD doesn't outsource CPUs to TSMC, only Chartered.
 
I'll presume you didn't see my edits, sorry about that - I tend to abuse my mod privileges a bit too much and edit posts within the first ~5 minutes. Anyhow, I definitely am not disregarding basic economics; I have more than enough experience there, thank you very much. Once again, please consider Intel's gross margins.
 
But you ARE disregarding basic economics.
You have to admit that production costs basically come down to how you speed-bin your chips.
The rest just comes down to how much you invested in the design and production facilities, and the resulting performance determines how much of a profit you can make (the market leader defines price/performance; all others just have to adapt to the scale of the market leader).
Look at what happened with the Radeons... AMD is now practically giving them away because they perform poorly compared to the GeForces.
If AMD's R700 turns out to outperform the 8800s and 9800s, then nVidia will have to drop their prices.
Which has little to do with production costs, but everything with how much performance your design can produce.
In fact, I'm quite sure that the Radeon 2900 was actually about as expensive for ATi to produce as the 8800GTX/Ultra were for nVidia. But the lacking performance determined that the 2900 fell into the 8800GTS 640 price bracket.
 
No, those are retail prices, which have little to do with production costs.

I don't see how production cost affects the consumer at all WRT this discussion. Besides, I don't have access to production costs on any of the products in question, and I doubt you do either as such information is certainly a company secret.

Yorkfield is more expensive because it has less competition, not because TSMC can produce chips cheaper than Intel can.

Margins.

In fact, the Q6600 is currently cheaper than the 45 nm quadcores. Does that mean Q6600 is cheaper to produce? Unlikely.

Q6600 is being closed out to make room for those very 45nm quadcores. This is a red herring.

So yes, GPUs are (currently) cheaper to consumers per die size, but your conclusion that nVidia produces its GPUs more economically than Intel could is just wrong.

This is not my contention. Perhaps you're thinking of another poster.

In fact, even comparing 65 nm chip die sizes against 45 nm chips is quite strange... You are holding it against Intel that they have superior technology that allows them to make smaller chips with better performance per mm^2?

My comparison was one between the two largest consumer-grade products currently shipping from each company, nothing more than that. Quite fair if you ask me.

If you want to compare products which are no longer in production against ones that are, let's take your absolute best-case scenario and compare the firesale-priced Q6600 to the still-full-priced G92-based 9800 GTX:

Q6600:
Cost: $180-$220 (best etailer pricing)
Size: 286mm^2
Cost/mm^2: ~$0.63 (lowest price)

9800 GTX:
Cost: $260-$300 (best etailer pricing)
Size: 330mm^2
Cost/mm^2: ~$0.79 (lowest price)

Advantage: Q6600

However, this is only if you use the closeout pricing, and not established MSRP/average sale price.

So you see, even in this absolute best-case-scenario for Intel, by which all breaks are given to them, and none to NV, they still BARELY come out on top. Any *fair* product comparison will show the opposite, and by quite the large margin. Compare G80 to Kentsfield and it's like G92 vs. Yorkfield all over again. Hmm, I'm seeing a trend there...
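As a quick sanity check of the cost-per-area figures in that comparison (same retail numbers as above, nothing more):

```python
def cost_per_mm2(price_usd, die_area_mm2):
    """Naive price per square millimetre of silicon."""
    return price_usd / die_area_mm2

print(f"Q6600 (closeout): ${cost_per_mm2(180, 286):.2f}/mm^2")  # ~$0.63
print(f"9800 GTX:         ${cost_per_mm2(260, 330):.2f}/mm^2")  # ~$0.79
```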

I think you misunderstand why GPUs have larger die sizes than CPUs, and it has NOTHING to do with any manufacturing process advantage/disadvantage either company has. It is because cache is much denser than logic, and Intel allocates more of their transistor budget (percentage-wise) to cache than NV does.
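For a rough sense of that split, a back-of-envelope estimate. It assumes the widely cited figure of roughly 410M transistors and 6MB of L2 per 45nm Penryn die and a classic 6-transistor SRAM cell, and ignores tags, ECC and control logic, so treat it as an approximation only:

```python
transistors_per_die = 410e6          # approximate public figure for one 45nm Penryn die
l2_bits = 6 * 1024 * 1024 * 8        # 6MB of L2 cache, in bits
sram_cell_transistors = 6            # classic 6T SRAM cell per bit

cache_transistors = l2_bits * sram_cell_transistors
share = cache_transistors / transistors_per_die
print(f"L2 data cells alone: ~{cache_transistors/1e6:.0f}M transistors "
      f"(~{share:.0%} of the die's transistor budget)")
```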

Which is why Intel has 99% gross margins, right? Oh wait...

LOL, funny you should mention that as I was going to bring up gross margins in my last post, and how NV and Intel are very similar in that regard ;)

A several-thousand-dollar "professional" card is, for all intents and purposes, just a GPU. Tesla rigs go up to $8 or $10 thousand, don't they?

Jawed

Depends on how much you want to get into semantics ;)
 
So basically you needed 20 quotes to say 'margins'.
But what is your point?
If we go back to the post that started this discussion:
"GPUs are much bigger than CPUs and generate much lower revenues. If Intel could magically cut its fab costs in half they would still have trouble matching NVIDIA's economics. The idea that they will all the sudden outperform GeForce because of a process advantage is highly dubious."

So the argument was made that it would be impossible for Intel to match "nVidia's economics"... So, margins, basically. Then the whole nonsense about retail prices and die sizes started as 'proof' of this statement. I just mentioned the die sizes to show that even though GPUs are larger than the x86 CPU dies that Intel currently sells, Intel has produced MUCH larger dies than these GPUs, so there will not be much of a technical challenge in manufacturing GPU dies for Intel.
I really don't think there's any relation between die size and retail price. Apparently you agree, even though you still seem to think Intel is not going to match nVidia. I think there's no relation, and that Intel will be able to balance their R&D investments and production costs so their margins will be adequate to match nVidia (and unlike nVidia, for Intel the GPUs don't need to be cash cows either, as already mentioned earlier. Intel could afford to lose money on their GPUs, so for Intel it doesn't even have to be about margins in the first place if they so choose).
 
In fact, I'm quite sure that the Radeon 2900 was actually about as expensive for ATi to produce as the 8800GTX/Ultra were for nVidia. But the lacking performance determined that the 2900 fell into the 8800GTS 640 price bracket.

Smaller die size, no NVIO chip, and less GDDR means it was almost definitely not as expensive to produce as the GTX/Ultra.
 
ShaidarHaran: I don't think cache vs logic is really the main factor here, and clearly you've got a die size budget, not a transistor budget.
Scali: Once again, do you have ANY idea how high the ASPs are for Montecito? I agree there's no technical challenge for Intel here, but that's not the point. I'm most definitely not the one disregarding basic economics here. In fact...
I really don't think there's any relation between die size and production cost.
Unless that was a typo or brainfart of epic proportions, this conversation is now over.
 
Unless that was a typo or brainfart of epic proportions, this conversation is now over.

"He doesn't agree with me so I must insult his intelligence".
Indeed this conversation is over, you killed it.

But now that you mention it... I did mean to say "retail price", as I said before when making the same statement.
 