[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

Discussion in 'Architecture and Products' started by Arun, Apr 11, 2007.

  1. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
Yes, and how is that relevant? As I said, Core 2 has no competition; nVidia and ATi don't have that luxury, so they need low prices. With the Pentium 4/Pentium D the cost per die size was completely different, back when it was competing with the K8.
Even so, I still think the difference is exaggerated.
    Aside from that, the discrete GPU market is still MUCH lower volume than the CPU market.
     
  2. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    The switch happened or started in 2001.
    AMD only recently completely stopped using its 200mm equipment last year, so it obviously wasn't a universal transition.

    The trend was that such increases would happen every ten years.
    200mm was done in 1991, and 300mm was done in 2001.

Equipment manufacturers are unhappy because they still have to make back the expense of the last transition when the fabs want to push for another increase.
    Because 300mm research was expensive, and not everyone made the switch, the break-even point was pushed way back.

    Even fewer manufacturers could make a 450mm transition, and the equipment and R&D aren't getting cheaper, so break-even would be very far out (and probably not until after Intel or someone else pushes for another increase to line its own pockets).
     
  3. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    Here are relevant statistics on the largest CONSUMER-GRADE microprocessor for sale from each respective company:

    G92:
    Cost: $200 (256MB 8800 GT) to $600 (9800 GX2)
    330mm^2 (single die)

    Yorkfield:
    Cost: $266 (Q9300) to $1500 (QX9775)
    214mm^2 (dual die)

    Discussion = over.
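For reference, the implied retail price per mm^2 from the figures above can be checked with a quick sketch (prices and die areas as quoted in this post; this is a rough illustration of the ratio being argued about, not an accounting of actual production cost):

```python
# Rough retail-price-per-area comparison using the figures quoted above.
# These are retail prices, which say nothing about production cost.

def price_per_mm2(price_usd, die_area_mm2):
    """Dollars of retail price per mm^2 of total die area."""
    return price_usd / die_area_mm2

# G92 (256MB 8800 GT, cheapest SKU): $200 for a 330 mm^2 single die
g92 = price_per_mm2(200, 330)

# Yorkfield (Q9300, cheapest SKU): $266 for 214 mm^2 total (dual die)
yorkfield = price_per_mm2(266, 214)

print(f"G92:       ${g92:.2f}/mm^2")        # ~$0.61/mm^2
print(f"Yorkfield: ${yorkfield:.2f}/mm^2")  # ~$1.24/mm^2
```

On these numbers the cheapest G92 board delivers roughly twice as much silicon per retail dollar as the cheapest Yorkfield, which is the gap the post is pointing at.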
     
  4. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    Thanks for the history lesson and the explanation of current events. It all makes sense now.
     
  5. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Absolutely agreed - however, I don't have a lot of data on that point. Is there any public source or indirect evidence you might point me to there?
Uhm, that's not quite extreme enough; the overall BoM has little to do with chip revenue... :)

    G80
    Price: $125 (Average Chip ASP)
    Die Size: 480mm^2
     
  6. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    You are of course correct, but you're also strengthening my argument. GPUs aren't directly comparable to CPUs in the manner by which Scali is attempting to compare them. The fact that GPUs are not sold separately as CPUs are only illustrates this point further.

    Thanks for the info. This makes a direct comparison somewhat more feasible, although a bit odd since a consumer would be hard-pressed to buy a bare Geforce graphics core from anyone, unlike a CPU.

So $125 buys 480mm^2 of silicon in the GPU world, and over in CPU land we've got 214mm^2 chips selling for more than twice that amount. This supports my counter-argument against Scali's supposition.

    I know die area isn't a direct measure of performance, but it's about the only direct comparison that can be drawn between CPUs and GPUs.
     
  7. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    No, those are retail prices, which have little to do with production costs.
    Yorkfield is more expensive because it has less competition, not because TSMC can produce chips cheaper than Intel can.
    In fact, the Q6600 is currently cheaper than the 45 nm quadcores. Does that mean Q6600 is cheaper to produce? Unlikely.
    So yes, GPUs are (currently) cheaper to consumers per die size, but your conclusion that nVidia produces its GPUs more economically than Intel could is just wrong.
In fact, even comparing 65 nm chip die sizes against 45 nm chips is quite strange... You are holding it against Intel that they have superior technology that allows them to make smaller chips with better performance per mm^2?
     
  8. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Which is why Intel has 99% gross margins, right? Oh wait...
     
  9. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
A several-thousand-dollar "professional" card is, for all intents and purposes, just a GPU. Tesla rigs go up to $8,000 or $10,000, don't they?

    Jawed
     
  10. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Indeed, as I already tried to explain, production costs have little to do with retail price.
    Just because a chip is expensive to a consumer doesn't mean it was expensive to produce it.
In fact, the actual cost of producing a CPU, from raw material to finished end product, is ridiculously low.
The real cost lies in the investments in R&D and manufacturing facilities the company had to make in order to produce the chips.
Prices of the same chip change massively during its lifetime... Take the Q6600, for example. I believe it was introduced at about $850, but now it's less than $200, for the exact same chip. So the price has little to do with production cost, and more with the investments and the strategy to recoup them.
    In fact, I'm a bit shocked that people on this forum don't seem to be aware of this, and instead just pick random CPUs and GPUs to 'prove' their claims. As Jawed demonstrates, pick a different model GPU (Quadro and Tesla are basically still just G80/G92 designs) and the tables are turned.

    I see no reason why Intel couldn't compete with nVidia/TSMC. In fact, doesn't Intel already compete with TSMC, because AMD outsources some of its production there?
     
  11. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
Feel free to continue disregarding basic arithmetic; I just hope for Intel's sake that Otellini takes that stuff more seriously than you do, despite his non-engineering background. I have made the exact same points myself in a wide variety of threads with different and more 'best-case' (for Intel) examples; the *cost* difference isn't as massive as implied above, but it's still very big. Oh, and AMD doesn't outsource CPUs to TSMC, only Chartered.
     
  12. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
Feel free to disregard basic economics and marketing strategy, an area that Paul Otellini is very familiar with.
     
  13. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
I'll presume you didn't see my edits, sorry about that - I tend to abuse my mod privileges a bit too much by editing posts within the next ~5 minutes. Anyhow, I am definitely not disregarding basic economics; I have more than enough experience there, thank you very much. Once again, please consider Intel's gross margins.
     
  14. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    But you ARE disregarding basic economics.
You have to admit that production costs basically come down to how you speed-bin your chips.
The rest comes down to how much you invested in the design and production facilities, and the resulting performance determines how much of a profit you can make (the market leader defines price/performance; all others just have to adapt to the market leader's scale).
Look at what happened with the Radeons... AMD is now practically giving them away because they perform poorly compared to the GeForces.
If AMD's R700 turns out to outperform the 8800s and 9800s, then nVidia will have to drop their prices.
Which has little to do with production costs, but everything to do with how much performance your design can deliver.
In fact, I'm quite sure that the Radeon 2900 was actually about as expensive for ATi to produce as the 8800GTX/Ultra were for nVidia. But its lacking performance meant that the 2900 fell into the 8800GTS 640 price bracket.
     
  15. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    I don't see how production cost affects the consumer at all WRT this discussion. Besides, I don't have access to production costs on any of the products in question, and I doubt you do either as such information is certainly a company secret.

    Margins.

    Q6600 is being closed out to make room for those very 45nm quadcores. This is a red herring.

    This is not my contention. Perhaps you're thinking of another poster.

    My comparison was one between the two largest consumer-grade products currently shipping from each company, nothing more than that. Quite fair if you ask me.

    If you want to compare products which are no longer in production against ones that are, let's take your absolute best-case scenario and compare the firesale-priced Q6600 to the still-full-priced G92-based 9800 GTX:

    Q6600:
    Cost: $180-$220 (best etailer pricing)
    Size: 286mm^2
    Cost/mm^2: ~$.62 (lowest price)

    9800 GTX:
    Cost: $260-$300 (best etailer pricing)
    Size: 330mm^2
    Cost/mm^2: ~$.79 (lowest price)

    Advantage: Q6600

    However, this is only if you use the closeout pricing, and not established MSRP/average sale price.

So you see, even in this absolute best-case scenario for Intel, in which all breaks are given to them and none to NV, they still BARELY come out on top. Any *fair* product comparison will show the opposite, and by quite a large margin. Compare G80 to Kentsfield and it's like G92 vs. Yorkfield all over again. Hmm, I'm seeing a trend there...
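The "barely" can be seen by working the quoted price ranges through the same cost-per-mm^2 arithmetic (a quick sketch using only the etailer prices and die areas listed above; retail prices, not production costs):

```python
# Retail $/mm^2 ranges for the Q6600 vs. 9800 GTX comparison above.

def dollars_per_mm2(price_range, die_area_mm2):
    """(low, high) retail price divided by total die area, in $/mm^2."""
    lo, hi = price_range
    return (lo / die_area_mm2, hi / die_area_mm2)

# Q6600: $180-$220 best etailer pricing, 286 mm^2 total
q6600 = dollars_per_mm2((180, 220), 286)

# 9800 GTX: $260-$300 best etailer pricing, 330 mm^2 single die
gtx = dollars_per_mm2((260, 300), 330)

print(f"Q6600:    ${q6600[0]:.2f}-${q6600[1]:.2f}/mm^2")  # ~$0.63-$0.77
print(f"9800 GTX: ${gtx[0]:.2f}-${gtx[1]:.2f}/mm^2")      # ~$0.79-$0.91
```

Even the worst-case Q6600 figure (~$0.77/mm^2) only just undercuts the best-case 9800 GTX figure (~$0.79/mm^2), which is the "BARELY come out on top" in the post, and that edge exists only at closeout pricing.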

I think you misunderstand why GPUs have larger die sizes than CPUs; it has NOTHING to do with any manufacturing process advantage/disadvantage either company has. It is because cache is much denser than logic, and Intel allocates more of its transistor budget (percentage-wise) to cache than NV does.

    LOL, funny you should mention that as I was going to bring up gross margins in my last post, and how NV and Intel are very similar in that regard ;)

    Depends on how much you want to get into semantics ;)
     
  16. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    So basically you needed 20 quotes to say 'margins'.
    But what is your point?
    If we go back to the post that started this discussion:
    "GPUs are much bigger than CPUs and generate much lower revenues. If Intel could magically cut its fab costs in half they would still have trouble matching NVIDIA's economics. The idea that they will all the sudden outperform GeForce because of a process advantage is highly dubious."

    So the argument was made that it would be impossible for Intel to match "nVidia's economics"... So, margins, basically. Then the whole nonsense about retail prices and die sizes started as 'proof' of this statement. I just mentioned the die sizes to show that even though GPUs are larger than the x86 CPU dies Intel currently sells, Intel has produced MUCH larger dies than these GPUs, so manufacturing GPU dies will not be much of a technical challenge for Intel.

    I really don't think there's any relation between die size and retail price. Apparently you agree, even though you still seem to think Intel is not going to match nVidia. I think there's no relation, and Intel will be able to balance their R&D investments and production costs so that their margins will be adequate to match nVidia's. (And unlike for nVidia, the GPUs don't need to be cash cows for Intel either, as already mentioned. Intel could afford to lose money on its GPUs, so for Intel it doesn't even have to be about margins in the first place, if they so choose.)
     
    #196 Scali, May 7, 2008
    Last edited by a moderator: May 7, 2008
  17. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    3,984
    Likes Received:
    34
    Smaller die size, no NVIO chip, and less GDDR means it was almost definitely not as expensive to produce as the GTX/Ultra.
     
  18. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I don't think that is anywhere near accounting for the massive price difference at the time.
     
  19. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    ShaidarHaran: I don't think cache vs logic is really the main factor here, and clearly you've got a die size budget, not a transistor budget.
    Scali: Once again, do you have ANY idea how high the ASPs are for Montecito? I agree there's no technical challenge for Intel here, but that's not the point. I'm most definitely not the one disregarding basic economics here. In fact...
    Unless that was a typo or brainfart of epic proportions, this conversation is now over.
     
  20. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    "He doesn't agree with me so I must insult his intelligence".
    Indeed this conversation is over, you killed it.

    But now that you mention it... I did mean to say "retail price", as I said before when making the same statement.
     