[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

Discussion in 'Architecture and Products' started by Arun, Apr 11, 2007.

  1. apoppin

    Regular

    Joined:
    Feb 12, 2006
    Messages:
    255
    Likes Received:
    0
    Location:
    Hi Desert SoCal
    nope nothing else

    - you say it all and i really get your PoV

    Good luck guys ... you are very insightful to me

    . . . as we say in Hawaii:

    Mahalo and Aloha!!
    [i think i found "where" the white paper discussions are =P]
     
  2. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    Could you clarify?
    Except for some common system platform technology, IA-64 is not going to be folded into x86 in any way.

Intel's plans seem to keep Itanium in a very high-end niche that x86 isn't going into.
     
  3. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
Apart from using the same bus and socket as the x86 processors, they will also move Itanium to the same manufacturing technology as x86.
    Obviously they're aimed at different markets, but by borrowing the bus and manufacturing technology from x86, Itanium could get a more pronounced role in Intel's product offerings.
    So far it's been a tad outdated and underpowered.
     
  4. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
It's probably also a matter of resources, which is likely one reason they waited so long to really trumpet the GPU. And having a supercomputer on the desktop is pretty revolutionary for science and potentially huge for profits. Who wouldn't want 100 Tim Murrays, except for maybe you and his mother?

But I find it hard to believe NVIDIA management has not been thinking about the mainstream applicability of CUDA since well before it was conceived. It's not like they woke up one morning and said, oh, we can use this on the desktop too. Personally I think it looks like their strategy is working. This is all very new stuff, and of course CUDA itself is improving. When these first applications hit, I bet programmers all over will take a good look.
     
  5. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
Sure, it's hardly cheap for a hardware company to suddenly position itself to take over many of the performance-centric parts of the software industry. However, I think it's pretty easy to see that it wouldn't be *that* expensive either; I mean, 100 software developers could do so much it's not even funny, and you can get that for ~$3M/quarter. The return on investment is deliriously good - but of course, there are two problems: 1) When your stock price goes down massively, in part because analysts become worried about your operating expenses, $3M is a lot of money to approve!
2) It's pretty hard to find 100 Tim Murrays! I once caught one in the wild, but they seem to be going extinct...
    Heh, not going to disagree with that, but my point was more specific: they didn't realize the impact on sales this could have (hint: it dwarfs HPC in the short, mid and long term if executed properly), and they didn't realize how important it was to do a good chunk of that R&D in-house and release the results as closed-source freeware or libraries.

Look at it this way: they got lucky that a couple of guys decided there was an opportunity to accelerate H.264 encoding on massively parallel hardware. They wouldn't have that much to show in the consumer space otherwise, except some Photoshop stuff. Clearly, had Cleopatra's nose been shorter, the whole face of the world would have been changed - and so things could have turned out either worse or better, and there will no doubt be more third-party applications using CUDA for the consumer space. But the point is that the potential impact you could make with a more aggressive plan is orders of magnitude greater.

    When you have the right technology at your disposal, it's easy to say your strategy is working; what would be surprising is if it wasn't. The value add from good decision making in this kind of scenario is the magnitude of the impact of your strategy, not its absolute success or failure.
     
  6. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
Of course it's not really about analysts caring in the short term. It's this financial discipline that helps them grow, become more profitable and invest in more initiatives. It's like Marv says: they have tons of projects they'd like to do, but haven't gotten around to yet. There is organic growth in the sense of not doing acquisitions, and then there is organic growth in the bigger sense of the word: growing in a natural way. This is getting esoteric, but NVIDIA are masters of this.
     
  7. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
And your $3M per quarter number is low; realistic costs would be on the order of 2-3x that amount. Total allocated costs for that type of specialized programmer are likely to be in the $240-360k per year range, especially in the valley, where software salaries in the $80-100k range are pretty much entry level. Add on incentives, perks, benefits, office space, equipment, etc., and you are looking at ~$200k per year for an entry-level programmer, especially at an established company with limited upside. Rockstar programmers are much more expensive, because you have to compete with the other options they have available, including the numerous game companies and startups with large profit-sharing upsides, as well as the general startup lottery.

    Aaron Spink
    speaking for myself inc.
     
  8. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
Yeah, they are - although the way I was looking at it is that a good bunch of those guys can be entry-level, because the concepts are sufficiently new that not a lot of people have experience there. There are plenty of CUDA university courses nowadays, and simply recruiting out of those could give you 50+ good developers very fast. Of course, you also need field experts in the $240-360k range as you point out, so the average is indeed going to be higher than what I estimated - oops! :) (but probably still substantially below $200k).

My estimate was based on a $90-110k average and some extra on top of that - I did think I was on the low side, but I guess I really need to review my employment basics if it's that much higher than I thought!
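(For concreteness, here is a rough back-of-the-envelope using only the figures quoted in this exchange; treat the numbers as illustrative rather than real accounting data:

\[
100 \times \$120\text{k/yr} \approx \$12\text{M/yr} \approx \$3\text{M/quarter} \quad \text{(salary-centric estimate)}
\]
\[
100 \times \$240\text{k/yr} \approx \$24\text{M/yr} \approx \$6\text{M/quarter} \quad \text{(fully loaded at roughly 2x salary)}
\]

which is where the 2-3x gap between the two estimates comes from.)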

Obviously, managing your business based on what analysts think is possibly one of the worst possible strategies in the entire universe. However, that doesn't mean you should do everything in your power to anger them or your shareholders, for a wide variety of very good reasons.

Oh sure, Marv is right as always; I'm merely pointing out that I believe this should be at the top of the short list, and warrants much more short-term and mid-term extra investment than handhelds, chipsets or HPC. It's also deliriously more time-sensitive than any of those things...
     
  9. aaronspink

    Veteran

    Joined:
    Jun 20, 2003
    Messages:
    2,641
    Likes Received:
    64
Hey, I never thought there was that much overhead, but after seeing what the cost factors were at numerous companies, generally 2x salary is a reasonable baseline! Depending on the job/area it can be as high as 3x.

As far as NCGs go, I'm not even sure you can get away with as many as 50. You have to remember these pieces of code/programs are going to have to interact with highly complex programs in a variety of fields. You probably could, though, get away with a 33/33/33 distribution of NCG/JR/SR engineers, but for a while it will be experience-weighted, because you are going to have to rely on the senior people to find the areas, set scopes, figure out relationships, etc. One of the worst things you can do is overweight your staff with NCG and JR people, as you essentially can't get anything done. You need enough experienced people, and a low enough ratio of NCG to JR to SR people, that the on-the-job training and mentoring can actually take place.

    Aaron Spink
    speaking for myself inc.
     
  10. Wesker

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    103
    Likes Received:
    0
    Location:
    Australia
I see what you're getting at (don't worry about drawing up a list of applications), but I still can't see OEMs spending the extra money to boost the performance of high-volume PCs. For example, your average mainstream PC, sold by Dell, currently looks like this:

    - Pentium Dual-Core E2160 (or Core 2 Duo E4400)
    - Intel G31 Express Chipset
    - Intel GMA X3100 graphics
    - 1GB of DDR2-667 RAM
    - 160GB+ HDD
    - DVD-RW drive
    - 19" widescreen monitor

Now, I don't see why an OEM would bother adding (or rather advertising) a non-Intel GPU/IGP when they could use the money to add in bonuses such as (I'm sure you've all seen this before):
    "Buy online now to get a free upgrade to 2GB of RAM for faster performance!"
    "Buy online now for a free hard drive upgrade to 500GB to store xxx amount of movies and mp3's!"
    ...and Dell's personal favourite:
    "Buy online now to get a free upgrade to a visually stunning Dell 22" widescreen LCD!"

I'm sure it would be much easier for OEMs to convince consumers to purchase a PC if they throw in something that consumers have at least a vague idea about (e.g. a processor speed increase, a RAM increase or more HDD space), or something with a tangible bonus to the consumer (e.g. an upgrade to a 22" LCD).

CUDA looks to be a wonderful thing. Something that promises to utilise the massively parallel processing nature of GPUs for practical applications of course sounds wonderful.

It's just that I don't see consumer behavior shifting towards taking advantage of CUDA, if you know what I mean. Someone who's only going to do basic stuff on their PC, like browsing the web, typing documents, reading emails and occasionally playing some online poker, isn't going to need the power of a dedicated GPU, nor will they sacrifice an extra 3 inches of LCD real estate for one.

    Of course the market doesn't only consist of people such as those that I previously mentioned. There are users (such as ourselves) who are looking to squeeze performance from every component, and who can measure the costs and benefits of choosing higher GPU performance over...CPU performance (for example).

    Definitely agree with your points here.

I guess what I'm trying to say (in summary) is that:
- The majority of consumers ("mainstream consumers") won't see much benefit from CUDA, and thus OEMs would be better off luring consumers with factors other than a discrete [NVIDIA] GPU;

- Intel does recognise that GPUs and CPUs are heading towards a collision course, and they're doing something about it (in the form of Larrabee, and in their effort to rapidly improve their IGPs).
     
  11. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Cuda doesn't have to be all that expensive though. The lower-end chips also support it, although they are obviously not as fast. But Cuda is nice and scalable.
So perhaps there is a market for adding a $40-$80 GPU to a system and advertising it as an "ideal home video/picture workstation" with hardware-accelerated H.264/Photoshop processing, etc.
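(As an aside, the scalability Scali mentions is largely a property of the programming model itself. The sketch below is a minimal, hypothetical CUDA example - the kernel name, the toy workload and the blocks-per-multiprocessor heuristic are my own illustrative choices, not anything from the thread: a kernel written with a grid-stride loop runs unchanged on a cheap part with a couple of multiprocessors or on a high-end board, and the hardware just executes the blocks in more or fewer waves.)

    // Minimal sketch: the same kernel and launch code run on any CUDA-capable GPU.
    #include <cuda_runtime.h>

    __global__ void scale(float *data, float factor, int n)
    {
        // Grid-stride loop: each thread walks the array in steps of the total
        // thread count, so correctness doesn't depend on the grid size chosen.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n;
             i += gridDim.x * blockDim.x)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;
        float *d = 0;
        cudaMalloc(&d, n * sizeof(float));

        // Size the grid from whatever device is present: a low-end chip with
        // few multiprocessors simply gets fewer blocks and takes longer.
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        int blocks = prop.multiProcessorCount * 4;

        scale<<<blocks, 256>>>(d, 2.0f, n);
        cudaDeviceSynchronize();

        cudaFree(d);
        return 0;
    }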
     
  12. Mart

    Newcomer

    Joined:
    Sep 20, 2007
    Messages:
    27
    Likes Received:
    0
    Location:
    Netherlands
Oh sure, Intel has the resources for a war of attrition, but they will still need to create a decent product for that. Intel could pump billions of dollars into the project for ten years, but if they don't create a product that has value, they will never sell anything. (And in that case, they won't be an opponent to NVIDIA.)
Intel has less risk; if they screw things up, there won't be problems right away, but that doesn't change the situation for NVIDIA. All I'm saying is that Intel as a competitor will not be that different from ATI as a competitor, technically speaking. As long as NVIDIA produces a superior product, they will "live"/win.

Yes, I agree that Intel is remarkably lean and flexible. They still have more layers than NVIDIA, though; NVIDIA doesn't have the same amount of corporate politics and the like. Of course, Intel's large organisation has a lot of benefits NVIDIA lacks, but it's still a disadvantage when it comes to making radical, business-changing decisions, and the Intel Larrabee team has to work within certain confines that the wider corporation imposes. It's debatable how much of an impact those confines will have, but what's certain is that the field we're talking about is rapidly changing.

    No argument there, Intel has a financially better position in this war.
     
  13. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
I think this is exactly the difference. ATi/AMD cannot fight a war of attrition; Intel can. Another factor is that Intel will probably always be ahead in process technology, whereas ATi and nVidia use the same third-party foundries, so all that has mattered so far is how good the design is.
    Intel can compensate for design with superior manufacturing, just like they did with the Pentium 4/D. They had competitive prices even though their CPUs required far more transistors and higher clockspeeds to get the same performance as AMD's processors.

    So Intel basically has a few aces up its sleeve, making the actual design/performance of their GPU less important in the bigger picture. Besides, I don't think it will take Intel more than 2-3 years to catch up with nVidia in the GPU world. And then what is nVidia going to do?
     
  14. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    The key point you are missing is that any decision maker worth his daily dose of oxygen shouldn't care about unit sales, only revenue and profits. If you average revenue rather than units, the 'sweet spot' is substantially higher. And if you average *chip* revenue rather than full-PC revenue, it's even higher.

    The DRAM market is dead, they just don't know it yet.
    The HDD market is dead, they just don't know it yet.
Obviously that's not dead, and continues to be a desirable value-add. However, LCD prices have 'crashed' nicely in recent years, and as such the price tag for a 'good enough' monitor in most people's minds is gradually decreasing. Just like in the DRAM market, or even the PC market in general, users' perception of what they need isn't going up as fast as the prices; the dynamic just isn't quite as extreme. All of these markets would be extremely unattractive if it weren't for emerging markets.

    That's why aggressive marketing is necessary, and why delivering many tangible value-adds rather than just a few is necessary. As I said, CPUs/DRAMs/HDDs are all dead, they just don't know it yet because consumer perception hasn't shifted yet. It won't shift overnight, but it would be naive to think that it is static and cannot change. On the other hand, it would be just as naive to think that it will naturally shift without proper effort.

    I agree about the LCD part of your point, but I think you're missing the big picture when it comes to the rest of it. Those who only browse the web and read emails won't need a $100 GPU, but they won't need a $100 CPU either, or $100 of DRAM, or even a $100 HDD. That part of the market is commoditizing, and my expectation is that in 2010, nobody with those kinds of requirements will need more than <$25 of logic/analogue chips and <$15 of DRAM. The <$200 PC will be an attractive reality, and the vast majority of those with such requirements will start only buying computers in that segment of the market.

    BTW, totally off topic, but my opinion is that single-chip SoCs on leading-edge process nodes are a fad. It's much smarter to do part of the southbridge stuff on another chip, and integrate some analogue on there including a Gigabit Ethernet PHY. Similarly, in the handheld area, I believe having distinct chips for digital and analogue also makes sense; RF complicates things a bit but not massively so.

    Yes, and what you need to consider once again is chip revenue, not unit sales.

The facts disagree with you. HP is gaining share; Dell is losing share. Which of the two has used the most discrete GPUs in recent years? In practice, it's not just about what OEMs decide; it's also about what consumers decide and how that affects each company's market share.

Yeah, as I said, I'm just not convinced this collision course makes sense for them financially when you consider chip revenue rather than end-product revenue.
     
  15. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
All the people who think otherwise (and there are many, because Intel is such an industry darling) need only look at Microsoft and Google. Just because you are bigger doesn't mean you will automatically create a superior product or win in the marketplace.


I have an argument here: how lean is Intel? They made some cuts in recent years, but they still have an awful lot of employees and expenses. What would happen if the average price of their CPUs were cut in half or more? If the CPU becomes less important to PC makers, why wouldn't this be the case?

NVIDIA looks to be correct. CPU scaling is basically over for most consumers, and the gains from CUDA are so dramatic it doesn't even matter. Photoshop and video transcoding are the tip of the iceberg. The types of applications that will be exciting in the future are going to benefit from GPUs, or whatever you want to call these devices. Guess what - NVIDIA charges a lot less for them than Intel does for a CPU. Guess what else - PC makers are listening, because the differences in performance and price are dramatic. So if you are a PC maker, you can sell a better machine for less or take more profit for yourself. Either way, lower prices translate into more demand. NVIDIA is certainly not going to raise prices to make things easier on Intel's cost structure. NVIDIA's cost structure looks to me to be an order of magnitude better than Intel's; they basically serve the same markets and have the same profitability.

It's hard to know how competitive you are when you are a monopolist; there is no real competition to compare yourself to. These companies are out to maximize profits, and that is what they have been doing. But is there a real economic basis for the prices Intel and Microsoft have been charging? I would argue no. There certainly is a large price umbrella for a well-positioned newcomer to take advantage of.
     
  16. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I may not have been entirely correct about nVidia needing Intel.
They're negotiating with Via... Via has an x86 CPU as well. If they can manage to make Cuda compensate for the lack of performance of Via's CPU, then it could be a strong team. Of course, that's going to require proper support from all the applications 'that matter', or else Intel will always have the upper hand. All software already takes advantage of x86, and Intel has the fastest x86.
     
  17. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,139
    Likes Received:
    5,074
There's a big difference here in that Microsoft is operating under a microscope; any misstep immediately puts them back into "juicy lawsuit" territory. Also, R&D for software, while similar, is very different from R&D for hardware. Hardware just has to work and be fast. Software has to work, but it doesn't necessarily have to be fast (in the areas where Google and MS are competing, if it's slow you can always throw more server clusters at it), and it also has to have some "unquantifiable" attraction to consumers that has virtually nothing to do with how well it performs.

Where Intel could potentially have Nvidia by the balls, similar to how they have AMD by the balls, is that they can R&D many more separate and overlapping designs simultaneously. Nvidia does this currently, but the last I read they have 3-4 "different" design teams working on "separate" yet similar GPU designs. Intel has many more than that doing R&D on different CPU designs, some of which aren't very similar to each other. Out of those, they will eventually pick a design to implement for consumer or other uses. Some of those designs will be scrapped, and some will continue in R&D.

Intel can afford to do this, as their revenue is much higher than, say, AMD's, which can't afford to do it to the same degree. "IF" Intel wanted to, they could do the same thing to Nvidia. Sure, they'll be behind for a bit as they catch up, but if they were to dedicate 5-10 R&D teams to it (and who says they haven't?) they could make up ground rapidly.

Designing a GPU isn't some mythical design process. Most of the basics are well known and understood. Getting the performance and rendering quality correct just requires time and manpower. Introducing new ways of doing things requires experimentation. All of this falls well within Intel's capabilities... if they so wish. The fly in the ointment here is that I don't think Intel is all that serious about supplanting Nvidia or ATI; rather, they just want to take a bite out of the pie and diversify.

Currently, CPUs and chipsets are a better and more reliable source of revenue. However, who's to say that doesn't change at some point in the future, when processing power is no longer marketable for use in business machines and consumer machines?

While CPUs and the chipsets they run on are the lion's share of Intel's revenue, they're hardly the only source of revenue Intel has. And that point ties in directly to their renewed R&D into GPUs. Likewise, they continue with R&D in various other technologies.

    Relying on only one line of products for your revenue stream is a recipe for the eventual death or marginalization of your company.

Nvidia not only wants to push more specialized/generalized computing to the GPU, they HAVE to. They currently live and die by how well their GPUs do. They would love to be able to spread the risk out a bit more, similar to Intel. And in that sense, it's far more important for Nvidia to succeed in broadening the use and acceptance of CUDA than it is for Intel to succeed at Larrabee.

Except that it's currently only applicable to a very small and narrow range of applications that do not affect most business customers [Intel's main source of revenue], nor the average consumer, who doesn't even know what H.264 is or how to use Photoshop beyond playing around with filters.

    Sure for just the GPU. The cost ratio changes dramatically in the favor of Intel for OEM machines when you factor in the memory, PCB, etc. required to actually make that GPU useful.

Except that while the CPU will run pretty much every program, the same cannot be said for a GPU. Nor are there any programs currently marketed to consumers or businesses that run on a GPU. And I don't see any OEMs offering a special low-power CPU [Celeron or Pentium] coupled with a Tesla-packaged GPU for running your average business workstation or grandma's e-mail and office-applications computer.

    Well, I suppose you could call Nvidia, Intel, and Microsoft monopolies.

    Except they still have to contend with [ATI, S3, and a few others], [AMD, Sun, ARM, and a few others], and [Linux, Unix, MacOS, Sun, Novell, Star Office, etc.].

And while Nvidia, Intel, and MS have some latitude in pricing their products how they wish, they still have to price according to how much their target market is willing to pay and with regard to whatever competition they may have.

I'd argue that a company such as Adobe has a far stronger, if not larger, monopoly than any of those three.

    Regards,
    SB
     
  18. Voltron

    Newcomer

    Joined:
    May 25, 2004
    Messages:
    192
    Likes Received:
    3
I mean, honestly, hands tied by regulators or not - that has very little to do with Google's success versus Microsoft, and if you believe otherwise you are a simpleton. Google innovated in search, an area that was previously not thought to be lucrative. None of the other search engines were big commercial successes (Alta Vista, for instance, was thought to be great but was commercially worthless). So it was not a priority for Microsoft, because nobody knew how to make money from it. The Google guys knew it was valuable and then figured out a way to make money from it. The rest is history. Microsoft has been playing catch-up with the Internet from the get-go, first with MSN, and now with everything else.

It is easy to think that anybody can throw money at a chip and make it work. And maybe Intel can. But think about it - nobody now denies the future is in parallel devices. If Intel were all that, wouldn't they have realized this a little sooner? But it is incredibly naive to think that the company that has known this all along, the company that has been getting stronger, more efficient and better at what they are doing, doesn't have a few tricks up their sleeve that are going to be awfully difficult to compete with. NVIDIA didn't get to be good by throwing lots of money around; they hired especially innovative people and fostered a culture of innovation. If you look a little at the various things Intel has said they are working on, it's pretty clear they are trying to learn how to do things that NVIDIA has already done in terms of architecture and circuit design. And that probably means NVIDIA is on to something newer and better, because that is why they are the leaders.

The idea that multiple Intel design teams are a threat to NVIDIA is completely laughable. I'm pretty sure NVIDIA doesn't just arrive at a design out of thin air or by throwing darts at a board, if that's how you are implying Intel designs CPUs; probably a lot of things are simulated. And one thing that strikes me about CUDA is that it might just prove valuable in accelerating R&D efforts for future GPUs, perhaps by an order of magnitude?
     
  19. Rufus

    Newcomer

    Joined:
    Oct 25, 2006
    Messages:
    246
    Likes Received:
    60
    (Disclaimer, these slides are from NV's analyst day, I can't find the original Intel slides on their site)
[slide images: Intel revenue breakdown]

You would think Intel would have other sources of (meaningful) revenue, but their own graphs pretty clearly show that their CPUs are it. Everything else they make, including chipsets/iGPUs, they sell or give away for next to nothing so they can use up the capacity and finish paying off their depreciated N-1/N-2 process fabs. This strategy has worked for them so far, but there appear to be two problems:
    1) Chipset wafer starts look to be exploding, putting them in the weird position of having capacity pressure on their depreciated fabs to make chips they get next to no money from.
    2) I doubt they can fab Larrabee on an N-1 process and have it be competitive. This means taking up prime fab space from far higher-margin CPUs to make it.

I think the main business question for Intel is how they make Larrabee successful (enough to have been worth the investment) without screwing up their fab business plan or their CPU margins.
     
    #159 Rufus, May 7, 2008
    Last edited by a moderator: May 7, 2008
  20. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    Not only do they not have other sources of substantial revenue, but the sources they do have are even more laughable in terms of gross profit. I'm pretty sure chipsets have substantially lower gross margins than CPUs despite lower fab amortization costs, and let's not even talk about the utter disaster that is flash.

In fact, I'll reverse the question. In the last 20 years, has a SINGLE one of Intel's many diversification efforts not ended in dismal failure? I can't think of any. It's not indicative of future trends, of course, but it's not a particularly good sign either, to say the least. As I said, and I'll say it again, I am very pessimistic about both WiMax and Moorestown. There is an amusing dynamic with Moorestown & friends that might help it a bit, but I'm skeptical it'll do miracles. One interesting dynamic wrt chipsets is the China/Dalian fab, and I'm very happy about Intel's apparent strategy there, but that's not really a revenue opportunity per se, just a cost-reduction thing.

Of course, for the sake of objectivity, it's probably worth pointing out that NVIDIA's MCP business is much less of a success than it might seem at first glance (their only really huge financial success was MCP61, and they lost money in the first few years if you know how to read the numbers correctly; the only real benefit of their initial MCP investment was via the XBox1 contract). Their handheld business has consistently been losing money. And embedded is really not a big business right now; in fact, it's arguably shrunk since the TNT M64 era. Quadro has been a huge success, on the other hand, but obviously that's much nearer their core business.
     