[Beyond3D Article] Intel presentation reveals the future of the CPU-GPU war

Nope, nothing else.

- you say it all, and I really get your PoV

Good luck guys ... you are very insightful to me

. . . as we say in Hawaii:

Mahalo and Aloha!!
[I think I found "where" the white paper discussions are =P]
 
And Itanium is still going, but will more or less be merged with x86 technology now.
Could you clarify?
Except for some common system platform technology, IA-64 is not going to be folded into x86 in any way.

Intel's plans seem to keep Itanium in a very high-end niche that x86 isn't going into.
 
Apart from using the same bus and socket as the x86 processors, they will also move Itanium to the same manufacturing technology as x86.
Obviously they're aimed at different markets, but by borrowing the bus and manufacturing technology from x86, Itanium could get a more pronounced role in Intel's product offerings.
So far it's been a tad outdated and underpowered.
 
I've got good reasons to believe executive management looked at CUDA as primarily an HPC thing until recently.

It's probably also a matter of resources, which is likely one reason they waited so long to really trumpet the GPU. And having a supercomputer on the desktop is pretty revolutionary for science and potentially huge for profits. Who wouldn't want 100 Tim Murrays, except for maybe you and his mother?

But I find it hard to believe NVIDIA management has not been thinking about the mainstream applicability of CUDA since well before it was conceived. It's not like they woke up one morning and said, "oh, we can use this on the desktop too". Personally I think it looks like their strategy is working. This is all very new stuff, and of course CUDA itself is improving. When these first applications hit, I bet programmers all over will take a good look.
 
It's probably also a matter of resources, which is likely one reason they waited so long to really trumpet the GPU. And having a supercomputer on the desktop is pretty revolutionary for science and potentially huge for profits.
Sure, it's hardly cheap for a hardware company to suddenly position itself to take over many of the performance-centric parts of the software industry. However, I think it's pretty easy to see that it wouldn't be *that* expensive either; I mean, 100 software developers could do so much it's not even funny, and you can get that for ~$3M/quarter. The return on investment is deliriously good - but of course, there are two problems: 1) when your stock price goes down massively in part because analysts become worried about your operating expenses, $3M is a lot of money to approve!
Who wouldn't want 100 Tim Murrays, except for maybe you and his mother?
2) It's pretty hard to find 100 Tim Murrays! I once caught one in the wild, but they seem to be going extinct...
But I find it hard to believe NVIDIA management has not been thinking about the mainstream applicability of CUDA since well before it was conceived. It's not like they woke up one morning and said, "oh, we can use this on the desktop too".
Heh, not going to disagree with that, but my point was more specific: they didn't realize the impact on sales this could have (hint: it dwarfs HPC in the short, mid and long term if executed properly), and they didn't realize how important it was to do a good chunk of that R&D in-house and release the results as closed-source freeware or libraries.

Look at it this way: they got lucky that a couple of guys decided there was an opportunity to accelerate H.264 encoding on massively parallel hardware. They wouldn't have that much to show in the consumer space otherwise, except some Photoshop stuff. Clearly had Cleopatra's nose been shorter, the whole face of the world would have been changed - and so things could have turned out either worse or better, and there will no doubt be more third-party applications using CUDA for the consumer space. But the point is that the potential impact you could make with a more aggressive plan is orders of magnitude greater.

Personally I think it looks like their strategy is working. This is all very new stuff, and of course CUDA itself is improving. When these first applications hit, I bet programmers all over will take a good look.
When you have the right technology at your disposal, it's easy to say your strategy is working; what would be surprising is if it wasn't. The value add from good decision making in this kind of scenario is the magnitude of the impact of your strategy, not its absolute success or failure.
 
1) when your stock price goes down massively in part because analysts become worried about your operating expenses, $3M is a lot of money to approve!

Of course it's not really about analysts caring in the short term. It's this financial discipline that helps them grow and become more profitable and invest in more initiatives. It's like Marv says, they have tons of projects they'd like to do but haven't gotten around to yet. There is organic growth in the sense of not doing acquisitions, and then there is organic growth in the bigger sense of the word - growing in a natural way. This is getting esoteric, but NVIDIA are masters of this.
 
Sure, it's hardly cheap for a hardware company to suddenly position itself to take over many of the performance-centric parts of the software industry. However, I think it's pretty easy to see that it wouldn't be *that* expensive either; I mean, 100 software developers could do so much it's not even funny, and you can get that for ~$3M/quarter. The return on investment is deliriously good - but of course, there are two problems: 1) when your stock price goes down massively in part because analysts become worried about your operating expenses, $3M is a lot of money to approve!

And your $3M per quarter number is low; realistic costs would be on the order of 2-3x that amount. Total allocated costs for that type of specialized programmer are likely to be in the $240-360k per year range, especially in the valley, where software salaries in the $80-100k range are pretty much entry level. Add on incentives, perks, benefits, office space, equipment, etc., and you are looking at ~$200k for an entry-level programmer per year, especially at an established company with limited upside. Rockstar programmers are much more expensive, because you have to compete with the other options they have available, including the numerous game companies and startups with large profit-sharing upsides, as well as the general startup lottery.

Aaron Spink
speaking for myself inc.
 
And your $3M per quarter number is low; realistic costs would be on the order of 2-3x that amount. Total allocated costs for that type of specialized programmer are likely to be in the $240-360k per year range, especially in the valley, where software salaries in the $80-100k range are pretty much entry level.
Yeah, they are - although the way I was looking at it is that a good bunch of those guys can be entry-level, because the concepts are sufficiently new that not a lot of people have experience in the area. There are plenty of CUDA university courses nowadays, and simply recruiting out of those could give you 50+ good developers very fast. Of course, you also need field experts in the $240-360k range as you point out, so the average is indeed going to be higher than what I estimated - oops! :) (but probably still substantially below $200K).

Add on incentives, perks, benefits, office space, equipment, etc., and you are looking at ~$200k for an entry-level programmer per year
My estimate was based on a 90-110K average and some extra for that - I did think I was on the low side, but guess I really need to review my employment basics if it's an order of magnitude higher than I thought!
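
For what it's worth, the back-of-the-envelope behind both estimates looks like this; the per-head figures are assumptions for illustration only, not anyone's actual payroll data:

#include <cstdio>

int main()
{
    const int team_size = 100;
    // Assumed fully-loaded annual cost per developer (salary + overhead), in USD.
    const double optimistic = 120e3;   // roughly what a ~$3M/quarter budget implies
    const double loaded_lo  = 240e3;   // low end of the specialized-engineer range above
    const double loaded_hi  = 360e3;   // high end of that range

    printf("optimistic: $%.1fM per quarter\n", team_size * optimistic / 4 / 1e6);  // ~3.0
    printf("realistic:  $%.1fM to $%.1fM per quarter\n",
           team_size * loaded_lo / 4 / 1e6,    // ~6.0
           team_size * loaded_hi / 4 / 1e6);   // ~9.0
    return 0;
}

So roughly 2-3x my original number, which lines up with the multiplier above.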

Voltron said:
Of course it's not really about analysts caring in the short term. It's this financial discipline that helps them grow and become more profitable and invest in more initiatives.
Obviously, managing your business based on what analysts think is possibly one of the worst strategies in the entire universe. However, that doesn't mean you should do everything in your power to anger them or your shareholders, for a wide variety of very good reasons.

It's like Marv says, they have tons of projects they'd like to do but haven't gotten around to yet. There is organic growth in the sense of not doing acquisitions, and then there is organic growth in the bigger sense of the word - growing in a natural way. This is getting esoteric, but NVIDIA are masters of this.
Oh sure, Marv is right as always; I'm merely pointing out that I believe this should be at the top of the short list, and warrants much more short-term and mid-term extra investment than handhelds, chipsets or HPC. It's also deliriously more time-sensitive than any of these things...
 
My estimate was based on a 90-110K average and some extra for that - I did think I was on the low side, but guess I really need to review my employment basics if it's an order of magnitude higher than I thought!

Hey, I never thought there was that much overhead, but after seeing what the cost factors were at numerous companies, generally 2x salary is a reasonable baseline! Depending on the job/area it can be as high as 3x.

As far as NCGs go, I'm not even sure you can get away with as many as 50. You have to remember these pieces of code/programs are going to have to interact with highly complex programs in a variety of fields. You could probably get away with a 33/33/33 distribution of NCG/JR/SR engineers, but for a while it will be experience-weighted, because you are going to have to rely on the senior people to find the areas, set scopes, figure out relationships, etc. One of the worst things you can do is overweight your staff with NCG and JR people, as you essentially can't get anything done. You need enough experienced people, and a low enough ratio of NCG to JR to SR people, that the on-the-job training and mentoring can actually take place.

Aaron Spink
speaking for myself inc.
 
But that's the key point: quad-cores also only increase performance on a range of select applications in practice! So the key isn't to accelerate a billion applications; the key is to make it more valuable for Joe Consumer to have a powerful GPU than a quad-core or even a tri-core. For the 2009 Back-to-School cycle, Intel's line-up will look like this:
Ultra-High-End: 192-bit DDR3 Quad-Core Nehalem
High-End: 128-bit DDR3 Quad-Core Nehalem
Mid-Range 1: 128-bit DDR3 Dual-Core Nehalem
Mid-Range 2: 128-bit DDR2/3 Dual-Core Nehalem [Third-Party Chipset]
Low-End: 128-bit DDR2 Dual-Core 3MiB Penryn
Ultra-Low-End: 128-bit DDR2 Single-Core Conroe [65nm]

So let's make the goal clear: encourage OEMs and customers to stick to dual-cores, and potentially even Penryn, in favour of using a more expensive GPU. As you point out, this won't work if you only accelerate select applications; so the solution is simple: do massive in-house R&D for a wide variety of suitable applications, and release the results as freeware and free closed-source libraries for third-party applications to use.

I have a list of suitable applications if you are interested. However, it should be easy to make up your own by looking at any modern CPU review and pondering whether the multi-core applications benchmarked could be accelerated via CUDA. The answer is 'yes' in a surprisingly high number of cases; you're unlikely to parallelize LZMA/7Zip, but there are a lot of things that you *can* do in CUDA, especially with shared memory.
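
To make the shared-memory point a bit more concrete, here's a minimal sketch of the kind of data-parallel pattern that maps well to CUDA - a block-wise sum reduction staged through on-chip shared memory (the kernel name, sizes and numbers are purely illustrative, not from any shipping application):

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Each block computes one partial sum, staging its slice of the input in shared memory.
__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float cache[256];                  // fast on-chip storage; assumes 256 threads per block
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (tid < n) ? in[tid] : 0.0f;
    __syncthreads();

    // Tree reduction entirely within shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = cache[0];               // one partial result per block
}

int main()
{
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    std::vector<float> h_in(n, 1.0f), h_out(blocks);
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, blocks * sizeof(float));
    cudaMemcpy(d_in, h_in.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    blockSum<<<blocks, threads>>>(d_in, d_out, n);
    cudaMemcpy(h_out.data(), d_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);

    float sum = 0.0f;
    for (int i = 0; i < blocks; ++i) sum += h_out[i];   // finish the last step on the CPU
    printf("sum = %.0f (expected %d)\n", sum, n);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

The same staging trick is what makes block-level histograms, statistics passes and prefix sums practical; it's exactly the part a naive CPU-to-GPU port tends to miss.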

I see what you're getting at (don't worry about drawing up a list of applications), but I still can't see OEMs spending the extra cost to boost the performance of high-volume PCs. For example, your average mainstream PC, sold by Dell, currently looks like this:

- Pentium Dual-Core E2160 (or Core 2 Duo E4400)
- Intel G31 Express Chipset
- Intel GMA X3100 graphics
- 1GB of DDR2-667 RAM
- 160GB+ HDD
- DVD-RW drive
- 19" widescreen monitor

Now, I don't see why an OEM would bother adding in (or rather advertising) a non-Intel GPU/IGP, when they could use the money to add in bonuses such as (I'm sure you've all seen this before):
"Buy online now to get a free upgrade to 2GB of RAM for faster performance!"
"Buy online now for a free hard drive upgrade to 500GB to store xxx amount of movies and mp3's!"
...and Dell's personal favourite:
"Buy online now to get a free upgrade to a visually stunning Dell 22" widescreen LCD!"

I'm sure it would be much easier for OEMs to convince consumers to purchase a PC if they throw in something that consumers sort of have a vague idea about (e.g. processor speed increase, RAM increase and HDD space increase), or something that has a tangible bonus to the consumer (e.g. upgrade to a 22" LCD).

CUDA looks to be a wonderful thing. Something that promises to utilise the massively parallel processing nature of GPUs for practical applications of course sounds wonderful.

It's just that I don't see consumer behavior shifting towards taking advantage of CUDA, if you know what I mean. Someone who's going to only do basic stuff on their PC like browse the web, type documents, read emails and occasionally play some online poker, isn't going to need the power of a dedicated GPU, nor will they sacrifice an extra 3 inches of LCD real estate for a GPU.

Of course the market doesn't only consist of people such as those that I previously mentioned. There are users (such as ourselves) who are looking to squeeze performance from every component, and who can measure the costs and benefits of choosing higher GPU performance over...CPU performance (for example).

It runs out of steam instantly if you don't have enough applications on it. But if you get a bunch of applications running on it, you can simultaneously reduce the value-add of quad-core and increase the value-add of GPUs. You won't get many design wins in the enterprise space for it; but honestly, if the decision makers are rational (errr, that's a bit optimistic) that market should become a pure commodity market and ASPs should go down 10x. So yeah, you can't add value there, but that's really not the point.

Yes, but once again, basic arithmetic tells us that Intel will be in big trouble if GPU/GPGPU ASPs grow while CPU ASPs fall, because the chip represents nearly 100% of the BoM for a CPU but much less than half for a GPU. So if the end-consumer market stays constant, replacing every CPU with a GPU would result in 50-80% less chip revenue; i.e. utter financial disaster.
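
To put rough numbers on that (the BoM shares below are illustrative assumptions, not actual figures from either company), a quick back-of-the-envelope:

#include <cstdio>

int main()
{
    // Assume a constant $150 of end-user spend shifts from a boxed CPU to a GPU board.
    const double spend          = 150.0;
    const double cpu_chip_share = 0.95;  // the chip is essentially all of a boxed CPU's BoM
    const double gpu_chip_share = 0.35;  // board, memory, cooler, etc. take the rest of a GPU's

    double cpu_maker_revenue = spend * cpu_chip_share;   // ~$142 to the chip vendor
    double gpu_maker_revenue = spend * gpu_chip_share;   // ~$52 to the chip vendor

    printf("chip revenue drops by ~%.0f%%\n",
           100.0 * (1.0 - gpu_maker_revenue / cpu_maker_revenue));   // ~63%
    return 0;
}

Vary the GPU chip share between roughly 0.2 and 0.5 and you land anywhere in that 50-80% band.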

Larrabee is a great product for Intel if it just eats away at GPUs. But sadly for Intel, the world is more complex than that, and their fate will be decided in the 2009 Back-to-School and Winter OEM cycles, likely before Larrabee is available. This is another reason why in-house development is key, by the way; in that timeframe, third-party software developers might want to hedge their bets with Larrabee, and clearly that would have negative consequences. And given that the potential profit opportunities here dwarf those of traditional GPU applications, I'd argue it'd be pretty damn dumb to let third parties decide your fate. Such a strategy has very rarely worked in the past, and it's not magically going to work now. Can they still help? Well yeah, duh. But once again that's not the point.

Definitely agree with your points here.

I guess what I'm trying to say (in summary) is that:
- The majority of consumers ("mainstream consumers") won't see much benefit from CUDA, and thus OEMs would be better off luring consumers with factors other than a discrete [NVIDIA] GPU;

- Intel does recognise that GPUs and CPUs are on a collision course, and that they're doing something about it (in the form of Larrabee and in the effort of exponentially improving their IGPs).
 
CUDA doesn't have to be all that expensive though. The lower-end chips also support it, although they are obviously not as fast. But CUDA is nice and scalable.
So perhaps there is a market for adding a $40-$80 GPU to a system and advertising it as an "ideal home video/picture workstation" with hardware-accelerated H.264/Photoshop processing, etc.
 
It makes a difference in how both companies are going to play this game.
Intel is so much bigger than nVidia that they could even choose to lose money on their GPU department if that means they can win market share. nVidia could never do that.
On the other hand, they may lose interest and just withdraw. But if nVidia is successful in leveraging CUDA, then that is very unlikely, because they'd be cutting into Intel's CPU sales as well at that point.
Oh sure, Intel has the resources for an exhaustion war, but they will still need to create a decent product for that. Intel could be pumping billions of dollars into the project for ten years, but if they don't create a product that has value, they will never sell anything. (And in that case, they won't be an opponent to NVIDIA.)
Intel has less risk: if they screw things up, there won't be problems right away. But that doesn't change the situation for NVIDIA. All I'm saying is that Intel as a competitor will not be that different from ATI as a competitor, technically speaking. As long as NVIDIA produces a superior product, they will "live"/win.

Intel has shown that they can adapt and transform quite well, because although they're a big company, they work with multiple teams... Think about how they developed the Pentium M/Core on the side, then moved focus from the Pentium 4 to Core 2. They just developed both products side by side, and then picked the best one for the future. In the meantime they started working on Atom, yet another independent product line. And Itanium is still going, but will more or less be merged with x86 technology now. And of course there's Tera-scale, which spawned Larrabee, and might spawn other technologies in the future.
Yes, I agree that Intel is remarkably lean and flexible. They still have more layers than NVIDIA though. NVIDIA doesn't have the same amount of corporate politics and the like. Of course, Intel's large organisation has a lot of benefits NVIDIA lacks, but it's still a disadvantage when it comes to making radical and business-changing decisions, and the Intel Larrabee team has to work within certain confines the Intel corporation imposes. It's debatable how much of an impact these confines will have, but what's certain is that the field we're talking about is rapidly changing.

So Intel is basically free to experiment in all kinds of areas... nVidia basically only has one product line, and a failure can mean the end of the company.
No argument there, Intel has a financially better position in this war.
 
Oh sure, Intel has the resources for an exhaustion war, but they will still need to create a decent product for that. Intel could be pumping billions of dollars into the project for ten years, but if they don't create a product that has value, they will never sell anything. (And in that case, they won't be an opponent to NVIDIA.)
Intel has less risk: if they screw things up, there won't be problems right away. But that doesn't change the situation for NVIDIA. All I'm saying is that Intel as a competitor will not be that different from ATI as a competitor, technically speaking. As long as NVIDIA produces a superior product, they will "live"/win.

I think this is exactly the difference. ATi/AMD cannot go for an exhaustion war; Intel can. Another factor is that Intel will probably always be ahead in process technology, whereas ATi and nVidia use the same third-party foundries, so all that mattered was how good the design was.
Intel can compensate for design with superior manufacturing, just like they did with the Pentium 4/D. They had competitive prices even though their CPUs required far more transistors and higher clock speeds to get the same performance as AMD's processors.

So Intel basically has a few aces up its sleeve, making the actual design/performance of their GPU less important in the bigger picture. Besides, I don't think it will take Intel more than 2-3 years to catch up with nVidia in the GPU world. And then what is nVidia going to do?
 
I see what you're getting at (don't worry about drawing up a list of applications), but I still can't see OEMs spending the extra cost to boost the performance of high-volume PCs. For example, your average mainstream PC, sold by Dell, currently looks like this:
The key point you are missing is that any decision maker worth his daily dose of oxygen shouldn't care about unit sales, only revenue and profits. If you average revenue rather than units, the 'sweet spot' is substantially higher. And if you average *chip* revenue rather than full-PC revenue, it's even higher.

Now, I don't see why an OEM would bother adding in (or rather advertising) a non-Intel GPU/IGP, when they could use the money to add in bonuses such as (I'm sure you've all seen this before):
"Buy online now to get a free upgrade to 2GB of RAM for faster performance!"
The DRAM market is dead, they just don't know it yet.
"Buy online now for a free hard drive upgrade to 500GB to store xxx amount of movies and mp3's!"
The HDD market is dead, they just don't know it yet.
"Buy online now to get a free upgrade to a visually stunning Dell 22" widescreen LCD!"
Obviously that's not dead, and continues to be a desirable value-add. However, LCD prices have 'crashed' nicely in recent years, and as such the price tag for a 'good enough' monitor in most people's minds is gradually decreasing. Just like in the DRAM market or even the PC market in general, users' perception of what they need isn't going up as fast as prices are coming down - the dynamic just isn't quite as extreme. All of these markets would be extremely unattractive if it wasn't for emerging markets.

I'm sure it would be much easier for OEMs to convince consumers to purchase a PC if they throw in something that consumers sort of have a vague idea about (e.g. processor speed increase, RAM increase and HDD space increase), or something that has a tangible bonus to the consumer (e.g. upgrade to a 22" LCD).
That's why aggressive marketing is necessary, and why delivering many tangible value-adds rather than just a few is necessary. As I said, CPUs/DRAMs/HDDs are all dead, they just don't know it yet because consumer perception hasn't shifted yet. It won't shift overnight, but it would be naive to think that it is static and cannot change. On the other hand, it would be just as naive to think that it will naturally shift without proper effort.

It's just that I don't see consumer behavior shifting towards taking advantage of CUDA, if you know what I mean. Someone who's going to only do basic stuff on their PC like browse the web, type documents, read emails and occasionally play some online poker, isn't going to need the power of a dedicated GPU, nor will they sacrifice an extra 3 inches of LCD real estate for a GPU.
I agree about the LCD part of your point, but I think you're missing the big picture when it comes to the rest of it. Those who only browse the web and read emails won't need a $100 GPU, but they won't need a $100 CPU either, or $100 of DRAM, or even a $100 HDD. That part of the market is commoditizing, and my expectation is that in 2010, nobody with those kinds of requirements will need more than <$25 of logic/analogue chips and <$15 of DRAM. The <$200 PC will be an attractive reality, and the vast majority of those with such requirements will start only buying computers in that segment of the market.

BTW, totally off topic, but my opinion is that single-chip SoCs on leading-edge process nodes are a fad. It's much smarter to do part of the southbridge stuff on another chip, and integrate some analogue on there including a Gigabit Ethernet PHY. Similarly, in the handheld area, I believe having distinct chips for digital and analogue also makes sense; RF complicates things a bit but not massively so.

Of course the market doesn't only consist of people such as those that I previously mentioned. There are users (such as ourselves) who are looking to squeeze performance from every component, and who can measure the costs and benefits of choosing higher GPU performance over...CPU performance (for example).
Yes, and what you need to consider once again is chip revenue, not unit sales.

I guess what I'm trying to say (in summary) is that:
- The majority of consumers ("mainstream consumers") won't see much benefit from CUDA, and thus OEMs would be better off luring consumers with factors other than a discrete [NVIDIA] GPU;
Facts are disagreeing with you. HP is gaining share, Dell is losing share. Which of the two has used the most discrete GPUs in recent years? In practice, it's not just about what OEMs decide; it's also about what consumers decide and how that affects each company's market share.

- Intel does recognise that GPUs and CPUs are on a collision course, and that they're doing something about it (in the form of Larrabee and in the effort of exponentially improving their IGPs).
Yeah, as I said, I'm just not convinced this collision course makes sense for them financially when you consider chip revenue rather than end-product revenue.
 
Oh sure, Intel has the resources for an exhaustion war, but they will still need to create a decent product for that. Intel could be pumping billions of dollars into the project for ten years, but if they don't create a product that has value, they will never sell anything. (And in that case, they won't be an opponent to NVIDIA.)
Intel has less risk: if they screw things up, there won't be problems right away. But that doesn't change the situation for NVIDIA. All I'm saying is that Intel as a competitor will not be that different from ATI as a competitor, technically speaking. As long as NVIDIA produces a superior product, they will "live"/win.

All the people who think otherwise (and there are many, because Intel is such an industry darling) need only look at Microsoft and Google. Just because you are bigger doesn't mean you will automatically create a superior product or win in the marketplace.


Yes, I agree that Intel is remarkably lean and flexible. They still have more layers than NVIDIA though. NVIDIA doesn't have the same amount of corporate politics and the like. Of course, Intel's large organisation has a lot of benefits NVIDIA lacks, but it's still a disadvantage when it comes to making radical and business-changing decisions, and the Intel Larrabee team has to work within certain confines the Intel corporation imposes. It's debatable how much of an impact these confines will have, but what's certain is that the field we're talking about is rapidly changing.


No argument there, Intel has a financially better position in this war.

I have an argument here. How lean is Intel? They made some cuts in recent years, but they still have an awful lot of employees and expenses. What would happen if their average CPU prices were cut in half or more? If the CPU becomes less important to PC makers, why wouldn't this be the case?

NVIDIA looks to be correct. CPU scaling basically doesn't matter any more for most consumers. And the gains from CUDA are so dramatic it doesn't even matter. Photoshop and video transcoding are the tip of the iceberg. The type of applications that are exciting in the future are going to benefit from GPUs or whatever you want to call these devices. Guess what - NVIDIA charges a lot less for them than Intel does for a CPU. Guess what else, PC makers are listening. Because the differences in performance and price are dramatic. So if you are a PC maker you can sell a better machine for less or take more profit for yourself. Either way, lower prices translate into more demand. NVIDIA is certainly not going to raise prices so that it is easier on Intel's cost structure. NVIDIA's cost structure looks to me to be an order of magnitude better than Intel's. They basically serve the same markets and have the same profitability.

It's hard to know how competitive you are when you are a monopolist. There is no real competition to compare yourself to. These companies are out to maximize profits, and that is what they have been doing. But is there a real economic basis for the prices Intel and Microsoft have been charging? I would argue no. There certainly is a large price umbrella for a well-positioned newcomer to take advantage of.
 
I may not have been entirely correct about nVidia needing Intel.
They're negotiating with VIA... they have an x86 CPU as well. If they can use CUDA to make up for the lack of performance of VIA's CPU, then it could be a strong team. Of course that's going to require proper support from all the applications 'that matter', else Intel will always have the upper hand. All software already takes advantage of x86, and Intel has the fastest x86.
 
All the people who think otherwise (and there are many, because Intel is such an industry darling) need only look at Microsoft and Google. Just because you are bigger doesn't mean you will automatically create a superior product or win in the marketplace.

There's a big difference here in that Microsoft is operating under a microscope. Any misstep immediately puts them back into "juicy lawsuit" territory. Also, R&D for software, while similar, is very different from R&D for hardware. Hardware just has to work and be fast. Software has to work, but it doesn't necessarily have to be fast (in the areas where Google and MS are competing, if it's slow you can always throw more server clusters at it), and it also has to have some "unquantifiable" attraction to consumers that has virtually nothing to do with how well it performs.

Where Intel could potentially have Nvidia by the balls, similar to how they have AMD by the balls, is that they can R&D many more separate and overlapping designs simultaneously. Nvidia does this currently, but the last I read they have 3-4 "different" design teams working on "separate" yet similar GPU designs. Intel has many more than that doing R&D on different CPU designs, some of which aren't very similar. Out of those they will eventually pick a design to implement for consumer or other uses. Some of those designs will be scrapped and some will continue with R&D.

Intel can afford to do this, as their revenue is much higher than, say, AMD's, which can't afford to do this to the same degree. "IF" Intel wanted to, they could do the same thing to Nvidia. Sure, they'll be behind for a bit as they catch up. But if they were to dedicate 5-10 R&D teams to it (and who says they haven't?) they could make up ground rapidly.

Designing a GPU isn't some mythical design process. Most of the basics are well known and understood. Getting the performance and rendering quality correct just requires time and manpower. Introducing new ways of doing things requires experimentation. All of this falls well within Intel's capabilities...if they so wish. The fly in the ointment here is that I don't think Intel is all that serious about supplanting Nvidia or ATI - rather just taking a bite out of the pie and diversifying.

Currently CPUs and chipsets are a better and more reliable source of revenue. However, who's to say that doesn't change at some point in the future, when processing power is no longer marketable for use in business machines and consumer machines?

I have an argument here. How lean is Intel? They made some cuts in recent years, but they still have an awful lot of employees and expenses. What would happen if their average CPU prices were cut in half or more? If the CPU becomes less important to PC makers, why wouldn't this be the case?

While CPUs and the chipsets they run on are the lion's share of Intel's revenue, they're hardly the only source of revenue they have. And that point ties in directly to their renewed R&D into GPUs. Likewise, they continue with R&D in various other technologies.

Relying on only one line of products for your revenue stream is a recipe for the eventual death or marginalization of your company.

Nvidia not only wants to push more specialized/generalized computing to the GPU, but they HAVE to. They currently live and die by how well their GPUs do. They would love to be able to spread the risk out a bit more, similar to Intel. And in that sense, it's far more important to Nvidia that they succeed in broadening the use and acceptance of CUDA than it is for Intel to succeed at Larrabee.

And the gains from CUDA are so dramatic it doesn't even matter. Photoshop and video transcoding are the tip of the iceberg. The type of applications that are exciting in the future are going to benefit from GPUs or whatever you want to call these devices.

Except that it's currently only applicable to a very small and narrow range of applications, which do not affect most business customers [Intel's main source of revenue] nor the average consumer, who doesn't even know what H.264 is nor how to use Photoshop beyond playing around with filters.

Guess what - NVIDIA charges a lot less for them than Intel does for a CPU.

Sure, for just the GPU. The cost ratio changes dramatically in Intel's favor for OEM machines when you factor in the memory, PCB, etc. required to actually make that GPU useful.

Guess what else, PC makers are listening. Because the differences in performance and price are dramatic. So if you are a PC maker you can sell a better machine for less or take more profit for yourself.

Except that while the CPU will run pretty much every program, the same cannot be said for a GPU. Nor are there any programs currently marketed to consumers or businesses that run on a GPU. And I don't see any OEMs offering a special low-power [Celeron or Pentium] coupled with a Tesla-packaged GPU for running your average business workstation or grandma's e-mail and office applications computer.

It's hard to know how competitive you are when you are a monopolist. There is no real competition to compare yourself to. These companies are out to maximize profits, and that is what they have been doing. But is there a real economic basis for the prices Intel and Microsoft have been charging? I would argue no. There certainly is a large price umbrella for a well-positioned newcomer to take advantage of.

Well, I suppose you could call Nvidia, Intel, and Microsoft monopolies.

Except they still have to contend with [ATI, S3, and a few others], [AMD, Sun, ARM, and a few others], and [Linux, Unix, MacOS, Sun, Novell, Star Office, etc.].

And while Nvidia, Intel, and MS have some latitude for pricing their products how they wish, they still have to price according to how much their target market is willing to pay and with regard to whatever competition they may have.

I'd argue that a company such as Adobe has a far stronger, if not larger, monopoly than any of those three.

Regards,
SB
 
There's a big difference here in that Microsoft is operating under a microscope. Any misstep immediately puts them back into "juicy lawsuit" territory. Also, R&D for software, while similar, is very different from R&D for hardware. Hardware just has to work and be fast. Software has to work, but it doesn't necessarily have to be fast (in the areas where Google and MS are competing, if it's slow you can always throw more server clusters at it), and it also has to have some "unquantifiable" attraction to consumers that has virtually nothing to do with how well it performs.

I mean, honestly, hands tied by regulators or not - that has very little to do with Google's success versus Microsoft, and if you believe otherwise you are a simpleton. Google innovated in search, an area that was previously not thought to be lucrative. None of the other search engines were big commercial successes (AltaVista was thought to be great, for instance, but it was commercially worthless). So it was not a priority for Microsoft, because nobody knew how to make money from it. The Google guys knew it was valuable and then figured out a way to make money from it. The rest is history. Microsoft has been playing catch-up with the Internet from the get-go, first with MSN, and now everything else.

It is easy to think that anybody can throw money at a chip and make it work. And maybe Intel can. But think about it - nobody now denies the future is in parallel devices. If Intel were all that, wouldn't they have realized this a little sooner? But it is incredibly naive to think that the company that has known this all along, the company that has been getting stronger, more efficient and better at what they are doing, doesn't have a few tricks up their sleeve that are going to be awfully difficult to compete with. NVIDIA didn't get to be good by throwing lots of money around. They hired especially innovative people and fostered a culture of innovation. If you look around a little at various things Intel has said they are working on, it's pretty clear they are trying to learn how to do things that NVIDIA has already done in terms of architecture and circuit design. And that probably means NVIDIA is on to something newer and better, because that is why they are the leaders.

The idea that multiple Intel design teams are a threat to NVIDIA is completely laughable. It's not as if NVIDIA just arrives at a design out of thin air or by throwing darts at a board, if that's how you are implying Intel designs CPUs. Probably a lot of things are simulated. And one thing that strikes me about CUDA is that it might just prove valuable in accelerating R&D efforts for future GPUs, perhaps by an order of magnitude?
 
While CPUs and the chipsets they run on are the lion's share of Intel's revenue, they're hardly the only source of revenue they have. And that point ties in directly to their renewed R&D into GPUs. Likewise, they continue with R&D in various other technologies.

Relying on only one line of products for your revenue stream is a recipe for the eventual death or marginalization of your company.
(Disclaimer: these slides are from NV's analyst day, I can't find the original Intel slides on their site)


You would think Intel would have other sources of (meaningful) revenue, but their own graphs pretty clearly show that their CPUs are it. Everything else they make, including chipsets/iGPUs, they sell or give away for next to nothing so they can use up the capacity and finish paying off their depreciated N-1/N-2 process fabs. This strategy has worked for them so far, but there appear to be two problems:
1) Chipset wafer starts look to be exploding, putting them in the weird position of having capacity pressure on their depreciated fabs to make chips they get next to no money from.
2) I doubt they can fab Larrabee on an N-1 process and have it be competitive. This means taking up prime fab space from far higher-margin CPUs to make it.

I think the main business question for Intel is how they make Larrabee successful (enough to have been worth the investment) without screwing up their fab business plan or their CPU margins.
 
Not only do they not have other sources of substantial revenue, but the sources they do have are even more laughable in terms of gross profit. I'm pretty sure chipsets have substantially lower gross margins than CPUs despite lower fab amortization costs, and let's not even talk about the utter disaster that is flash.

In fact, I'll reverse the question. In the last 20 years, has a SINGLE one of Intel's many diversification efforts not ended in dismal failure? I can't think of any. It's not indicative of future trends, of course, but it's not a particularly good sign either, to say the least. As I said before and I'll say it again, I am very pessimistic about both WiMAX and Moorestown. There is an amusing dynamic with Moorestown & friends that might help it a bit, but I'm skeptical it'll do miracles. One interesting dynamic wrt chipsets is the China/Dalian fab, and I'm very happy about Intel's apparent strategy there, but that's not really a revenue opportunity per se, just a cost reduction thing.

Of course, for the sake of objectivity, it's probably worth pointing out that NVIDIA's MCP business is much less of a success than it might seem at first glance (their only really huge financial success was MCP61, and they lost money in the first few years if you know how to read the numbers correctly; the only real benefit of their initial MCP investment was via the XBox1 contract). And their handheld business has consistently been losing money. And embedded is really not a big business right now; in fact, it's arguably shrunk since the TNT M64 era. Quadro has been a huge success on the other hand, but obviously that's much nearer their core business.
 