Could you clarify?

And Itanium is still going, but will more or less be merged with x86 technology now.
I've got good reasons to believe executive management looked at CUDA as primarily an HPC thing until recently.
Sure, it's hardly cheap for a hardware company to suddenly position itself to take over many of the performance-centric parts of the software industry. However, I think it's pretty easy to see that it wouldn't be *that* expensive either; I mean, 100 software developers could do so much it's not even funny, and you can get that for ~$3M/quarter. The return on investment is deliriously good - but of course, there are two problems: 1) when your stock price goes down massively, in part because analysts become worried about your operating expenses, $3M is a lot of money to approve!

It's probably also a matter of resources, which is likely one reason they waited so long to really trumpet the GPU. And having a supercomputer on the desktop is pretty revolutionary for science and potentially huge for profits.
2) It's pretty hard to find 100 Tim Murrays! I once caught one in the wild, but they seem to be going extinct...

Who wouldn't want 100 Tim Murrays, except for maybe you and his mother?
Heh, not going to disagree with that, but my point was more specific: they didn't realize the impact on sales this could have (hint: it dwarfs HPC in the short, mid and long term if executed properly), and they didn't realize how important it was to do a good chunk of that R&D in-house and release the results as closed-source freeware or libraries.

But I find it hard to believe NVIDIA management has not been thinking about mainstream applicability of CUDA since well before it was conceived. It's not like they woke up one morning and said, "oh, we can use this on the desktop too."
When you have the right technology at your disposal, it's easy to say your strategy is working; what would be surprising is if it wasn't. The value add from good decision making in this kind of scenario is the magnitude of the impact of your strategy, not its absolute success or failure.

Personally I think it looks like their strategy is working. This is all very new stuff and of course CUDA itself is improving. When these first applications hit I bet programmers all over will take a good look.
Yeah, they are - although the way I was looking at it is that a good bunch of those guys can be entry-level, because the concepts are sufficiently new that not a lot of people have experience there. There are plenty of CUDA university courses nowadays, and simply recruiting out of those could give you 50+ good developers very fast. Of course, you also need field experts in the $240-360K range as you point out, so the average is indeed going to be higher than what I estimated - oops! (but probably still substantially below $200K).

And your $3M per quarter number is low; realistic costs would be on the order of 2-3x that amount. Total allocated costs for that type of specialized programmer are likely to be in the $240-360K per year range, especially in the valley, where software salaries in the $80-100K range are pretty much entry level.
My estimate was based on a $90-110K average and some extra for that - I did think I was on the low side, but I guess I really need to review my employment basics if it's an order of magnitude higher than I thought!

Add on incentives, perks, benefits, office space, equipment, etc. and you are looking at ~$200K per year for an entry-level programmer.
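For what it's worth, the back-of-envelope numbers being argued over look roughly like this. A minimal sketch (plain host-side C++, so it also compiles as CUDA host code); the headcount, salary and loaded-cost figures are just the assumptions from the posts above, not real data:

// Illustrative only: the headcount and cost figures are the assumptions
// debated above, not real data. Plain host-side C++ (valid as CUDA host code).
#include <cstdio>

int main()
{
    const int    developers       = 100;
    const double salary_only      = 100000.0;  // the ~$90-110K base-salary estimate
    const double loaded_cost_low  = 200000.0;  // ~$200K/year fully loaded, entry level
    const double loaded_cost_high = 300000.0;  // $240-360K/year for field experts

    // Original estimate: salaries only, which lands near the ~$3M/quarter figure.
    printf("salary only:  $%.2fM per quarter\n",
           developers * salary_only / 4.0 / 1e6);                 // ~$2.5M

    // Fully loaded costs: roughly 2-3x that amount, as the reply argues.
    printf("fully loaded: $%.2fM to $%.2fM per quarter\n",
           developers * loaded_cost_low  / 4.0 / 1e6,             // ~$5.0M
           developers * loaded_cost_high / 4.0 / 1e6);            // ~$7.5M
    return 0;
}

Either way you slice it, the total stays in the single-digit millions per quarter.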
Obviously, managing your business based on what analysts think is possibly one of the worst possible strategies in the entire universe. However, that doesn't mean you should do everything in your power to anger them or your shareholders, for a wide variety of very good reasons.

Voltron said:
Of course it's not really about analysts caring in the short term. It's this financial discipline that helps them grow and become more profitable and invest in more initiatives.
Oh sure, Marv is right as always; I'm merely pointing out that I believe this should be on the top of the short list, and warrants much more short-term and mid-term extra investment than handhelds, chipsets or HPC. It's also deliriously more time-sensitive than any of these things...

It's like Marv says, they have tons of projects they'd like to do, but haven't gotten around to yet. There is organic growth in the sense of not doing acquisitions and then there is organic growth in the bigger sense of the word - growing in a natural way. This is getting esoteric, but NVIDIA are masters of this.
But that's the key point: quad-cores also only increase performance on a range of select applications in practice! So the key isn't to accelerate a billion applications; the key is to make it more valuable for Joe Consumer to have a powerful GPU than a quad-core or even a tri-core. For the 2009 Back-to-School cycle, Intel's line-up will look like this:
Ultra-High-End: 192-bit DDR3 Quad-Core Nehalem
High-End: 128-bit DDR3 Quad-Core Nehalem
Mid-Range 1: 128-bit DDR3 Dual-Core Nehalem
Mid-Range 2: 128-bit DDR2/3 Dual-Core Nehalem [Third-Party Chipset]
Low-End: 128-bit DDR2 Dual-Core 3MiB Penryn
Ultra-Low-End: 128-bit DDR2 Single-Core Conroe [65nm]
So let's make the goal clear: encourage OEMs and customers to stick to dual-cores, and potentially even Penryn, and spend the difference on a more expensive GPU instead. As you point out, this won't work if you only accelerate select applications; so the solution is simple: do massive in-house R&D for a wide variety of suitable applications, and release the results as freeware and free closed-source libraries for third-party applications to use.
I have a list of suitable applications if you are interested. However, it should be easy to make up your own by looking at any modern CPU review and pondering whether the multi-core applications benchmarked could be accelerated via CUDA. The answer is 'yes' in a surprisingly high number of cases; you're unlikely to parallelize LZMA/7Zip, but there are a lot of things that you *can* do in CUDA, especially with shared memory.
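To make "the kind of thing you *can* do in CUDA" a bit more concrete, here's a minimal sketch of the basic building block behind a lot of those workloads: a block-wise sum reduction that stages data in shared memory. Everything here (names, sizes) is illustrative, not taken from any shipping library:

#include <cstdio>
#include <cuda_runtime.h>

// Each block loads a tile of the input into shared memory, then does a
// tree reduction so that thread 0 can write one partial sum per block.
__global__ void blockSum(const float *in, float *partial, int n)
{
    extern __shared__ float cache[];
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + threadIdx.x;

    cache[tid] = (i < n) ? in[i] : 0.0f;   // pad out-of-range threads with 0
    __syncthreads();

    // Halve the number of active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        partial[blockIdx.x] = cache[0];
}

int main()
{
    const int n = 1 << 20, threads = 256;
    const int blocks = (n + threads - 1) / threads;

    // Host data: all ones, so the expected sum is simply n.
    float *h_in = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

    float *d_in, *d_partial;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_partial, blocks * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    // Third launch parameter = dynamic shared memory per block.
    blockSum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_partial, n);

    float *h_partial = new float[blocks];
    cudaMemcpy(h_partial, d_partial, blocks * sizeof(float), cudaMemcpyDeviceToHost);

    double sum = 0.0;
    for (int i = 0; i < blocks; ++i) sum += h_partial[i];
    printf("sum = %.0f (expected %d)\n", sum, n);

    cudaFree(d_in); cudaFree(d_partial);
    delete[] h_in; delete[] h_partial;
    return 0;
}

The specific kernel doesn't matter; the point is that any workload you can phrase as "thousands of mostly independent work items, with a little local cooperation through shared memory" tends to map well, and that covers a surprising share of the media and imaging tests you see in CPU reviews.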
It runs out of steam instantly if you don't have enough applications on it. But if you get a bunch of applications running on it, you can simultaneously reduce the value-add of quad-core and increase the value-add of GPUs. You won't get many design wins in the enterprise space for it; but honestly, if the decision makers are rational (errr, that's a bit optimistic) that market should become a pure commodity market and ASPs should go down 10x. So yeah, you can't add value there, but that's really not the point.
Yes, but once again, basic arithmetic tells us that Intel will be in big trouble if GPU/GPGPU ASPs grow while CPU ASPs fall, because the chip represents nearly 100% of the BoM for a CPU but much less than half of the BoM for a GPU board. So if end-consumer spending stays constant, replacing every CPU dollar with a GPU dollar would result in 50-80% less chip revenue; i.e. utter financial disaster.
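For the arithmetic behind that 50-80% figure, a quick sketch (the chip-share percentages are simply back-solved from the numbers above, not a measured BoM breakdown):

// Illustrative only: chip revenue per $100 of consumer spend, using the
// BoM shares assumed in the post above (CPU ~= 100% chip, GPU board
// chip share somewhere in the 20-50% range).
#include <cstdio>

int main()
{
    const double spend          = 100.0;  // consumer dollars shifted per unit
    const double cpu_chip_share = 1.00;   // a boxed CPU is essentially all chip
    const double gpu_chip_low   = 0.20;   // chip as a fraction of a GPU board's BoM
    const double gpu_chip_high  = 0.50;

    double cpu_chip_rev = spend * cpu_chip_share;   // $100
    double gpu_chip_min = spend * gpu_chip_low;     // $20
    double gpu_chip_max = spend * gpu_chip_high;    // $50

    printf("chip revenue: CPU $%.0f vs GPU $%.0f-$%.0f "
           "(a %.0f-%.0f%% drop for the same consumer spend)\n",
           cpu_chip_rev, gpu_chip_min, gpu_chip_max,
           (1.0 - gpu_chip_high / cpu_chip_share) * 100.0,   // 50%
           (1.0 - gpu_chip_low  / cpu_chip_share) * 100.0);  // 80%
    return 0;
}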
Larrabee is a great product for Intel if it just eats away at GPUs. But sadly for Intel, the world is more complex than that, and their fate will be decided in the 2009 Back-to-School and Winter OEM cycles, likely before Larrabee is available. This is another reason why in-house development is key, by the way; in that timeframe, third-party software developers might want to hedge their bets with Larrabee, and clearly that would have negative consequences. And given the fact that the potential profit opportunities here dwarf those of traditional GPU applications, I'd argue it'd be pretty damn dumb to let third parties decide your fate. Such a strategy has very rarely worked in the past, and it's not magically going to work now. Can they still help? Well yeah, duh. But once again, that's not the point.
Oh sure, Intel has the resources for a war of attrition, but they will still need to create a decent product for that. Intel could be pumping billions of dollars into the project for ten years, but if they don't create a product that has value, they will never sell anything. (And in that case, they won't be an opponent to NVIDIA.)

It makes a difference in how both companies are going to play this game.
Intel is so much bigger than nVidia that they could even choose to lose money on their GPU department if that means they can win market share. nVidia could never do that.
On the other hand, they may lose interest and just withdraw. But if nVidia is successful in leveraging CUDA, then that is very unlikely, because they'd be cutting into Intel's CPU sales as well at that point.
Yes, I agree that Intel is remarkably lean and flexible. They still have more layers than NVIDIA, though, and NVIDIA doesn't have the same amount of corporate politics and the like. Of course, Intel's large organisation has a lot of benefits NVIDIA lacks, but it's still a disadvantage when it comes to making radical and business-changing decisions, and the Intel Larrabee team has to work within certain confines the Intel corporation imposes. It's debatable how much of an impact these confines will have, but what's certain is that the field we're talking about is rapidly changing.

Intel has shown that they can adapt and transform quite well, because although they're big as a company, they work with multiple teams... Think about how they developed Pentium M/Core on the side, then moved focus from the Pentium 4 to Core 2. They just developed both products side-by-side, and then picked the best one for the future. In the meantime they started working on Atom, yet another independent product line. And Itanium is still going, but will more or less be merged with x86 technology now. And of course there's Tera-scale, which spawned Larrabee, and might spawn other technologies in the future.
No argument there, Intel has a financially better position in this war.

So Intel is basically free to experiment in all kinds of areas... nVidia basically only has one product-line, and a failure can mean the end of the company.
Intel has less risk; if they screw things up, there won't be problems right away. But that doesn't change the situation for NVIDIA. All I'm saying is that Intel as a competitor will not be that different from ATI as a competitor, technically speaking. As long as NVIDIA produces a superior product, they will "live"/win.
The key point you are missing is that any decision maker worth his daily dose of oxygen shouldn't care about unit sales, only revenue and profits. If you average revenue rather than units, the 'sweet spot' is substantially higher. And if you average *chip* revenue rather than full-PC revenue, it's even higher.

I see what you're getting at (don't worry about drawing up a list of applications), but I still can't see OEMs spending the extra cost to boost performance of high-volume PCs. For example, your average mainstream PC, sold by Dell, currently looks like this:
The DRAM market is dead, they just don't know it yet.

Now, I don't see why an OEM would bother adding in (or rather advertising) a non-Intel GPU/IGP, when they could use the money to add in bonuses such as (I'm sure you've all seen this before):
"Buy online now to get a free upgrade to 2GB of RAM for faster performance!"
The HDD market is dead, they just don't know it yet.

"Buy online now for a free hard drive upgrade to 500GB to store xxx amount of movies and mp3's!"
Obviously that's not dead, and it continues to be a desirable value-add. However, LCD prices have 'crashed' nicely in recent years, and as such the price tag for a 'good enough' monitor in most people's minds is gradually decreasing. Just like in the DRAM market, or even the PC market in general, users' perception of what they need isn't going up as fast as prices are coming down - the dynamic just isn't quite as extreme. All of these markets would be extremely unattractive if it wasn't for emerging markets.

"Buy online now to get a free upgrade to a visually stunning Dell 22" widescreen LCD!"
That's why aggressive marketing is necessary, and why delivering many tangible value-adds rather than just a few is necessary. As I said, CPUs/DRAMs/HDDs are all dead; they just don't know it yet, because consumer perception hasn't shifted yet. It won't shift overnight, but it would be naive to think that it is static and cannot change. On the other hand, it would be just as naive to think that it will naturally shift without proper effort.

I'm sure it would be much easier for OEMs to convince consumers to purchase a PC if they throw in something that consumers sort of have a vague idea about (e.g. a processor speed increase, RAM increase or HDD space increase), or something that has a tangible bonus for the consumer (e.g. an upgrade to a 22" LCD).
I agree about the LCD part of your point, but I think you're missing the big picture when it comes to the rest of it. Those who only browse the web and read emails won't need a $100 GPU, but they won't need a $100 CPU either, or $100 of DRAM, or even a $100 HDD. That part of the market is commoditizing, and my expectation is that in 2010, nobody with those kinds of requirements will need more than <$25 of logic/analogue chips and <$15 of DRAM. The <$200 PC will be an attractive reality, and the vast majority of those with such requirements will only buy computers in that segment of the market.

It's just that I don't see consumer behavior shifting towards taking advantage of CUDA, if you know what I mean. Someone who's only going to do basic stuff on their PC, like browse the web, type documents, read emails and occasionally play some online poker, isn't going to need the power of a dedicated GPU, nor will they sacrifice an extra 3 inches of LCD real estate for a GPU.
Yes, and what you need to consider once again is chip revenue, not unit sales.

Of course the market doesn't only consist of people such as those that I previously mentioned. There are users (such as ourselves) who are looking to squeeze performance from every component, and who can measure the costs and benefits of choosing higher GPU performance over...CPU performance (for example).
The facts disagree with you there: HP is gaining share, Dell is losing share. Which of the two has used the most discrete GPUs in recent years? In practice, it's not just about what OEMs decide; it's also about what consumers decide and how that affects each company's market share.

I guess what I'm trying to say (in summary) is that:
- The majority of consumers ("mainstream consumers") won't see much benefit from CUDA, and thus OEMs would be better off luring consumers with factors other than a discrete [NVIDIA] GPU;
- Intel does recognise that GPUs and CPUs are heading towards a collision course, and that they're doing something about it (in the form of Larrabee and in the effort of exponentially improving their IGPs).

Yeah, as I said, I'm just not convinced this collision course makes sense for them financially when you consider chip revenue rather than end-product revenue.
No argument there, Intel has a financially better position in this war.
All the people who think otherwise (and there are many, because Intel is such the industry darling) need only look at Microsoft and Google. Just because you are bigger doesn't mean you will automatically create a superior product or win in the marketplace.
I have an argument here. How lean is Intel? They made some cuts in recent years, but they still have an awful lot of employees and expenses. What would happen if their average CPU price were cut in half or more? If the CPU becomes less important to PC makers, why wouldn't this be the case?
And the gains from CUDA are so dramatic it doesn't even matter. Photoshop and video transcoding are the tip of the iceberg. The types of applications that will be exciting in the future are going to benefit from GPUs, or whatever you want to call these devices.
Guess what - NVIDIA charges a lot less for them than Intel does for a CPU.
Guess what else - PC makers are listening, because the differences in performance and price are dramatic. So if you are a PC maker, you can sell a better machine for less or take more profit for yourself.
It's hard to know how competitive you are when you are a monopolist; there is no real competition to compare yourself to. These companies are out to maximize profits, and that is what they have been doing. But is there a real economic basis for the prices Intel and Microsoft have been charging? I would argue no. There certainly is a large price umbrella for a well-positioned newcomer to take advantage of.
There's a big difference here in that Microsoft is operating under a microscope. Any misstep immediately puts them back into "juicy lawsuit" territory. Also, R&D for software, while similar, is very different from R&D for hardware. Hardware just has to work and be fast. Software has to work, but it doesn't necessarily have to be fast (in the areas where Google and MS are competing, if it's slow you can always throw more server clusters at it), and it also has to have some "unquantifiable" attraction to consumers that has virtually nothing to do with how well it performs.
(Disclaimer: these slides are from NV's analyst day; I can't find the original Intel slides on their site.)

While CPUs and the chipsets they run on are the lion's share of Intel's revenue, they are hardly the only source of revenue Intel has. And that point ties in directly to their renewed R&D into GPUs. Likewise, they continue with R&D in various other technologies.
Relying on only one line of products for your revenue stream is a recipe for the eventual death or marginalization of your company.