NVIDIA GF100 & Friends speculation

not many companies only ship one product

If they did it all the time then it would not matter how many products they shipped. Did you mean that many companies ship one product at a loss at any given time? Even that may be untrue, though it is plausible given the need to clear inventory. Either way, it is unrelated to the concept of selling at a loss because, under certain conditions, it is more rational to sell at a small loss than not to sell at all.
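That "certain conditions" clause is just the classic sunk-cost/marginal-cost distinction. A quick sketch with made-up numbers (all prices and costs here are hypothetical, just to illustrate the decision rule):

```python
# Illustrative only: why selling below full cost can still be rational.
# The decision hinges on price vs. marginal (unit) cost, not full cost.

def contribution(price, unit_cost, units):
    """Cash contribution from selling `units` at `price`, where each
    additional unit costs `unit_cost` to produce and ship."""
    return (price - unit_cost) * units

# Suppose full cost (with amortized R&D) is $120/card, but the
# marginal cost of one more card is only $90.
full_cost, marginal_cost = 120, 90

# Selling 100k cards at $100 is a "loss" measured against full cost...
loss_vs_full_cost = (100 - full_cost) * 100_000           # -2,000,000

# ...but it still generates positive cash compared to not selling.
cash_if_sold = contribution(100, marginal_cost, 100_000)  # +1,000,000
cash_if_not_sold = 0

print(cash_if_sold > cash_if_not_sold)  # True: better to sell
```

As long as price exceeds marginal cost, every unit sold improves the cash position, even if the product never recoups its development cost.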
 
Been messing around with 3dm11 a bit to see what makes it tick. Here's a quick summary of gtx 580 feature performance on game test 4 (the most comprehensive one I think). Will post if I come across anything interesting but nothing particularly newsworthy so far. Tessellation and shadow map size seem to have the biggest impact, after resolution of course.

Oh, my "medium" settings are close to the performance preset, except with added 16xAF and surface shadow sample count reduced to 8 from 16.

[Attachment: gtx5803dm11.png]
 
Nvidia seem very wary of making the same mistakes; that is why they have expanded into HPC and SoCs, areas where the traditional CPU makers are completely out of the game. Moorestown is terrible from what I have seen and AMD don't have anything in that range, while Nvidia are inking deals to supply 10-15 devices with Tegra 2 next year, increasing their non-traditional revenue and protecting them further from market shocks (ATi hitting one out of the park and leaving them busted, like AMD after Core).

It might be OT here, but the road for real profit seems long for Tegra2. If you want to forget Moorestown be my guest for next year; but if you start comparing Tegra2 to something like TI's OMAP4 or what Apple will dish out soon with its next SoC...errr yes the road is long.
 
So you're sure that the GeForce line is in positive profit because you're lumping in the high profits from the professional line based on the same family of chips? That's moving the goalposts, but I see where you are coming from.

It seems to me that Nvidia would rather like to make some/more profit from the consumer cards too, instead of burning cash on them to keep market share, and not have to use the professional products to subsidise the consumer cards.

The Nvidia numbers are broken down in their most recent 10Q filing (p. 27) which covers the quarter that ended October 31 (unlike prior quarters, there were no charges to distort figures):

GPU (GeForce, Chipsets): Rev 582M Operating profit 49M
PSB (Quadro, Tesla): Rev 210M Operating profit 87M
CPB (Tegra, console royalties): Rev 52M Operating loss 32M

Compare that against AMD graphics which managed only a 1M operating profit. It's not quite apples-to-apples since the AMD numbers include professional but not chipsets, and the Nv GPU numbers include chipsets but not professional, but it's still illustrative. Charlie's prognostications of Nv die size induced d00m don't seem to hold much connection to reality.
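The segment figures above can be turned into operating margins directly (revenue and operating profit in $M, as quoted from the 10-Q):

```python
# Operating margin per segment from the figures quoted above.
segments = {
    "GPU (GeForce, chipsets)": (582, 49),
    "PSB (Quadro, Tesla)":     (210, 87),
    "CPB (Tegra, royalties)":  (52, -32),
}

for name, (rev, op) in segments.items():
    print(f"{name}: {op / rev:.1%} operating margin")
# GPU ≈ 8.4%, PSB ≈ 41.4%, CPB ≈ -61.5%
```

Which makes the contrast stark: the professional segment earns a ~41% operating margin on roughly a third of the GPU segment's revenue.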
 
Depends on how they distribute the R&D...

True, but given professional's insane 87m operating profit on 210M revs, it sure doesn't seem like they are shifting a lot of costs over to that side. Either that, or professional is such a cash cow it's capable of supporting the whole company by itself, including (at this point) heavy tegra losses.
 
True, but given professional's insane 87m operating profit on 210M revs, it sure doesn't seem like they are shifting a lot of costs over to that side. Either that, or professional is such a cash cow it's capable of supporting the whole company by itself, including (at this point) heavy tegra losses.

Well, Quadros do have extremely high margins… Not saying that's definitely the case, but it doesn't strike me as unlikely.
 
582 million revenue 49 profit

Do you think everything in that line is making 8% profit? Or do you think that some are making more and some less?
 
582 million revenue 49 profit

Do you think everything in that line is making 8% profit? Or do you think that some are making more and some less?

That includes all the expenses allotted to GPU, including R&D, advertising, administrative, etc.; the gross margins would be much, much higher. If they were selling the 460 at a loss, they must have been making very large margins somewhere else. Since this quarter was before the launch of the 580/570, you tell me where those high-margin products in GPU were?
 
That includes all the expenses allotted to GPU, including R&D, advertising, administrative, etc.; the gross margins would be much, much higher. If they were selling the 460 at a loss, they must have been making very large margins somewhere else. Since this quarter was before the launch of the 580/570, you tell me where those high-margin products in GPU were?

What about an 8% operating profit suggests that anything was making "very large margins"?

They could be making 20% on one chip, 10% on everything else and losing buckets of money on the 460, and still nothing would be making large margins. I really don't know, but there's nothing in that posting that says the 460 was profitable. Of course the opposite could be true and the 460 could have been the only profitable chip. It neither proves nor disproves anything.
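A quick illustration of how a blended margin hides the per-product picture. The product mix and per-product margins below are entirely invented; the only constraint is that they blend to roughly the reported 8%:

```python
# A blended ~8% operating margin says nothing about individual products.
# Hypothetical mix: (name, revenue in $M, per-product operating margin).
mix = [
    ("high-end chip", 150,  0.20),    # healthy 20% margin
    ("mid-range",     300,  0.10),    # modest 10% margin
    ("GTX 460-like",  132, -0.085),   # sold at a loss
]

revenue = sum(rev for _, rev, _ in mix)
profit  = sum(rev * m for _, rev, m in mix)
print(f"blended: {profit / revenue:.1%}")  # blended: 8.4%
```

Many different mixes, including ones where some product loses money, produce the same blended total.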
 
That includes all the expenses allotted to GPU, including R&D, advertising, adminstrative ect

No. It's unlikely that it includes all the R&D etc.
But it does include chipsets, which is basically nothing but licensing and gross margins...
If PSB is making 2/3 of the profit, wouldn't it be fairest for it to get at least half the common expenses too? Of course you could also be more... creative, and make a split ensuring that both lines look reasonably profitable.
I'm not saying their consumer GPUs as a whole aren't making a positive gross margin, but from those few numbers we can't conclude much about how big it is.
 
What about an 8% operating profit suggests that anything was making "very large margins"?

They could be making 20% on one chip, 10% on everything else and losing buckets of money on the 460, and still nothing would be making large margins. I really don't know, but there's nothing in that posting that says the 460 was profitable. Of course the opposite could be true and the 460 could have been the only profitable chip. It neither proves nor disproves anything.

The 8% is net overall profit. It has nothing to do with gross margins, except that IF a company runs a profit, gross margins must automatically be much, much higher. The opposite is not true: I've worked for years for companies with very decent 40%+ gross margins, yet the company was never profitable as a whole because volumes didn't pick up as expected.

The nature of all fabless semiconductor companies is that you have to make large up-front investments to develop a product. This is what generates most of the losses, which you try to overcome by selling as many units as you can, for as much as you can, for as long as you can. Only when gross margins are negative does it become unprofitable to continue selling (or when there are capacity constraints and other products fetch a higher gross margin). No company will ever do that; there is absolutely no reason.

So the point about positive gross margins is that it's an indication that it makes sense to produce and sell more of the same product. There is a difference between a company making a profit and selling a product at a profit.

It's completely pointless to include R&D costs in the 'selling profitably' equation for a product whose expenses have already been accounted for: Wall Street has long forgotten about the R&D expenses required to bring a GTX 460 to market, and for a fabless company there's very little to write off over the long term.
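The gross-margin-vs-overall-profit distinction above, with invented numbers: once the R&D is spent, it is sunk, and every sale above unit cost improves the cash position even if the product never earns its R&D back.

```python
# Gross margin vs. overall result, per the distinction above.
# All figures are hypothetical.
unit_cost, price, units = 60.0, 100.0, 1_000_000
rnd_already_spent = 60_000_000          # sunk before the first sale

gross_margin = (price - unit_cost) / price        # 40% per unit
gross_profit = (price - unit_cost) * units        # $40M in cash
overall      = gross_profit - rnd_already_spent   # -$20M "loss"...

# ...yet selling was still the right call: compare cash positions.
cash_with_sales    = -rnd_already_spent + gross_profit   # -$20M
cash_without_sales = -rnd_already_spent                  # -$60M
print(cash_with_sales > cash_without_sales)  # True
```

A company with a healthy 40% gross margin can still report a loss; the reverse (a profit with negative gross margins) essentially cannot happen.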
 
What about an 8% operating profit suggests that anything was making "very large margins"?

They could be making 20% on one chip, 10% on everything else and losing buckets of money on the 460, and still nothing would be making large margins. I really don't know, but there's nothing in that posting that says the 460 was profitable. Of course the opposite could be true and the 460 could have been the only profitable chip. It neither proves nor disproves anything.

Margins are calculated on gross profit, at least the ones we talk about for semiconductor companies. A healthy tech company makes a net profit margin between 5-15%; more than that and you end up starving your R&D, and less than that is a problem for obvious reasons.

On that basis, Nvidia are doing pretty well overall while AMD and ATi are in the crapper. Also this is before Intel pay out a massive settlement or give Nvidia a QPI licence and AMD already have their $1bn settlement.

When it comes to business and finance, Charlie doesn't know what he is talking about. Nvidia are perfectly healthy, their lineup is expanding outside of their core business, and they are still making a profit (which is difficult to do if you aren't Microsoft, Apple or Sony these days). They have around 20 design wins for Tegra 2, 10 of which have been productised and will be shown at CES, while the other 10 are in the prototype stage. For a company that doesn't specialise in SoCs that is a pretty good start, and with Tegra 3 looking very nice (lower power consumption, better graphics, faster dual A9) and Honeycomb on the way in Q1 with Tegra 2 as the default platform, I think Nvidia are going to be fine.
 
For Cypress, this is true. Can't find it now, but there was some article running the HD 5850 and HD 5870 at the same clocks. Other than synthetics, the performance difference was 0.5-4% (for 10% more shaders), with an average of about 2% IIRC. Or just look at Barts: if more SPs meant much, it would be nowhere near Cypress performance. (Don't forget, these 10% more shaders use nowhere near 10% of the die area, so it's not quite as bad as it sounds.) The same is true for Cayman, btw, but I haven't seen anyone run tests on the HD 6950 and HD 6970 at the same clock (though that SIMD scaling must be near nonexistent follows from the clock and performance difference between those two).
That said, I think there's reason to believe GF104/GF114 (or GF100/GF110 for that matter) scale better with more SMs. I was assuming something like roughly 50% scaling, so for that additional SM (+17%) you'd indeed get somewhere around a 10% improvement.

http://www.xbitlabs.com/articles/video/display/radeon-hd5850_6.html#sect0

http://www.xbitlabs.com/articles/video/display/radeon-hd6970-hd6950_7.html#sect4
 
Ah, I think I had a different article in mind, but this one is fine. Though it looks like the error margin is quite high, as the overclocked HD 5850 at HD 5870 clocks sometimes even beats the HD 5870... In any case, it illustrates the point quite well.

Scaling isn't quite as bad as I feared, though the clocks are unfortunately not the same (slightly lower core clock but quite a bit higher memory clock on the OC HD 6950 compared to the HD 6970). Due to that, it's near impossible to isolate SIMD scaling.
Also, the article did not mention whether the power limit was also raised (I suspect it could otherwise have an impact when overclocking), and I have concerns about GDDR5 overclocking (we know this can get slower due to (correctable) errors when overclocked too far). I'd prefer an HD 6970 at HD 6950 clocks for comparison for that reason (or an HD 6950 clocked to HD 6970 levels with the exact same clocks and the power limit raised).
 
The Nvidia numbers are broken down in their most recent 10Q filing (p. 27) which covers the quarter that ended October 31 (unlike prior quarters, there were no charges to distort figures):

GPU (GeForce, Chipsets): Rev 582M Operating profit 49M
PSB (Quadro, Tesla): Rev 210M Operating profit 87M
CPB (Tegra, console royalties): Rev 52M Operating loss 32M

Assuming that all chipset and GPU profit were made at the same rate of 8% is of course wrong. Low end GPUs and especially chipsets have appreciably higher profit margins.

Looks like some GPU lines were sold without profit if these numbers are correct. Are you telling us that Charlie was right all the time? :devilish:

Take the GTX 460: price cuts have been much larger than 8% over several months. Were all of those price levels profitable, or was it sold at a loss from the beginning?
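The arithmetic behind that question, with hypothetical numbers: if the launch margin really were around the blended 8-10%, any price cut larger than that margin drops the card below break-even unless costs fall as well.

```python
# Hypothetical: a card launching at $229 with a 10% margin,
# assuming the cost side stays fixed as the price drops.
launch_price = 229.0
cost = launch_price * 0.90   # implies a 10% margin at launch

for price in (229, 199, 179):
    margin = (price - cost) / price
    print(f"${price}: {margin:.1%}")
# 10.0% at launch, then negative at $199 and $179
```

Of course, in practice yields improve and component prices fall over a product's life, so the cost side rarely stays fixed; this only shows why large cuts on a thin margin raise the question.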
 
Assuming that all chipset and GPU profit were made at the same rate of 8% is of course wrong. Low end GPUs and especially chipsets have appreciably higher profit margins.

Looks like some GPU lines were sold without profit if these numbers are correct. Are you telling us that Charlie was right all the time? :devilish:

Take the GTX 460: price cuts have been much larger than 8% over several months. Were all of those price levels profitable, or was it sold at a loss from the beginning?

Thank you so much for your insight! It's great to have somebody on board who understands the accounting intricacies related to fabless companies!

Do you mind if I ask a few questions?
  • How do you define 'profit margin'? According to Wikipedia: "It is difficult to accurately compare the net profit ratio for different entities. Individual businesses' operating and financing arrangements vary so much that different entities are bound to have different levels of expenditure, so that comparison of one with another can have little meaning." Can you clarify how Nvidia uses it between different entities?
  • If we assume that 95% of the development/code between high-end and low-end chips is shared, how do you spread opex between high-end and low-end chips? You could arbitrarily assign everything to the high-end product and nothing to the low-end product, or vice versa, and dramatically change the profit margin for either product. (I'm assuming you include R&D in this, because otherwise 8% wouldn't make sense given the known 40%+ gross margins.)
  • How do you take into account general GPU marketing efforts that don't explicitly target a particular high-end or low-end card?
  • We know that the professional products and consumer products use the same silicon. How do you spread the opex between those two product lines? It only seems fair that you pro-rate them according to revenue, right? Or would you do it according to chip volume? In the former case, the consumer products are guaranteed to be much more profitable than AMD's (who have no professional revenue to speak of and have to charge all opex to their consumer products). In the latter case, you'd burden the high-volume consumer products with unused professional features.
  • How does the gross margin factor into this whole story? How high do you think it is? Do you think Nvidia ever buys a working die from TSMC and sells it for less than it cost to produce? (That would be intensely stupid, right?) If not, then they have positive gross margins and it is beneficial to keep on selling their wares to generate additional cash. Can you call that 'selling for a profit', even if 1 or 2 years later it turns out that earlier R&D investments were not fully recouped? If not, what would you call it?

I look forward to your expert opinion regarding these matters! Feel free to consult your mentor first. (While you do so, can you also ask him to revise an accounting related article he edited roughly a year ago? Looking back, it doesn't exactly seem to stand the test of time, especially the part where, based on a bogus opex premise, it states that they "gave up on this generation and has nothing new in its pipeline for the short term, so it is looking forward to the next chip generation".)
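The opex-allocation point is easy to make concrete. All numbers below are hypothetical (the volumes and shared opex figure especially); the only thing being shown is how much the choice of allocation key moves the result:

```python
# Two ways to split shared opex between consumer and pro lines
# that use the same silicon. Figures in $M; volumes are invented.
revenue = {"consumer": 582, "pro": 210}
units   = {"consumer": 10_000_000, "pro": 400_000}
shared_opex = 150   # hypothetical shared R&D/marketing/admin

def allocate(weights, total):
    """Pro-rate `total` across keys in proportion to `weights`."""
    s = sum(weights.values())
    return {k: total * v / s for k, v in weights.items()}

by_revenue = allocate(revenue, shared_opex)
by_volume  = allocate(units, shared_opex)
print(by_revenue)  # consumer carries ~73% of the opex
print(by_volume)   # consumer carries ~96% of the opex
```

Same costs, same products, yet the reported per-segment profitability shifts dramatically depending on whether you allocate by revenue or by chip volume, which is exactly why segment margins alone prove little about individual products.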
 
silent_guy: I don't often regret the forum's reputation system, but... wow :)
But I think it would be best if tannat started by explaining the difference between gross/operating/net margins - too many of us heathens just don't get it.
 
Thank you so much for your insight! It's great to have somebody on board who understands the accounting intricacies related to fabless companies!

Do you mind if I ask a few questions?
  • How do you define 'profit margin'? According to Wikipedia: "It is difficult to accurately compare the net profit ratio for different entities. Individual businesses' operating and financing arrangements vary so much that different entities are bound to have different levels of expenditure, so that comparison of one with another can have little meaning." Can you clarify how Nvidia uses it between different entities?
  • If we assume that 95% of the development/code between high-end and low-end chips is shared, how do you spread opex between high-end and low-end chips? You could arbitrarily assign everything to the high-end product and nothing to the low-end product, or vice versa, and dramatically change the profit margin for either product. (I'm assuming you include R&D in this, because otherwise 8% wouldn't make sense given the known 40%+ gross margins.)
  • How do you take into account general GPU marketing efforts that don't explicitly target a particular high-end or low-end card?
  • We know that the professional products and consumer products use the same silicon. How do you spread the opex between those two product lines? It only seems fair that you pro-rate them according to revenue, right? Or would you do it according to chip volume? In the former case, the consumer products are guaranteed to be much more profitable than AMD's (who have no professional revenue to speak of and have to charge all opex to their consumer products). In the latter case, you'd burden the high-volume consumer products with unused professional features.
  • How does the gross margin factor into this whole story? How high do you think it is? Do you think Nvidia ever buys a working die from TSMC and sells it for less than it cost to produce? (That would be intensely stupid, right?) If not, then they have positive gross margins and it is beneficial to keep on selling their wares to generate additional cash. Can you call that 'selling for a profit', even if 1 or 2 years later it turns out that earlier R&D investments were not fully recouped? If not, what would you call it?

I look forward to your expert opinion regarding these matters! Feel free to consult your mentor first. (While you do so, can you also ask him to revise an accounting related article he edited roughly a year ago? Looking back, it doesn't exactly seem to stand the test of time, especially the part where, based on a bogus opex premise, it states that they "gave up on this generation and has nothing new in its pipeline for the short term, so it is looking forward to the next chip generation".)

Nvidia presents a total of 8% profit across all GeForce GPU/chipset lines. Some parts give more profit, some give less. That the cards are sold at quite different price levels over their lifetime, moving far more than that 8% range, tells me that at least at some point they are sold without profit, according to Nvidia's own numbers. I don't see what is so provocative about that insight that it warrants such a heated stance.

Well well.

silent_guy: I don't often regret the forum's reputation system, but... wow :)
But I think it would be best if tannat started by explaining the difference between gross/operating/net margins - too many of us heathens just don't get it.

Hey, I don't claim to be an expert. The exact differences are hard for me to define even in Swedish (my native tongue). I'm happy to trust Nvidia's numbers and definitions here.

I'm quite good at basic math, though. With 8% total/mean profit for the GeForce products, it looks quite wishful to believe that all products were sold at a profit. And frankly, it's no big deal if they weren't.
 