NVIDIA Kepler speculation thread

How do you explain AMD making a tiny profit on graphics all year with their perf/mm2 advantage, yet Nvidia making ~$500m? Nvidia's professional sector is shoring up their losses in consumer and has been for years - it really is that simple. What else could it be? These price wars might be great for us but they are not working for either company.

nVidia has a big advantage in volume, which helps them offset the NREs. There have been years when nVidia's consumer business has brought in big profits with big dies, but it's a strategy that doesn't have much room for manufacturing trouble, especially now that they have to pay per wafer instead of per working chip.

Also, NRE breakdowns aren't fully out in the open in financial statements, so it's harder to get a 100% accurate picture of the situation. Nevertheless, I agree that the professional line is very important for their bottom line, but that segment also benefits from the large volumes of their consumer GPUs. The professional products wouldn't be the same without the other segment. I think there are huge synergy benefits for them.
 
How do you explain AMD making a tiny profit on graphics all year with their perf/mm2 advantage, yet Nvidia making ~$500m? Nvidia's professional sector is shoring up their losses in consumer and has been for years - it really is that simple. What else could it be? These price wars might be great for us but they are not working for either company.

The professional and HPC market certainly plays into that.

However, Nvidia also has greater volume at the moment than AMD. Hence static costs (costs not associated with per chip manufacturing) can be amortized over a larger amount of revenue. That will lead to greater profit overall.

Plus, AMD has moved the single largest volume of GPU sales out of the GPU division and into the CPU division, lowering the GPU revenue stream and hence leaving less revenue with which to amortize those static costs (R&D, operational costs, etc.).

This isn't to say that professional and HPC isn't a significant chunk of why Nvidia is able to post higher profits despite larger dies, but it isn't the only reason.

Regards,
SB
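The amortization argument above can be sketched as a toy calculation. All numbers below are made up for illustration; they are not actual Nvidia or AMD figures.

```python
# Illustrative sketch of fixed-cost amortization over volume.
# Prices, costs, and volumes are invented for the example.

def profit(units, price, marginal_cost, fixed_costs):
    """Profit = per-unit margin times volume, minus static costs (R&D etc.)."""
    return units * (price - marginal_cost) - fixed_costs

fixed = 600e6                          # assumed static costs: R&D, masks, operations
low = profit(5e6, 150, 60, fixed)      # 5M units: margin doesn't cover fixed costs
high = profit(12e6, 150, 60, fixed)    # 12M units: same margin, amortized further

print(low)    # negative at low volume
print(high)   # positive at high volume
```

The per-unit economics are identical in both cases; only the volume over which the static costs are spread differs.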
 
nVidia has a big advantage in volume, which helps them offset the NREs. There have been years when nVidia's consumer business has brought in big profits with big dies, but it's a strategy that doesn't have much room for manufacturing trouble, especially now that they have to pay per wafer instead of per working chip.

Also, NRE breakdowns aren't fully out in the open in financial statements, so it's harder to get a 100% accurate picture of the situation. Nevertheless, I agree that the professional line is very important for their bottom line, but that segment also benefits from the large volumes of their consumer GPUs. The professional products wouldn't be the same without the other segment. I think there are huge synergy benefits for them.

I'm not necessarily disagreeing with that, but AMD has advantages of their own as well, like die sizes and time to market. It really doesn't appear to be helping their bottom line a whole lot, though.

AMD's graphics segment made a profit of $51m last year - that's everything graphics including the professional and console markets. $51 million...how can anyone think the 7-series cards are overpriced?

If you just consider, say, the 6950 vs the 560: sure, the 6950 has a larger die, but it's selling for more, which more than offsets that disadvantage. How much of that measly $51 million can that one card account for? Not a lot, and in all probability it's been a loss or at best a minuscule profit; on that basis it's highly likely that Nvidia's midrange/performance cards are also loss makers.
 
The professional and HPC market certainly plays into that.

However, Nvidia also has greater volume at the moment than AMD. Hence static costs (costs not associated with per chip manufacturing) can be amortized over a larger amount of revenue. That will lead to greater profit overall.

Plus, AMD has moved the single largest volume of GPU sales out of the GPU division and into the CPU division, lowering the GPU revenue stream and hence leaving less revenue with which to amortize those static costs (R&D, operational costs, etc.).

This isn't to say that professional and HPC isn't a significant chunk of why Nvidia is able to post higher profits despite larger dies, but it isn't the only reason.

Regards,
SB

Again I'm not necessarily disagreeing with that - Nvidia does have certain advantages that should help, but AMD also has their own advantages.

I just think you have to look at what each company has: Nvidia has, what, ~$800 million in Tesla and Quadro revenue vs AMD's... $100 million if they're lucky? We know that's where the real profits are made, and the $530 million or so gap between Nvidia's and AMD's graphics divisions is, for me, highly likely to be made in those areas.
 
You seem to have forgotten that I was talking about profit (which I mentioned 3 times btw) and you started talking about gross margin.

Yes, you're talking about profit and at the same time keep referring to die-sizes. Guess you didn't get the hint :)

Die size directly impacts gross margin. Profit is a far more complex beast that takes into consideration operating efficiency, sales volume, R&D and other overhead costs. Using the right words is far more important than using a lot of them...
 
Fans, VRMs, PCBs - it all adds up.
Then why don't you add it up? For reference.

You seem to have forgotten that I was talking about profit (which I mentioned 3 times btw) and you started talking about gross margin.
Like I said: this has been discussed to death. Here.

How do you explain AMD making a tiny profit on graphics all year with their perf/mm2 advantage, yet Nvidia making ~$500m? Nvidia's professional sector is shoring up their losses in consumer and has been for years - it really is that simple. What else could it be?
I don't disagree with anything you claim here. I disagree with your claim that Nvidia (and AMD for that matter) are selling at a price that's below the marginal cost to produce each extra die. (Which is the very definition of negative gross margins.)
The profit as a company has no relevance in this: you can sell a die that costs $5 to produce at $100, but still are deep in the red if volumes are low and NRE is high.

Nvidia claims its GeForce division is roughly break-even, but their professional segment is highly profitable. That means their accountants chose to put most of the NRE and marketing costs on the GeForce account. It doesn't say one bit about the price at which they sell a die.
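The $5-die-at-$100 point above can be put in numbers: a 95% gross margin can coexist with a deep net loss when volume is low and NRE is high. The figures below are illustrative only, not real financials.

```python
# Gross margin vs. company profit: the two can point in opposite directions.
# All numbers invented for illustration.

unit_cost, price = 5.0, 100.0
gross_margin = (price - unit_cost) / price     # 0.95 -- very healthy per die

volume = 100_000                               # assumed low sales volume
nre = 50e6                                     # assumed one-time engineering cost

net = volume * (price - unit_cost) - nre       # $9.5M gross profit vs $50M NRE
print(gross_margin, net)                       # 0.95, -40500000.0
```

So positive (even excellent) gross margins say nothing by themselves about whether the product line is in the black.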
 
If so. :???:
Why do they hurry to EOL products which could be produced for a few more months at lower cost and lower prices (discounts too), and thus higher volume?
See an example: at the moment there is one R6970 at the ridiculous price of $410. :???:
There are probably multiple factors at work:
- The cards are already in the channel. Lowering the prices would hurt the vendors?
- You don't want to lower the price of particular product segment? Somewhat similar to real estate agents throwing in a ton of freebies during the waning days of the housing bubble: free washers/dryers/fridge etc. Instead of lowering the price outright (which hurts the comps in the whole neighborhood), they keep the price officially high but reduce it by other means. From this point of view, Nvidia's product renaming is smart: you're less likely to piss off a GTX9800 buyer if you keep its price and introduce a differently named product that plays in a lower price segment than if you reduce the original price a lot.
- Other reasons?
 
A supposed 3DMark11 Extreme benchmark image, original source appears to be here.

GK104_BINCHMARK.jpg
 
One thing I'm wondering...

Now that Nvidia is rumored to be able to do at least 3 monitors on one card, I wonder how the 2 GB is going to impact performance at multimonitor resolutions. Yes it's still a niche segment, albeit a fairly important one for this class of cards (just like high resolution 2560x1440 or 2560x1600), IMO.

Regards,
SB
 
If FXAA and other post-process AA keep gaining momentum it may matter less. So far it's getting good traction with native support - BF3, Deus Ex, Skyrim, etc. UE3 will probably adopt it too.
 
The professional and HPC market certainly plays into that.

However, Nvidia also has greater volume at the moment than AMD. Hence static costs (costs not associated with per chip manufacturing) can be amortized over a larger amount of revenue. That will lead to greater profit overall.

Plus, AMD has moved the single largest volume of GPU sales out of the GPU division and into the CPU division, lowering the GPU revenue stream and hence leaving less revenue with which to amortize those static costs (R&D, operational costs, etc.).



Regards,
SB

There's a significant difference between the segmentation that's presented in the public financial statements and the managerial accounting that's used to make the actual decisions. After all, Llano, Trinity and Brazos wouldn't be much without their GPU component.
 
One thing I'm wondering...

Now that Nvidia is rumored to be able to do at least 3 monitors on one card, I wonder how the 2 GB is going to impact performance at multimonitor resolutions. Yes it's still a niche segment, albeit a fairly important one for this class of cards (just like high resolution 2560x1440 or 2560x1600), IMO.

Regards,
SB

2GB is 2little for multi-monitor.
 
2GB is 2little for multi-monitor.

Can you please lay out the math for us? Multi-Monitor for me starts at two, going upwards obviously.
Now, 2x 1920x1080 is still not significantly more pixels and therefore buffer size than a single 25x16 display.

Additionally, I did not see too many people with a 6900 series complain about too few gigabytes of vidmem.
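For what it's worth, the pixel arithmetic behind that comparison checks out (a quick sanity check of my own, not from the original post):

```python
# Pixel counts: two 1920x1080 panels vs. a single 2560x1600 display.
dual_1080p = 2 * 1920 * 1080      # 4,147,200 pixels
single_25x16 = 2560 * 1600        # 4,096,000 pixels

ratio = dual_1080p / single_25x16
print(ratio)                      # 1.0125 -- barely 1% more pixels
```

So a dual-1080p setup does indeed carry almost exactly the same framebuffer load as one 25x16 monitor.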
 
The "2 GHz" QDR might be a typo, as all the other info in that report points to 1.5 GHz QDR: 6 GHz effective, 256-bit bus, 192 GB/s bandwidth. (I'm guessing there's no such thing as T(riple)DR memory?)

EDIT: Or maybe Dynamic clock works with memory too?

It's a typo that made its way quite oddly from an original Theo piece... My point is that if they can't figure out that 2GHz QDR is not really 6GHz effective, there's not much insight to be expected anyhow.
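The arithmetic behind the typo argument, for anyone checking (my own sketch of the usual bandwidth formula, treating QDR as four transfers per clock):

```python
# QDR memory bandwidth: 1.5 GHz QDR matches the reported 6 GHz effective
# and 192 GB/s on a 256-bit bus; 2 GHz QDR would give 8 GT/s, not 6.

def qdr_bandwidth(clock_ghz, bus_bits, transfers_per_clock=4):
    effective = clock_ghz * transfers_per_clock   # GT/s ("effective GHz")
    gbps = effective * bus_bits / 8               # GB/s across the full bus
    return effective, gbps

print(qdr_bandwidth(1.5, 256))  # (6.0, 192.0) -- consistent with the report
print(qdr_bandwidth(2.0, 256))  # (8.0, 256.0) -- can't be "6 GHz effective"
```

Only the 1.5 GHz figure is consistent with all three reported numbers at once.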
 
Can you please lay out the math for us? Multi-Monitor for me starts at two, going upwards obviously.
Now, 2x 1920x1080 is still not significantly more pixels and therefore buffer size than a single 25x16 display.

Additionally, I did not see too many people with a 6900 series complain about too few gigabytes of vidmem.

2 monitors sucks for gaming. You can start at 2, but looking straight into a bezel is less than ideal.
 
Can you please lay out the math for us? Multi-Monitor for me starts at two, going upwards obviously.
Now, 2x 1920x1080 is still not significantly more pixels and therefore buffer size than a single 25x16 display.

Additionally, I did not see too many people with a 6900 series complain about too few gigabytes of vidmem.

3x 1920x1200 is my default with 4 or 8xAA... Now try that with SGSSAA and 2GB with textures, etc. I don't have test data and might certainly be wrong, but 3GB would make me feel better as I look for monitors 4 & 5.
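A rough back-of-the-envelope for render-target memory at that resolution (my own simplified estimate; real drivers use compression and allocate extra buffers, so treat these as ballpark figures, before any textures):

```python
# Ballpark render-target footprint at 3x 1920x1200 with multisampling.
pixels = 3 * 1920 * 1200            # 6,912,000 pixels across three panels
bytes_per_sample = 4 + 4            # 32-bit color + 32-bit depth/stencil

for samples in (1, 4, 8):           # no AA, 4x MSAA, 8x MSAA
    mb = pixels * bytes_per_sample * samples / 2**20
    print(f"{samples}x: ~{mb:.0f} MB")
```

Even these simplified numbers show AA sample storage eating a meaningful slice of a 2GB card at triple-monitor resolutions.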
 
3x 1920x1200 is my default with 4 or 8xAA... Now try that with SGSSAA and 2GB with textures, etc. I don't have test data and might certainly be wrong, but 3GB would make me feel better as I look for monitors 4 & 5.
By your logic you would never have had a multi-monitor system, since even AMD capped the VRAM at 2GB until very recently.

So please tell us why your AMD multi-monitor with only 2GB of VRAM works fine but Nvidia with 2GB won't?

And as for only 2GB you must have missed this post that states 4GB models will be coming from partners.

http://vr-zone.com/articles/nvidia-...r-dynamic-clocking-2-and-4gb-gddr5/15148.html
 