AMD: R9xx Speculation

Well...

Tegra is a money loser, Tesla isn't a money maker, GF100 was a debacle...
vs.
Cypress has higher margins, Cypress has better yields, the FireGL line sells well...

Can anybody explain why nVidia earned $137M and $131M during the last two quarters, while ATi earned only $33M and $47M?

All the renamed DX10.1 stuff and Quadros, probably. Quadros are still very profitable. Plus, they have (had?) more wafer allocation. But in case you haven't heard, NVIDIA drastically reduced their revenue guidance for Q2 a few days ago.

PS: Do FireGL cards really sell all that well?
 
Thanks, this explanation doesn't lack logic :)

Back to the topic - nVidia probably makes money on Quadros, so a price reduction of the GTX 460 won't harm them as much as ATi, who makes money primarily on gaming GPUs.
 
Unless TSMC is able to supply more wafers—which they should—and therefore AMD can sell more cards and make more money… :p
 
Thanks, this explanation doesn't lack logic :)

Back to the topic - nVidia probably makes money on Quadros, so a price reduction of the GTX 460 won't harm them as much as ATi, who makes money primarily on gaming GPUs.

Currently, but FireGL is making slow but steady inroads. Professional markets are less prone to quick changes than consumer markets.

No market is safe, but Nvidia can rely on the professional market for a bit longer as long as they don't jump off the deep end and royally screw things up. And while they may not be making money on Tesla, it does feature extremely high margins that can help boost overall margins for the company.

Even so, Nvidia isn't immune to a price war, as the 48xx/GT2xx generation showed us. That price war hurt both companies even with their relative strengths at the time: ATI with its lower cost per chip, and Nvidia with its margin-boosting dominance in the professional markets.

Regards,
SB
 
Other things to consider:
- TSMC raised 40nm wafer prices in early spring since customer demand exceeded capacity, which probably hurt AMD more than NV at that time.
- Nvidia may have had excess inventory of GT21x and G9x chips that they were selling off during that time-frame (remember the reports that retailers had to buy xx GT 2x0 cards for every GTX 470/480 at launch).
 
Currently, but FireGL is making slow but steady inroads.

What is this based on? Hopefully not one of ATI's traditional presentations about their epic design wins happening there. Because they've had those for years, and they still make $25M a quarter in a market worth hundreds of millions per quarter. If there's any substantial indication of them doing more than they've been doing there for years (which isn't hard, they have done pretty much zilch), please share!
 
Well...

Tegra is a money loser, Tesla isn't a money maker, GF100 was a debacle...
vs.
Cypress has higher margins, Cypress has better yields, the FireGL line sells well...

Can anybody explain why nVidia earned $137M and $131M during the last two quarters, while ATi earned only $33M and $47M?

As far as I know, chipsets and IGPs are not included in the graphics division. They make some money for the CPU division. :)
 
Can anybody explain why nVidia earned $137M and $131M during the last two quarters, while ATi earned only $33M and $47M?
Dirk has already pointed out that notebook has been prioritised in order to fulfill the design wins; notebook primarily utilises the lower end of the stack.

NVIDIA's quarters are also trailing us by one month, so your second comparison point for AMD would be best served comparing against the quarter NVIDIA is about to announce.
 
Dirk has already pointed out that notebook has been prioritised in order to fulfill the design wins; notebook primarily utilises the lower end of the stack.

NVIDIA's quarters are also trailing us by one month, so your second comparison point for AMD would be best served comparing against the quarter NVIDIA is about to announce.

By the way, is the situation for Q3 going to be similar, especially with OEMs preparing for the back-to-school period? Or have they mostly finished preparing for it?
 
What is this based on? Hopefully not one of ATI's traditional presentations about their epic design wins happening there. Because they've had those for years, and they still make $25M a quarter in a market worth hundreds of millions per quarter. If there's any substantial indication of them doing more than they've been doing there for years (which isn't hard, they have done pretty much zilch), please share!

Ah, maybe not then. That had just been a bit of a hunch on my end due to the increased quality and performance of the FireGL line over the past couple years. It's unfortunate if they still aren't able to make any inroads.

Regards,
SB
 
The "salvage part" qualifier has taken on a more pejorative connotation than I think it merits.
We hardly see complaints that RV770 could use slightly less than the full number of ALUs present on its silicon. 16/17ths was possibly the utilization for a "full" chip. By default, no RV770 (and, unless something has changed, no Cypress in the 5870) is necessarily without flaw. The one case where this redundancy is not as effective is the rather brutal cut-down of ROP capability in the 5830.

EDIT:
I reviewed some of what I read earlier, and the fraction should be smaller. I think it was only a fraction of a whole SIMD block that was kept in reserve.

The primary distinction I can see drawn is that none of the scheduling or disclosed data paths had similar redundancy, whereas Nvidia's coarser scheme did leave such things unused. A number of unused crossbars and schedulers is an unfortunate, but not particularly damning distinction, given how negatively it is regarded.
In the CPU realm, we do not see complaints that we cannot use all the cache lines present on our CPUs.
The most recent desktop example I know of where this was even possible was probably AMD's K6, and that was regarded as a serious blunder.

Nvidia's primary sin is that it promised peak capability for its product lines it could not deliver, which is where it ran into trouble with GF100. I have not seen any marketing of that kind for GF104, so perhaps Nvidia has gotten a little wiser.
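The yield argument above can be made concrete with a toy defect model. The sketch below assumes a Poisson defect distribution over independent, identical blocks; the block count, block area, and defect density are invented illustrative numbers, not AMD's or TSMC's actual figures, and real redundancy schemes are more complicated than "any one spare block covers any one defect".

```python
import math

def unit_yield(area_cm2, defect_density):
    """Poisson model: probability a block of given area has zero defects."""
    return math.exp(-area_cm2 * defect_density)

def chip_yield(n_units, spares, unit_area_cm2, defect_density):
    """Probability that at most `spares` of `n_units` identical blocks
    are defective (binomial over independent blocks)."""
    p_good = unit_yield(unit_area_cm2, defect_density)
    total = 0.0
    for bad in range(spares + 1):
        total += (math.comb(n_units, bad)
                  * (1 - p_good) ** bad
                  * p_good ** (n_units - bad))
    return total

# Illustrative numbers only: 10 SIMD-like blocks of 0.1 cm^2 each,
# at a defect density of 0.5 defects/cm^2.
no_spare  = chip_yield(10, 0, 0.1, 0.5)   # every block must be perfect
one_spare = chip_yield(10, 1, 0.1, 0.5)   # one bad block is tolerated
print(f"yield with no spare block:  {no_spare:.2%}")
print(f"yield with one spare block: {one_spare:.2%}")
```

Even with these made-up numbers, tolerating a single bad block lifts the usable-die fraction substantially, which is the whole appeal of building fine-grained redundancy in rather than relying on coarse salvage bins.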
 
Is the RV770 core capable of utilizing more than 800 SPs at once? I don't think we have any proof of that. It's not possible to complain about unused SPs until that question is answered.
 
Is the RV770 core capable of utilizing more than 800 SPs at once? I don't think we have any proof of that. It's not possible to complain about unused SPs until that question is answered.

I made that distinction.
The primary differentiator between the finer granularity on RV770 and GF100 is the number of crossbars and schedulers that are fused off. Some number of paths are fused off in RV770 and Cypress as well; they're just not hypothetically usable the way those on a full GF104 are.

It's a matter of some wires a buyer doesn't get to put to use. So while we can be aware of the SM that is not active, is it really worth dwelling over as far as the product on sale is concerned?
 
It's entirely merited when the chip was first announced as a "512 CUDA core" chip.

I mentioned that as well.
It is still being brought up repeatedly for product lines where no such promises were made, perhaps because "salvage part" is the new bandwagon thing to throw out there to show geek street cred.

It also does not change the fact that those salvage parts are generally the most performant single-chip cards out there.
The consumer woes of power and noise could only be worse if they weren't salvage parts.
The problem of die size is somewhat worse for the fact there is no non-salvage bin, but this is not the most significant problem Nvidia's design faces.

If this were a Phenom X3 with a BIOS unlock, that salvage part would be so much more lovable. Perhaps that should be Nvidia's next marketing ploy.
 
In the CPU realm, we do not see complaints that we cannot use all the cache lines present on our CPUs.
The most recent desktop example I know of where this was even possible was probably AMD's K6, and that was regarded as a serious blunder.
All Phenom X3s are "bad" X4s.
Also, some Semprons were in fact X2s with (probably) one not-quite-good core. At home, one of my CPUs was bought as an X2 and successfully unlocked to an X4 ;) (I bought it knowing in advance that the chance of unlocking to 3 or 4 cores was high.)
I'm pretty sure some Celerons, i3s, etc. are "salvage" parts too.
 
All Phenom X3s are "bad" X4s.
They are chips with one core disabled.
The "bad" part is what I find debatable, aside from some weirdness with some multimedia software that doesn't like non-power of 2 thread counts.

It is a less optimal outcome, where either physical defect or concerns with meeting TDP or allocation lead to a core being turned off.
Bargain hunters don't seem to mind it too much, as the rest of your post suggests. ;)
 
All Phenom X3s are "bad" X4s.
Also, some Semprons were in fact X2s with (probably) one not-quite-good core. At home, one of my CPUs was bought as an X2 and successfully unlocked to an X4 ;) (I bought it knowing in advance that the chance of unlocking to 3 or 4 cores was high.)
I'm pretty sure some Celerons, i3s, etc. are "salvage" parts too.

Then again, most Phenom II X2's, while counting as salvage parts, unlock perfectly well to X4's with no voltage bumps or anything. I believe the same applies to most X3's.
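The X3-from-X4 binning described above can be sketched as a toy simulation. This assumes binning is driven purely by an invented per-core defect probability; real binning also weighs TDP, clocks, and market demand (which is exactly why so many X2s and X3s unlock cleanly — they were down-binned healthy dies, not defective ones).

```python
import random

def bin_die(n_cores=4, p_core_bad=0.1, rng=random):
    """Classify one die by how many cores came out defective.
    Assumed policy: 0 bad -> full quad, 1 bad -> salvage tri-core,
    more -> scrap. Defect probability is illustrative, not real data."""
    bad = sum(rng.random() < p_core_bad for _ in range(n_cores))
    if bad == 0:
        return "X4"      # full quad-core
    if bad == 1:
        return "X3"      # salvage part: one core fused off
    return "scrap"

random.seed(0)
counts = {"X4": 0, "X3": 0, "scrap": 0}
for _ in range(100_000):
    counts[bin_die()] += 1
print(counts)
```

With a 10% per-core defect rate, roughly two-thirds of dies bin as full quads and close to 30% as tri-cores, so a healthy salvage SKU falls out of the process almost for free.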
 