NVIDIA Fermi: Architecture discussion

Which would be a financial disaster a la GT200 for nv all over again, with no quick-shrink in sight to save their bacon. :LOL:
It's probably a given that the margins on GT200 won't be as great as those on RV770, but it's ludicrous to suggest that they are bad, let alone a disaster, especially on a 55nm process which is very mature now. In their last reported quarter, Nvidia reported >40% gross margins. You can't do that if you have a financial disaster in your main product line.
 
It's probably a given that the margins on GT200 won't be as great as those on RV770, but it's ludicrous to suggest that they are bad, let alone a disaster, especially on a 55nm process which is very mature now. In their last reported quarter, Nvidia reported >40% gross margins. You can't do that if you have a financial disaster in your main product line.

Well, GT200 parts are in short supply right now, if not EOL'd. And there is a good chance that Cedar/Redwood will launch before Fermi. Also, both Fermi and the 58xx are on an immature process, which will hurt large dies more than small dies.

This is drifting towards uneducated speculation w/o numbers though. But there is no doubt that nv suffered losses in rv770 vs gt200 price war. ATI didn't.
 
Well, GT200 parts are in short supply right now, if not EOL'd. And there is a good chance that Cedar/Redwood will launch before Fermi. Also, both Fermi and the 58xx are on an immature process, which will hurt large dies more than small dies.

The short supply is hurting both companies, not just NVIDIA.

rpg.314 said:
This is drifting towards uneducated speculation w/o numbers though. But there is no doubt that nv suffered losses in rv770 vs gt200 price war. ATI didn't.

I don't think they were losses. Most definitely the profit margins went down, but I'm not so sure about losses, without numbers. According to NVIDIA's quarterly results and specifically what silent_guy mentioned, it really doesn't seem like they had losses at all in that regard.
 
20% is a serious amount. It cautions that we should be thinking in terms of a similar shortfall needing to be applied to GF100. We can only wait to see what it turns out to be. It'd be silly to assume it's 0. I agree that neither GTX285 nor HD5870 are useful baselines. But if we cautiously assume 80% of what theoreticals indicate based on estimated unit counts and clocks, we'll be better armed to understand how it does scale when it appears.

Yeah but there are a whole bunch of mitigating factors there. Even if there is a 20% shortfall from the nominal increase there are potential efficiency gains that could regain that loss. I think it's fair to say the potential for those gains is far higher with Fermi than they were with Cypress given the architectural overhaul. Of course, some would argue the opposite - that Fermi could be even less efficient than GT200 but I don't see why that would be a first choice.

This is drifting towards uneducated speculation w/o numbers though. But there is no doubt that nv suffered losses in rv770 vs gt200 price war. ATI didn't.

Uneducated speculation leads you to believe there's no doubt? :LOL:
 
You may say that about the HD 5870 (assuming the worst case for NVIDIA), but definitely not about the HD 5850. It's a salvage part of the full RV870 chip, and NVIDIA certainly won't be competing with it using a salvage part of the full GF100 chip. I'm guessing a new chip for a "GeForce 350" or "GeForce 340", with 256 SPs, half the ROPs and half the TMUs on a 256-bit bus, is probably what will compete with it.

Maybe that's the second chip to be released (and not the GeForce 360), according to Fudzilla. This would mean the release covers both the high-end and mid-range portions of the market.

Going out on a limb and using G94 as an example, which is roughly half of G92 in every way (just like I'm speculating this "GeForce 350" to be, when compared to the full GF100 chip):

G92 @ 65 nm = 324mm2
G94 @ 65 nm = 240mm2

Assuming GF100 @ 40 nm = 480mm2 (as seems to be the most common speculation)
The chip that powers the "GeForce 350" (GF104?) @ 40 nm should be around ~355mm2.

If it competes or even beats the HD 5850 overall, then this segment will surely be interesting to follow.

No chance that a card that is very likely to be less powerful than a GTX 285 (fewer TMUs: 64 vs 80; fewer ROPs: 24 vs 32; less bandwidth, or maybe almost on par; counterbalanced by only 16 more cores, which on a one-vs-one comparison may even be less powerful than GT200's, due to the "FMA thing") will rival the HD 5850.
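As a sanity check on the die-size extrapolation quoted above, the same G92-to-G94 area ratio can be applied to the speculated GF100 figure. Note that every input here is the speculative number from the post (including the 480mm2 GF100 guess and the "GF104" name), not a confirmed spec:

```python
# Rough area estimate for a hypothetical half-GF100 part, scaled the same
# way G94 related to G92 at 65 nm. All inputs are speculation, not specs.
G92_MM2 = 324.0    # G92 die area at 65 nm
G94_MM2 = 240.0    # G94 die area at 65 nm (roughly half of G92's units)
GF100_MM2 = 480.0  # commonly speculated GF100 die area at 40 nm

ratio = G94_MM2 / G92_MM2            # ~0.74: halving units doesn't halve area
gf104_estimate = GF100_MM2 * ratio   # hypothetical "GF104"

print(round(gf104_estimate))         # ~356 mm^2, in line with the ~355 above
```

The interesting part is that halving the units only saves about a quarter of the area, since the memory interface, display logic and other uncore don't shrink proportionally.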
 
Performance advantage ranges from 50-80% depending on settings. How you can paint that as worse scaling than 4870->5870 is beyond me.
50-80%? At launch a GTX260 (maybe 20% slower than a GTX280) was struggling to be noticeably faster than a 9800GTX+.

If you're going to cherry pick games and setting where GTX285 is 50-80% faster than a GTS250, then I can find some where the 5870 is much more than 50% faster than the 4870.
 
50-80%? At launch a GTX260 (maybe 20% slower than a GTX280) was struggling to be noticeably faster than a 9800GTX+.

If you're going to cherry pick games and setting where GTX285 is 50-80% faster than a GTS250, then I can find some where the 5870 is much more than 50% faster than the 4870.

Nice, so you have no idea where I got my numbers yet you assume I'm cherry picking? :LOL: Go have some fun with computerbase's convenient charts and then we can have a useful discussion.

Btw, have you even compared the two parts? Early reviews show a 10-20% advantage for the 260. Here are the theoreticals:

Fillrate: +37%
Bandwidth: +59%
Texturing: -22% (yes that's a minus sign)
Flops: +1%

Yep, so 1% on the flops and -22% on the textures and you want to mock a 10-20% advantage? I'm confused....
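For what it's worth, those four deltas can be reproduced from the commonly published launch specs of the two cards. The unit counts, clocks and bandwidth figures below are the usual quoted numbers (treat them as approximate), and the flops comparison counts the same per-SP rate for both chips, so the rate itself cancels out:

```python
# Reproducing the GTX 260 vs 9800 GTX+ theoretical deltas from the
# commonly published launch specs (approximate figures).
# rops/tmus/sps: unit counts; core/shader: MHz; bw: GB/s
gtx260   = dict(rops=28, tmus=64, sps=192, core=576, shader=1242, bw=111.9)
gtx9800p = dict(rops=16, tmus=64, sps=128, core=738, shader=1836, bw=70.4)

def delta(a, b):
    """Percentage advantage of a over b, rounded."""
    return round((a / b - 1) * 100)

fillrate  = delta(gtx260['rops'] * gtx260['core'], gtx9800p['rops'] * gtx9800p['core'])
bandwidth = delta(gtx260['bw'], gtx9800p['bw'])
texturing = delta(gtx260['tmus'] * gtx260['core'], gtx9800p['tmus'] * gtx9800p['core'])
flops     = delta(gtx260['sps'] * gtx260['shader'], gtx9800p['sps'] * gtx9800p['shader'])

print(fillrate, bandwidth, texturing, flops)  # 37 59 -22 1
```

Both cards have 64 TMUs, so the texturing deficit is purely the 9800 GTX+'s much higher core clock.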
 
I don't think they were losses. Most definitely the profit margins went down, but I'm not so sure about losses, without numbers.

Maybe you should look at the "nv shows sign of strain" thread. :LOL:

The losses were real [O($100M) for 2-3 (4?) quarters], although nowhere near what AMD's CPU division managed to achieve.
 
Yep, so 1% on the flops and -22% on the textures and you want to mock a 10-20% advantage? I'm confused....
I'm not mocking it (well I am from a die size perspective). I'm just saying that at launch GT200 over G92 was not a bigger leap than RV870 over RV790.

The idea that NVidia's ALUs scale better than ATI's is bogus. They're not the primary factor behind gaming performance anyway. I'm going to start a thread deconstructing limitations in games using data from hardware.fr to illustrate my point.

Fermi will beat RV870 because it's a monstrous chip. Other products in the line will not fare so well.
 
It would have been interesting if you'd heard (there's a barrier between hearsay and real knowledge) what mass production silicon supposedly runs at.

It was targeted at 750MHz for the base clock, I forget what the hot clock was supposed to be at, but the SP FP rate was targeted at around 1500+ GF and DP was 768 or so. You can do the math from there.

-Charlie
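Taking those targets at face value, and assuming the widely rumoured 512 cores at 2 SP flops/core/clock (FMA) with double precision at half the single-precision rate (none of which is confirmed), the implied hot clock backs out like this:

```python
# Backing out the implied hot clock from the rumoured Fermi targets.
# Assumptions (unconfirmed rumour): 512 cores, 2 SP flops/core/clock (FMA),
# double precision at half the single-precision rate.
cores = 512
dp_target_gflops = 768.0

sp_gflops = dp_target_gflops * 2                # half-rate DP -> 1536 GF SP
hot_clock_mhz = sp_gflops * 1000 / (cores * 2)  # GFLOPS -> MHz over 1024 flops/clk

print(sp_gflops, hot_clock_mhz)  # 1536.0 1500.0
```

Which is at least internally consistent with the "1500+ GF" SP figure and a 750MHz base clock at a 2:1 hot-clock ratio.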
 
Right? That's why they have only sold more GTX 295s than ATI 4870X2s, and they are STILL in stock and STILL being bought compared to the 4870X2s. Charlie, for once, WOULD YOU PLEASE JUST ADMIT YOU WERE WRONG ABOUT SOMETHING, or is it beneath you to do such a thing?

XMAN26, for once WOULD YOU PLEASE STOP stalking Charlie like a little crotch-sniffing leg-humping dog?

Also, provide a link with sales data to back up your claim.
 
Unlaunched products are a matter of speculation. nv's financial history over the last 18 months is not.

Sigh, where in Nvidia's financial history over the last 18 months do you see evidence to support your "disastrous margins" theory?

I'm not mocking it (well I am from a die size perspective). I'm just saying that at launch GT200 over G92 was not a bigger leap than RV870 over RV790.

We weren't discussing die sizes or which one was a bigger leap in absolute terms. We were discussing which one scaled better relative to its theoretical numbers.

The idea that NVidia's ALUs scale better than ATI's is bogus. They're not the primary factor behind gaming performance anyway. I'm going to start a thread deconstructing limitations in games using data from hardware.fr to illustrate my point.

I'm not sure that question is even relevant. In any case good luck with that considering ATI's shaders and texture units have been scaling in lock step for some time now. :)
 
I have a question in regards to the whole nV financials/profit thing. People seem to be saying that nV's profitability did/did not suffer (depending largely on personal stance/bias), and this seems centered around the GeForce series of cards... however, did not nV recently claim that 75% of its profits came from Quadro/Tesla products?
 
Might I ask you to take a look at nv's profits over the last year or so?:rolleyes:

http://www.google.com/finance?q=NASDAQ:NVDA&fstype=ii

Yes, that tells you that GT200 had poor margins. Low sales and bumpgate charges had absolutely nothing to do with that. I really expected you would put more effort into it. How about you account for the charges and then see how Nvidia did relative to the rest of the industry....you know a real analysis? :)

I have a question in regards to the whole nV financials/profit thing. People seem to be saying that nV's profitability did/did not suffer (depending largely on personal stance/bias), and this seems centered around the GeForce series of cards... however, did not nV recently claim that 75% of its profits came from Quadro/Tesla products?

Yes, and they make that very clear in nearly every conf call, which is why the die size arguments are silly. However, last quarter's favorable results were not boosted by a resurgence in the professional markets, which obviously means that die size hasn't been disastrous for the consumer segment either.
 
GTX 285 vs GTS 250

Fillrate: +76%
Texturing: +10%
Bandwidth: +126%
Flops: +51%

Performance advantage ranges from 50-80% depending on settings. How you can paint that as worse scaling than 4870->5870 is beyond me.

Please point out where I said that GT200 scales worse than ATI cards (or the reverse). I was only pointing out that with GT200 there was not a 100%+ improvement, as someone else was writing.
BTW, even Fermi doesn't have +100% of everything over the GTX 285, and on GT200 the FLOPs were more "usable": as Nvidia itself said, the MUL that was "missing" on G8x/G9x chips was then "found again". So the FLOPS comparison between the GTX 285 and GTS 250 is really more in the +51% to +126% range. And, in many posts here, it is believed that G9x was a chip with a big bottleneck in bandwidth.
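For reference, the GTX 285 vs GTS 250 deltas quoted in this exchange can likewise be reproduced from the usual published specs. The figures below are the commonly quoted launch numbers (approximate), and the flops delta counts the same per-SP rate for both chips:

```python
# Reproducing the GTX 285 vs GTS 250 theoretical deltas from the
# commonly published specs (approximate figures).
# rops/tmus/sps: unit counts; core/shader: MHz; bw: GB/s
gtx285 = dict(rops=32, tmus=80, sps=240, core=648, shader=1476, bw=159.0)
gts250 = dict(rops=16, tmus=64, sps=128, core=738, shader=1836, bw=70.4)

def delta(a, b):
    """Percentage advantage of a over b, rounded."""
    return round((a / b - 1) * 100)

fillrate  = delta(gtx285['rops'] * gtx285['core'], gts250['rops'] * gts250['core'])
texturing = delta(gtx285['tmus'] * gtx285['core'], gts250['tmus'] * gts250['core'])
bandwidth = delta(gtx285['bw'], gts250['bw'])
flops     = delta(gtx285['sps'] * gtx285['shader'], gts250['sps'] * gts250['shader'])

print(fillrate, texturing, bandwidth, flops)  # 76 10 126 51
```

Counting the same per-SP rate for both chips gives the +51% figure; crediting GT200 with 3 flops/SP/clock (MAD+MUL) against only 2 for G9x stretches it to roughly +126%, which is where the +51%/+126% range in the post comes from.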
 
Please point out where I said that GT200 scales worse than ATI cards (or the reverse). I was only pointing out that with GT200 there was not a 100%+ improvement, as someone else was writing.

You didn't. However, you did point to the fact that it was not a 100% jump in performance so I simply pointed out that the theoretical increase was far from 100% to begin with.

BTW, even Fermi doesn't have +100% of everything over the GTX 285, and on GT200 the FLOPs were more "usable": as Nvidia itself said, the MUL that was "missing" on G8x/G9x chips was then "found again". So the FLOPS comparison between the GTX 285 and GTS 250 is really more in the +51% to +126% range.

And how do we know that Fermi's flops aren't also more usable compared to GT200? To Ail's point, random speculation isn't really going to get you anywhere.
 
I have a question in regards to the whole nV financials/profit thing. People seem to be saying that nV's profitability did/did not suffer (depending largely on personal stance/bias), and this seems centered around the GeForce series of cards... however, did not nV recently claim that 75% of its profits came from Quadro/Tesla products?

ATI said that they had a lot of design wins in professional market as well.
 
Yes, that tells you that GT200 had poor margins. Low sales and bumpgate charges had absolutely nothing to do with that. I really expected you would put more effort into it. How about you account for the charges and then see how Nvidia did relative to the rest of the industry....you know a real analysis? :)

Fair enough. Maybe they didn't lose money outright on GT200, but they definitely suffered much lower margins, which is precisely the point I was making earlier. If Fermi or its salvage part is neck and neck with the 5870, then nv's margins in the high-end consumer market are toast.
 