NVIDIA GF100 & Friends speculation

I'm curious to hear what the actual feature differentiation is. I see some software differentiation at the moment, but I'm curious to hear how the differences create feature differentiation...?
 
If you mean that, 6 months later, GF100 is a better situation for nV than G200 vs RV770 was, then I'd maybe agree with that.
It's not like NV lost a load of money during these 6 months you know.

If you consider where ATI actually are along the road in terms of a refresh etc., they are further ahead than they were with RV770 vs G200.
Where is ATI in these terms? I don't know. I'm thinking that, with 40G being pretty much the only option for 2010, they're not that much further ahead than NV. GF100 being late doesn't mean that anything NV had planned beyond it will be late as well.
 
I can't say I fully agree with this statement, but I find it amusing that during the DX8-9 era exactly the same words were often spun in favour of NVidia: ATI cards are more flexible and feature-rich, but nVidia concentrates on raw power and that's all that matters. :LOL:

Nah, Nvidia is and always was the brute-force guy and ATI is always efficient, no matter what. When ATi started throwing crazy flops around, lots of flops then became efficient and big dies were brute force. Oh, and let's not forget that when ATi had a big die (R600), the ring-bus controller was supposed to be efficient while G80's 64 TMUs were brute force. I sense a trend!

It's hard to claim Fermi is more efficient using any metric though, if current numbers hold up.

I'm curious to hear what the actual feature differentiation is. I see some software differentiation at the moment, but I'm curious to hear how the differences create feature differentiation...?

You're obviously limiting this comparison to the 3D APIs right?
 
Nah, Nvidia is and always was the brute-force guy and ATI is always efficient, no matter what. When ATi started throwing crazy flops around, lots of flops then became efficient and big dies were brute force. Oh, and let's not forget that when ATi had a big die (R600), the ring-bus controller was supposed to be efficient while G80's 64 TMUs were brute force. I sense a trend!

It's hard to claim Fermi is more efficient using any metric though, if current numbers hold up.



You're obviously limiting this comparison to the 3D APIs right?

Isn't stuff like "efficiency", "powerconsumption", "die size" ect. only used to defend a performance deficit?
A shifting of the goalpost...which in my book always has been performance...

You're obviously limiting this comparison to the 3D APIs right?

Hard to promote something you don't have now, isn't it? ;)
 
You're obviously limiting this comparison to the 3D APIs right?
No, I'm talking all over. From the compute side - what feature is there that is actually different in terms of the capabilities that it brings vs potential performance differentiations?

In terms of raw, hardware feature differentiation, what does it bring? Does it do more displays on a single board? No. Hell, does it even bring HBR Audio?
 
Not really. GF100 vs Cypress is a better situation for NV than GT200 vs RV770 was. They now have a lead in features, they have a clearly more future-proof product and they are still basically the only company with GPU-based products for HPC markets. And although it's not really revealed yet, I'd guess that AMD's transistor density advantage is lost on 40G.
The only advantage the 5800 has here, in my opinion, is quite a bit lower power consumption. But that's just not something that matters to me in any way.
But at what cost? Nvidia has stuck with their "big die" strategy and now it appears that the gaming performance of their 3 million transistor Fermi is going to be roughly equal to that of a mildly overclocked 2 million transistor ATi Cypress.

Yes, Fermi has additional HPC capabilities. But what percentage of Fermis will be sold to people/organizations that will purchase them primarily for their HPC functions? That number is going to be fractionally small. Nvidia is once again stuck with the double whammy of a lower number of dies per wafer and lower yields when compared to ATi's design.

IMO, Fermi's additional features simply don't appear to be an equitable tradeoff for its cost to produce.
 
I can't say I fully agree with this statement, but I find it amusing that during the DX8-9 era exactly the same words were often spun in favour of NVidia: ATI cards are more flexible and feature-rich, but nVidia concentrates on raw power and that's all that matters. :LOL:
The thing is, at the time, I don't think it was actually true, not like it is today. Compare, for instance, the GeForce 4 vs. Radeon 8500. The GeForce 4 was a refresh of the GeForce 3 architecture, while the Radeon 8500 was a brand-new architecture that came out within a few months of the GeForce 4 parts. Yes, the 8500 was more feature rich, but this is more because of the fact that it was released over a year later than due to any sort of difference in design decisions (although there were differences in design decisions, of course).

Then again if you look at the Radeon 9700 and its derivatives vs. the GeForce FX and derivatives, well, the FX just didn't perform, so it definitely didn't fit this.

And when you look at the GeForce 6x00, when that first came out it was still competing with ATI's own 9700 derivatives, and so it was quite a lot more feature-rich (and also higher-performing). Later, when ATI released their own SM3 parts, obviously nVidia lost this advantage.

So basically, nVidia and ATI have been enough out of sync that we never really saw much of any "raw power vs. features" competition directly. Instead they leapfrogged one another in both features and performance.

This time around, by contrast, the GF100 should have been released at almost the same time as ATI's new architecture. It ended up a bit later, of course, but we can be pretty darned sure that no new features have been added to the GF100 in the mean time. So this is a true competition in design strategies, and those design strategies are quite transparent: ATI has gone for a much larger number of less flexible units than has nVidia. They have, in effect, gone for much higher raw performance (even if said performance isn't realized in real-world situations), while nVidia has gone for lower raw performance but instead sought to get better performance in the end through better use of the available compute power.

Please bear in mind that I never meant that nVidia has actually shot for a lower performance target. That wasn't my intention at all: I'm sure nVidia is every bit as interested in attaining the performance crown as ATI. I'm just saying that their way of going about it has been rather different.
 
No, I'm talking all over. From the compute side - what feature is there that is actually different in terms of the capabilities that it brings vs potential performance differentiations?

In terms of raw, hardware feature differentiation, what does it bring? Does it do more displays on a single board? No. Hell, does it even bring HBR Audio?

Hmmm, so the caching and other nice compute stuff isn't a hardware feature? What is it then? I know you're in marketing, Dave, but narrowing the scope of the definition isn't playing fair.
 
But at what cost? Nvidia has stuck with their "big die" strategy and now it appears that the gaming performance of their 3 million transistor Fermi is going to be roughly equal to that of a mildly overclocked 2 million transistor ATi Cypress.
Billion, perhaps?
 
It's not like NV lost a load of money during these 6 months you know.

No, but these things don't flip on their head overnight. We will see the start of harder times for nV in this Q1 financial report.

Where is ATI in these terms? I don't know. I'm thinking that, with 40G being pretty much the only option for 2010, they're not that much further ahead than NV. GF100 being late doesn't mean that anything NV had planned beyond it will be late as well.
While it's true TSMC appear to be doing everything they can to keep nVidia in the race, not all of ATI's work will be lost. From what I see, it's just keeping nVidia in touch until the inevitable.

Even if we assume both companies are stuck with what they have, we know ATI can price Fermi into making a loss. No matter how you look at it, nVidia are making bigger chips that just aren't fast enough. If they were making smaller, slower chips that wouldn't be so bad, but they aren't; they are making much bigger chips that aren't fast enough, and that is a bad situation to be in.
 
Let me just add in a small comment that if nVidia had no problem surviving the GeForce FX era business-wise, they'll have no problem now, especially as the GF100 looks like it will, at the very least, be a vastly better part compared to the competition than the FX was.
 
Hmmm, so the caching and other nice compute stuff isn't a hardware feature? What is it then? I know you're in marketing, Dave, but narrowing the scope of the definition isn't playing fair.
Is SAD in Evergreen a feature? It's certainly something that I'll talk about, but fundamentally it doesn't do anything different; it ultimately results in a performance improvement greater than the increase in the number of units over the previous generation. These are hardware design elements that are intended to bring performance benefits - it's no more an end-user "feature" than just doubling the number of SIMDs, or whatever, is.
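To make the SAD point concrete, here's a rough sketch - purely illustrative, not taken from any AMD or NVIDIA documentation - of what a packed 4-byte sum-of-absolute-differences computes. A dedicated SAD instruction collapses this loop into a single operation, so what software sees is more throughput, not a new capability:

```cuda
// Illustrative only: what a 4-byte sum-of-absolute-differences (SAD) computes.
// A GPU with a dedicated SAD instruction produces the same result in a single
// operation, so the gain is throughput, not a new capability visible to users.
#include <cstdio>

__host__ __device__ unsigned int sad4(unsigned int a, unsigned int b)
{
    unsigned int sum = 0;
    for (int i = 0; i < 4; ++i) {                   // walk the four packed bytes
        int ba = (a >> (8 * i)) & 0xFF;
        int bb = (b >> (8 * i)) & 0xFF;
        sum += (ba > bb) ? (ba - bb) : (bb - ba);   // |ba - bb|
    }
    return sum;
}

int main()
{
    // Two packed pixel quads; SAD like this is the inner loop of motion search
    // and video encoding, which is why a one-instruction version matters.
    unsigned int x = 0x01020304;
    unsigned int y = 0x02040608;
    printf("SAD = %u\n", sad4(x, y));  // 4 + 3 + 2 + 1 = 10
    return 0;
}
```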
 
But at what cost? Nvidia has stuck with their "big die" strategy and now it appears that the gaming performance of their 3 million transistor Fermi is going to be roughly equal to that of a mildly overclocked 2 million transistor ATi Cypress.
So? Guessing anything from die sizes is absolutely pointless. And GF100 has all the HPC market to itself. How can you be sure that NV won't make more money off GF100 from gaming+HPC markets than AMD will from gaming only?

Yes, Fermi has additional HPC capabilities. But what percentage of Fermis will be sold to people/organizations that will purchase them primarily for their HPC functions? That number is going to be fractionally small. Nvidia is once again stuck with the double whammy of a lower number of dies per wafer and lower yields when compared to ATi's design.
You don't know anything about that number yet. NV was counting on something when they put all that compute capability into GF100. If they'd thought that the "number is going to be fractionally small" then they wouldn't have done it, would they? It's a gamble, sure, but what if it pays off?

IMO, Fermi's additional features simply don't appear to be an equitable tradeoff for its cost to produce.
I don't give a f about GF100's cost to produce; I don't care if it's bigger or smaller than Cypress. What matters is performance, features, prices for consumers and profits for NVIDIA. This whole "oh noes it's bigger so it must cost more to produce thus it's dooomed" situation is really lame. A knife is cheaper to produce than a gun, so does that mean guns are pointless and we should all buy knives instead? (Sorry for the comparison.)
 
From the compute side - what feature is there that is actually different in terms of the capabilities that it brings vs potential performance differentiations?
Could we count Fermi's parallel triangle setup as a distinct feature that brings higher performance?

As for the compute side, can this be considered a differentiation in capabilities? Fermi and the PTX 2.0 ISA also add support for C++ virtual functions, function pointers, and 'new' and 'delete' operators for dynamic object allocation and de-allocation. C++ exception handling operations 'try' and 'catch' are also supported.
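Roughly, that's the sort of device code it enables. Below is a minimal sketch, with made-up class and kernel names, assuming an sm_20 (Fermi-class) part compiled with nvcc -arch=sm_20 on CUDA 3.2 or later; try/catch is left out of the sketch:

```cuda
// Minimal sketch of Fermi-era C++ device code: virtual dispatch plus
// device-side new/delete. Illustrative only - the Shape/Square classes and
// the "demo" kernel are made up. Assumes sm_20 hardware, nvcc -arch=sm_20,
// and CUDA 3.2 or later (when device-side new/delete appeared).
#include <cstdio>
#include <cuda_runtime.h>

struct Shape {
    __device__ virtual float area() const = 0;   // resolved via vtable on the GPU
    __device__ virtual ~Shape() {}
};

struct Square : public Shape {
    float side;
    __device__ Square(float s) : side(s) {}
    __device__ float area() const { return side * side; }
};

__global__ void demo(float* out)
{
    Shape* s = new Square(3.0f);      // device-side dynamic allocation
    out[threadIdx.x] = s->area();     // virtual function call in device code
    delete s;                         // device-side delete
}

int main()
{
    const int n = 32;
    float h_out[n];
    float* d_out = 0;
    cudaMalloc((void**)&d_out, n * sizeof(float));
    demo<<<1, n>>>(d_out);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("area = %f\n", h_out[0]);  // expect 9.0
    cudaFree(d_out);
    return 0;
}
```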
 
Not really, Silus.

nVidia got lucky with G80 while ATI had a horror show with R600. Since then, perf/transistor and perf/watt have been clearly in favour of ATI. Fermi was supposed to address that, but on early indications nVidia has gone backwards from GT200.

Your problem is you believe nVidia has been unlucky for the past 3 years. Well, in my opinion you can't be unlucky for that long - this right now is the normal operating performance for nVidia, and it's well behind ATI. ATI are doing nothing particularly groundbreaking but nVidia still can't even keep up; in fact they are falling further behind.

Not at all. I don't think luck had anything to do with it. GT200 (which was released 2 years ago, not 3) wasn't exactly a good design, but it didn't stop NVIDIA from keeping the performance crown with it. And it wasn't the financial disaster some like to rave about constantly, otherwise that would've been seen for some time now. NVIDIA struck gold with their G80 design and especially G92, which took ATI almost 3 years to catch up with. As for Fermi, well, new architectures tend to be very hard to start (just look at what ATI had to deal with on R600) and this is just another example.
 
So? Guessing anything from die sizes is absolutely pointless. And GF100 has all the HPC market to itself. How can you be sure that NV won't make more money off GF100 from gaming+HPC markets than AMD will from gaming only?
It's not so much about cost or money as it is about what that area could have been used for. How powerful would a 3bn-transistor Evergreen chip have been? The argument was that AMD has recently been better at performance/area. If GF100 is indeed much less than 50% faster than Cypress, that is not getting better.
 
Is SAD in Evergreen a feature? It's certainly something that I'll talk about, but fundamentally it doesn't do anything different; it ultimately results in a performance improvement greater than the increase in the number of units over the previous generation. These are hardware design elements that are intended to bring performance benefits - it's no more an end-user "feature" than just doubling the number of SIMDs, or whatever, is.

Developers are customers too, Dave. I'm surprised you're taking the advances in programmability so lightly. A feature isn't defined by what the end-user sees in the end; if that were the case, not many features would have been added to GPUs since their inception. After all, we still just get an image on our monitors at the end of the day.
 