I'm curious to hear what the actual feature differentiation is. I see some software differentiation at the moment, but I'm curious to hear how the differences create feature differentiation...?
> If you mean that, 6 months later, GF100 is a better situation for nV than G200 vs RV770 was, then I'd maybe agree with that.

It's not like NV lost a load of money during these 6 months, you know.
> If you consider where ATI actually are along the road in terms of a refresh etc, they are further ahead than with RV770 v G200.

Where is ATI in these terms? I don't know. I'm thinking that with 40G being pretty much the only option for 2010 they're not that much further ahead than NV. GF100 being late doesn't mean that anything NV had planned beyond it will be late as well.
I can't say I fully agree with this statement, but I find it amusing that during the DX8-9 era exactly the same words were often spun in favour of NVidia: ATI cards are more flexible and feature rich, but nVidia concentrates on raw power and it's all that matters.
Nah, Nvidia is and always was the brute force guy and ATI is always efficient, no matter what. When ATi started throwing crazy flops around, lots of flops suddenly became efficient and big dies were brute force. Oh, and let's not forget that when ATi had a big die (R600), the ring-bus controller was supposed to be efficient while G80's 64 TMUs were brute force. I sense a trend!
It's hard to claim Fermi is more efficient using any metric though, if current numbers hold up.
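(To make "any metric" concrete, here is a minimal C sketch of the three ratios the thread keeps circling: performance per watt, per mm², and per transistor. Every input is an assumption drawn from circa-2010 specs and rumours - the relative performance figure especially - so treat the output as an illustration of the arithmetic, not a verdict.)

```c
/* Back-of-the-envelope GPU efficiency metrics.
 * All inputs are assumptions (circa-2010 specs/rumours), not measurements. */
#include <stdio.h>

struct gpu {
    const char *name;
    double perf;        /* relative gaming performance (Cypress = 1.0, assumed) */
    double watts;       /* board TDP in watts */
    double die_mm2;     /* die area in mm^2 */
    double transistors; /* transistor count in billions */
};

int main(void) {
    const struct gpu g[] = {
        { "Cypress (HD5870)", 1.00, 188.0, 334.0, 2.15 },
        { "GF100 (GTX480)",   1.10, 250.0, 529.0, 3.00 }, /* perf is a guess */
    };
    for (int i = 0; i < 2; i++) {
        printf("%-17s perf/W=%.4f  perf/mm2=%.5f  perf/Btransistors=%.3f\n",
               g[i].name, g[i].perf / g[i].watts,
               g[i].perf / g[i].die_mm2, g[i].perf / g[i].transistors);
    }
    return 0;
}
```

On those assumed inputs GF100 trails on every ratio; swap in real benchmark numbers once they exist and the comparison settles itself.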
You're obviously limiting this comparison to the 3D APIs right?
> You're obviously limiting this comparison to the 3D APIs right?

No, I'm talking all over. From the compute side - what feature is there that is actually different in terms of the capabilities that it brings vs potential performance differentiations? In terms of raw, hardware feature differentiation, what does it bring? Does it do more displays on a single board? No. Hell, does it even bring HBR Audio?
> Not really. GF100 vs Cypress is a better situation for NV than GT200 vs RV770 was. They now have a lead in features, they have clearly more future-proof products, and they are still basically the only company with GPU-based products for HPC markets. And although it's not really revealed yet, I'd guess that AMD's transistor density advantage is lost on 40G.

But at what cost? Nvidia has stuck with their "big die" strategy and now it appears that the gaming performance of their 3 million transistor Fermi is going to be roughly equal to that of a mildly overclocked 2 million transistor ATi Cypress.
The only advantage the 5800 has here, in my opinion, is quite a bit lower power consumption. But that's just not something that matters to me in any way.
> I can't say I fully agree with this statement, but I find it amusing that during the DX8-9 era exactly the same words were often spun in favour of NVidia: ATI cards are more flexible and feature rich, but nVidia concentrates on raw power and it's all that matters.

The thing is, at the time, I don't think it was actually true, not like it is today. Compare, for instance, the GeForce 4 vs. Radeon 8500. The GeForce 4 was a refresh of the GeForce 3 architecture, while the Radeon 8500 was a brand-new architecture that came out within a few months of the GeForce 4 parts. Yes, the 8500 was more feature rich, but this is more because it was released over a year later than due to any sort of difference in design decisions (although there were differences in design decisions, of course).
> But at what cost? Nvidia has stuck with their "big die" strategy and now it appears that the gaming performance of their 3 million transistor Fermi is going to be roughly equal to that of a mildly overclocked 2 million transistor ATi Cypress.

Billion, perhaps?
> Where is ATI in these terms? I don't know. I'm thinking that with 40G being pretty much the only option for 2010 they're not that much further ahead than NV. GF100 being late doesn't mean that anything NV had planned beyond it will be late as well.

While it's true TSMC appear to be doing everything they can to keep nVidia in the race, not all of ATI's work will be lost. From what I see, it's just keeping nVidia in touch until the inevitable.
> Hmmm, so the caching and other nice compute stuff isn't a hardware feature? What is it then? I know you're in marketing Dave but narrowing the scope of the definition isn't playing fair.

Is SAD in Evergreen a feature? It's certainly something that I'll talk about, but fundamentally it doesn't do anything different; it ultimately results in a performance improvement greater than the increase in the number of units over the previous generation. These are hardware design elements that are intended to bring performance benefits - it's no more an end user "feature" than just doubling the number of SIMDs, or whatever, is.
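(For readers who don't know the instruction: SAD is "sum of absolute differences", the inner loop of video motion estimation. A plain-C illustration of the operation Evergreen accelerates - the speedup comes from folding the subtract/abs/accumulate chain into far fewer instructions, not from any new capability:)

```c
#include <stdio.h>
#include <stdlib.h>

/* Sum of absolute differences over an 8x8 block -- the motion-estimation
 * inner loop that a dedicated SAD instruction collapses. */
static unsigned sad_8x8(const unsigned char *a, const unsigned char *b, int stride) {
    unsigned sum = 0;
    for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++)
            sum += (unsigned)abs(a[y * stride + x] - b[y * stride + x]);
    return sum;
}

int main(void) {
    unsigned char ref[64], cur[64];
    for (int i = 0; i < 64; i++) { ref[i] = (unsigned char)i; cur[i] = (unsigned char)(i + 3); }
    /* 64 pixels, each differing by 3 -> SAD = 192 */
    printf("SAD = %u\n", sad_8x8(ref, cur, 8));
    return 0;
}
```

Which rather supports the framing above: the result is identical either way, only the throughput changes.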
> But at what cost? Nvidia has stuck with their "big die" strategy and now it appears that the gaming performance of their 3 million transistor Fermi is going to be roughly equal to that of a mildly overclocked 2 million transistor ATi Cypress.

So? Guessing anything from die sizes is absolutely pointless. And GF100 has all the HPC market to itself. How can you be sure that NV won't make more money off GF100 from gaming+HPC markets than AMD will from gaming only?
> Yes, Fermi has additional HPC capabilities. But what percentage of Fermis will be sold to people/organizations that will purchase them primarily for their HPC functions? That number is going to be fractionally small. Nvidia is once again stuck with the double whammy of a lower number of dies per wafer and lower yields when compared to ATi's design.

You don't know anything about that number yet. NV was counting on something with all the GF100 compute capabilities. If they'd thought that the "number is going to be fractionally small" then they wouldn't have done it, would they? It's a gamble, sure, but what if it pays off?
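(The "dies per wafer and yields" point is just geometry plus defect statistics. A rough C sketch using the standard dies-per-wafer approximation and a simple Poisson yield model; the die areas are the commonly cited figures, and the defect density is a pure assumption rather than anything from TSMC:)

```c
/* Dies per wafer: DPW ~ pi*(d/2)^2/A - pi*d/sqrt(2A)   (edge-loss corrected)
 * Yield (Poisson model): Y = exp(-A * D0)
 * Die areas are commonly cited figures; D0 is a pure assumption. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

static double dies_per_wafer(double wafer_d_mm, double die_mm2) {
    double r = wafer_d_mm / 2.0;
    return PI * r * r / die_mm2 - PI * wafer_d_mm / sqrt(2.0 * die_mm2);
}

int main(void) {
    const double wafer = 300.0; /* mm, standard wafer diameter */
    const double d0 = 0.004;    /* defects per mm^2 -- assumed, not foundry data */
    const char  *names[] = { "Cypress", "GF100" };
    const double areas[] = { 334.0, 529.0 }; /* mm^2, approximate */
    for (int i = 0; i < 2; i++) {
        double dpw   = dies_per_wafer(wafer, areas[i]);
        double yield = exp(-areas[i] * d0);
        printf("%-8s ~%.0f candidates/wafer, ~%.0f%% yield -> ~%.0f good dies\n",
               names[i], dpw, 100.0 * yield, dpw * yield);
    }
    return 0;
}
```

Whatever D0 actually is on 40G, the bigger die loses on both factors at once, which is what the quoted "double whammy" refers to; whether HPC margins cover it is the open question.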
> IMO, Fermi's additional features simply don't appear to be an equitable tradeoff for its cost to produce.

I don't give a f about GF100's cost to produce; I don't care if it's bigger or smaller than Cypress. What matters is performance, features, prices for consumers and profits for NVIDIA. This whole "oh noes it's bigger, so it must cost more to produce, thus it's dooomed" line is really lame. A knife is cheaper to produce than a gun - so does that mean guns are pointless and we should all buy knives instead? (Sorry for the comparison.)
> From the compute side - what feature is there that is actually different in terms of the capabilities that it brings vs potential performance differentiations?

Could we count Fermi's parallel triangle setup as a distinct feature that brings higher performance?
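(Same question in miniature. A loose conceptual sketch in C of why parallel setup reads as throughput rather than capability: N setup units each taking every Nth triangle produce the same image, just faster. Purely illustrative - this is not a claim about GF100's actual distribution scheme:)

```c
/* Conceptual only: round-robin triangle setup across N units.
 * Output is identical to a single unit; peak setup rate scales with N. */
#include <stdio.h>

#define SETUP_UNITS 4
#define NUM_TRIS    8

static void setup_triangle(int unit, int tri_id) {
    /* edge equations, culling, clipping etc. would happen here */
    printf("unit %d: triangle %d\n", unit, tri_id);
}

int main(void) {
    for (int t = 0; t < NUM_TRIS; t++)
        setup_triangle(t % SETUP_UNITS, t); /* every Nth triangle per unit */
    return 0;
}
```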
Driver packaging version 8.712.3 belongs to Catalyst 10.3X mate.
Good thing, then, that that driver is actually the latest one provided by ATI, the famous 10.3a!
http://www.forum-3dcenter.org/vbulle...ostcount=18157

This is a fake. Our benchmarks are different.
Clearly this demands:

[image]
Not really, Silus.
nVidia got lucky with G80 while ATI had a horror show with R600. Since then, perf/transistor and perf/watt have been clearly in favour of ATI. Fermi was supposed to address that, but on early indications nVidia has gone backwards from G200.
Your problem is you believe nVidia has been unlucky for the past 3 years. Well, in my opinion you can't be unlucky for so long - this right now is the normal operating performance for nVidia, and it's well behind ATI. ATI are doing nothing particularly groundbreaking but nVidia still can't even keep up; in fact they are falling further behind.
> So? Guessing anything from die sizes is absolutely pointless. And GF100 has all the HPC market to itself. How can you be sure that NV won't make more money off GF100 from gaming+HPC markets than AMD will from gaming only?

It's not so much about cost or money as it is about what that area could have been used on. How powerful would a 3bn transistor Evergreen chip have been? The argument was that AMD has recently been better at performance/area. If GF100 is indeed <<50% faster than Cypress, that is not getting better.