Would M$ have a problem with a product named "360"?
384 SPs, 96 TMUs and a 256-bit bus might not be enough to keep up with an HD 5870 (which should be the main target of a GTX 360), I fear.
And how do you figure that?
A single HD 5870 is on average 40-50% faster than a GTX 285. Even if I don't consider the obvious architectural differences that should improve performance overall, a GT200 with over 50% more Stream Processors (384), more memory bandwidth (though even at 320-bit, certainly not 50% more), and more ROPs and TMUs should on average be 50% faster than a GTX 285.
Hahaha! No, he asked for proof, not an article Fuad wrote because some store clerk mailed him "This week we sold three GTX 295s versus one HD 4870 X2"!
As far as I know, the shaders on either A1 or A2 run north of 1200 MHz.
And how do you figure that?
A single HD 5870 is on average 40-50% faster than a GTX 285. Even if I don't consider the obvious architectural differences that should improve performance overall, a GT200 with over 50% more Stream Processors (384), more memory bandwidth (though even at 320-bit, certainly not 50% more), and more ROPs and TMUs should on average be 50% faster than a GTX 285.
Funny that you're assuming NVIDIA's ALUs will scale much, much better than ATI's (ideally, in your case).
Tchock said: That's like counting architectural improvements twice when you don't even know how Fermi's modifications will impact the graphics pipeline at large. Oh well, people are already saying amen, so others assume it's true.
Tchock said: At 4200 MHz (same as Rys' speculation), the 320-bit GTX 360 will have 6% more bandwidth than the 285.
And again, if this is a salvage part at a 575 MHz core or so (a reasonable spec w.r.t. the GTX 260), you again have a whopping 6% more texel and pixel fillrate.
The only real gain here is the substantial shader power increase: it's 1.6x the base ALU count at, say, 1600 MHz (VERY optimistic IMO for a first-run salvage part; I'd put my money on 1400 MHz instead), i.e. 8.5% higher shader clocks vs the GTX 285.
73% more shader power but nearly flat fillrates besides Z. My money's on the 5870 being at least 10% faster (and this is the part of me that's not bullish on future Catalyst releases; the other 99% of me is).
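For anyone who wants to check the arithmetic, here's a quick sanity pass over those figures. The GTX 285 numbers are its reference specs (512-bit bus at 2484 MHz effective, 648 MHz core, 80 TMUs, 240 SPs at 1476 MHz); the "GTX 360" numbers are purely the speculation in this thread (320-bit at 4200 MHz effective, 575 MHz core, 96 TMUs, 384 SPs at 1600 MHz), not confirmed specs:

```python
# Back-of-envelope comparison of the speculated salvage part vs a GTX 285.
# The "gtx360" entries are this thread's speculation, not real specs.

def bandwidth_gbs(bus_bits, effective_mhz):
    # bytes per transfer * effective transfers per second
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

gtx285 = {
    "bandwidth": bandwidth_gbs(512, 2484),  # ~159 GB/s
    "fillrate": 80 * 648,                   # TMUs * core MHz
    "shader": 240 * 1476,                   # SPs * shader MHz
}
gtx360 = {
    "bandwidth": bandwidth_gbs(320, 4200),  # ~168 GB/s
    "fillrate": 96 * 575,
    "shader": 384 * 1600,
}

for key in gtx285:
    gain = gtx360[key] / gtx285[key] - 1
    print(f"{key}: {gain:+.1%}")
# Prints roughly +5.7% bandwidth, +6.5% fillrate, +73.4% shader power,
# in line with the "6% ... 6% ... 73%" figures above.
```

(Texel fillrate is the stand-in here; pixel fillrate also depends on the ROP count, which is still unknown.)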
So NV's performance scales perfectly linearly with ALUs, interesting.
How much, we will only know when it's released, but by my guesstimation, along with Rys' speculation of the full Fermi chip's performance, I really can't see this "GTX 360" not being at the level of the HD 5870 on most occasions. Should be win some, lose some, as was the case with the GTX 260 vs the HD 4870.
Which would be a financial disaster à la GT200 for NV all over again, with no quick shrink in sight to save their bacon.
That's assuming that this card, which I called "GeForce 360", is a full Fermi chip with units disabled. I'm thinking such a card is more like a "GeForce 370", with 480 Stream Processors, one or two ROPs disabled, a 64-bit path of the memory interface disabled, etc.
GT200's increased ALU:TEX ratio, increased register file and improved texturing hardware all act as multipliers. Z-rate was also substantially increased, not to mention bandwidth.

And they have been better. Without doubling G80/G92, GT200 was usually above 50% faster than G80 and G92, especially at higher resolutions.
Performance difference looks fine here, peaking at 97% at 1680x1050 with 4xAA/16xAF (hardly the toughest setting):

From the HD 3870 to the HD 4870 (where the ALU count alone increased 2.5x), plus the TMU increase, etc., the HD 4870 at best doubled the performance of an HD 3870 at higher resolutions.
I've just shown that a comparison of the HD 5770 and HD 5870 is more useful, and from that you can clearly see ~80% scaling achieved instead of the 100% theoretical. The gap is Amdahl's law, basically, perhaps with some driver immaturity thrown in for good measure.

And now we are seeing what happened from the HD 4870 to the HD 5870, where the latter is on average 50-60% faster, even though it doubles almost everything in the HD 4870 spec-wise.
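To put a number on that gap: if doubling the units only yields ~1.8x, Amdahl's law implies roughly 11% of the frame time doesn't scale with unit count at all. A minimal sketch (the 1.8x input is the ~80% scaling observed above; everything else just follows from the formula):

```python
# Amdahl's law: scaling the parallel units by s only speeds up the
# fraction p of the workload that actually scales with unit count.

def speedup(s, p):
    return 1 / ((1 - p) + p / s)

def parallel_fraction(s, observed):
    # Invert Amdahl's law: what fraction p must scale to see this speedup?
    return (1 - 1 / observed) / (1 - 1 / s)

# HD 5770 -> HD 5870 roughly doubles everything but only shows ~1.8x:
p = parallel_fraction(2, 1.8)
print(f"implied scaling fraction: {p:.1%}")  # ~88.9% scales, ~11% doesn't

# The same ~11% non-scaling part caps a hypothetical 4x-unit chip at 3x:
print(f"speedup at s=4: {speedup(4, p):.2f}x")
```

The point being: linear extrapolation from ALU counts alone overstates real gains, on either vendor's architecture.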
That's only because the base ALU performance on NVIDIA is so low, much like R6xx's Z-rate base was so low, whereas ATI's ALU performance has always been more than adequate.

So yes, I am assuming the ALUs will scale better on NVIDIA's architecture than on ATI's, since they have so far.
Agreed, a GTX 360 should come in at least as fast as an HD 5870 in DX9/10 games.

I'm actually more inclined to 1500 MHz for the Stream Processors and 600-650 MHz for the core. But that's beside the point. The Stream Processor increase, plus the increase in ROPs (actually the ROP count may be the same as the GTX 285's), a bit more bandwidth and 96 TMUs, should make it quite a bit faster than a GTX 285. How much, we will only know when it's released, but by my guesstimation, along with Rys' speculation of the full Fermi chip's performance, I really can't see this "GTX 360" not being at the level of the HD 5870 on most occasions. Should be win some, lose some, as was the case with the GTX 260 vs the HD 4870.
I remember XMAN saying something about having no vendor preference a while ago. Where is that now, I wonder...
I already know he will NEVER admit he was wrong on any of his past predictions, be it the performance ones or the products.
Then why keep up the anti-charlie tirade?