Well, I've said previously that G92 = 8800 GT does not add up, AT ALL. I also believe that G92 is a full G80 with some tweaks. I'm not sure why Nvidia went ahead with such a naming convention (GT > GTS)... it's really stupid. It's one thing to confuse your competitor, but that doesn't mean you should confuse your consumers! The 38xx nomenclature, as awful as it may sound, looks like a better choice than anything either side has come up with in the last 5+ years.

I'm starting to believe that the HD 3870 (god, what an awful name... :S) isn't the real competitor to the 8800 GT 512MB. Even the dual-slot cooler doesn't add up.
The report that manufacturers are already at work on non-reference PCBs for the 8800 GT is a further hint, seeing as the "true" high-end products are usually "shielded" by Nvidia from such practices.
So, I think a G92 sporting 128 scalar processors, with a dual-slot cooler (higher clocks and, perhaps, a 320/384-bit bus), may be unveiled down the road.
I agree with that (though I'm cagey about a >256-bit bus).
The only fly in the ointment is my suspicion that NVidia has been struggling with 65nm ("A2" on the die picture at eXpreview doesn't really tell much, though, does it?) and that G92 was supposed to be a "summer" part (i.e. it's late, as is G80's true successor, though I'm waiting till the fat lady sings on that latter detail before getting excited)...
The core/shader clocks on this 65nm part are bizarrely low in comparison with the 80nm G84.
"Just wanna know if you guys think it will have 28 TMUs. It's a simple yes or no."
Then you are on the wrong site.
"How do you explain the 65nm 8400 GS sporting exactly the same clocks as the old 80nm version?"
I wasn't aware of such a part. And, besides, you'd hope a much smaller die would clock faster...
"And the fact that it came out at the same time as the 8800 GT/G92, despite the fact that it's a low-end, simple die shrink (well, with the exception of the new VP3 processor anyway...)?"
G92 hasn't launched yet...
"Or the fact that they've ordered a bunch of G92s (not G98/8400 GS 65nm) from UMC in addition to TSMC?"
... and now we have rumours of limited supply for G92, so if NVidia went to UMC, perhaps it was as a backup to problematic 65nm at TSMC.
"The delay story doesn't make sense in light of these events to me."
Whereas it makes a lot of sense to me.
"I'd just like to know exactly what it is about the 8800 GT's ALUs that gives them so much more performance than they were supposed to have."
We've not had the benefit of B3D's penetrating analysis of why G84's ALUs are better than G80's - we're told to take it on trust.
It sports a memory bus cut down from 320-bit to 256-bit (not even the 1800MHz GDDR3 can make up for the fact that it has a few GB/s less overall than a GTS), and yet it kills the 8800 GTS 640MB precisely where it shouldn't, according to the review.
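For what it's worth, the back-of-the-envelope bandwidth math bears that out. A quick sketch in Python (the 1600MHz effective GDDR3 on the GTS 640MB is its reference spec; the 1800MHz figure is the one quoted above):

    # Peak memory bandwidth = (bus width in bytes) x (effective data rate)
    def bandwidth_gb_s(bus_width_bits, effective_mhz):
        return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

    print(bandwidth_gb_s(320, 1600))  # 8800 GTS 640MB: 64.0 GB/s
    print(bandwidth_gb_s(256, 1800))  # 8800 GT 512MB: 57.6 GB/s

So the GT gives up roughly 6-7 GB/s on paper, which makes its wins over the GTS 640MB all the more surprising.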
Well, the performance gap does shrink to about half as you turn up the resolution, but then again, I know what you are getting at.
I still find it very puzzling that the 8800 GT is faster than the 8800 GTS - puzzling, that is, if and only if the 8800 GTS continues to be sold.
Who in their right mind would buy the more expensive GTS if it is that much slower in general?
Perhaps they will dump the GTS and still bring out an 8700 GT with 64/80 SPs.
There is a new GTS on its way.
Which is absolutely retarded. The GT nomenclature implies lower performance than the GTS. If there is a new GTS, it had better be named 8850, 8900 or some such.
What about ~800/2400MHz, with power consumption at 8800 GTX level or a bit below, like I heard?
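If those clocks panned out, the theoretical shader throughput would be enormous. A rough sketch in Python (the 2400MHz shader clock and a full 128 SPs are just the rumoured figures, and the 3 flops/clock/SP MADD+MUL counting is the usual theoretical-peak math):

    # Theoretical shader throughput in GFLOPS, counting MADD+MUL as 3 flops/clock/SP
    def gflops(num_sps, shader_mhz, flops_per_clock=3):
        return num_sps * shader_mhz * flops_per_clock / 1000

    print(gflops(128, 1350))  # 8800 GTX: 518.4 GFLOPS
    print(gflops(128, 2400))  # rumoured part at 2400MHz: 921.6 GFLOPS

Getting nearly 1.8x the GTX's shader throughput at roughly GTX-level power would be quite a jump for 65nm.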