If you're talking about G96, which replaced G94 on the roadmap and is scheduled for March/April 2008, it's not what I was talking about...
Quote: And without another high-end chip in sight, seems fairly logical to me.

Why is G80@65nm = G92 not a high-end chip?
Quote: Boy, is he a supreme hater or what?

Maybe he expected "A Perfect 10"?
Quote: Certainly a bit of a rant there from Charlie.

Cherry-picked benchmarks are one thing; flat-out lies about competing products are another. The latter is what Charlie thinks Nvidia is guilty of: that's what's got him so riled.
An IHV cherry-picking benchmarks/resolutions/etc. for a product release to make their product look better! Who'd have thunk it?
Quote: I just love how he gets so riled up over NV using AA when benchmarking a high-end GPU. Oh the humanity!

Pre-G80, did Nvidia stress turning on AF all the time on their high-end GPUs?
Quote: In their review of the 8800 GT, PConline measured a die size of 330mm².

Blimey - I can't make out if the calipers are measuring the die or the die+sealant. But if it's really 330mm² then 289mm² was an entertaining diversion and makes the comparison with RV670's 194mm² even more fun.
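Rough numbers show why the die-size question matters. A back-of-envelope sketch, assuming a 300mm wafer and the standard gross-die approximation (defect yield and reticle constraints ignored, so the counts are illustrative only):

```python
import math

# Gross dies per wafer: wafer area over die area, minus an edge-loss term.
# Standard approximation; assumes a 300mm wafer and ignores defect yield.
def gross_dies(die_area_mm2, wafer_diameter_mm=300):
    radius = wafer_diameter_mm / 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return math.pi * radius ** 2 / die_area_mm2 - edge_loss

for name, area in [("G92 if 330mm²", 330), ("G92 if 289mm²", 289), ("RV670 at 194mm²", 194)]:
    print(f"{name}: ~{gross_dies(area):.0f} dies per wafer")
```

On those assumptions RV670 gets roughly 1.8x as many candidate dies per wafer as a 330mm² G92, which is why 330 vs 289 is more than trivia.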
Quote: Why is G80@65nm = G92 not a high-end chip?

That memory would be too slow to achieve 8800 GTX performance (though it'll be interesting to see how close). And there's still the question of whether the chip can support GDDR4.
Put a dual-slot cooler on it, raise the VGPU to 1.25-1.3V, clock it up to ~800/2400MHz, and buy some GDDR4 from Samsung rated at 1.2GHz+.
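The "too slow" objection is simple bandwidth arithmetic. A quick sketch; the 256-bit bus and the clocks below are the figures being kicked around in this thread, not confirmed specs:

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8 bytes) * effective data rate.
def bandwidth_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000

print(f"8800 GTX, 384-bit @ 1800MHz effective: {bandwidth_gb_s(384, 1800):.1f} GB/s")
print(f"8800 GT,  256-bit @ 1800MHz effective: {bandwidth_gb_s(256, 1800):.1f} GB/s")
print(f"G92 with 1.2GHz GDDR4 (2400 effective): {bandwidth_gb_s(256, 2400):.1f} GB/s")
```

Even the hypothetical GDDR4 build tops out at 76.8 GB/s against the GTX's 86.4 GB/s, which is why GDDR4 support is the interesting question.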
Quote: Cherry-picked benchmarks are one thing; flat-out lies about competing products are another. The latter is what Charlie thinks Nvidia is guilty of: that's what's got him so riled.

Then why does he stay quiet about it whenever it's AMD/ATI doing it?
If you read his posts (I wouldn't call them "stories") there's always a sense of an almost personal grudge against Nvidia, for reasons I can't quite ascertain.
CJ said: If yields really were that good they'd come out with a full ASIC configuration to blow the competition out of the water before they even got the chance to launch it, just like they did with G80... but instead they opted for the 'safer' 112 SPs.

First you say they needed to upgrade the specs to compete; now you're implying they downgraded the specs for yields... make up your mind. Furthermore, unless you can cite the MSRP of the GT's competition, you're in no position to determine whether anything is or is not getting "blown out of the water."
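The yield trade-off is easier to see with a toy model. A sketch using the classic Poisson defect model; the defect density and shader-area fraction are invented for illustration, and 8 clusters of 16 SPs (so 112 SPs = one cluster disabled) is the assumed G92 layout:

```python
import math

# Toy Poisson defect model: P(k defects on a die) = lam**k * exp(-lam) / k!
# All inputs are illustrative assumptions, not foundry data.
defect_density = 0.004   # defects per mm² (assumed)
die_area = 330.0         # mm², per the measurement discussed above
shader_fraction = 0.5    # fraction of die area taken by SP clusters (assumed)
lam = defect_density * die_area

p_full = math.exp(-lam)              # zero defects: sellable as a full 128 SP part
p_one = lam * math.exp(-lam)         # exactly one defect somewhere on the die
p_salvage = p_one * shader_fraction  # defect inside one SP cluster: disable it, ship 112 SPs

print(f"sellable as full 128 SP parts: {p_full:.1%}")
print(f"additionally salvageable as 112 SP parts: {p_salvage:.1%}")
```

On these made-up inputs the 112 SP salvage bin adds roughly two-thirds more sellable dies, which is the usual reason to launch below the full configuration.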
Quote: If you read his posts (I wouldn't call them "stories") there's always a sense of an almost personal grudge against Nvidia, for reasons I can't quite ascertain.

No, Charlie has been an NV hater for over a year now and to date has never reported one bad thing about ATI.

Ad hominem.
Quote: Blimey - I can't make out if the calipers are measuring the die or the die+sealant.

On the zoomed-in pictures it looks like they only measured the die.
Quote: I think you're relying too much upon it being 384-bit.

Why should NV give up this advantage of their architecture?
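For what it's worth, the advantage in question is just bus-width arithmetic: at the same memory clock f_mem, peak bandwidth scales linearly with bus width, so

\[
\frac{BW_{384\text{-bit}}}{BW_{256\text{-bit}}} = \frac{384 \cdot f_{mem}}{256 \cdot f_{mem}} = 1.5
\]

i.e. a 256-bit part needs memory clocked 50% higher just to match G80's bandwidth at the same width - which is exactly what the GDDR4 speculation above is trying to buy back.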