NVIDIA G92 : Pre-review bits and pieces

Folks who'd like to grind their axes can always do so, of course, but it's utter nonsense to suggest that EOLing a part implies anything at all about its drivers' ability to continue to mature. The number of times we've seen, over the years and from both IHVs, new and improved functionality and performance added to old hardware that had been EOL'ed for years is really quite conclusive on that point.
 
The HD2900XT is End-Of-Lifed. It is no longer being made. It is no longer a product. To me that implies the "driver bugs" are really hardware bugs, and thus that the HD2900XT has reached its full potential.

It's a mixture of both... Drivers improved the X1800XT's performance months after its EOL. My opinion:

R520: drivers [-x---] hardware
R600: drivers [---x-] hardware
 
Waiting too long for driver maturation kind of hurts the marketability of the product, though that is a separate point.

AMD: "Buy our amazing new, top of the line (at high prices), product!"
Buyer: "I think I'll wait a year and a half for your drivers (and prices) to settle down, then buy the mainstream variant or wait for an inventory clearance."

That's good buyer policy, but murder on the bottom line.

I would say that if there is such a thing as a marketability bug, that would be a black mark on an otherwise interesting technical design.

AMD has several hardware refreshes based on this architecture in the pipeline, so hopefully they will benefit from R600's sacrifice.

I wish there were a way to quantify design fragility, to get a handle on a trend I think I'm seeing: products that lose a lot of performance, features, or stability when conditions are even slightly less than optimal.
 
Fillrate and texel rate. I'm losing track of the number of times I've posted this.
That is certainly a big part of the problem, but it doesn't explain the big performance drop compared to R580. I know there's no hardware AA resolve, but that hit should be very minimal.

If the resolve operation is BW limited, a fully uncompressed 1920x1200 4xAA 32bpp color buffer would take about 0.3 ms to read, and being BW limited would also mean G80 and R580 couldn't take a zero performance hit either. Now let's consider the case that it's not BW limited:

Assuming ATI is not braindead, they load and uncompress AA samples into the texture cache, and use the filtering units (which have gamma correction hardware) to average them and feed it to the shader, where it's written out to the framebuffer. This doesn't work for tent filters, but simple 4xAA is what all the benchmarks test. This can be done at 16 pixels/clock, so that's 0.2 ms for the same framebuffer as above.

Assuming ATI is braindead and loads the AA samples as point samples once per clock, you quarter the rate, so it's 0.8 ms.

0.2 ms drops you from 60.0 fps to 59.3 fps, or from 30.0 fps to 29.8 fps.
0.3 ms drops you from 60.0 fps to 58.9 fps, or from 30.0 fps to 29.7 fps.
0.8 ms drops you from 60.0 fps to 57.2 fps, or from 30.0 fps to 29.3 fps.

I don't think ATI is braindead, so the first two are far more likely. Even if ATI did do it the dumb way, it's still a lot less than the difference we're seeing between R600 and R580.
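For anyone who wants to check the arithmetic, here's a quick Python sketch of the same estimates. The memory bandwidth and core clock are my assumptions (roughly HD 2900 XT class), not figures from the post, so the times only approximately match the 0.2/0.3/0.8 ms above:

```python
# Back-of-the-envelope AA-resolve cost, reproducing the estimates above.
# Assumed figures: ~105 GB/s memory bandwidth, ~740 MHz core clock.

WIDTH, HEIGHT    = 1920, 1200
SAMPLES          = 4
BYTES_PER_SAMPLE = 4                 # 32bpp color
BANDWIDTH_BPS    = 105e9             # assumed memory bandwidth (bytes/s)
CORE_CLOCK_HZ    = 740e6             # assumed core clock

pixels       = WIDTH * HEIGHT
buffer_bytes = pixels * SAMPLES * BYTES_PER_SAMPLE   # ~36.9 MB uncompressed

# 1) Bandwidth-limited read of the fully uncompressed buffer
t_bw_ms     = buffer_bytes / BANDWIDTH_BPS * 1e3     # ~0.35 ms
# 2) Resolve through the filtering units at 16 pixels/clock
t_filter_ms = pixels / (16 * CORE_CLOCK_HZ) * 1e3    # ~0.19 ms
# 3) Point-sampled fetch, one sample per clock (4 pixels/clock)
t_point_ms  = pixels / (4 * CORE_CLOCK_HZ) * 1e3     # ~0.78 ms

def fps_after(base_fps, extra_ms):
    """Frame rate once extra_ms is added to every frame."""
    return 1000.0 / (1000.0 / base_fps + extra_ms)

for label, t in (("filter", t_filter_ms), ("bw", t_bw_ms), ("point", t_point_ms)):
    print(f"{label:6s}: {t:.2f} ms -> {fps_after(60, t):.1f} / {fps_after(30, t):.1f} fps")
```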
I guess not being desperate to conclude that R600 is bug-ridden (round here it seems fashionable to conclude it is buggy) marks me out as an apologist for it.

Come up with some evidence for bugs.
Is that good enough? I can go fish for benchmarks that show the drop is bigger than the amounts above, but we all know that the R580/R600 differential in AA hit is more than a few percent.

Oh yeah, we should move this tangent about hardware AA resolve over to another thread about R600 or RV670.
 
The HD2900XT is End-Of-Lifed. It is no longer being made. It is no longer a product.

This GPU is no more! It has ceased to be! 'E's expired and gone to meet 'is maker! 'E's a stiff! Bereft of life, 'e rests in peace! 'E's off the twig! 'E's kicked the bucket, 'e's shuffled off 'is mortal coil, rung down the curtain and joined the bleedin' choir invisible!! THIS IS AN EX-GPU!!

etc...
 
It's almost like the opposite of 3dfx: 3dfx refused to adopt new features, while ATI can't stop implementing new yet useless ones.
 
It's almost like the opposite of 3dfx: 3dfx refused to adopt new features, while ATI can't stop implementing new yet useless ones.
R600 has a lot of similarities with 3Dfx's Rampage project:

Rampage: based on the original Rampage project, which was meant to be the successor to the unexceptionable Voodoo Graphics

R600: based on the R400 project, which was meant to be the successor to the unexceptionable R300

Both were postponed and delayed many times, and the long gap was filled by originally unplanned products (V2/R420) based on the existing architectures.

Both came with new AA algorithms (Rampage: RGMS, R600: custom filter modes), both seem to be somewhat texturing/TF limited (less than half the trilinear fillrate of their competitors), and both architectures were designed with new levels of multi-GPU support in mind. Both were also highly focused on geometry performance (R600: unification, a very fast setup engine, GS performance; Rampage: the external SAGE geometry processor, with theoretical performance more than twice that of the competition). And both companies - 3Dfx and ATi - were bought by another company just a few months before launch.
 
Since this thread is about G92, this goes in here too ..

8800GTS : G80 : 96SP : 320b : 320MB : VP1 : Now
8800GTS : G80 : 96SP : 320b : 640MB : VP1 : Now
8800GTS : G80 : 112SP : 320b : 640MB : VP1 : Nov 19
8800GTS : G92 : 128SP : 256b : 512MB : VP2 : Dec 3

http://www.expreview.com/news/hard/2007-11-03/1194066469d6767.html

Can we expect the G92 128SP 512MB GTS to outclass the GTX? Or will its lower bandwidth, pixel fillrate and memory mean the GTX stays on top?
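On the bandwidth half of that question, the bus-width difference alone is easy to put into numbers. A rough sketch (the G92 GTS memory clock is purely my assumption, since it isn't in the table above):

```python
# Peak memory bandwidth from bus width and effective memory clock.
# The G92 GTS memory clock below is an assumption, not a confirmed spec.

def bandwidth_gb_s(bus_width_bits, mem_clock_mhz_effective):
    """Peak bandwidth in GB/s for a GDDR bus."""
    return bus_width_bits / 8 * mem_clock_mhz_effective * 1e6 / 1e9

gtx_bw = bandwidth_gb_s(384, 1800)        # 8800 GTX: 384-bit, 900 MHz GDDR3
gts_bw = bandwidth_gb_s(256, 1900)        # G92 GTS: 256-bit, ~950 MHz assumed

print(f"8800 GTX: {gtx_bw:.1f} GB/s")     # ~86 GB/s
print(f"G92 GTS : {gts_bw:.1f} GB/s")     # ~61 GB/s
```

Even with somewhat faster memory, the 256-bit bus leaves the G92 part well short of the GTX, so any win would have to come from the extra shaders and higher clocks.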
 
Yeah, using 1.2 V with a regular GTX-type dual-slot cooler, an 800 MHz core should be no problem. Combine that with perhaps 24 ROPs, 64 TMUs and a 384-bit bus and it's looking like a single-GPU GTX killer, although I'd really like to see NVIDIA go with a 512-bit bus, allowing for 32 ROPs and more bandwidth to use the fillrate more efficiently.
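Just to put that speculation into raw-rate terms, a quick sketch. The 800 MHz / 24 ROP / 64 TMU configuration is the guess above, not a confirmed spec; the GTX reference figure is its stock 575 MHz number:

```python
# Peak rates for the speculated config (800 MHz, 24 ROPs, 64 TMUs).
# These are raw per-clock peaks only; the config itself is a guess.

def pixel_fill_gpix_s(rops, core_mhz):
    return rops * core_mhz / 1000.0          # Gpixels/s

def texel_rate_gtex_s(tmus, core_mhz):
    return tmus * core_mhz / 1000.0          # Gtexels/s

spec_fill  = pixel_fill_gpix_s(24, 800)      # 19.2 Gpixels/s
spec_texel = texel_rate_gtex_s(64, 800)      # 51.2 Gtexels/s

# Reference: a stock 8800 GTX manages 24 * 575 = 13.8 Gpixels/s, and
# roughly 18.4-36.8 Gtexels/s depending on whether you count G80's
# 32 address units or 64 filter units.
gtx_fill = pixel_fill_gpix_s(24, 575)        # 13.8 Gpixels/s
```

On paper that would put such a part well clear of a GTX; whether a 384-bit bus could keep it fed is exactly what the 512-bit suggestion is getting at.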
 
Since this thread is about G92, this goes in here too ..

8800GTS : G80 : 96SP : 320b : 320MB : VP1 : Now
8800GTS : G80 : 96SP : 320b : 640MB : VP1 : Now
8800GTS : G80 : 112SP : 320b : 640MB : VP1 : Nov 19
8800GTS : G92 : 128SP : 256b : 512MB : VP2 : Dec 3

http://www.expreview.com/news/hard/2007-11-03/1194066469d6767.html


So, there are going to be 4 "8800GTS" cards? WTF? Did Nvidia sign some sort of treaty that precludes them from using anything but 8800 as the name for their cards?
 
Well, 5 if there turns out to be a 1GB version as well.

/prepares for the end as the apocalypse is surely near...
 