Doomtrooper said:
This upgrade cycle has been happening for years, so why all of a sudden is it not 'ok'? When buying a video card today, since the feature set is way ahead of developers, one should look to what the card will deliver over its lifespan, i.e. the 9700 can deliver very high frame rates in current popular titles with FSAA and AF.
Hang on just a second. I never said that running current games with the highest possible IQ (by which I mean resolution, framerate, in-game settings and AA/AF) is not a valid reason to buy a top-end card. It's a great reason, and the 9700 Pro is a fabulous card on this measure. I'm just saying that *some* people value longevity as much or more than performance on current games. (Not to say the 9700 won't be a great card on that measure either.)
Basic said:
Five people here at work who just got new home computers with R9700PRO / R9700TX cards, replacing old machines that all had TNT/GF2MX-range cards, say that Dave H has a point. They are non-gamers with kids who might play some games, or ...uhm... "softcore" gamers. They just wanted a computer that would last as long as possible without further care.
Conclusive market data! I am vindicated!!
With that out of the way, onto the bigger question: what makes a long-lasting card?
I think it's best to acknowledge at this point that this question, like all the important debates of our time, is really just another way of saying "R300 vs NV30!!!!" That is, since we're going to be reading the characteristics of these GPUs into everything we say on the topic, we might as well be explicit about it.
I think I'm not going too far out on a limb to say that, while of course we need to wait for benchmark results to be certain, most of us substantially expect that the GFFX 5800 will outperform the 9700 Pro on current games without AA/AF (but no one will care, because the framerates for both cards are higher than anyone could want, and most of the time they'll be CPU-limited anyway), but that the 9700 Pro will draw more or less equal on current games with AA/AF on (plus its RGMS will look better).
The issue of which card will do better on, say, a late-2004 game is of course harder to call. I think it's fair to say that those games--at medium settings, with AA/AF off--will probably not be bandwidth-limited on either card, for the simple reason that the low-end mainstream card (or chip) of late 2004 will probably not have >15 GB/s of bandwidth; it's just too expensive.
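Just to put a number on that, here's the sort of back-of-envelope arithmetic I have in mind for framebuffer traffic at medium settings (the overdraw and bytes-per-pixel-touched figures are pure guesses on my part):

    #include <cstdio>

    int main() {
        const double pixels   = 1024.0 * 768.0; // ~786k pixels per frame
        const double overdraw = 3.0;            // guessed average depth complexity
        const double bytes_px = 12.0;           // guess: 4B color write + 4B z read + 4B z write
        const double fps      = 60.0;

        double fb_traffic = pixels * overdraw * bytes_px * fps; // bytes per second
        std::printf("framebuffer traffic: ~%.1f GB/s (before textures)\n", fb_traffic / 1e9);
        return 0;
    }

That comes out to roughly 1.7 GB/s; even after piling texture reads on top you're a long way from 15 GB/s, which is why I don't think bandwidth will be the wall at those settings.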
Will they be fillrate-limited (thus benefiting the GFFX)? Could be. The Doom 3 engine seems (as I understand it) to be a real fillrate-guzzler: 1 pass to lay down z-buffer values, then 1 pass for every light source, each of those consisting of something like 5 loops back through the pipeline--one for stencil-buffer calculations, and 2 each (color and bump map) for diffuse and specular lighting. (What's the proper term for "loops back", anyway?)
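To make that pass structure concrete, here's roughly how I picture the per-frame loop. This is only a sketch of my understanding, not real id code, and every name in it is a placeholder I made up:

    #include <vector>

    // Placeholder types and functions -- a sketch of the pass structure as I
    // understand it, not actual engine code.
    struct Light {};
    struct Scene { std::vector<Light> lights; };

    void FillZBuffer(const Scene&) {}                  // pass 1: depth only, no color
    void RenderShadowVolumesToStencil(const Light&) {} // stencil shadow volumes
    void AddDiffuseColor(const Light&) {}
    void AddDiffuseBumpMap(const Light&) {}
    void AddSpecularColor(const Light&) {}
    void AddSpecularBumpMap(const Light&) {}

    void RenderFrame(const Scene& scene) {
        FillZBuffer(scene);  // one depth-only pass for the whole scene
        for (size_t i = 0; i < scene.lights.size(); ++i) {
            const Light& light = scene.lights[i];
            // then something like five more trips through the pipeline per light:
            RenderShadowVolumesToStencil(light);
            AddDiffuseColor(light);
            AddDiffuseBumpMap(light);
            AddSpecularColor(light);
            AddSpecularBumpMap(light);
        }
    }

If that picture is even roughly right, fill requirements scale almost linearly with the number of visible lights.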
Of course both the 9700 Pro and GFFX 5800 will do just fine on Doom 3 itself; I'm more wondering what games licensed on the D3 engine will be like, as well as games built on other engines but on similar principles. Presumably light counts will go up, which, as I understand it, will demand more and more fillrate. (And I doubt "# of lights" will be easily adjustable in the game settings.)
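As a sanity check on the "more lights = more fill" worry, a crude estimate (screen size, overdraw, light count and framerate are all numbers I'm inventing, and it treats every pass as a full screen's worth of fill, which it isn't):

    #include <cstdio>

    int main() {
        const double pixels           = 1024.0 * 768.0; // screen pixels
        const double overdraw         = 2.0;            // guessed average overdraw per pass
        const double passes_per_light = 5.0;            // from the pass structure above
        const double fps              = 60.0;
        const double peak_fill        = 2.6e9;          // 9700 Pro: 8 pipes x 325 MHz, pixels/s

        for (int lights = 2; lights <= 8; lights += 2) {
            double fill_needed = pixels * overdraw * (1.0 + passes_per_light * lights) * fps;
            std::printf("%d lights: ~%.2f Gpixels/s needed (%.0f%% of 9700 Pro peak)\n",
                        lights, fill_needed / 1e9, 100.0 * fill_needed / peak_fill);
        }
        return 0;
    }

By that (very rough) yardstick, going from 4 to 6 visible lights is already enough to eat the 9700 Pro's entire 2.6 Gpixels/s of theoretical fillrate.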
Another issue is that Doom 3 has a relatively low poly count, apparently for performance reasons. Can someone explain this to me? Why would adding geometry stress the D3 engine more than it would most engines? After all, the big performance hit of the D3 engine is that everything is calculated per-pixel instead of per-vertex! My only guess is that keeping poly count low helps reduce "overdraw" (from the POV of the light) in the stencil-buffer calculations, in which case increasing poly count would be yet another fillrate hit.
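If my guess is right, the cost would look something like this toy model (every constant here is invented; the point is only how the terms multiply):

    #include <cstdio>

    // Toy model of the guess above: shadow volumes are quads extruded from the
    // silhouette edges of each mesh, and those quads get rasterized into the
    // stencil buffer for every light that touches the mesh. All numbers invented.
    int main() {
        const double silhouette_edges = 300.0;  // guess: edges on one character's silhouette
        const double quad_area_px     = 5000.0; // guess: avg on-screen area of one extruded quad
        const double lights           = 4.0;    // guess: lights touching that character
        const double characters       = 6.0;    // guess: such meshes on screen

        double stencil_fill = silhouette_edges * quad_area_px * lights * characters;
        std::printf("~%.0f Mpixels of stencil-only fill per frame\n", stencil_fill / 1e6);
        // More triangles presumably means more (and finer) silhouette edges,
        // which scales this number up -- hence, if the guess is right, the fillrate hit.
        return 0;
    }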
Well, so far things are looking good for the GFFX in the longevity contest. But another possibility is that our late-2004 game will be shader-limited. While Doom 3 is limited to DX7 features, presumably its successors, as well as engines that barely resemble it, will be heavy into shaders. And AFAICT, we know next to nothing about how R300 and NV30 compare in shader performance. (Indeed, all we know is that R300 can theoretically T&L 43% more vertices per clock than NV30, but that doesn't tell us anything about how they compare on other vertex shaders, much less pixel shaders.)
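Even that 43%-per-clock figure needs a caveat, since the two chips won't run at the same clock. Taking the expected clocks at face value (325 MHz for the 9700 Pro, 500 MHz for the GFFX 5800 Ultra), the per-second gap mostly disappears:

    #include <cstdio>

    int main() {
        // The per-clock vertex-throughput edge quoted above, converted to per-second
        // using the clocks as currently expected (325 MHz R300, 500 MHz NV30 Ultra).
        const double r300_clock          = 325e6;
        const double nv30_clock          = 500e6;
        const double r300_per_clock_edge = 1.43;  // "43% more vertices per clock"

        double ratio_per_second = (r300_per_clock_edge * r300_clock) / nv30_clock;
        std::printf("R300/NV30 theoretical vertex rate, per second: %.2f\n", ratio_per_second);
        // ~0.93, i.e. roughly a wash -- and even that says nothing about real
        // vertex shaders, let alone pixel shaders.
        return 0;
    }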
And this would seem to be a big hole in our current benchmarking capabilities. In the past, today's new game at high settings was often a pretty good proxy for tomorrow's game at lower settings; the principles, at least, were the same, just with more textures applied and such. Shaders, on the other hand, are a completely separate ballgame, and so far the only software we have that really exercises them is IHV-supplied demos (not likely to be a fair comparison).
Hopefully HLSLs will allow the rapid creation of plenty of representative shader benchmarks. The upcoming (hopefully imminent) round of GFFX reviews won't have them, though, and that IMO is a big loss. Perhaps 3dMark 2003 will step up to the plate, although that may be too much to hope for. Of course there will be plenty of controversy over what constitutes a well-written, "representative", forward-looking shader benchmark, but controversy is at least usually a sign of an interesting problem...
So...is there any good way for the longevity-minded buyer to choose between high-end cards at their introduction? When will there be good benchmarks to help with this task? Or are longevity-minded buyers, being the sort of people who don't want to replace their video card, also not the sort of people who will pay attention even if good benchmarks are developed to try to predict this sort of thing? Won't they just go with whatever top-end card (whether ATI or Nvidia) Dell is offering that day?
Hmm....