Tech-Report blasts GeForce FX

John Reynolds said:
but consumers, who'll most likely never see this really supported in games within the product's lifespan, will instead be forced to 'enjoy' its inferior AA.

I have a 4200 now, and I have to say that I still don't play with AA on. It would be nice to have at least 2X on as a minimum, but it still plays havoc with some games. (I've noticed BF1942 doesn't like it much: laggy cursors and much slower framerates.)

So, anyhow, ANY card (be it a 9700 or a 5800) that would allow AA to be turned on at a decent level with near-zero impact would be appreciated. I'm not looking for the BEST AA around; I'm simply looking for a good baseline level of AA that has essentially no impact on my gaming.

The moral of this story is that I'll look at both ATI's and Nvidia's cards late next spring and decide which I can go with. Since I've got four gaming computers, I don't really feel like I can keep buying the fastest video card on a regular basis and stay married. She's going to want to have enough money to see a movie occasionally, y'know?
 
I have an R300 Pro, and on older games (Counter-Strike, etc.) I play with max quality: 6X FSAA and 16x aniso. However, on newer games like 007: Agent Under Fire, I was forced to dial down to 2X FSAA.

On the Doom3 demo, forget it. Lowest res, lowest IQ to squeeze every last frame.

I think ATI has beaten Nvidia somewhat this time around. I agree with John; this is the one thing about the NV30 that let me down. I will also be let down if their anisotropic implementation is still a performance killer. ATI has made the decision that most people won't see the artifacts from their implementation, and that's true. Nvidia is betting that most people won't see any difference between ATI's 6X and their 4XS or 6XS in game. It's true that we hardcore users will, but most gamers won't.

At COMDEX, I spent a lot of time looking at flat panels everywhere. Samsung was demoing lots of DLP rear projectors, LCD panels, and plasma displays (over 50 of them!), and the salespeople were incensed that I kept asking about artifacts! Except for the plasma displays, the DLP and LCD technologies still show noise, rainbows, and other artifacts during fast-moving scenes. Moreover, because of the way they took one video source and split it 80 different ways to all the monitors, there was even more noise.

However, what was blatantly obvious to me was invisible to my wife and to most of the other people who were drooling over these displays.
 
DemoCoder, when 3D acceleration was introduced a few years back, I met quite a few people who claimed they didn't notice a significant difference between point-sampled and bilinear-filtered texture mapping.
 
DemoCoder said:
On the Doom3 demo, forget it. Lowest res, lowest IQ to squeeze every last frame.

Weird... on my machine it's around 30-40 fps with 4xAA and 8xAF at 1024x768/32-bit.

EDIT: Yes, I've modified the cfg by hand.
 
megadrive0088 said:
ATI must be extreeeemely pleased with its purchase of ArtX for $400 million around April of 2000. By acquiring ArtX, ATI has brought itself to rough parity with Nvidia. Some would even argue that ATI has surpassed Nvidia.
What's funny is, IIRC, this is the same company that caused a stir at some 3D site (I forgot which, but one I frequented) when employees posted messages without revealing their company affiliation. Hopefully they've realized actions speak louder than fake words. I'm impressed with their parts so far.

Sorry, didn't read the rest of the thread to see if this tangential tidbit has been brought up b/c I've had my fill of speculation. I'm content to sit back and wait to see what nV delivers in due time, probably moreso b/c NV30 isn't that big of a leap from R300.

BTW, I'm most interested in whether the GFX will actually have "no-impact" FSAA, or if that's more marketing BS.
 
Well, according to Nvidia's benchmarks it gets 40 fps in the 3DMark Nature test at 1280x1024 with 4x FSAA and 8x AF. Unless it is far slower than the 9700 without AA, then yes, it takes a hit with FSAA. Hardly surprising, though, as they say 'effectively' free FSAA, i.e. free when viewing one single-colour polygon that covers the entire screen so that no edges can be seen. :)
 
Bambers said:
Hardly surprising, though, as they say 'effectively' free FSAA, i.e. free when viewing one single-colour polygon that covers the entire screen so that no edges can be seen. :)

Well, these kinds of PR claims are freely bandied about; apparently Accuview antialiasing doesn't have any performance hit either. :-?
 
Evildeus said:
jvd said:
Hey, everyone bashed the Radeon 8500 for being later than the GeForce 3, and yet it did far more than even the GeForce 4 can do.
Yes, but the 8500 wasn't faster, right? ;)

Actually, it was, but its drivers were immature. We're looking at essentially the same situation with R300 vs. NV30 now. The R8500 was superior to the GF3 Ti500 and now beats it across the board; at release it was slower, only as fast as a GF3 Ti200.

NV30 is superior to the R9700 in almost every way (true, except for the memory bandwidth issue), and eventually it will probably beat it across the board (assuming memory bandwidth doesn't kill it), but at release it will have immature drivers compared to the R9700's six-month-old, tweaked drivers. Will it be faster? Maybe, maybe not...

Randell said:
A lower-clocked, equivalent-performance NV30 board will still sell well on brand recognition, loyalty, and belief that the driver quality is superior, and JC will undoubtedly endorse the 5800/5800U for Doom3 based on it being faster than the R300 in Doom3 (however small or large the difference) and on OGL driver quality.

Times are changing, though... ATI regained a lot of mindshare with the R9700, and apparently it is selling well -- it's already surpassed 1 million units (although I can't understand why, since it's so expensive, but anyway that has nothing to do with ATi/Nvidia).
 
Pete:
Site was www.arstechnica.com
The undercover employee acting as amazed, neutral persons (in multiple guises) was Rick Calle, Director of Marketing for ArtX.
He counts as one of the big slimeballs of the internet to me.

Btw
Some time ago, when someone counted up 5 or 6 ATI employees at Beyond3D, DaveBaumann said something that strongly suggested there are actually a lot more in undercover mode.
 
Basic,

Umm... I'm pretty sure that "undercover" mode, if DB said that, is "lurker" mode, i.e. reading and not posting. The people making pro-ATI statements here, either subtly or directly, are, I think, either a) pretty clearly not ATI employees and generally labelled as "fanbois", whether correctly or not, or b) people who have clearly stated they are ATI employees.

You seem to be implying (rather plainly) that there is some "slimeball" tactic going on in the tradition of that ArtX marketing director (I remember someone mentioning it before somewhere too)... did I miss some subtle ATI-slanted posts that make you think this?
 
Basic said:
Pete:
Site was www.arstechnica.com
The undercover employee acting as amazed, neutral persons (in multiple guises) was Rick Calle, Director of Marketing for ArtX.
He counts as one of the big slimeballs of the internet to me.

I'm pretty sure he was let go not long after that. Deservedly so.

Basic said:
Btw
Some time ago, when someone counted up 5 or 6 ATI employees at Beyond3D, DaveBaumann said something that strongly suggested there are actually a lot more in undercover mode.
Bigus Dickus said:
And there aren't nV employees going anonymous here as well?

Wouldn't you expect both to be true? Presumably there are a fair number of people at both companies familiar with and interested in 3d graphics. While they aren't official representatives of their companies (and wouldn't want to be seen as such), they certainly would have relevant opinions and (non-confidential) information they could discuss.

If one were to come out and say, "I work for XYZ," it might both drive some to look for hidden meaning in what they say or don't say and paint them with a bias that isn't really there. They'd also have to be much more careful about what they say, for fear of it being taken as official company policy.
 
John Reynolds said:
Agreed. Too much psychological fall-out from ATi beating NV to market with the 9700. That said, however, I'm still extremely disappointed with NV30's AA support. The more advanced shading capabilities might be attractive to developers/programmers, but consumers, who'll most likely never see this really supported in games within the product's lifespan, will instead be forced to 'enjoy' its inferior AA. And here I thought it was all about better pixels... apparently not (at least from the above perspective).
As you put it, it is a matter of perspective.

I too am surprised an ordered grid is still used in some of the GF FX AA modes... though I'm not "extremely" disappointed, because I'll reserve judgment until I experience the GF FX first hand, with AA quality/performance/resolution criteria in mind.

But the product was delayed... its AA is "inferior" because the product was delayed.
 
I too am disappointed in NV30's anti-aliasing method, yet I'm hopeful that whatever was left out of, or taken out of, NV30 will make it back into NV35.

NV35 is (or was) intended for a 1H 2003 introduction. Hopefully that will not slip because of NV30's late release...
 
You're hoping for too much. I would bet that NV35 is largely a better-yielding, cooler, more highly clocked chip. They are not going to add significant architectural changes in such a short period.
 
DeathKnight said:
Well, I think most people are expecting some uber-1337 performance figures for the NV30 over the 9700. I think they should be looking more at the programmability and features, which seem to put a bigger burden on the 9700 (that is, if developers actually take advantage of them).

Comparisons seem rather worthless and a waste of breath at the moment. When NV30s are actually tested for real-world performance and IQ against the 9700, and it turns out to be a close race, then the complaints will seem a bit more justifiable.


A couple of things, though... On looping: you could do 64K or so worth of shader instructions with the R300, and it looks like nVidia's talking about something very similar. But the main thing is that, with either chip, I think you'll be getting out of "real-time" mode when you start doing loops with thousands of instructions. I saw in the nVidia demo of the Ogre dance that it took, I believe, about 100 instructions (or did I misinterpret that?). Anyway, I don't think the differences in "programmability" as far as "real-time" 3D goes are as vast as nVidia wishes to portray them -- if they exist at all, for that matter. (I'd appreciate any comments if I'm all wet here.)
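Just to put rough numbers on that, here's a back-of-envelope sketch in Python; the clock rate, vertex-unit count, and per-clock throughput are my own assumptions for illustration, not published NV30/R300 specs:

Code:
# Back-of-envelope: how many vertices per frame could a maximum-length
# vertex program sustain?  All hardware figures below are assumed.

clock_hz           = 500e6   # assumed core clock
vertex_units       = 3       # assumed number of vertex shader units
instr_per_unit_clk = 1       # assume one instruction retired per unit per clock
target_fps         = 30

instr_per_frame = clock_hz * vertex_units * instr_per_unit_clk / target_fps

for shader_len in (65536, 100, 50):   # "64K" program vs. more typical lengths
    vertices = int(instr_per_frame / shader_len)
    print(f"{shader_len:6d}-instruction shader -> ~{vertices:,} vertices/frame")

Under those assumptions a 64K-instruction shader leaves room for only a few hundred vertices per frame at 30 fps, versus around a million vertices with a 50-instruction shader, so the huge instruction limit looks like headroom for offline work rather than something you'd spend on every vertex in real time.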
 
Yeah, what's the point of being able to do 65,000+ vertex shader instructions when running them will be impossible at even 30 fps? I suppose it would only be useful if vast arrays of NV30s were hooked up into render farms in order to speed up the production of movie-quality CG frames by several orders of magnitude over CPU farms.

We know that arrays of up to 256 R300s can work together, so Nvidia must have a similar capability. I guess it makes sense: Nvidia wants to sell vast numbers of NV30s to movie production houses and studios, perhaps replacing companies like Pixar. With each successive generation of GPU (i.e. NV40, NV50), these GPU farms could be upgraded to the latest chips for further speed increases and shader advancements.

It could also work on a smaller scale, with single systems shipping with 2-32 GPUs in them for, say, television-grade CG production, or for the real-time flight-simulation industry that Evans & Sutherland currently dominates. I think E&S uses a twin-R200 board for one of their simulators.

I'd also like to see arcade use of R300 and NV30: a whole board full of as many as possible, with enough memory to support it. Massive power, low cost.
 
Well, it wouldn't make sense in the pathological case where every vertex has a 65K shader attached, but there are some situations where, for a certain special effect, you may need a longer shader. Most shaders might average 30-50 instructions, but a few might need a few hundred.
 
DemoCoder said:
Well, it wouldn't make sense in the pathological case where every vertex has a 65K shader attached, but there are some situations where, for a certain special effect, you may need a longer shader. Most shaders might average 30-50 instructions, but a few might need a few hundred.
The same goes for pixel shaders: those 1024 instructions might come in handy when doing nice refraction effects and such. It's entirely possible to consistently use such long shaders on, umm... say, 10% of the screen, and more reasonable-length shaders for the rest.
For example, let's say you have a cave littered with crystals, and you do an LOD-based switch to a refractive shader for nearby crystals, using a simpler one for those farther away. It would make for quite the eye candy.
What matters for performance is your average shader length per scene, not your _longest_ shader length, and the average can be brought down with several tricks, like LOD switching for shaders (see the sketch below).
So 100+ instruction shaders will not be quite so useless even on this generation of hardware.
It's similar to cube-mapping support on the GeForce 256: with smart and sparing use it made for quite some effects, but the performance obviously became abysmal when it was used in the simplistic "make every piece of shiny material in the scene environment-mapped and reflective, so that one would need sunglasses to bear watching it" scenario.
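A minimal sketch of that LOD switch idea, in Python pseudocode (the shader names, instruction counts, and distance thresholds are made up for illustration, not taken from any real engine):

Code:
# Pick an expensive refractive shader only for crystals near the camera,
# and cheaper approximations for distant ones, so the *average* shader
# cost per scene stays low even though the longest shader is 300+
# instructions.  All names and numbers here are hypothetical.

CRYSTAL_SHADERS = [
    # (max distance, shader name,            approx. instruction count)
    (25.0,          "crystal_refract_hq",    320),
    (80.0,          "crystal_envmap_cheap",  40),
    (float("inf"),  "crystal_flat_specular", 12),
]

def select_crystal_shader(distance_to_camera: float) -> str:
    """Return the shader to bind for a crystal at the given distance."""
    for max_dist, shader, _cost in CRYSTAL_SHADERS:
        if distance_to_camera <= max_dist:
            return shader
    return CRYSTAL_SHADERS[-1][1]

# Only the handful of crystals inside the 25-unit radius pay for the
# long refraction shader; everything else runs the short ones.
print(select_crystal_shader(10.0))   # crystal_refract_hq
print(select_crystal_shader(60.0))   # crystal_envmap_cheap
print(select_crystal_shader(500.0))  # crystal_flat_specular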
 