New FX ExtremeTech Review

It's worth noting that these are 16x12 results only. Good for bragging rights, certainly, but what percentage of the people who would buy this card have a 21" monitor? If the 12x10 numbers look much better for NV, then that could make a significant difference.
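For a sense of scale, 16x12 is nearly half again as many pixels per frame as 12x10, which is why fill rate and bandwidth limits bite so much harder up there. Quick back-of-the-envelope:

[code]
# Pixels per frame at each benchmark resolution
px_16x12 = 1600 * 1200   # 1,920,000
px_12x10 = 1280 * 1024   # 1,310,720
print(px_16x12 / px_12x10)  # ~1.46, about 46% more pixels to fill at 16x12
[/code]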
 
The advanced Pixel Shader test was complained about simply because the SE version of 3DMark was marketed as a DX8.1 benchmark, yet no 8.1 features were allowed in scoring.

Not that hard to understand... and the complaints were justified.
 
Chalnoth said:
Well, one thing to note is that those UT2k3 benches were in botmatch mode, and are therefore both CPU-limited and unreliable.

Yes, lol.
I read their statement on that:
For UT2003, we record only the botmatch score, as the fly-by test isn't at all representative of game-play.
and I was like, WTF?
Don't these guys know that botmatch scores are totally system-dependent???
 
The most telling part to me is the basic performance in non-typical benchmark titles. Namely, Comanche, Nascar, and IL-2. Under FSAA/AF, the R300 just smokes the living crap out of the FX.

To me, these are always the most interesting numbers to look at, because these are numbers that (IMHO) are more indicative of "real" performance in typical gaming situations.

If drivers are really holding this thing back, it almost makes you wonder why they even allowed these things to be reviewed in the first place... although it's understandable, given the ridiculous delays with this product.

I'm _really_ wondering what effect this will have on R350 (if any at all).
 
I know we're all excited about NV30 (I just finished reading ET's review :) ), but inlining images from their server isn't cool, IMO. Their server load is heavy enough as it is (hence the sporadic loading); no need to add to it without giving ET their (probably meager) ad dollars.

True, this focus on 16x12 may not be realistic, even for $400 video card buyers. 12x10 is much more realistic for 17-19" LCD and CRT owners.
 
Chal:

Excuses, excuses... :LOL:

Why would botmatch benches be CPU-limited in UT? Isn't a timedemo just playback of a recorded demo, so bot AI wouldn't actually be running during playback? I mean, in Quake 2 timedemos, for example, enemy AI isn't running while the demo plays back, afaik!

Anyway, even presuming bot AI *is* running, why would that make the bench unreliable? And even if it was CPU-limited, shouldn't both cards be equally limited? The FX still got its butt kicked with AA+AF, didn't it? So how do you explain that away, really?

*G*
 
geo said:
It's worth noting that these are 16x12 results only. Good for bragging rights, certainly, but what percentage of the people who would buy this card have a 21" monitor?

Pretty high percentage - this card costs about $450.
 
Grall said:
Why would botmatch benches be CPU-limited in UT? Isn't a timedemo just playback of a recorded demo, so bot AI wouldn't actually be running during playback? I mean, in Quake 2 timedemos, for example, enemy AI isn't running while the demo plays back, afaik!

Anyway, even presuming bot AI *is* running, why would that make the bench unreliable? And even if it was CPU-limited, shouldn't both cards be equally limited? The FX still got its butt kicked with AA+AF, didn't it? So how do you explain that away, really?

*G*

Try it yourself sometime: fire up one of the botmatch benchies.
Then downclock your GFX card by 20% (or bump up the AA/aniso level).
Re-run the test.
Watch the scores not change.
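That's the whole test in a nutshell: if a bench is GPU-limited, cutting GPU throughput by ~20% should drop the score noticeably; if the score barely moves, the CPU is the wall. A minimal sketch of that sanity check (the fps figures are made up for illustration, not taken from any review):

[code]
# Hypothetical check: is a benchmark GPU-limited or CPU-limited?
# Compare a baseline run against a run with the GPU downclocked ~20%
# (or with AA/aniso cranked up).

def limiting_factor(baseline_fps, downclocked_fps, threshold=0.05):
    # If a ~20% GPU downclock moves the score by less than 5%,
    # the GPU clearly wasn't the bottleneck.
    drop = (baseline_fps - downclocked_fps) / baseline_fps
    return "GPU-limited" if drop > threshold else "CPU-limited"

print(limiting_factor(62.0, 61.1))  # CPU-limited: score barely moved
print(limiting_factor(62.0, 50.5))  # GPU-limited: score tracked the clock
[/code]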

And, iirc, no - in UT2003, oddly enough, the scores were dead even:

UT2003 is another game in which the two GPUs are essentially running even in both test conditions.
Gee, wonder why?

Still, the GFFX does take one heck of a beating - in almost everything else.
Wonder why SS:SE is so different?
 
True, but you've got to figure there are way more 17-19" 12x10 LCDs and CRTs out there than 16x12 20+" LCDs. Anyway, these reviews were rushed on a deadline--I'm sure we'll get more later.
 
Doomtrooper said:
The advanced Pixel Shader test was complained about simply because the SE version of 3DMark was marketed as a DX8.1 benchmark, yet no 8.1 features were allowed in scoring.

Not that hard to understand... and the complaints were justified.

Well, I certainly don't want to get that whole discussion started up again, but IIRC some were also complaining because the 8500 didn't beat the GF 4 on APS, even though it could one-pass the shaders. Guess this is an indication that APS is very bandwidth-heavy.
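To put a rough number on the one-pass vs. multi-pass tradeoff: each extra pass costs an extra framebuffer write plus a re-read when the next pass blends with it, so collapsing passes mainly saves framebuffer traffic. A back-of-the-envelope sketch (the pass counts and byte sizes are illustrative assumptions, not the actual internals of the APS test):

[code]
# Rough framebuffer traffic per pixel for an N-pass effect, assuming
# 32-bit color and that each pass after the first re-reads the previous
# result to blend with it (illustrative only).

def fb_bytes_per_pixel(passes, bpp=4):
    writes = passes        # each pass writes the framebuffer once
    reads = passes - 1     # each later pass reads the previous result back
    return (writes + reads) * bpp

print(fb_bytes_per_pixel(1))  # 4 bytes:  single pass (e.g. PS 1.4 on the 8500)
print(fb_bytes_per_pixel(2))  # 12 bytes: two passes (e.g. PS 1.1 on the GF4)
[/code]

Of course, if the test is dominated by texture fetches rather than framebuffer blends, saving a pass buys you much less, which would square with the 8500 not pulling ahead.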

And my comment was also slightly tongue-in-cheek. Probably should have put a ;) next to it...
 
I'm pretty confident that a person willing to shell out that much cash for a video card doesn't have some piece-of-crap monitor incapable of running @ high resolutions.

But even if this were the case, these are the most important benchmarks, as they clearly show the ability (or inability, in the case of the FX) of the thing to handle gaming situations with IQ features enabled. I would much rather opt for the solution that does better @ high resolutions than the one that chokes.

This is precisely what I saw with the Parhelia... when you turned things on, the performance difference was totally eliminated in most cases, and in several, the Parhelia came out on top by a few percent.
 
Reading that article, I tend to agree - apart from memory bandwidth, something must be holding the NV30 seriously back in some tests. I wonder if it isn't unoptimised drivers to some significant extent.
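The raw bandwidth gap alone accounts for part of it; quick arithmetic on the published specs (500MHz DDR-II on a 128-bit bus for the 5800 Ultra, 310MHz DDR on a 256-bit bus for the 9700 Pro):

[code]
# Peak memory bandwidth = bus width in bytes * memory clock * 2 (DDR)
gffx_bw = (128 // 8) * 500e6 * 2   # ~16.0 GB/s: 128-bit DDR-II @ 500MHz
r300_bw = (256 // 8) * 310e6 * 2   # ~19.8 GB/s: 256-bit DDR @ 310MHz
print(r300_bw / gffx_bw)           # ~1.24: 9700 Pro has ~24% more raw bandwidth
[/code]

But a ~24% raw deficit doesn't explain the cases where the card falls behind with AA/AF disabled, so drivers remain a plausible suspect there.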

In that case, if they can really fix the drivers in 1-2 months, the GF FX may give them some respite - however, we all look to ATi for a knockout with the R350...
 
Nvidia is in ATI's shoes

This is the first major architecture change for nvidia since the Original Geforce. Their drivers are very raw at this point.

If the 9700 Pro wasn't out, we would be amazed by the performance increase; instead, people are in shock because the GeForce FX performs so poorly when it has such a huge clock speed advantage.
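For reference, the size of that clock advantage, going by the launch specs (500MHz core for the 5800 Ultra vs. 325MHz for the 9700 Pro):

[code]
gffx_mhz, r300_mhz = 500, 325
print(gffx_mhz / r300_mhz)  # ~1.54: a ~54% core clock advantage, yet it still loses
[/code]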

This is the same problem ATI had with the 8500 release. The card should have smoked the Geforce3 in all the benchmarks at the time, but it didn't because the drivers were too green and it was a new architecture.

I would expect that it is going to take three or four months before we start seeing real improvements.
 
I think everyone here should be hoping that these numbers improve dramatically, because if they don't...

Taken from the front page:
"Sounds like ATI has an ace up their sleeve: a top dog at ATI has told us that the R350 will debut in March, and as told before will be 10% faster than the FX,"

Then we'll end up with an R350 that performs roughly on par with a 9700 non-Pro. :?
 
As somebody else mentioned... ATI had better hope that the R350 is much better than 10% over the FX... because as it now stands, that would mean the R350 would actually be slower than the R300 in many respects :)
 
Re: Nvidia is in ATI's shoes

rwolf said:
Their drivers are very raw at this point.

I have heard this a number of times, and it's really starting to get annoying. Now, unless anyone actually has some proof of this, beyond speculation, can we stop trotting this out as a "defence"?

Sure, there is bound to be some room for optimisations, but just how much room is speculation. Besides, the argument could be spun the other way: all the extra time caused by the problems getting production quantities out of the foundries should have given nVidia a good few months to optimise their drivers. I doubt the driver writers were just sitting around twiddling their thumbs.

LW.
 
Re: Nvidia is in ATI's shoes

rwolf said:
This is the first major architecture change for nvidia since the Original Geforce.

If you strip all the hype away, I think you're very wrong on this one. I would maintain that the NV20 (GeForce 3) was a bigger architectural jump over the NV10 than the NV30 is over the NV20.

Why? With the NV20, nVidia introduced LMA, MSAA, and shaders (register combiners), amongst other things. And it took some time to get performance up with the drivers. So what is really new in the NV30? Much more advanced shaders, yes, but it seems they kept register combiners in there, and the vertex shaders seem to be a bigger step forward than the pixel shaders anyway. Has FSAA changed a lot? No. Has LMA? They added color compression, but it's nothing radical.

I'm not trying to downplay the NV30, but as it stands, the architecture doesn't seem to be a total rewrite, more an NV25 with much more advanced shaders. I guess that's why I feel disappointed.
 
Raw drivers should really only affect CPU- or geometry-limited situations. They should not affect the situations where the GFFX is being convincingly beaten by the R9700P with FSAA but wasn't being beaten without it.
 