5900 Ultra review at [H]ardOCP

If the results aren't truthful to what the card is able to do globally, then it isn't worth posting.
Uhh... not really. There are no benchmarks from which you can make blanket statements about the performance of a given card. You can make guesses about how a card will perform in certain situations, but unless you actually coded the benchmark, you don't know what it's doing. Unless you coded the drivers, you don't know what they're doing. And unless you designed the video card, you don't know what it's doing.

The whole point of having lots of benchmarks is that you can tell how the card performs in X, Y, and Z. Then, your readers can use their own judgment to make a guess about how that card will perform in situation W before they buy the card.

Anyway, back to your regularly scheduled [H] fun. :p
 
That's what they are for, not for proving how fast a card can render UT2003. A benchmark should be close to global, but it can't be totally, and I'll agree with that. You can't surmise that a card is the best with one benchmark. You need a set, but you can't get an accurate picture of the whole card from results that have been tampered with. I should have gone into detail about what I meant... but you seem to get it anyway. ;)
 
I think I will just stand by my initial assessment of the NV3X design... IT IS WEAK AS HELL

Let's see here what goes on the "not usable" list:


FP32
Cinematic Computing
Aniso with trilinear filtering
PS 2.0 performance
Vertex processing
FSAA at anything above 4X (although that looks like ATi's 2x performance setting)

Why do I have the feeling that a 9500 Pro has more functionality than the NV3x cards do?

Anyone who spent $400+ on any Nvidia card definitely deserves what they got, since they've had 9+ months of the rest of us telling them to get ATi cards...

Nvidia is in a world of hurt right now and they really need to clean up their act or they might as well join S3 and their Savage 2000 card.......
 
You need a set, but you can't get an accurate picture of the whole card from results that have been tampered with.
This is where I'm on the fence.

What IS an accurate result? If Card X benches 25% faster than Card Y in unoptimized benches, but Card Y's driver team creates drivers that allow Card Y to run 50% faster than Card X in 95% of situations, which would you buy? In a clean benchmark, Card X would demolish it. But, Card Y would likely be the "better" package.

Note, I did not say optimizations that cause IQ loss or optimizations that only affect benchmarks. I just mean optimizations in general.
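
Just to put rough numbers on that hypothetical (the 100 fps baseline is made up; only the percentages come from the scenario above):

Code:
# Back-of-the-envelope math for the Card X vs. Card Y hypothetical.
# Assume Card X runs a flat 100 fps everywhere (made-up baseline).
card_x_fps = 100.0

# Unoptimized: Card X is 25% faster, so Card Y sits at 100 / 1.25 = 80 fps.
card_y_clean = card_x_fps / 1.25

# Optimized drivers: Card Y is 50% faster than Card X in 95% of situations,
# and (assume) falls back to its clean-bench speed in the other 5%.
card_y_real_world = 0.95 * (card_x_fps * 1.5) + 0.05 * card_y_clean

print(f"Card X:               {card_x_fps:.1f} fps")
print(f"Card Y (clean bench):  {card_y_clean:.1f} fps")
print(f"Card Y (real world):   {card_y_real_world:.1f} fps")  # ~146.5 fps

So the card that loses the clean benchmark by 25% ends up well ahead of the "winner" in everyday use, which is exactly the dilemma.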

I think we're reaching a point where you have to differentiate between the speed of the card and whatever the benchmark results tell you. How will you do this? Hell if I know. But I have to think that application-specific optimizations are here to stay; they're just too beneficial for the majority of people for anyone to take them out. What we need is a way to toggle specific ones in order to judge the speed of the card, and then a way to judge the effectiveness of that optimization.

This way, you can judge the speed of the card without any optimizations (remembering that only the more popular pieces of software will receive the time it takes to really increase their performance), but also see the potential of a card when code is designed specifically for it.
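
One crude way to get at that today, assuming the driver keys its application-specific optimizations off the executable name (only one of the ways detection can work, and everything below -- paths, exe names, demo flags -- is a hypothetical placeholder), is to run the same timedemo under the real exe name and under a renamed copy:

Code:
import shutil
import subprocess
from pathlib import Path

# Hypothetical install path, exe names and benchmark flags.
GAME_DIR = Path(r"C:\Games\UT2003\System")
REAL_EXE = GAME_DIR / "ut2003.exe"
DECOY_EXE = GAME_DIR / "notut2003.exe"   # a name the driver has no profile for
DEMO_ARGS = ["dm-antalus", "-benchmark"]

def run_timedemo(exe: Path) -> str:
    """Run one benchmark pass and return the game's console output."""
    return subprocess.run([str(exe), *DEMO_ARGS],
                          capture_output=True, text=True).stdout

shutil.copy2(REAL_EXE, DECOY_EXE)        # identical binary, unrecognizable name
print("Recognized exe:\n", run_timedemo(REAL_EXE))
print("Renamed copy:\n", run_timedemo(DECOY_EXE))
# A big fps gap between the two runs suggests the driver only does its
# special-case work when it recognizes the application.

It only catches name-based detection, and it can't tell a legitimate app-specific optimization from a cheat, but it at least separates "the card" from "the driver's profile for this one game".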

So, basically, having multiple codepaths for a card isn't a bad idea. It's just not the only thing we should do yet.
 
Whoa whoa whoa, let's all wait here a second...everyone here is so quick to trash nVidia, but....

I left Trilinear settings for the game default to whatever the game uses.

I set the nvidia drivers to quality mode.

I didn't sit down and examine each map with the mip-maps highlighted though. You can look at the screenshots yourself and see how it looks to you with 8XAF.

Recent findings on B3D seem to indicate that the 5900 is not doing Trilinear fully in UT2K3 though.

I haven't examined a comparison between it and the 9800 myself yet though.

If I recall correctly, it was also found that ATi cheats and doesn't use full trilinear if the application doesn't force it. Here Brent clearly stated that he left it to default, so if he used Performance or Quality settings in the ATi driver menu, then it wasn't running at full trilinear there either... I think the results ARE a bit more comparable in this case then...
 
Obviously you need to RMA your eyes along with your 5900. Glad to see people supporting business practices that are designed to hurt the consumer and mislead, especially overriding the control panel options.

Good job.


ATI DOES do trilinear filtering, as shown here; hopefully your new set of eyes along with your new card will help make things clearer.

Doom Trooper, sorry but you need to reread the entire threads... ATi only does Trilinear filtering if set to APPLICATION and if the application then says to use trilinear. Otherwise, it will use a bilinear/trilinear mix. Perhaps a new brain should go along with your 9800 pro?
 
surfhurleydude said:
If I recall correctly, it was also found that ATi cheats and doesn't use full trilinear if the application doesn't force it. Here Brent clearly stated that he left it to default, so if he used Performance or Quality settings in the ATi driver menu, then it wasn't running at full trilinear there either... I think the results ARE a bit more comparable in this case then...
So... trilinear on only one texture layer is as bad as no trilinear at all? In any case, it's been posted by Brent in the linked [H] forum thread that there is no "application preference" option available in the 44.03 drivers. If that's the case, then the drivers make it impossible for the GeForce FX to render trilinear.

A related note: when asked, ATI fully admitted and explained what their "quality" setting does.
 
surfhurleydude said:
If I recall correctly, it was also found that ATi cheats and doesn't use full trilinear if the application doesn't force it.

That's only if you select AF. Since it is still doing AF, it is not cheating. If you set application default (which is trilinear, yes, assuming no in-game AF toggle), you get full trilinear.

With nVidia at the moment, if you set the drivers to what is effectively app. default, which is trilinear, you get bilinear.

It is not the same thing.
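
For anyone unclear on why the difference is so visible: bilinear samples a single mip level, so the image snaps from one mip to the next along a visible line, while trilinear blends the two nearest mip levels. A toy sketch of the idea (one brightness value per mip level, nothing like real hardware):

Code:
# Toy mip chain: one flat value per level, brightest at level 0.
MIP_LEVELS = [1.0, 0.75, 0.5, 0.25, 0.125]

def bilinear(lod: float) -> float:
    """Bilinear: pick the single nearest mip level (snaps at transitions)."""
    return MIP_LEVELS[min(int(lod + 0.5), len(MIP_LEVELS) - 1)]

def trilinear(lod: float) -> float:
    """Trilinear: blend the two nearest mip levels by the fractional LOD."""
    lo = min(int(lod), len(MIP_LEVELS) - 2)
    frac = lod - lo
    return MIP_LEVELS[lo] * (1.0 - frac) + MIP_LEVELS[lo + 1] * frac

for lod in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"lod={lod:.2f}  bilinear={bilinear(lod):.3f}  trilinear={trilinear(lod):.3f}")

Setting "application default" is supposed to buy you the smooth trilinear column; getting the bilinear column instead is the mip banding people are complaining about.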
 
surfhurleydude said:
If I recall correctly, it was also found that ATi cheats and doesn't use full trilinear if the application doesn't force it. Here Brent clearly stated that he left it to default, so if he used Performance or Quality settings in the ATi driver menu, then it wasn't running at full trilinear there either... I think the results ARE a bit more comparable in this case then...

For starters, it's not a cheat, simply a matter of configuring things correctly.

You're right on this point, but again it's up to Brent to make sure the right settings are selected to give a fair comparison. And even if he did set up the ATi card wrongly, trilinear on the first texture stage only versus bilinear still isn't a fair comparison. The big problem goes beyond the numbers, though; this is more about informing potential buyers of what could be seen as a pitfall of this card.
 
[magoo-1.gif] <--[H] ??

Let's see: using custom timedemos the 5900 loses to a 9800, but when benchmarked with shipping timedemos it wins :rolleyes:

2+2....

http://www.gamepc.com/labs/view_content.asp?id=fx5900u&page=1

Then here:

http://firingsquad.gamers.com/hardware/msi_geforce_fx5900-td128_review/page8.asp

Quake 3 results, custom vs. normal shipping timedemo:


[q3timedemo.jpg: Quake 3 custom vs. shipping timedemo results]
 
Let's see: using custom timedemos the 5900 loses to a 9800, but when benchmarked with shipping timedemos it wins

This isn't actually the case. It comes down to the number of detail textures the level contains: the more detail textures in the level, the less filtering the 5900 is doing in comparison to the 9800.
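
A bit of rough arithmetic shows why the layer count matters so much (assuming, purely for illustration, that a trilinear lookup costs about twice the texel fetches of a bilinear one, and that one card filters every stage trilinearly while the other only does full trilinear on the base texture):

Code:
def filtering_cost(layers: int, trilinear_stages: int) -> int:
    """Relative per-pixel fetch cost: trilinear ~2 units, bilinear ~1 unit."""
    trilinear_stages = min(trilinear_stages, layers)
    return 2 * trilinear_stages + (layers - trilinear_stages)

for layers in range(1, 5):
    full = filtering_cost(layers, trilinear_stages=layers)   # trilinear everywhere
    partial = filtering_cost(layers, trilinear_stages=1)     # base texture only
    print(f"{layers} layer(s): {partial}/{full} fetch units "
          f"= {100 * partial / full:.0f}% of the full-trilinear work")

With one layer the two cards do the same work; by four layers (base plus detail textures) the partially filtering card is only doing around 60% of it, which is where the "free" speed in detail-texture-heavy levels comes from.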
 
Yes, in your discovery, but my complaint was also the lack of even trying a custom timedemo in this [H] review, Dave. (Or maybe they did and didn't like what they saw.)

What the Firingsquad article shows is that Nvidia is doing application detection (it has to be, to take such a huge performance hit) and is more than likely applying the same 'optimizations' in SS2, etc...
 
That graph does not put ATi in the best light either. Nearly a 140 fps drop? Wow, that is steep for ATi as well, but watching the GFFX tumble nearly 200 fps SUCKS.

Nvidia has single-handedly dragged the industry back to 1997, with visual crapity rivaling that of the Riva 128.

Doomtrooper said:
[magoo-1.gif] <--[H] ??

Let's see: using custom timedemos the 5900 loses to a 9800, but when benchmarked with shipping timedemos it wins :rolleyes:

2+2....

http://www.gamepc.com/labs/view_content.asp?id=fx5900u&page=1

Then here:

http://firingsquad.gamers.com/hardware/msi_geforce_fx5900-td128_review/page8.asp

Quake 3 results, custom vs. normal shipping timedemo:

Image removed as it is right above
 
YeuEmMaiMai said:
That graph does not put ATi in the best light either. Nearly a 140 fps drop? Wow, that is steep for ATi as well, but watching the GFFX tumble nearly 200 fps SUCKS.

Obviously it's a more taxing demo in general, so both cards are going to lose fps :)
 
No it does not; ATI uses application detection too, and that is why custom, unreleased timedemos are essential today for a reviewer. If there are engine-level optimizations that increase performance, a custom timedemo should not affect that improvement.

At least make an attempt. A custom timedemo would not have caught this cheat... but it is a start vs. the usual same old benchmarks.
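
As a sketch of what that reviewer sanity check could look like (all the fps numbers here are invented, just to show the comparison):

Code:
# Hypothetical results: the shipped timedemo vs. a custom, unreleased one.
results = {
    "Card A": {"shipped": 280.0, "custom": 255.0},
    "Card B": {"shipped": 300.0, "custom": 205.0},
}

for card, fps in results.items():
    drop = 100 * (fps["shipped"] - fps["custom"]) / fps["shipped"]
    print(f"{card}: {fps['shipped']:.0f} -> {fps['custom']:.0f} fps "
          f"({drop:.0f}% drop on the custom demo)")

# Card A drops ~9%, Card B ~32%.  Both fall because the custom demo is
# heavier; it's the drop that's far out of line with the competition
# that hints at demo- or scene-specific tuning.

A heavier demo knocks everyone's numbers down, so the absolute drop proves nothing by itself; it's the relative gap between cards that is worth digging into.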
 
That Quake III custom demo proves nothing about "application detection". Firing Squad themselves said it was extremely taxing... and with bots in addition to a 40-frag limit, the demo is definitely much harder on the system than Quake III's standard benchmark demos...
 
surfhurleydude said:
That Quake III custom demo proves nothing about "application detection". Firing Squad themselves said it was extremely taxing... and with bots in addition to a 40-frag limit, the demo is definitely much harder on the system than Quake III's standard benchmark demos...
Indeed, though it's gotta make you think: which is more representative of common gameplay conditions?
 