Very Odd FiringSquad GeForceFX review

http://firingsquad.gamers.com/hardware/geforce_fx_5800_ultra/default.asp

They've done some strange things in dealing with anisotropic filtering.

NVIDIA has really come a long way with its latest Detonator drivers, anisotropic filtering quality is drastically improved over older driver revisions

?? Don't know if he really meant that, or if it was just very poorly worded. Aniso quality isn't improved at all. Just more options between the old quality and new performance modes.

I also think he fell down a bit in comparing AA (edit: I mean anisotropic) modes. He basically took nVidia's word for it. Image quality comparisons of:

Radeon balanced vs. GeForce Aggressive
Radeon Quality vs. Geforce Balanced

It would have been prudent to compare Radeon's performance vs. GeForce's Balanced in a side by side shot as well.

As a consequence, in his commentary he compares GeForce's "balanced" to Radeon's "Quality". And there's no testing with GeForce's "application" setting. They also mentioned having to tweak Radeon's aniso quality in some way in GL to get quality up to GeForce level? No mention of what that tweak was, or any image quality comparisons. Makes the whole aniso thing very foggy.

I must say it's almost saddening to see how something as seemingly "innocent" as a control panel setting layout can TOTALLY influence the review process. I'm willing to bet that if nVidia had done something like have 3 forced settings for aniso, "Quality (force the original trilinear method), Balanced, Aggressive", then we'd see a lot more detailed, and probably more legitimate, aniso comparisons. As it stands, I don't think I've seen one p/review YET that looked at comparing the GeForce's "application" aniso. How quickly everyone forgets the arguments about how "good" the original GeForce aniso is, and that it's the only "right" way to do it.

At least he acknowledges that ATI's performance mode is better quality than GeForce's aggressive mode.

Anyway....

Some new tests that we haven't seen (like Chameleonmark, which someone here was asking about), and some commentary on individual performance scores that I didn't "get" when reading them.

And the seemingly now common and misplaced comments about FX being more "forward looking":

So essentially, GeForce FX has been designed for more forward-looking titles at the expense of older single-textured applications.

Seems completely backward to me. "Older" single-textured games? Older games are multitextured. Forward-looking games may not be texture limited but shader limited, where multiple texture units will be less important. Truthfully, the FX's pipelines have been designed for maximum utilization of its bandwidth. The more "forward-looking" arrangement is the 9700's... because it supplies the bandwidth to feed it.

In the conclusion, I just don't know where he's getting things like this from (my emphasis added):

Doom 3 (and the underlying games based on this game engine) is going to sell lots of graphics cards for ATI and NVIDIA, and right now it’s still unclear if ATI’s DX8/DX9 hybrid approach taken with RADEON 9700 is best, or if GeForce FX’s forward-looking design is ultimately proven to be the winning strategy.

Again, if any chip was a DX8/DX9 hybrid, it's the GeForceFX with its separate processing units.....

In the end, the "overall" conclusion was what I would more or less expect, so I guess I can't complain too much. ;)
 
I think if I read any more of these nauseatingly inaccurate, pandering "Reviews" I'm going to be sick...

*Tweaked ATI's AF to make it the same IQ as the GFFX*

is so completely absurd that I am at a complete loss for words...
 
The comment made about the NV30 being more "forward looking" looks even more absurd when you look at the actual benchmarks. The FX wins the OLDER pixel shader, vertex shader, and T&L tests but LOSES all of the DX9 tests.

It is so apparent that many reviewers don't even examine their own data. All they do is post whatever commentary Nvidia spoon-feeds them.
 
I posted several months ago (before the GFFX was unveiled at COMDEX) that a forward-looking card has to be optimized for stencil throughput and shader operations. Games of the future are going to be stencil/Z fill limited due to global lighting. It's not just the Doom3 engine; Command and Conquer Generals, for example, uses either shadow volumes or shadow buffers to achieve self-shadowed lighting everywhere. This is the trend, just like multitextured Quake1 shadows were, or dynamic lights, and I expect most future games will have it.

Shader throughput is next, but long shaders depend on early-Z rejection or a 2D post-process to be effective. This means Z or untextured fill is important.

The problem is that on the first N passes you need a pipeline optimized for raw fillrate, but on the last pass you need one optimized for shader throughput and texturing. On the last pass none of your shaders are going to write 1 pixel per pipeline per clock anyway; they will require several clocks to execute, so the peak single-texturing fillrate is not relevant, only the untextured fillrate.
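
To make the two phases concrete, here's a rough GL-style sketch of that kind of per-light loop (purely illustrative: the drawSceneGeometry/drawShadowVolume/drawLitGeometry/numLights helpers are made-up stand-ins, and a real engine like Doom3 uses the z-fail variant plus a pile of other tricks):

```cpp
// Rough sketch of a Doom3-style stencil-shadow frame, just to show where the
// different kinds of fillrate get spent. The helpers below are made-up stubs.
#include <GL/gl.h>

static void drawSceneGeometry()             {}  // stub: all opaque geometry, no shaders
static void drawShadowVolume(int /*light*/) {}  // stub: extruded volume for one light
static void drawLitGeometry(int /*light*/)  {}  // stub: geometry with the lighting shader bound
static int  numLights()                     { return 4; }

void renderFrame()
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // Pass 1: depth-only prepass. No color, no textures -- pure Z/untextured fillrate.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawSceneGeometry();

    for (int i = 0; i < numLights(); ++i)
    {
        // Passes 2..N: shadow volumes into the stencil buffer.
        // Still no color or depth writes -- pure stencil fill.
        glClear(GL_STENCIL_BUFFER_BIT);
        glEnable(GL_STENCIL_TEST);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glCullFace(GL_BACK);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);   // front faces: increment on z-pass
        drawShadowVolume(i);
        glCullFace(GL_FRONT);
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);   // back faces: decrement on z-pass
        drawShadowVolume(i);
        glCullFace(GL_BACK);

        // Last pass (per light): the expensive shader + texture work, but only where
        // depth matches and the stencil says "not in shadow" -- this is where early
        // Z/stencil rejection decides how much of the shader actually runs.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthFunc(GL_EQUAL);
        glStencilFunc(GL_EQUAL, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);              // additive light contribution
        drawLitGeometry(i);

        // Reset state for the next light's stencil pass.
        glDisable(GL_BLEND);
        glDisable(GL_STENCIL_TEST);
        glDepthFunc(GL_LESS);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    }

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
}
```

Everything before drawLitGeometry() is untextured Z/stencil fill; the expensive shading only happens where the depth and stencil tests let it through, which is exactly why the first N passes and the last pass pull the hardware in different directions.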

I think you are right that the NV30 is designed to match its limits, though not the bandwidth limits per se so much as the transistor budget limits. The NV30 would have been a much better design if they had dropped those integer pipelines altogether. There will be so few integer-precision ops in future shaders that the waste of so many transistors on something which won't accelerate performance is a huge mistake. The last pass doesn't need a lot of pixel write bandwidth, since you are aiming to eliminate all overdraw and execute shaders only for visible pixels. The last pass does need a lot of texture fetch/filtering bandwidth.

I actually think the NV30 has the right idea, just the wrong implementation. It is correct to have a design which is optimized for two scenarios (stencil/Z fill and shader throughput, although I would add that non-textured color fill also needs to be there).

On the other hand, the integer pipeline is wasting space that could be used to make the FP pixel shaders run faster (or AA better).

The ideal card:
1) has extremely high Z/stencil fill (first N passes)
2) has high early-Z rejection rate (last pass)
3) has high shader throughput and doesn't have large penalty for sampling many textures (last pass)

The NV30 seems to have gotten #1 right, but skimped on #2 and #3.

Hopefully the NV40 will boot the integer units and make #2 and #3 run a lot better.
 
DemoCoder said:
The ideal card:
1) has extremely high Z/stencil fill (first N passes)
2) has high early-Z rejection rate (last pass)
3) has high shader throughput and doesn't have large penalty for sampling many textures (last pass)

Yes (aside from considering AA implementations), I pretty much agree on the priorities, at least for games. I also agree that the inclusion of integer units on the NV30 is a bit puzzling. Perhaps that was more or less needed to minimize compatibility issues / driver nightmares with applications that use NV2x register combiners? (I'm thinking primarily GL here.)
 
Very strange review indeed. I can't go so far as to call it biased, given the conclusion, but it was very poorly executed.

Why were results from an OC'ed 5800 included in some of the tests and not others? If OC'ed results were included to demonstrate the 'potential' of the 5800, then why were no OC'ed 9700P results shown? Even worse, why is there no indication of the settings used for the OC'ed results (perf or balanced)? Or maybe I missed the description in there somewhere.

Then there's the truly bizarre: the AA and AF results for UT2003. First, vs. the normal 5800, the OC'ed 5800 gets beaten by 30 fps at 8x6. Then it's ahead by 24 fps at 10x7 (showing almost no performance drop from 8x6). It stays ahead at 12x9, and falls way behind again at 16x12.

Actually, on second thought I can account for this behavior if perhaps the OC'ed version was being clock-throttled due to overheating (scary for a 5% core speed increase, unless voltage was also increased). No matter though - the fact that this inconsistency was never mentioned demonstrates that very little analysis of the results was performed before a conclusion was reached.

Neither ATi nor NVidia is well served by this type of 'thrown-together' article. Perhaps less attention should be paid to whether or not 3DM03 makes a good benchmark, and more attention paid to what makes a good review.
 
FiringSquad GeForce FX Ultra preview said:
In addition, the scalable clock frequency “feature” can sometimes underclock your GeForce FX 5800 Ultra card right in the middle of gaming. We had to repeat multiple runs of Serious Sam and Quake 3 running with 4xAA/8xAniso enabled to get our final numbers, in some cases the margin between the scores was as high as 30%!
This makes it sound like they reran the numbers until they got the desired results. The end user is done a serious disservice by this methodology: if the clock frequency is dropped during gameplay, you can't "rerun" the game multiple times to get the higher clock speed! If the clock rate is dropping from 500/500 to 300/300, I believe the end user will notice the 40% drop in performance quite significantly and will not get a good gaming experience.

Looking at the benchmarks, how well do you think a 300/300 GeForce FX would stack up to a 9700 Pro or even a GeForce 4 Ti 4600?
 
I really think more emphasis needs to be put on the auto clock throttling. That is the worst thing I've ever heard of for a video card. Yet it's just barely touched upon in these reviews, even though it seems like most of the reviewers have experienced it. The worst part is, this is happening when the reviewers are only benchmarking. Benchmarks aren't run for very long, so it seems these cards throttle down pretty easily. It's going to be worse when actually playing games. I think all of the benchmark results are bogus if you can't get those scores when actually playing a game due to throttling. And Nvidia had the ballz to complain about 3dmark03 not being representative of games? People had the ballz to complain about the quack thing? Where is the outrage? They've built an overclocked card that will barely stay at top speed long enough to get a high benchmark score, and that won't actually run at that speed when playing games because it throttles down from overheating. Ridiculous.
 
I think OpenGL guy brought up a good point. The speed drops of the FX pose a serious problem for anybody running benchmarks, and if retail cards have this problem then the FX's future is even bleaker than I thought. Anyway, if you benchmark and take averages including any runs where speed throttling occurred, then you will get some strange and rather unpredictable results. Imagine the confusion of one review giving a set of results and another site publishing results that are sometimes 30% lower. There are always discrepancies, sure, but not like this. The only way to get a 'standard' score is to rerun tests, make sure that all of them run at full speed, and then note what happened (as the FiringSquad review did). Perhaps of interest would be to see the results of benchmarks where throttling occurred, to see the impact on performance.
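
Just to put a number on it, here's a toy illustration of how a single throttled run skews an averaged score (the fps figures are invented, purely to show the effect):

```cpp
// Toy example of how one clock-throttled run skews an averaged benchmark score.
// The fps numbers are invented, purely to illustrate the point.
#include <cstdio>
#include <numeric>
#include <vector>

static double average(const std::vector<double>& fps)
{
    return std::accumulate(fps.begin(), fps.end(), 0.0) / fps.size();
}

int main()
{
    std::vector<double> fullClock    = { 101.0, 99.5, 100.5 };        // all runs at 500/500
    std::vector<double> oneThrottled = { 101.0, 99.5, 100.5, 70.0 };  // one run dropped to ~300/300

    std::printf("average, full clock:     %.1f fps\n", average(fullClock));    // ~100.3
    std::printf("average, one throttled:  %.1f fps\n", average(oneThrottled)); // ~92.8
    // Which of these numbers ends up in a review depends entirely on how many
    // times the tester was willing to rerun the benchmark -- which is the problem.
    return 0;
}
```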

If you ask me, the down-throttling alone would be enough for me to fail the card in a review, and I don't see how anybody can possibly give it a positive review if it drops performance by 30% at random times during gaming. Obviously none of the reviewers actually play any games on the card; they just run benchmarks. I can just imagine people playing UT2k3 at LANs with a GFFX and having framerates drop 30% while playing. They are not going to be happy customers 8)
 
I think the clock-throttling thing may be why Ultras are not really going to be available to the general public...
 
I can't imagine that the product will ship that way, or that any company would think they could sweep that under the carpet.

I'm inclined to say it's a driver problem, since it's been stated that it can be benchmarked for a long period of time while overclocked with no problems. (That would seem to indicate that it's NOT a heat-related issue, but triggered by something else.)
 
I think the clock-throttling thing may be why Ultras are not really going to be available to the general public...

And why they shouldn't be reviewed for the "performance crown" either.
OpenGL guy hit the nail on the head. Every time the clock throttles down, reviewers simply rerun the test until they get the desired score. That is completely bogus.

Isn't anyone the least bit curious why Nvidia hasn't released any of the non-Ultra 5800s to be reviewed? The only card they have released for review is a card that will never be released to the public, except for a very limited supply of preorders. You've already seen Huang and Tony Tommasi's PR touting that they have regained the performance lead. But have they really, even if the benchmarks showed they did?

I'm inclined to say it's a driver problem, since it's been stated that it can be benchmarked for a long period of time while overclocked with no problems.

Wasn't it Anand's review where he said the card would automatically downclock when trying to overclock?
 
I just finished reading the review, and I must say having overclocked GFFX scores makes the whole thing really confusing. When quickly glancing over all the benchmark scores it appears that the FX is beating the R9700P in nearly everything, often by a reasonable margin, but a closer look reveals that the R9700P is actually winning most of the tests against the stock GFFX. When including overclocked scores like that, they should have also included an overclocked R9700P to be fair (and it might have been interesting to see which card overclocks better; my bet would be the R9700P).

At least the review was less biased and much better informed than the THG review.
 
Informative post, DemoCoder. It'd be nice to see some C&C:Generals benches in upcoming B3D reviews.
 
Yes, the fact that these websites are providing overclocked performance numbers at speeds the card cannot maintain does make the whole thing seem a bit fishy.

There are 3rd parties that are now selling 400MHz 9700 Pros using the heat-pipe cooling system, and these run at 400MHz always, not throttling down in the middle of a game, and for periods that extend beyond a matter of minutes.

I'm also interested in why so few sites have provided benchmarks at the default settings. One site did a pretty good job, and it's pretty telling. Check out the review at guru3d for the true story behind the benchmarks:
http://content.guru3d.com/article.php?cat=review&id=22

All of this factors together to really paint a *fake* preview process.
 
Sharkfood said:
I'm also interested in why so few sites have provided benchmarks at the default settings. One site did a pretty good job, and it's pretty telling. Check out the review at guru3d for the true story behind the benchmarks:
http://content.guru3d.com/article.php?cat=review&id=22

All of this factors together to really paint a *fake* preview process.
I don't consider this to be a good review at all. For one, it's full of obvious bias. For example, whenever the 9700 Pro scores higher, the application is inadequate, yet when the GeForce FX scores higher, "The benchmark proofs that this game engine utilizes the videocard extremely well".

Another comment, "If you are missing out on the Scewed Grid AA modes then you are absolutely right. You won't see them in any OpenGL applications as they are only supported in Direct3D. Doing 95 Frames per second on a mid-range PC with 4x AA is really nice I'd say." But yet they fail to note that the 9700 Pro is getting over 100 fps at the same settings!!

Absolutely amazing...

Edit: fix typo.
 
RussSchultz said:
I can't imagine that the product will ship that way, or that any company would think they could sweep that under the carpet.

You must have missed it; they have already started shipping the product. Newegg already sold out of their first batch, and a few other places have been selling them too. If the issue was resolved, I imagine Nvidia would be driving the point home to us, don't you?
 