GeForce FX - what will be the best way to benchmark it?

antlers4 said:
I may be mistaken, but I believe the 9700 launch came 30 days (it may have been less) before reviewers were allowed to post benchmarks (although they may have had the boards earlier :)).
Am I remembering wrong?

No, you are right: the R300 was launched on the 18th of July (http://www.anandtech.com/showdoc.html?i=1656), and the first reviews went up on the 19th of August (http://www.anandtech.com/showdoc.html?i=1683).

So we could see some reviews of the NV30 before the 19th of December ;) and nVidia would be on the same schedule as ATI ;)
 
Simply wrong.

Anand had a card (or access to one) and posted benchmark scores in the July article; they start on page 17:

http://www.anandtech.com/showdoc.html?i=1656&p=17

Granted, it was not a final sample and the results were not in FPS. But they are benchmarks of the card against the GF4, and ATI allowed relative numbers to be posted. The difference with the NV30 is that we have had NV give out the numbers, while at the R300 launch we had an independent test and AA/AF comparisons as well. So you simply cannot say these two launches (R300, NV30) were handled in the same manner.
 
It was a non-final test Radeon 9700 at non-final clock speeds, and ATi didn't let them publish anything other than % improvements over the GeForce4 Ti4600.

Not much better than Anand's "NV30 gets 46.5 fps in DOOM3 nvdemo3, R300 only gets 33.1".
 
I'll revert to a more professional tone regarding benchmarking the architectures of the newest generation of gfx cards. I feel that the collective B3D community has the know-how to contribute something worthwhile here.

Application benchmarking is impossible as long as there are no applications that strongly depend on the new features. Also, even when such titles are available, due to the versatile nature of the new engines it is difficult to say whether benchmarking these titles describes anything other than the titles themselves. Of course, to those who actually play a title, benchmarking it says something relevant.

Simulated application tests (example: the "Games" tests in 3DMark) could be produced, but these still suffer from the problem of transferability, only worse, since they would try to guess what typical future game code might look like. The results would likely be less useful for prediction than an application test, with the added disadvantage that no one could benefit from the test directly, since it isn't a full application.

Thus, if we want to test these new architectures, subsystem tests would seem to be our main alternative in the short term. Of these, there are two flavours. One is to try to measure very low-level performance, actual instruction throughput for instance. Another is to try to guess what typical shader programs might look like and then run those (example: 3DMark's Dot3 test). If sufficiently informed people get together and try to come up with good examples, I feel that this last path should be doable, and might give results that would at least be interesting and perhaps even useful.
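
To make that concrete, here is a rough sketch of the simplest possible subsystem test, raw fill rate, in plain C with GLUT. The constants and names are purely illustrative, not a finished test; an instruction throughput test would keep the same timing harness and replace the flat quad with a long fragment program:

#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

#define QUADS_PER_FRAME 256   /* illustrative numbers only */
#define FRAMES          100
#define WIDTH           512
#define HEIGHT          512

static int frame = 0;
static int start_ms;

static void display(void)
{
    int i;

    if (frame == 0)
        start_ms = glutGet(GLUT_ELAPSED_TIME);

    /* Fill the whole window QUADS_PER_FRAME times per frame. */
    for (i = 0; i < QUADS_PER_FRAME; i++) {
        glBegin(GL_QUADS);
        glVertex2f(-1.0f, -1.0f);
        glVertex2f( 1.0f, -1.0f);
        glVertex2f( 1.0f,  1.0f);
        glVertex2f(-1.0f,  1.0f);
        glEnd();
    }
    glFinish();  /* wait for the GPU so the timing is honest */

    if (++frame == FRAMES) {
        double secs = (glutGet(GLUT_ELAPSED_TIME) - start_ms) / 1000.0;
        double mpix = (double)FRAMES * QUADS_PER_FRAME * WIDTH * HEIGHT / 1e6;
        printf("%.1f Mpixels/s\n", mpix / secs);
        exit(0);
    }
    glutSwapBuffers();
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow("fillrate");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

The interesting part would of course be what gets drawn inside the loop, not the harness itself.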

I would suggest the creation of a new thread (perhaps in the more coding-oriented forum) for the purpose of sorting out who is prepared to contribute suggestions, insights, and actual coding work.

Entropy
 
BoddoZerg said:
It was a non-final test Radeon 9700 at non-final clock speeds, and ATi didn't let them publish anything other than % improvements over the GeForce4 Ti4600.

Not much better than Anand's "NV30 gets 46.5 fps in DOOM3 nvdemo3, R300 only gets 33.1".

How it is not different: these were not completely independent benchmarks in a reviewer-controlled environment; i.e., reviewers didn't get cards to take "home" and test.

How it is different: reviewers had independent control of the benchmarks and benchmark settings. That is completely different from having benchmark numbers handed to you on a slide. To say it is not different looks extremely silly to me. To say one is "not much better" than a comparison from an unreleased, vendor-specific demo is laughable.

That said, I don't think the difference matters much to the consumer, unless they treat the resulting numbers as equivalent. I think it is a much inferior style (a style resulting from a focus on producing hype rather than substance), but as far as when actual, completely independent reviews appear, I don't think it matters much... or shouldn't, if the December dates for reviews are met.

To be clearer, I don't think the difference matters at all to a smart consumer who evaluates the benchmark comparisons thoroughly (most of the people on these forums, for example), but I also don't think most consumers fit that description.

Further, I think the difference is understandable given the issues nVidia has faced. If the AnandTechs and Tom's and PC Magazines of the world reviewed like Beyond3D does, I wouldn't even be so concerned about gullible consumers... so I actually anticipate placing the blame for the difference mostly on the expected hype-infected "reviews" and their benchmarks, given such sites' past inability to discern substance from hype.

There is, of course, the possibility that the NV30 really does live up to the PR hype about performance relative to the R300, so I agree that too much blame should not be assigned until we have actual proof that it doesn't. But I don't think concern over the lack of some technical details and substantiation so far necessarily amounts to assigning blame.
 
BoddoZerg said:
It was a non-final test Radeon 9700 at non-final clock speeds, and ATi didn't let them publish anything other than % improvements over the GeForce4 Ti4600.

Not much better than Anand's "NV30 gets 46.5 fps in DOOM3 nvdemo3, R300 only gets 33.1".

NV handed them those numbers. No info was given on the drivers, settings, etc. Anand did not have an NV30 to test.

Anand had an R300 and was allowed to run the tests he wanted and to show AA/AF images as well as performance numbers, with all details given. Pretty much a night-and-day difference.
 
jb said:
NV handed them those numbers. No info was given on the drivers, settings, etc. Anand did not have an NV30 to test.
Whether Anand has an NV30 to play with isn't as important as whether he has a Doom3 build to play with. The former is much more plausible than the latter.
 
What troubles me about the Doom3 benchmarks and Anand...

I don't believe nVidia gave out Doom3 scores for the Radeon 9700. I think every benchmark I've seen given out by nVidia (even the Doom3 scores shown on other sites) only included the GeForce4 Ti and the NV30.

Look at Anand's "benchmark chart":

[Image: doom3.gif - Anand's Doom3 benchmark chart]


Notice that the Radeon 9700 doesn't appear in the legend? (Supposedly it's the red bar.) It looks to me like Anand shoehorned that in... so how did Anand actually get those scores?
 
Look at the bottom right-hand corner - do you see the grey bit? That's a rip taken directly from an NV PowerPoint presentation.
 
It's interesting to note that nVidia mainly compares the NV30 to the GF4, and only(?) to the R300 in Doom III, where we know that Carmack has made a special NV30 back-end. I would guess the NV30 back-end doesn't (yet?) work on the R300, which then uses the R200 back-end.
 
IIRC, there may not be an R300-specific back-end for Doom. The logic is that the reason for creating the NV30 back-end is so that the NV30 can render Doom3 with "one pass per light"; none of the previous nVidia paths are coded to do that.

However, the ATI R200 back-end already does that (a tribute to the Radeon 8500's pixel shader flexibility), so Carmack may be content to let the R300 just run the R200 back-end.

Which is a shame, IMO, because I would think that even if it doesn't collapse any further passes, an R300-specific back-end would be a bit more optimal for the R300 than the R200 path...
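
To illustrate what "one pass per light" means for the shape of the render loop, here is a schematic sketch in C. This is my reading of Carmack's public .plan posts, not actual id code; every name below is a made-up placeholder:

/* Minimal stand-in type; the real light struct is far richer. */
typedef struct light { int id; } light_t;

/* Placeholder renderer hooks (hypothetical names, not id's code). */
void fill_depth_buffer(void);
void draw_shadow_volumes_into_stencil(const light_t *l);
void draw_lit_surfaces(const light_t *l);
void clear_light_stencil(void);

void render_frame(const light_t *lights, int num_lights)
{
    int i;

    fill_depth_buffer();                    /* ambient/depth pre-pass */

    for (i = 0; i < num_lights; i++) {
        draw_shadow_volumes_into_stencil(&lights[i]);

        /* R200 and NV30 back-ends: the whole lighting equation for one
           light fits in a single fragment program, so this is one draw
           pass.  The older NV10/NV20 back-ends need two or three
           passes over the same surfaces here. */
        draw_lit_surfaces(&lights[i]);

        clear_light_stencil();
    }
}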
 
Plus, the R300 supports the two-sided stencil optimization, whereas the R200 does not. Seeing as Doom 3 is highly dependent on stencil shadow performance, I'm sure some form of R300 optimization will be included.
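
For anyone unfamiliar with the optimization: two-sided stencil lets the shadow volume geometry be submitted once instead of twice. A rough sketch in C, assuming a GL header that exposes the separate-stencil entry point (spelled glStencilOpSeparateATI under ATI's extension, and done via glActiveStencilFaceEXT under NV's EXT_stencil_two_side; shown generically here) and a placeholder draw_shadow_volume():

#include <GL/gl.h>

void draw_shadow_volume(void);   /* placeholder for the real geometry */

/* Pre-R300/NV30 hardware: two passes over the same volume geometry.
   (Assumes stencil test enabled, color and depth writes masked off.) */
void stencil_update_two_pass(void)
{
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);                        /* back faces only  */
    glStencilOp(GL_KEEP, GL_INCR_WRAP, GL_KEEP); /* ++ on depth fail */
    draw_shadow_volume();

    glCullFace(GL_BACK);                         /* front faces only */
    glStencilOp(GL_KEEP, GL_DECR_WRAP, GL_KEEP); /* -- on depth fail */
    draw_shadow_volume();
}

/* R300/NV30-class hardware: both face orientations updated in one
   pass, so the volume geometry goes through transform and primitive
   setup only once. */
void stencil_update_one_pass(void)
{
    glDisable(GL_CULL_FACE);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
    draw_shadow_volume();
}

Since shadow volumes are heavily vertex- and setup-bound, halving the geometry submissions is a real win for exactly the workload Doom 3 stresses.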
 