Which path will NV40 use in Doom3?

radar1200gs said:
What's the point of using a comparable path if most of the users will run the game in a non-comparable path due to performance gains?

THIS, my friend, is the real bullshit: not striving for an apples-to-apples comparison.
 
I'm quite sure the game designer didn't put the alternate path in the program for window dressing; it is put there to be used. Since it's a developer-designed path, there can hardly be any cause for complaint or claims of the program "not doing things properly".
 
radar1200gs,

Benchmarking is supposed to be testing an equal workload.

Benchmarking can be used to show up weaknesses in different VPU architectures - this is a good thing.

Most of all, I would like to see an apples-to-apples ARB2 comparison.

I would also like to see all the 'alternate paths' compared, just for my own edification.


You should read this (especially the 'What do game benchmarks measure?' section):

http://www.beyond3d.com/forum/viewtopic.php?t=8521
 
radar1200gs said:
Neeyik said:
Well purely on the basis of comparable benchmark results, one would have to use the standard ARB2 path; just in the same way that we normally use projector shadows in Splinter Cell.
What's the point of using a comparable path if most of the users will run the game in a non-comparable path due to performance gains?
Since when has any review benchmark been an accurate representation of a game? What reliable, repeatable and consistent reviewer would choose to test a graphics card by "just playing a game"? We're reviewing the cards, not the games.
 
If you want to compare apples to apples, run the game using a software rasterizer; that's the only way you will get apples to apples.

3D hardware varies from vendor to vendor. Forcing hardware to use a suboptimal path is stupid and benefits nobody.

What you need to test is the best path for a given piece of hardware and the Image Quality that results.
 
radar1200gs said:
If you want to compare apples to apples, run the game using a software rasterizer; that's the only way you will get apples to apples.

3D hardware varies from vendor to vendor. Forcing hardware to use a suboptimal path is stupid and benefits nobody.

What you need to test is the best path for a given piece of hardware and the Image Quality that results.

So... some people would like to see the same thing run on different hardware, whereas you would like to see different things run on different hardware? :? We might as well take it one step further and run different games on each card. Perhaps each manufacturer could nominate a game which they feel best demonstrates the superiority of their product. What a wonderful benchmark that would be. :)
 
I think it will actually make little difference what path is used for benchmarking. I'm fairly sure the shaders have been replaced already (i.e. I believe that the ARB2 path may actually be as fast, if not faster, than the NV30 path on NV35 already!).
 
DaveBaumann said:
I think it will actually make little difference what path is used for benchmarking. I'm fairly sure the shaders have been replaced already (i.e. I believe that the ARB2 path may actually be as fast, if not faster, than the NV30 path on NV35 already!).


Hmmm, in the benchmark or the game itself?

I think this question is silly until quality differences can be seen. If it's as little as Carmack implies, then it may not even be an issue of quality between the two rendering pathways. If it's like the Far Cry differences, then it should definitely at least be mentioned and observed.
 
DaveBaumann said:
I think it will actually make little difference what path is used for benchmarking. I'm fairly sure the shaders have been replaced already (i.e. I believe that the ARB2 path may actually be as fast, if not faster, than the NV30 path on NV35 already!).


I'd say radar1200gs is worried that the ARB2 path may not support UltraShadow I/II, hence his gripe with using the same path across all cards.
 
Using a game with different paths as a benchmark is stupid unless you are only testing that game. I can see that Doom3 will be used nonetheless, though, simply because it will be popular. So the issue is: do you use custom paths or force a particular path? The answer depends on what you want to know. Assuming there are no big IQ differences between the NV30/40 paths (if there will be an NV40 path) and the ARB2 path, then using the specific paths is fine (optimizations should be there to take full advantage of each card, so although the code might be different you are still testing the potential of each card).

Of course, if there are big IQ differences when running different paths, you can't get a fair comparison unless you run all cards on the same path. If you choose ARB2 as the path used to test all cards, then those that suck at that path are disadvantaged (NV30!!!) and might appear worse than they will realistically be (when the end user runs the default, card-optimized path). Any good review site will do the right thing anyway by stating explicitly what path was used for each card, by benchmarking other games as well, and by doing IQ comparisons. Doing anything less will not give you an accurate picture of relative performance anyway, no matter what game you use to test.
 
Sorry, I was kinda answering a question of which path we'd use. I think ARB2 would be the path of choice, but I question how much benefit there will be in testing any paths, because there's zero guarantee that you are actually benchmarking the shader path in that game.

As for the paths, remember these are really only dictating which particular shader path to use for the unified lighting model - I don't see why UltraShadow support wouldn't just be a test of the extension within the engine, regardless of the path used.
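To illustrate Dave's point, here's a rough sketch (my own guesswork, not id's actual code - all function and enum names are made up for illustration) of how an engine could pick the lighting-model shader path and probe UltraShadow completely independently of each other. GL_EXT_depth_bounds_test is the real extension behind UltraShadow; everything else here is hypothetical:

    // Hypothetical sketch: the "path" only selects which shaders implement
    // the unified lighting model; the depth bounds test (UltraShadow) is a
    // separate extension probe that works no matter which path is active.
    #include <cstring>
    #include <GL/gl.h>

    enum RenderPath { PATH_ARB, PATH_ARB2, PATH_NV30, PATH_R200 };

    static bool HasExtension(const char* name) {
        // Classic GL query: the extension string is a space-separated list.
        const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return ext != nullptr && std::strstr(ext, name) != nullptr;
    }

    RenderPath ChooseLightingPath() {
        if (HasExtension("GL_NV_fragment_program"))  return PATH_NV30;  // NVIDIA-specific
        if (HasExtension("GL_ARB_fragment_program")) return PATH_ARB2;  // vendor-neutral
        if (HasExtension("GL_ATI_fragment_shader"))  return PATH_R200;  // ATI-specific
        return PATH_ARB;  // lowest common denominator
    }

    bool UltraShadowAvailable() {
        // Probed regardless of which shader path was chosen above.
        return HasExtension("GL_EXT_depth_bounds_test");
    }

If it actually works that way, then forcing every card onto ARB2 for a benchmark wouldn't strip NV35/NV40 of UltraShadow at all - the depth bounds optimization would still kick in wherever the extension is exposed.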
 
DaveBaumann said:
As for the paths, remember these are really only dictating which particular shader path to use for the unified lighting model

I wasn't aware... hmm, seems like I overestimated its influence. Thanks for the heads up!
 
I don't see why ATi supporters would be against the concept of benchmarking using different paths.

After all, it's likely to become quite important to demonstrate that R42x isn't hurt by a lack of SM3.0 in upcoming games...
 
Considering the shaders in Doom3 can be done in one pass with the equivalent of a PS1.4 shader model, I fail to see how your statement bears any relevance to anything.
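For anyone wondering why one pass suffices: the per-light "interaction" in Doom3's unified lighting model reduces to a standard bump-mapped diffuse-plus-specular term, something like (my paraphrase of the textbook formulation, not id's exact shader math):

    color += lightColor * atten * (diffuseMap * max(N·L, 0) + specularMap * max(N·H, 0)^k)

where N comes from the normal map, L is the light vector and H the half-angle vector. That's a few texture fetches and two dot products per light - comfortably within PS1.4-class hardware in a single pass, which is exactly why SM3.0 support is beside the point here.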
 
radar1200gs said:
After all, it's likely to become quite important to demonstrate that R42x isn't hurt by a lack of SM3.0 in upcoming games...

What "upcoming games" are you talking about, exactly? :?:
 
Greg, your post has absolutely zero bearing on Shader 3.0 titles and what paths are used in D3 - they have no relation at all.
 
You are correct, they are probably off topic for this thread, but the idea behind them is quite valid. Feel free to split them into a new thread if it worries you that much; however, I think any derailing of the thread that may have occurred is quite minor in comparison to some other threads in this forum.
 
Thanks Nick and Dave.

radar: here's my opinion on why the general path should be used: because benchmarks are always artificial. Even HardOCP's are artificial. They say people don't "play" 3DMark03; I say we don't play "UT flyby", and we don't play "Q3 Crunch". We don't even play their "FRAPS run".

What's worse, IMHO, is that they change image quality options, rez, etc. to "hit the sweet spot". What makes them think that what they consider adequate matches my experience? And perhaps I prefer to drop down a rez rung rather than reduce AA (or vice versa). On top of that, how can they say they test how gamers play when a lot of people are going to have a CPU/RAM/mobo/whatever different from what they test?

What this all means is that, unless you happen to agree with their IQ vs. fps standards, their benchmark numbers are next to useless. And since they can't test every single game gamers are prone to play, this also means (IMHO) that striving to "mimic gamers" is bound to fail, no matter what.

The other way is to go for "you have to be this tall" benchmarks. Then you compare fps and IQ. Like Nick said, this is the only way to compare video cards' power: give them an equal (or as similar as possible) workload and judge the fps and IQ results.

I don't read a p/review to know how many fps I'm going to get in game X. I read them to know which card is better overall so I can make an informed decision. I want to know worst-case fps scenarios (because from there the only way is up), not "sweet spot" scenarios that may clash with what I'm interested in.

But of course, there will be people who disagree. :shrug:
 
DaveBaumann said:
I think ARB2 would be the path of choice, but I question how much benefit there will be in testing any paths, because there's zero guarantee that you are actually benchmarking the shader path in that game.

Hmm? I was under the impression you are always running under one path or another (either general paths such as ARB/ARB2, or chip-dedicated ones like nv10/nv20/nv30/r200/parhelia).
 