HardOCP and Doom 3 benchmarks

But the bulk of the reviews, imo, should be run on the standard path.

Natoma, you don't make any sense. You (personally) buy games and video cards to play games, not benchmark them, right? Then how is showing how a game runs on a card using a different path representative of how the game will run when you go to play it?

What is the primary purpose of video card reviews? To see which one runs games the best. How would running a game in a path that no one who actually plays the game would ever use serve the purpose of the video card review? Except for the few people addicted to 3DMark, people buy a video card to play games. Tell me, knowing how a card runs on a different path than the one it will use when you actually play the game, how is that representative of what you will see when you actually play the game? If you're all for running everything using the same path, then you'd have to go for the lowest common denominator, which is the ARB path, which would look like crap compared to the others.

What I don't get is that you seem to be all bent out of shape about Carmack putting in a specific path for the GeForceFX, but you haven't said a word about him putting in a specific path for the Radeon 8500 series. When Doom 3 comes out, it will most likely be benchmarked on cards like the GF4 Ti and the Radeon 8500 series because a lot of people have such a card. How would it be possible to have an apples-to-apples comparison if those cards don't support the ARB2 path? You'd have to drop to the ARB path, thus sacrificing IQ and many of the effects.

What you want will never happen. The primary goal of game companies like id is to sell games. Sure, there may be other factors, but the main goal is to sell games. If you program a game so it only uses ARB and ARB2 paths, it'll run (and look) like junk on the majority of systems. If it runs slowly on most systems, no one will buy it because it wouldn't be fun to play. So, the programmers implement different paths for different cards to get performance so that it's playable. Your idea that reviewers should only use standard paths to force ATI and nVidia to provide no extensions is, in my opinion, flat out backwards. Reviewers have very little impact on the overall design of video cards. ATI and nVidia ask programmers what features they want. Without extensions to OpenGL (which according to you are not standard) there would be no way to use them. You might say we should just wait for the OpenGL board to add them to the standard. That would take a very long time, and most likely, the next chips will have been released.
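To put it concretely, a renderer's path selection boils down to something like this rough sketch (the helper names and the extension-to-path mapping are made up for illustration, not id's actual code):

    #include <string.h>

    /* Render paths named after DoomIII's back-ends, purely for illustration. */
    typedef enum { PATH_ARB, PATH_ARB2, PATH_R200, PATH_NV30 } RenderPath;

    /* 'extensions' is the string returned by glGetString(GL_EXTENSIONS).
       A naive substring test is enough for a sketch. */
    static int HasExtension(const char *extensions, const char *name)
    {
        return extensions != NULL && strstr(extensions, name) != NULL;
    }

    /* Pick the most specific back-end the driver advertises, falling back to
       the plain ARB path. The mapping below is an assumption, not id's logic. */
    static RenderPath ChooseRenderPath(const char *extensions)
    {
        if (HasExtension(extensions, "GL_NV_fragment_program"))
            return PATH_NV30;   /* GeForce FX class */
        if (HasExtension(extensions, "GL_ARB_fragment_program"))
            return PATH_ARB2;   /* any DX9-class part, e.g. R300 */
        if (HasExtension(extensions, "GL_ATI_fragment_shader"))
            return PATH_R200;   /* Radeon 8500 class */
        return PATH_ARB;        /* lowest common denominator */
    }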
 
For those who can't access Anandtech, here's what Carmack wrote them:

John Carmack said:
The executable and data that is being shown was effectively lifted at a random point in the development process, and shows some obvious issues with playback, but we believe it to be a fair and unbiased data point.

John Carmack said:
We were not happy with the demo that Nvidia prepared, so we recorded a new one while they were here today. This is an important point -- while I'm sure Nvidia did extensive testing, and knew that their card was going to come out comfortably ahead with the demo they prepared, right now, they don't actually know if the demo that we recorded for them puts them in the best light. Rather nervy of them, actually.

John Carmack said:
The Nvidia card will be fastest with "r_renderer nv30", while the ATI will be a tiny bit faster in the "r_renderer R200" mode instead of the "r_renderer ARB2" mode that it defaults to (which gives some minor quality improvements).

So far, so good.
 
Natoma said:
ARB2 is an OpenGL standard is it not?

No, it isn't. ARB and ARB2 are just paths in DoomIII. ARB2 is about as relevant to OpenGL as bicycle is to cubicle.

The standard is the OpenGL core and ARB extensions, GL_ARB_fragment_program, GL_ARB_vertex_program etc.

The R200, ARB, ARB2, NV20, NV30 paths in DoomIII are pieces of code specific to DoomIII and far from being a standard of any kind.
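To be clear about what the standard piece looks like in practice, here's a rough sketch of driving GL_ARB_fragment_program directly. The entry points are assumed to have been fetched through the usual extension mechanism (wglGetProcAddress or a loader library), and the program is just a trivial pass-through:

    /* On Windows, include <windows.h> before the GL headers. */
    #include <string.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   /* GL_FRAGMENT_PROGRAM_ARB, PFN...ARBPROC typedefs */

    /* Assume these were resolved at startup via wglGetProcAddress()
       or glXGetProcAddressARB(). */
    extern PFNGLGENPROGRAMSARBPROC   glGenProgramsARB;
    extern PFNGLBINDPROGRAMARBPROC   glBindProgramARB;
    extern PFNGLPROGRAMSTRINGARBPROC glProgramStringARB;

    /* A trivial ARB fragment program: pass the interpolated color through. */
    static const char *kPassThroughFP =
        "!!ARBfp1.0\n"
        "MOV result.color, fragment.color;\n"
        "END\n";

    static GLuint LoadPassThroughProgram(void)
    {
        GLuint prog;
        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(kPassThroughFP), kPassThroughFP);
        glEnable(GL_FRAGMENT_PROGRAM_ARB);
        return prog;
    }

Any vendor can implement that extension; "ARB2" is only DoomIII's name for the code path that uses it.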
 
Lezmaka said:
What you want will never happen. The primary goal of game companies like id is to sell games. Sure, there may be other factors, but the main goal is to sell games. If you program a game so it only uses ARB and ARB2 paths, it'll run (and look) like junk on the majority of systems. If it runs slowly on most systems, no one will buy it because it wouldn't be fun to play. So, the programmers implement different paths for different cards to get performance so that it's playable.

I agree that games should be benched using whatever path the gamers will actually play them on (with commentary given on any quality or feature differences there may be among the paths).

However, what you said above is only half right.

Game companies are NOT in the business of selling games. They are in the business of making a profit. Selling more games is only half the profit equation. How much it costs to make the game is the other half.

Supporting multiple optimal rendering paths for multiple cards costs more money than supporting one, less optimal, rendering path for all cards. So there is a trade-off. A real "is it worth it" decision has to be made when deciding to code specific paths for specific architectures. It's certainly NOT (paraphrasing), "all companies will do it, because they will sell more games that way."

If the company calculates that it would cost more to support and implement than any projected gain in sales, they won't do it.
 
Humus, I think Natoma is trying to say that the "ARB2 path" is utilizing ARB2 extensions, which are presumably blessed by the ARB/OpenGL guiding committee as the shader input mechanism for OpenGL.
 
Natoma said:
I think it does. It shows a lack of standards adherence from Nvidia. And from the tenor of Carmack's .plan updates, it seems that he's quite miffed that he has to code a completely separate path for the Nvidia cards to get them to work correctly.
This is silliness. The architecture was set in stone long before these standards you talk about were adopted. One cannot buck the standards before the standards are formulated!

I see no reason why nVidia should be penalized just because their direction was not chosen as the "standard." ATI just got lucky that their direction was chosen as the standard, and nVidia's off-the-wall idea that data types should actually be used was utterly snubbed. We've only had CPUs doing that very thing for decades, and graphics processing has always included a wide array of different data types. Why not for shader processing?

I don't think nVidia should be penalized, most particularly because they have released such excellent tools for developers, such as Cg, to ensure that it is as easy as possible to take into account the idiosyncrasies of the NV3x architecture while not slighting other architectures in the process.

Even more importantly, I think it is a good thing that developers are going to, more or less, be forced to use higher-level shading languages. This will encourage hardware developer freedom and creativity. Adhering to low-level standards is a bad thing (please note that I was saying this very thing long before the release of the NV3x, or even its programming specifications). I sincerely hope that Microsoft soon goes the way of OpenGL 2.0, where all shader programming is meant to be done in a high-level language with hardware-specific compilers.
 
Here's an interesting quote about Doom 3 / NV30/35 in Tom's review -

Due to a bug, ARB2 currently does not work with NVIDIA's DX9 cards when using the preview version of the Detonator FX driver. According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code. So a lot has happened since John Carmack's latest .plan update (www.bluesnews.com). The questions about floating point shader precision will soon be answered as well.
 
RussSchultz said:
Humus, I think Natoma is trying to say that the "ARB2 path" is utilizing ARB2 extensions, which are presumably blessed by the ARB/OpenGL guiding committee as the shader input mechanism for OpenGL.

Heh. I take that back. Searching opengl.org for "ARB2" yields nothing.

What is meant by ARB2? Using GL2_xxxxxx functions? Or....? Could somebody fill us in and vanquish the perpetuated misunderstanding?

edit:sssss my precioussss
 
- ARB is not Proprietary <-- I can't stress that enough
- FP16 or INT12 does not meet the minimum DX9 spec of FP24, especially INT12, which is where the speed is coming from
- There are no screenshots to base conclusions on, just graphs
- Vendor-specific extensions are fine, Proprietary is not.


The Doom 3 engine is being used as a benchmark here, no different than 3DMark; it's not a game in this scenario. Benchmarks need a standard code path, otherwise it's not a benchmark, is it? All things need to be equal.
If an IHV decides to exceed the minimum compliance spec, good for them, but benchmarks should be run at least at the minimum compliance spec (DX9 or ARB2).

If Futuremark did the same thing, would it be fair? Would having separate code paths for every card give a good benchmark comparison?
This, again, is being used as a benchmark, for getting sales (that's all this is about, just the Doom 3 graphs).

I would also like to say that I feel it is time for ATI, Matrox, and 3Dlabs to stop pushing standard code paths. I am tired of seeing the companies that are trying to make programmers' lives easier, by only having to write one code path for their engine, end up on the short end of the stick.

I hope upper management at ATI and others now realize their work is meaningless when developers will take the time to optimize engines for non-standard code paths. If I were on that ARB team, going to those meetings would seem pretty silly now, wouldn't it? Standards are not needed.
ATI and 3Dlabs, with Via, should start flooding the ARB with partial-precision shader extensions; they also need to start implementing these lower precision modes in hardware so they too can win the graph wars.
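For the record, the ARB_fragment_program spec already has a knob for this; whether a driver honors it is another matter. Roughly what asking for lower precision looks like there (illustrative only, a trivial pass-through program carrying the option):

    /* ARB_precision_hint_fastest asks the driver to favor speed over precision.
       It is only a hint; the driver may ignore it entirely. */
    static const char *kFastHintFP =
        "!!ARBfp1.0\n"
        "OPTION ARB_precision_hint_fastest;\n"
        "MOV result.color, fragment.color;\n"
        "END\n";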

Again, there is only one company on the OpenGL ARB that has Proprietary extensions.

Happy overall that Nvidia released a competitive product; now prices should fall for people waiting to get a DX9 card.
Not happy with the (as usual) completely opposite-ends-of-the-spectrum reviews from different sites.


The inclusion of Doom 3 in benchmarks with no comments from id on the results also leaves a lot to be desired, and Carmack is more than due for a .plan update.
 
McElvis said:
Here's an interesting quote about Doom 3 / NV30/35 in Tom's review -

Due to a bug, ARB2 currently does not work with NVIDIA's DX9 cards when using the preview version of the Detonator FX driver. According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code. So a lot has happened since John Carmack's latest .plan update (www.bluesnews.com). The questions about floating point shader precision will soon be answered as well.


Because in the final driver they will be swapping whatever ARB2 shader JC ends up with for one that uses their fixed-point path... :)
 
Lezmaka said:
What is the primary purpose of video card reviews? To see which one runs games the best.

Not always true, IMO. If your review is trying to figure out what is the best card to buy, then this is alright; just run the app how it's going to be run by users. The conclusion should also be worded so that it states that this or that card plays this or that game better, or has this or that advantage in this or that app. It should not state that a certain card is a better card unless you've run tests from which you can conclude this. If you're trying to figure out which card is the better product, then running them through the same piece of code is the only correct thing to do. What makes a better product is determined 100% by the hardware and drivers. NVidia adding the GL_NV_fragment_program extension to their driver adds value and makes the card better. A game supporting this extension, however, does not change how good the card or the driver is, and thus does not change how good the product as such is.

It's like when you buy a cell phone. Most of us just use it for talking and sending an SMS occasionally, so most people buy low-end phones. They are better buys for them, since they have what they want and need at a low price, but they are not better phones than the high-end variants. Many telecom service providers also make special services for certain models. If you have a Nokia, you're more likely to find special toys and services that work with your phone than if you have, for instance, a Sony-Ericsson or Motorola. This is purely because Nokia has a larger market share, but it doesn't make Nokia phones better products. Though for some who care about these services, a Nokia may be a better buy.

I think it's important to highlight the difference and not draw the wrong conclusions from tests. I have no problem with JC adding special support for some cards in DoomIII. If a review looks for the best card to run DoomIII and benches the R300 through ARB2 and the NV3x through the NV30 path, I'm fine with that (though any visible IQ difference should still be reported). If, however, someone concludes from such a test that the NV30 is a better product, I'm going to protest and suggest they run both cards through ARB2.
 
Doomtrooper said:
Again, there is only one company on the OpenGL ARB that has Proprietary extensions.
Oh, you mean like:
GL_ATI_separate_stencil
GL_ATI_texture_float
GL_ATI_vertex_array_object
GL_ATI_fragment_shader

Proprietary=vendor specific.
 
WB DoomTrooper!


Well, he does have a point. I remember a lot of video card reviews that used UT back in the V5 days, where they ran the V5 in D3D even though about 99% of the V5 owners out there used Glide. Granted, a few sites bothered to do both. But back then it was OK to do that, but now it's not?

I think the only fair way is to run them twice: once on whichever vendor-specific path the card can handle, and once on a standard path to look at the differences...
 
Reverend said:
... I'm just pleading for you guys to be a bit more realistic. Like I said before, shootouts the way they are done now don't really tell the whole picture!
Then we agree that we don't have the whole picture. Probably JC is fair and the differences are not big, but a complete review is needed.

I am waiting for one of your reviews about it :)
 
I think the problem can be briefly stated:

  • It is fair to generally compare Doom3 using the NV3x specific path for the NV35, and the ARB2 or R200 path (depending on the importance of the performance difference to the comparison results) for the R350, as the NV35 featureset and integer performance emphasis are suited for what Doom 3 was designed to target.
  • It is not fair for this comparison to be done on nVidia's time table when ATI did not have the same emphasis and opportunity for testing and preparation, while nVidia indicates a strong emphasis on Doom 3 benchmarking presentation.
  • It is not fair for someone to walk away from the comparison thinking the Doom 3 performance of the NV35 is representative of DX 9 specifications and all of the "CineFX" and "128-bit floating point" nomenclature nVidia throws around.

I think the dispute centers on not realizing that people are just assigning different priorities to each of these points, though that doesn't mean the points themselves aren't valid.

My own opinion is that, for the HardOCP piece, effort was made to counteract the second and third points, a strong emphasis was placed on the first, and, given the info provided about issues with the testing, this was done successfully to a reasonable degree. I think they could have done better, and I think they should have, but their priorities seem in line with what I perceive to be the focus of the site.
 
Well, I think having a standard is still useful. Not as useful as in a utopian world, but still useful.
That's because if there were NO standards at all, each vendor might decide to use a very different way to do something, even if it isn't much more useful. So programmers would be required to learn two different methodologies even though their efficiency is roughly equal.

So, with standards, even if the paths end up different, the vendors will have something to try to adhere to; even if there are differences, there should be FEWER differences between vendor-proprietary systems.

Of course, it's impossible to prove, but IMO it does make some sense.
Just food for thought.


Uttar
 
Reverend said:
Which "point" is more important in your opinion? Are you going to take "Doom3"'s (note my putting Doom3 in quotes!) performance as evidenced by these NV35/R350-256mb reviews as representative of all games?

These are previews of Doom3's state as-is running on the different cards+drivers. Don't read too much more into this than it is.

No. What I'm saying is that games should be benched in the standard paths for the engine. So if the standard path is ARB2 for DOOM3, then I think that the reviews should be benched for ARB2.

Then have a separate section showing the numbers that come up when the benches are run on their respective proprietary paths. Then a comparison can be made between the standard path and the proprietary paths to see what the IQ/speed differences are between the two. But I believe they should be kept separate.

The reason I think that reviewers should keep standard rendering in the forefront, as I stated later on in my post, is because I feel that card manufacturers will then be forced to make sure their hardware and their drivers comply with what is being tested.

If reviewers did nothing but test standard DX and standard OGL, and left out all proprietary extensions, would it not be in the manufacturers' best interest to make sure their hardware/software was up to speed and in full compliance?

Reverend said:
But the bulk of the reviews, imo, should be run on the standard path.

No, they should use whatever each card is best run on, unless there are huge IQ sacrifices involved. And even if there are huge differences (which would run contrary to what Carmack told me), reviewers should simply report it and not "equalize".

I'm glad that you're smart enough to know that, Rev. But how many reviewers out there are not? I bring up the example from last year of Anand benching 8x AF on the GF4 vs 16x AF on the R300. I remember people were up in arms because those were used as the final numbers for AF performance, but no explanation was given stating that the R300's 16x AF IQ was far and away better than the GF4's 8x AF.

Not everyone reads B3D, and not everyone knows that all numbers are not equal.

I would like to see an exhaustive review done on the standard paths that all cards support, and then at the end the differences between the standard paths and the proprietary paths.

Reverend said:
I don't buy a card to play one game. I buy a card that will support every game that I want to purchase, and run them without having to have uber optimizations for it. Frankly not every developer will necessarily do that.
Huh? You don't have anything to do with the optimizations, only the developer of the game you may buy!

If you buy a card to play all the games that you buy, why do you care if a developer bitches about the amount of work he has to put in? You should only care about your card and the games you buy... and if the developers put in the amount of work Carmack puts in, well, I'm lost... why do you care???

I care because any and every card on the market, if it is being marketed as a DX9/OGL2+ card, should not have a problem running in those native environments. For instance, I've read that the GFFX has a terrible time running PS1.4 (or it can't run it at all?), even though it states that it is PS2.0 compliant.

That's an outright lie, no? Doesn't PS2.0 compliance involve *all* versions of PS1.x through PS2.0? So I buy a GFFX thinking it's PS2.0 compliant, and it runs just fine on DOOM3 because it only asks for PS2.0-level shaders. Yes, yes, I know, OGL and DX PS2.0 shaders are different languages, but you know what I'm saying.

However, I try to run another game that only supports PS1.4 shaders. I'm thinking, I've got a GFFX that can run PS2.0, so it should be fine, and it runs like a dog, while my friend's R3xx runs it fine in 1.4 mode.
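As I understand it, on the DX side a game only sees one reported pixel shader version cap, so a card reporting ps_2_0 is implicitly claiming everything below it. Something like this rough sketch (the function name is made up; d3d is assumed to be a valid IDirect3D9 pointer):

    #include <windows.h>
    #include <d3d9.h>

    /* Hypothetical check a game might do before enabling a ps_1_4 code path.
       'd3d' is assumed to come from Direct3DCreate9(D3D_SDK_VERSION). */
    bool SupportsPS14(IDirect3D9 *d3d)
    {
        D3DCAPS9 caps;
        if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
            return false;
        /* Only one version number is reported, so ps_2_0 hardware passes this
           test too, since D3DPS_VERSION(2, 0) >= D3DPS_VERSION(1, 4). */
        return caps.PixelShaderVersion >= D3DPS_VERSION(1, 4);
    }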

If a card is broken because it doesn't support all of the standards, then I think that point should be brought out. If the Nvidia cards cannot run ARB2 mode for whatever reason, or they run it dog slow, I think that point should be brought up, because there will probably be future games that only support ARB2, or that don't support the proprietary extensions of the card you personally have.

That would then make creating an informed opinion on what to purchase easier would it not?
 
Natoma said:
No. What I'm saying is that games should be benched in the standard paths for the engine. So if the standard path is ARB2 for DOOM3, then I think that the reviews should be benched for ARB2.

Who determines which path is the standard path for a game? The reviewer? The developer? You?

Or by standard, do you mean one that uses no proprietary extensions? Then how would you go about comparing cards like the GF4MX, GF4Ti, and Radeon 9700? Would you be able to use ARB2, given that there are some things the GF4MX doesn't support that the other cards do?
 