HardOCP and Doom 3 benchmarks

The point is, the Doom3 / R300 situation at E3 was unique.

I don't see it that way. Limit it to PC developers and ask them what the weeks leading up to E3 are like. It is always hectic. With id, odds are that the code base needed a lot less work than the typical title, but they also have far fewer people working on it, too.

In terms of a game selling PC HARDWARE, which single upcoming game do you believe is most important? Isn't it obvious?

I'd have to see more on HL2 before I make that call. For the hardware enthusiast market it is of course Doom3; the question is what kind of impact it will have on gamers in general. There are already close to 3 million DX9 parts in the hands of consumers, not to mention all the DX8 parts that will run the game with lower detail settings. Even if we limit it to the higher performing DX9 level boards we are in the 1.5 million range. How many copies do you think Doom3 is going to sell? If Half-Life2 really does take advantage of DX9 level hardware, I expect it to have a much larger impact on the market than Doom3 when looking at volume. If HL2 (and/or CS, TF, etc.) includes a bench utility that scales as well as Carmack's engines do, I could see it having the larger impact on the gaming community at large fairly easily.

Then what exactly are you arguing against?

I'm not arguing. I'm saying there are an awful lot of unfair things that are done all the time in reviews. What is the big deal about this one? People bashed nVidia pretty badly because they came out against 3DMark2K3, I had no problem with that. I see a lot of those same people stating that they don't think that it's fair to test Doom3 as it was. Why not? I have no problem with anyone who thinks that 3DMark2K3, SC and Doom3 should all be excluded from benches, I don't see how it can be honestly stated that one is perfectly OK and the other is somehow unfair. That goes either way.

Yes, if ATI's hardware is seen by the public as the superior platform for Doom3. If that happens, Carmack may cater more to ATI. If Carmack puts up artificial road-blocks to help prevent that from happening, then it's "not fair."

Running the alpha build from E3 last year, which was built for ATi, the NV boards were still showing a big edge, using drivers that predate the current ones by some time.
 
Dave-

Not that I've followed this discussion closely, but how does this differ from 3DMark2001, just with different vendors?

It's kind of tied in with the discussion :)

I was mentioning the utilization of PS 1.4 in 3DMark2K3 giving ATi hardware an edge over nV. I didn't clarify in my prior posts that I was talking about DX8 level parts, as that was what was available when it launched; I did so in my last post.
 
If HL2 (and/or CS, TF, etc.) includes a bench utility that scales as well as Carmack's engines do, I could see it having the larger impact on the gaming community at large fairly easily.

That's a pretty big if though :)
 
But I'm asking, how is the situation with 3DMark03 any different from the situation with 3DMark2001?
 
BenSkywalker said:
I don't see it that way. Limit it to PC developers and ask them what the weeks leading up to E3 are like. It is always hectic. With id, odds are that the code base needed a lot less work than the typical title, but they also have far fewer people working on it, too.

Like I said, I know it's always hectic.

But name me one other specific example that meets the following:
1) One of the most highly anticipated PC titles, if not the most anticipated.
2) That title requires the highest end hardware to run reasonably well.
3) The highest end hardware was only available for a very short period of time before the show date.

I'd have to see more on HL2 before I make that call. For the hardware enthusiast market it is of course Doom3; the question is what kind of impact it will have on gamers in general.

Sales and market perception of these high-end boards? It's huge. Obviously ATI and nVidia see the value of having the "low volume" high-end parts getting the "good press"....because it drives demand for the lower end parts with the same branding.

There are already close to 3 million DX9 parts in the hands of consumers, not to mention all the DX8 parts that will run the game with lower detail settings. Even if we limit it to the higher performing DX9 level boards we are in the 1.5 million range. How many copies do you think Doom3 is going to sell?

You are missing the point entirely. It doesn't matter how many copies of Doom3 sell. Doom3 will be the "de facto benchmark", as was every other id engine before it, and all the mags / reviewers will gush all over any card that has a clear lead in that benchmark. And "gamers", whether they play Doom3 or not, will view the "best card" as the one the reviewers / mags gush over.

If Half-Life2 really does take advantage of DX9 level hardware, I expect it to have a much larger impact on the market than Doom3 when looking at volume.

Disagree completely.

It may have more volume of game sales, but it won't have the same impact on the sale of PC hardware.

If HL2 (and/or CS, TF, etc.) includes a bench utility that scales as well as Carmack's engines do, I could see it having the larger impact on the gaming community at large fairly easily.

Again, disagree. At best, it might have a similar impact.

I'm not arguing. I'm saying there are an awful lot of unfair things that are done all the time in reviews.

Agreed, and we discuss them here.

What is the big deal about this one?

Because this time, a GAME SOFTWARE company is directly involved. This is different from simple incompetence or even bias from reviewers. It doesn't even give the reviewers a CHANCE at having a fair comparison.

People bashed nVidia pretty badly because they came out against 3DMark2K3, I had no problem with that.

Then why do you have a problem with this?

I see a lot of those same people stating that they don't think that it's fair to test Doom3 as it was. Why not? I have no problem with anyone who thinks that 3DMark2K3, SC and Doom3 should all be excluded from benches, I don't see how it can be honestly stated that one is perfectly OK and the other is somehow unfair. That goes either way.

Did 3DMark give nVidia every chance to be just as involved as ATI prior to the release of the benchmark? Yes. nVidia declined and left the beta program of their own accord at the last minute.

Did Id give ATI every chance to be just as involved as nVidia? No.

That is the difference.

Running the alpha build from E3 last year, which was built for ATi, the NV boards were still showing a big edge, using drivers that predate the current ones by some time.

?? Then why would it be in ATI's interest to leak the demo?
 
BenSkywalker said:
I'm talking about DX8 parts here; apologies for not being clear on that point. Most of the rest of your points regarding this aspect revolve around DX9 class boards, it appears. When 3DM2K3 launched, nV had no DX9 level part.

True - the technology of their existing parts had been surpassed. R200 and R300 were both available and both offered superior pixel shading capabilities. As it turned out, due to NV30's delays nVidia ended up without a part with advanced shading for quite a while, but this was their problem, not a problem with a new and advanced benchmark.

And proprietary extensions are an option for OpenGL. ATi is free to utilize their own extensions offering comparable functionality to nV's. Their hardware lacking some of those features may be an issue, but like WBuffer it is up to the vendor.

Yes - we can choose to go down the route of producing vendor specific extensions, or alternatively we can all try to work through the ARB to agree on standards and attempt to produce fast implementations and make life easier for developers by avoiding the need to write additional code paths. I guess it's a matter of preference. Certainly all members of the ARB had input into the ARB shading specs, and I'm sure there are various elements to it that are advantageous and disadvantageous to all parties.
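
To give a rough idea of the difference in developer effort, path selection against the extension string ends up looking something like the sketch below. The path names and helper are purely illustrative, not anyone's actual engine code:

```cpp
// Rough sketch of renderer path selection at startup (illustrative names only).
// Requires a current OpenGL context; on Windows include <windows.h> before <GL/gl.h>.
#include <cstring>
#include <GL/gl.h>

enum FragmentPath { PATH_ARB2, PATH_NV30, PATH_R200, PATH_BASIC };

static bool HasExtension(const char *name) {
    // Naive substring test -- a real check should match whole space-separated tokens.
    const char *exts = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    return exts && std::strstr(exts, name);
}

FragmentPath ChooseFragmentPath() {
    // One vendor-neutral path covers any card exposing the ARB shading extensions...
    if (HasExtension("GL_ARB_vertex_program") &&
        HasExtension("GL_ARB_fragment_program"))
        return PATH_ARB2;
    // ...otherwise fall back to vendor-specific extensions, each of which
    // needs its own hand-written shader code and maintenance.
    if (HasExtension("GL_NV_fragment_program"))
        return PATH_NV30;
    if (HasExtension("GL_ATI_fragment_shader"))
        return PATH_R200;
    return PATH_BASIC;   // fixed-function / texture_env_combine fallback
}
```

Every vendor-specific branch in something like that is another shader implementation for a developer to write and maintain, which is the appeal of agreeing on the ARB path.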

Looking through the old DX8 docs a bit and comparing them to the information on what they are doing with SC's shadows for the NV boards, I'm not seeing what isn't supported even under 8.0. Actually, it appears that the NV-specific version is much simpler; I don't see why ATi doesn't have support...?

When it comes to rendering in DirectX it is a general rule that the reference rasterizer is the specification. In Microsoft's own words the reference rasterizer "Supports every Direct3D feature".

Shadow buffering the way it is performed in Splinter Cell requires a depth format buffer (I think it's D3DFMT_D16) to be used as a texture. The reference rasterizer even in DX9 does not allow depth buffers to be created or attached as textures. Therefore if the Splinter Cell shadowing code was run on the reference rasterizer (which should run any legal DX9 or earlier application) it would fail. Hence this is not a supported feature.

In addition, I believe that when this texture format is attached on Geforce hardware it performs a different form of filtering on the data than would be used with an ordinary texture format. This filtering mode does not exist in DirectX and is therefore also outside the specification.

Shadow buffers can be supported legally by using a pixel shader to export data to a normal texture, and then using that texture as a shadow buffer - I guess that this would not take advantage of any specialised hardware in nVidia cards, but at least it would be within specs. This is fully supported by ATI cards, so why would we (or should we) hack our drivers to support a non-standard method?
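
To put that in concrete terms, the distinction shows up at format-check time. A minimal sketch against the D3D9 interfaces (the function names here are mine, purely for illustration):

```cpp
#include <d3d9.h>

// Can a 16-bit depth buffer be created with texture usage? This is the
// (non-standard) mechanism the GeForce-specific Splinter Cell shadow path
// relies on; the reference rasterizer does not allow it, and neither do we.
bool SupportsDepthTextureShadows(IDirect3D9 *d3d, UINT adapter,
                                 D3DFORMAT adapterFormat) {
    return SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL,
                                            adapterFormat,
                                            D3DUSAGE_DEPTHSTENCIL,
                                            D3DRTYPE_TEXTURE,
                                            D3DFMT_D16));
}

// The spec-legal alternative: render depth into an ordinary renderable
// texture with a pixel shader (a single-channel float format is one option),
// then sample it and do the depth comparison yourself when shading the scene.
bool SupportsShaderShadowBuffer(IDirect3D9 *d3d, UINT adapter,
                                D3DFORMAT adapterFormat) {
    return SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL,
                                            adapterFormat,
                                            D3DUSAGE_RENDERTARGET,
                                            D3DRTYPE_TEXTURE,
                                            D3DFMT_R32F));
}
```

The first check only succeeds where the driver exposes the vendor hack; the second is the route any DX9-class hardware can take while staying within the spec.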
 
BenSkywalker said:
It should run the precision required to get the end results needed. If people can't see any differences (and when I say any I mean any; if there are any visible differences then they should up their precision), then it is good enough.

In this scenario we are touting 'The Dawn of Cinematic Computing'; DX9 precision is there for a reason... to move forward.
Are we testing DX9 cards here, or two-year-old register combiners?
There are reasons why DX9/ARB2 requires a minimum of FP24, and I'm sure it wasn't a number picked out of the air over lunch at Taco Bell.

So essentially the question comes down to:

- Are we testing DX9 cards here (which is what these sites are claiming)?
- Are we testing how well a developer can optimize?

Again, DirectX 9 developers don't have an NV30 profile to work with, which is why I prefer DX9 titles over OpenGL: at least there is some control on the engine that doesn't give any IHV an advantage like in this scenario.

I find this argument a lot like the 16-bit vs. 32-bit arguments in the 3DFX vs. Nvidia days... except this time the roles are reversed, and I remember a lot of websites making big deals about 22-bit color, post-filter AA and, of course, color. ;)

He also states-

NV30 (full featured, single pass)

So yes, it is the standard all ARB members agreed on, including Nvidia.

You think that everything the ARB decides is without disagreement? :oops:

I'm sure it is not, but the votes are in. Then again, Nvidia found a way around that by using the low-precision proprietary profile. Now tell me... how is that helping the graphics industry overall :?:
 
Clashman said:
Why would 1024x768 scores be HIGHER for AA than without? It also seems odd to me that the 5800 Ultra would only be 25% slower with 4XAA than the 5900 Ultra w/o. Any clues?

Wow....I never noticed that before.

The Radeon 9800 scores make sense:
At 1024x768, Radeon goes from 61 to 43 when enabling 4X AA.

The 5900 goes from 55 UP to 57 when enabling AA?

Similar story at other resolutions.....

Based on Tom's blurb, the "High Quality" setting apparently enables Anisotropic filtering. Both these non AA and AA scores on these pages are supposed to be at high-quality.

Obviously, something is amiss with nVidia's scores. Perhaps they dropped aniso (or dropped trilinear filtering), or dropped AA level "without warning".

It certainly raises the question of validity to a whole new level.

Again, not to sound like a broken record, but this is EXACTLY the problem with these rushed - one - shot - no screenshot "benchmarks." Utterly useless, and raises more questions than answers.

What's even more amazing is to see Tom ask nVidia "why the non-AA, high quality benchmarks are so low", and not "why the AA, high quality benchmarks are so high." Or at least "why is there no change in performance." :rolleyes:
 
Don't want to say too much here, but the relationship between id and the IHVs is unique. All previous versions of Quake and Doom were leaked by somebody related to an IHV that had access to an early build. This seems to be a situation unique to id, since we don't see this kind of consistent history of leaks with any other major developer.

In fact, it almost seems like both the guilty IHV and Id seem to profit from the public discourse initiated by the leak...

ATI may have found a little egg on its face from the D3 leak, but it hasn't hurt their chances of previewing the game at E3 at the ATI booth. Perhaps this was a way for Id to please both companies. Nvidia gets favorable treatment for the launch of their product, ATI gets favorable treatment at one of the year's most important trade shows, in addition to exclusive support at Quakecon. Leaks haven't really hurt relations (or public perception) of any of the three companies yet...
 
Squidlor said:
Leaks haven't really hurt relations (or public perception) of any of the three companies yet...

I agree, which is why I don't see this "Doom3 benchmark fiasco" as a legitimate form of "pay-back", if that is indeed what's going on here. This does have a chance to hurt ATI.

I will feel MUCH better about the situation if in a few weeks or so, id re-releases the same benchmark after ATI having a chance to optimize for it.

Ideally though, NO BENCHMARK should be released IMO unless review sites are given free and unfettered access to its use, and it's also available to the public to download.
 
Joe DeFuria said:
I will feel MUCH better about the situation if in a few weeks or so, id re-releases the same benchmark after ATI having a chance to optimize for it.

Apparently (according to what iD told Dave), NVIDIA, and not iD, is actually controlling the release of this benchmark. I guess the chances of the above happening are about nil. :rolleyes:

At this point, I take back what I said at the very beginning of this thread about this being somewhat legitimate...
 
Joe,

I agree, but I think that Id will find it in their best interest to find some form of parity between ATI and Nvidia performance. This is an engine that will be licensed for at least a dozen high-profile games, if not more. By the time that the game and its technology becomes available, all of these (p)reviews will be relatively unimportant and we'll have new cards (and new biased (p)reviews) to discuss. Who knows... Maybe the next biased (p)review will have an exclusive BitBoys build of DNF!!
 
Interesting...thx for the info Joe..
 
Squidlor said:
I agree, but I think that Id will find it in their best interest to find some form of parity between ATI and Nvidia performance. This is an engine that will be licensed for at least a dozen high-profile games, if not more. By the time that the game and its technology becomes available, all of these (p)reviews will be relatively unimportant and we'll have new cards (and new biased (p)reviews) to discuss.

I agree and disagree.

Whether or not Doom3 is actually available, the presence of the benchmark can drive sales of one card vs. another. I mean, the R350 and NV35 look extremely evenly matched....except for Doom3, where the NV35 appears to have a non-trivial edge.

Assuming the data is legit, and you were deciding on how to spend your money, which card would you buy if you were in the market for one?

To be clear, I neither expect nor demand a form of performance parity between the two products. That may happen, but "may the best card win." The gap may widen for all I know. It's just that this benchmark release doesn't do anything to help judge if there is parity or not. At best, we just don't know...at worst, it can be highly misleading.

Who knows... Maybe the next biased (p)review will have an exclusive BitBoys build of DNF!!

We can only dream! :)
 
Doomtrooper said:
I'm sure it is not, but the votes are in. Then again, Nvidia found a way around that by using the low-precision proprietary profile. Now tell me... how is that helping the graphics industry overall :?:
NV_fragment_program was finished before the ARB_fragment_program extension. NVidia had functionality not supported by the OpenGL base spec or any available extension, so they made their own. It's the usual way.
 
Joe-

Like I said, I know it's always hectic.

But name me one other specific example that meets the following:
1) One of the most highly anticipated PC titles, if not the most anticipated.
2) That title requires the highest end hardware to run reasonably well.
3) The highest end hardware was only available for a very short period of time before the show date.

You are stating that somehow the conditions were different than a typical E3 prep. What do you think was different that would allow ATi to leak it versus, say, their prepping of HL2 for this year's E3?

Sales and market perception of these high-end boards? It's huge. Obviously ATI and nVidia see the value of having the "low volume" high-end parts getting the "good press"....because it drives demand for the lower end parts with the same branding.

Gamers in general don't pay very close attention to benchmarks. Pick up an issue of PCGamer or the like and read through their tech sections. Good comedy in there, nothing useful. Read the FX5900U review at AVault for another example. There is a very large rift between what typical gamers will see and what we will see. The largest impact that D3 is likely to have is word of mouth from people like us to others, and even then I would be more likely to consider HL2 performance when recommending a board, if it has a decent bench in it, than Doom3.

You are missing the point entirely. It doesn't matter how many copies of Doom3 sell. Doom3 will be the "de facto benchmark", as was every other id engine before it, and all the mags / reviewers will gush all over any card that has a clear lead in that benchmark. And "gamers", whether they play Doom3 or not, will view the "best card" as the one the reviewers / mags gush over.

Half-Life never had a decent benching utility, and it didn't scale. If HL2 does, I don't expect D3 will have the same iron grip its predecessors did. If the gaming mags use HL2 as a standard bench, chances are D3 will be considered the second tier bench by most gamers.

It may have more volume of game sales, but it won't have the same impact on the sale of PC hardware.

What kind of impact do you think Quake3 had on hardware sales?

At best, it might have a similar impact.

Go check the stats for CS v Q3 ;)

Because this time, a GAME SOFTWARE company is directly involved. This is different from simple incompetence or even bias from reviewers. It doesn't even give the reviewers a CHANCE at having a fair comparison.

And how do SC or 3DMark2K3?

Then why do you have a problem with this?

Life isn't fair ;) I don't have a problem with sites using benches that aren't fair if they are representative of the current situation.

Did 3DMark give nVidia every chance to be just as involved as ATI prior to the release of the benchmark? Yes. nVidia declined and left the beta program of their own accord at the last minute.

Did Id give ATI every chance to be just as involved as nVidia? No.

That is the difference.

Which means it isn't fair, but it is the situation.

?? Then why would it be in ATI's interest to leak the demo?

Didn't say it was in their best interest. I didn't say it was a smart thing to do at all.

Andy-

Would you consider this a decent description of what they are doing in SC?-

In this sample, the 3-D object that casts shadows is a bi-plane. The silhouette of the plane is computed in each frame. This technique uses an edge-detection algorithm in which silhouette edges are found. This can be done because the normals of adjacent polygons will have opposing normals with respect to the light vector. The resulting edge list (the silhouette) is protruded into a 3-D object away from the light source. This 3-D object is known as the shadow volume, as every point inside the volume is inside a shadow.

Next, the shadow volume is rendered into the stencil buffer twice. First, only forward-facing polygons are rendered, and the stencil-buffer values are incremented each time. Then the back-facing polygons of the shadow volume are drawn, decrementing values in the stencil buffer. Normally, all incremented and decremented values cancel each other out. However, because the scene was already rendered with normal geometry, in this case the plane and the terrain, some pixels fail the z-buffer test as the shadow volume is rendered. Any values left in the stencil buffer correspond to pixels that are in the shadow.

Finally, these remaining stencil-buffer contents are used as a mask, as a large all-encompassing black quad is alpha-blended into the scene. With the stencil buffer as a mask, only pixels in shadow are darkened.
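
For reference, the two stencil passes in that description map onto render state roughly like this. This is a simplified D3D9 sketch of the technique as described above, not Splinter Cell's (or the SDK sample's) actual code, and it assumes the default clockwise front-face winding:

```cpp
#include <d3d9.h>

// Depth-pass stencil shadow volumes, two passes, per the description above.
// Assumes the scene has already been rendered normally so the z-buffer is full.
void RenderShadowVolumeToStencil(IDirect3DDevice9 *dev) {
    // Common state: stencil on, no colour or depth writes, depth test still active.
    dev->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    dev->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    dev->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
    dev->SetRenderState(D3DRS_STENCILFAIL, D3DSTENCILOP_KEEP);
    dev->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);

    // Pass 1: draw only the volume's front faces (cull CCW = back faces),
    // incrementing the stencil count wherever the depth test passes.
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_INCR);
    // DrawShadowVolume(dev);   // hypothetical helper that submits the volume geometry

    // Pass 2: draw only the back faces (cull CW = front faces), decrementing,
    // so only pixels whose view ray ends inside a volume keep a non-zero count.
    dev->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
    dev->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_DECR);
    // DrawShadowVolume(dev);

    // Afterwards: restore state, then alpha-blend the full-screen black quad with
    // the stencil test set to pass only where the count is non-zero (e.g. ref = 1,
    // D3DCMP_LESSEQUAL), so only the shadowed pixels are darkened.
}
```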

Doom-

In this scenario we are touting 'The Dawn of Cinematic Computing'; DX9 precision is there for a reason... to move forward.

And INT12 and FP16 are both a step forward from where we were. If you are running higher precision than needed, it is a waste. If it is needed, then it isn't a waste.
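
As a back-of-the-envelope illustration of what "needed" can mean (a toy C++ snippet of my own, not shader code), here is what the mantissa widths of the three formats do to a single computed texture coordinate:

```cpp
#include <cmath>
#include <cstdio>

// Round v to 'bits' significant bits of mantissa -- a rough stand-in for storing
// the value in a reduced-precision float format (exponent range and denormals ignored).
double Quantize(double v, int bits) {
    int exp;
    double m = std::frexp(v, &exp);            // v = m * 2^exp, with 0.5 <= m < 1
    double scale = std::ldexp(1.0, bits);
    return std::ldexp(std::round(m * scale) / scale, exp);
}

void Report(const char *name, int bits, double v) {
    double q = Quantize(v, bits);
    // How far off is the value, in texels, on a 2048-wide texture?
    std::printf("%s: %.8f  (error = %.1e, ~%.2f texels on a 2048-wide map)\n",
                name, q, std::fabs(q - v), std::fabs(q - v) * 2048.0);
}

int main() {
    double coord = 0.7;                        // some computed texture coordinate
    std::printf("exact: %.8f\n", coord);
    Report("FP16 (11 significant bits)", 11, coord);
    Report("FP24 (17 significant bits)", 17, coord);
    Report("FP32 (24 significant bits)", 24, coord);
    return 0;
}
```

Roughly half a texel of error on a large map is the kind of thing that shows up once the value feeds texture addressing or gets iterated on, while for blending a colour FP16 or even INT12 is plenty, which is the whole point of matching precision to the job.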
 