My response to the latest HardOCP editorial on benchmarks...

Maverick said:
Morris Ital said:
I agree, but the figures given by two sites are 250 000 and 150 000, and that means a fractional number of passes. If it was 200 000 and 100 000 then I would agree, as it would be two.

Even the sites are confused, if it is down to the number of passes, because they state it as if it was passes AND extra polygons (read it yourself!).

They state polygons and passes, and I am trying to clear it up, because if you take them to mean the same thing then the numbers they quote add up to 4 passes for 1.1 and 1 pass for 1.4, and 250 000 polygons for 1.1 and 150 000 polygons for 1.4, and that does not add up.

Look, I am not trying to defend anyone or say someone is better than another, I am just trying to understand the maths, and nobody is coming up with the right mathematical answer... yet.

But we can't really give the right mathematical answer, as we don't have the right numbers to begin with. The 150,000 and 250,000 are just approximate numbers, and will vary depending on whereabouts in the scene you are, both because the amount of geometry in the scene will vary, and the number of lights affecting that geometry will change too.

One marine chappie from game 2 might be standing someplace where there is only one light shining on him, so in 1.1 it'll take 4 passes, and with 1.4 it'll only need 2. A second marine might be standing further down the corridor, where there are 3 lights, so 1.1 needs 10 passes, and 1.4 takes 4, and so on. Over the whole of the scene, it apparently averages out to approx 250,000 and 150,000 polys, which happens to be a ratio of 5:3. So on average 1.1 is doing 5 passes for every 3 that 1.4 does.
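To make that arithmetic concrete, here is a minimal Python sketch of the reasoning above. It assumes, purely as an illustration (not official 3DMark03 or driver figures), that a PS 1.1 path costs one base pass plus three passes per light, while a PS 1.4 path costs one base pass plus one pass per light, which reproduces the 4-vs-2 and 10-vs-4 numbers quoted; the hypothetical scene breakdown at the end just shows how the scene-wide totals end up as non-whole averages rather than a clean integer number of passes.

[code]
# Sketch only: hypothetical per-light pass costs that reproduce the numbers
# quoted above (4 vs 2 passes for one light, 10 vs 4 for three lights).

def passes_ps11(lights: int) -> int:
    """Assumed PS 1.1 cost: 1 base pass + 3 passes per light."""
    return 1 + 3 * lights

def passes_ps14(lights: int) -> int:
    """Assumed PS 1.4 cost: 1 base pass + 1 pass per light."""
    return 1 + lights

# The two marines from the example:
for lights in (1, 3):
    print(f"{lights} light(s): PS1.1 = {passes_ps11(lights)} passes, "
          f"PS1.4 = {passes_ps14(lights)} passes")

# Hypothetical scene: how many polys sit under 0, 1, 2 or 3 lights.
scene = {0: 15_000, 1: 20_000, 2: 10_000, 3: 5_000}
base_polys = sum(scene.values())
ps11_total = sum(n * passes_ps11(l) for l, n in scene.items())
ps14_total = sum(n * passes_ps14(l) for l, n in scene.items())

# The averages come out fractional, which is why the site figures
# don't divide into a neat whole number of passes.
print(f"PS1.1 renders {ps11_total} polys "
      f"({ps11_total / base_polys:.2f} passes/poly on average)")
print(f"PS1.4 renders {ps14_total} polys "
      f"({ps14_total / base_polys:.2f} passes/poly on average)")
[/code]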

I don't know if that is true, but it sounds true (consistent :) ).

I am now happy with the polygons explanation, coupled with Dave H's information, until someone comes up with something better.

The problem, of course, is that if I have something niggling me about something said in the public arena, I'm not so dumb that I can't come to beyond3d.com and get educated, but what about the general public? It's taken two pages of this thread to convince me, because I took the 250 000 v 150 000 polygons verbatim... how many other people will worry about it enough to dig deeper, instead of just taking Kyle's or rage3d's word for it?

The more you learn the more you doubt :(

I hope I didn't appear too much of a cult follower in this (I bet I did ;) ), I just wanted the maths to work out :oops:
 
I thought the consensus was that multitexturing wouldn't be as important because you can loop back to multitexture without doing additional passes? And, on the whole you're better off with a full rendering pipeline instead of an extra TMU.

I have a hard time believing that games are going to be single textured as well...
 
Hellbinder[CE] said:
Maverick,

It has been discussed countless times at this forum that the next generation of games is moving away from the DX7 multi-textured approach.

I'm a glutton for punishment, but I can't resist:

Future games are going to consist of a couple of "no texture" Z/stencil fill passes, followed by a shading pass that uses multiple textures.

Unless future games are going to tessellate or provide polygons down to the pixel level (how else to get accurate per-pixel normals?), and compute all lighting and materials procedurally, there is no avoiding multitexturing. Performance still isn't good enough to support so much proceduralism, and not even the offline rendering market practices it.

At minimum, you're going to have a material texture and a normal map. But that's only for the most basic level of effects. If you want more than "shiny colorful surface", you'll need more textures to introduce defects, masks, reflection, etc.

A truer statement would be that future titles will probably be bottlenecked by a combination of geometry load (all those light source passes), stencil/z fillrate, and multitexture shader throughput.
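As a rough illustration of the frame structure described above, here is a small Python sketch; the pass names and per-pixel texture counts are my own assumptions, not figures from any particular engine or from 3DMark03. The point is that the fill-heavy passes sample no textures, while the shading pass concentrates several texture fetches per pixel, which is why multitexture throughput still matters.

[code]
# Sketch of a "z/stencil fill passes + multitextured shading pass" frame.
# Illustrative numbers only.
from dataclasses import dataclass

@dataclass
class RenderPass:
    name: str
    textures_per_pixel: int   # textures sampled for each shaded pixel
    writes_color: bool

frame = [
    RenderPass("z prepass",           0, False),
    RenderPass("stencil shadow fill", 0, False),
    # At minimum a material (diffuse) texture and a normal map; add a
    # gloss/specular mask for anything beyond "shiny colorful surface".
    RenderPass("lighting/shading",    3, True),
]

fill_only = [p.name for p in frame if p.textures_per_pixel == 0]
fetches = sum(p.textures_per_pixel for p in frame)
print(f"untextured fill passes: {fill_only}")
print(f"texture fetches per shaded pixel: {fetches}")
[/code]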

I don't think synthetic tests are bad. The only issue is whether such tests are coded optimally. For this reason, I'd advocate that the industry move to open-source benchmarks, so everyone can see exactly what's going on without ambiguity.

Yes, this may be the demise of ultra fancy/cool looking benchmarks, but do synthetic tests really need to be pretty to prove the point?

Edit: Why don't we modify Humus's demos into synthetic benchmarks? :)
 
Not as much as before. I followed a link on another site about the 3DMark release and couldn't resist reading a few threads. :) I still think I will participate far less than before, and I will try to stay out of flame wars and fanboi threads altogether, as they are a time and emotional black hole.
 
DemoCoder said:
Edit: Why don't we modify Humus's demos into synthetic benchmarks? :)

Humus's demos would be pretty much identical to the school of thought at Futuremark: use all the video card features as much as possible. :) In fact he uses bump mapping quite a bit, which is a 3DMark2001 issue for nForce2 users relative to users of other motherboards, so welcome back to square one. :)

DemoCoder said:
Yes, this may be the demise of ultra fancy/cool looking benchmarks, but do synthetic tests really need to be pretty to prove the point?

The industry in this case is web sites and the people who visit them; if they only use ugly/unpopular benchmarks, they lose a lot of their audience/revenue. There is no "I want hardware that can do that" factor with geekmarks. :)

I'm sure sites like Ace's Hardware get a nice few hits, but it's a narrow audience.
 
As far as I understand, the Game 1 test was not single textured on the planes. It seems to use multiple textures in the way you mention, DC, except implemented using fixed function "shader" capabilities (i.e., "DX 7"). The problem nvidia has is apparently with the single textured backdrop... I think some quotes of the "nvidia 3dmark03 whitepaper" are simplified versions that drop that distinction.

I do think the test is representative of a light-but-full-featured DX 7 application, and in the larger context of the tests, it provides a fillrate/bandwidth (as opposed to computational) scaled component with lower geometry workload, and has some validity, I think, in that regard (i.e., unlike other tests, it seems to me it might actually be more valid when considered as a component of the overall score than it is as an isolated benchmark).
 
DemoCoder...

Yes, this may be the demise of ultra fancy/cool looking benchmarks, but do synthetic tests really need to be pretty to prove the point?

So now Nvidia is going to dictate what people use for benchmarking? And of course you are all for it. Now be totally honest: if ATi had withdrawn for the same reasons and Nvidia had endorsed it, would you have just written that post about how we still need multitexture?

Further, would you still be declaring the death of the *fancy, cool* benchmarking programs???
 
Kyle is truly the puppet now....

Sad, at least we still have Brent doing the video card reviews there.

I'm not even gonna respond back in that thread over there, not worth it.

He can't even answer straight.


ME:
I can understand where you are coming from, I would like optimizations to be more along the lines of game level, not benchmark level.

However, I do have this to say:

1. It simply looks bad at the moment since you won't run 3DMark03 as a test any longer, yet just recently you ran 3DMark2001 on the GF-FX preview.

HIM:
1. No, we don't look bad. If taking a stand on something that we see that is going totally askew is looking bad, then sign me up. This kind of poor logic irks me. The benchmarks you mention are NOT the same product, not even close. At the very least 3DMark01 was based on an actual game engine used in a very popular game.

Anyone else see the flawed logic here?
2003 is built on DX9 for the most part, right? Whereas 2001 was built on the Max-FX engine? But what about the pixel shader test? Not a game test!
 
You make everything personal, Hellbinder, as a first resort, not a last.

I recommend you read DemoCoder's post again without the preconception that it is a blind pro-nvidia rant, and I think you'll see DC was correcting your comment. Why did you do the personal attack instead of correcting that correction if you disagreed?
Now, if your reply had been that you meant "DX 7 approach to multitexturing" and not "multitexturing (DX 7) approach", just maybe we could have gotten somewhere other than the first step on a road to a flamewar?

Ack!
 
Here is my response to the conclusion of the editorial.

I am going to skip to the conclusion.

Maybe separate benchmarks to pick from that are based on games that are shipping or are currently in development is what is needed for a proper evaluation process.

3DMark03 is a gauge of games coming later this year, and future games after that, and it accomplishes this perfectly.

Maybe part of the solution lies in having companies such as FutureMark, working with a new organization of all the folks mentioned above,

You apparently either don't know, or are intentionally ignoring, the existing beta program at Futuremark. Dozens and dozens of game and hardware companies were involved with direct input into the development of 3DMark03.

in order to make sure that the 3DMarks of the world utilize the proper technology and give a score that we can all sign off on.

What is that supposed to mean??? Why don't you take a close look at 3DMark03? Because apparently you either don't understand the technologies being used, or you have some other motivation for such a comment. Nvidia's product line and its technologies are more than fairly implemented. I seriously suggest you learn more about each technology so you can spot BS PR spin when it's presented to you.

Maybe sharing a bit of technology with the right folks could give great benefit to the gamers and the industry.

What is that supposed to be talking about??? The right folks??? You mean Nvidia's way or the highway, don't you?? Those companies that would code it Nvidia's way?

Maybe we need an organization with a logo that can be included on game boxes so you know the game you are buying is part of the solution in getting better products to your hands.

Again, completely ignoring the fact that there is a LARGE group of companies that had input into 3DMark03. There is only one company slandering the test. Why can't you see that??? What is the problem here?

We are currently talking to many of the right people in order to move this initiative in the proper direction when it comes to getting better benchmarking utilities for all of us to use.

You are trying to suggest there are others to talk to than the many companies that already participated in the development of 3DMark03??? What you are really saying is... let's talk to companies that program straight to Nvidia's spec, and Nvidia's desires, and **** everyone else.

In doing this, we will hopefully benefit the person that is the only one that really matters when it comes right down to it; and that is you, the guys with the cold hard cash in your pockets that wants a great product for your money.

Then stop helping Nvidia to dictate to everyone what features are used and how they should and should not be supported.
 
Hellbinder said:
3DMark03 is a gauge of games coming later this year, and future games after that, and it accomplishes this perfectly.
I'd differ in my opinion on this.

3DMark03 (and past versions) is not a (paraphrasing a little) "gauge of future games".

It is, IMO, nothing more than a showcase of currently available technologies (let's just forget phrases like "latest technologies"), and this is what it does very well. It gives us a glimpse at what current technologies, as dictated by both the latest API and hardware, allow developers to do, and this is presented in a visual manner. It additionally allows us to investigate specific technology performances.
 
I am not for Nvidia or ATI dictating benchmarks, or anyone for that matter. I am simply for open-source benchmarks, whether they look cool or not. (I just mentioned that open-source stuff probably won't look as polished as a heavily funded commercial entity's benchmark)

Currently, NVidia and ATI can exchange all kinds of claims, and people write all kinds of speculative theories about why feature X on card Y performed above or below their expectations. But the only way for us to truly criticize such benchmarks is if we can peer under the hood at the implementation.

At least with TPC in the OLTP world and SpecCPU, we have access to the source (for a price), so we can make informed decisions about what's wrong and what's right.

You assume I support Nvidia's recent pullout, I do not. I can entertain, in theory, their criticism of 3dMark2003, but it's mental masturbation until I see the source. I have no way of proving them correct or incorrect. I certainly do not think it is credible to support the development of a benchmark, and then drop such support if the result doesn't agree with your expectations. That's like supporting democracy, but then complaining about it when your party doesn't get elected. These things must be fair and open, and you must support such a process, even if in the short term, the result may not be as positive for you as you would have liked.



In summary, open the source of the benchmarks (whether they are sold for a fee or not; you don't even have to allow derivative works a la traditional open-source licenses, just let us analyze the source).
 
I did not attack him, I asked him a question.

I was actually more irritated by his comment about the death of cool benchmark programs, simply based on a few completely misleading Nvidia PR documents.

That is the reason I don't try to hide what I'm talking about behind vague statements.

People need to be accountable for their positions. That goes for you, me, anyone.

Dammit, it is time to call people on the carpet for allowing Nvidia to dictate what is good and bad in the industry. I have read many posts today at Futuremark and other places, all stating more or less how Nvidia was cheated by ATi/Futuremark. I am frankly tired of being called an ATi fanboi just because I hold up a flag and say *hold on, people, what Nvidia is doing is not right*... There are several people here who get away with posting all kinds of masked pro-Nvidia statements.

The real problem is that people can read the junk that Nvidia is posting, or the editorial at [H], or posts where people are openly endorsing these positions, or posting carefully worded replies that blur the lines between the truth and what one IHV is trying to push off on everyone... and instead of getting upset and fighting for what is right, they attack the people who are doing the right thing and screaming *stop that unethical @#$&*.

You know what, I get personal because it is personal. It's personal for all the employees of Futuremark, who are now being openly attacked on several dozen hardware and fan sites, all because of some completely misleading statements Nvidia made and a few pro-Nvidia people in the right places blurring the lines between Nvidia's version of the truth and the real truth.

If you want to call me an ATi fanboy for that, go ahead. But you (in general) are not fooling me. I can see who the Nvidia fanbois are too.
 
Morris, it took you two pages because no one plainly said the reason in a single reply--you had to combine two pages to get your answer. ;)

Plainly, rendering X polygons and displaying them onscreen are not necessarily the same thing (think occlusion detection, a la HyperZ). So a PS1.1 card may have to render (process) 250,000 polies to a PS1.4 card's 150,000, but both show the same number of polies onscreen--it just takes the 1.1 card more passes for certain polies. And not all effects are used on all polies in a scene, so the average number of passes per poly does not have to be a (whole, positive ;) ) integer.
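A quick worked example of that point in Python (only the 250,000/150,000 come from the sites; the visible-poly count below is a made-up figure): divide the rendered totals by the number of polys actually shown and you get fractional average pass counts, which is exactly why the quoted numbers don't reduce to a whole number of passes.

[code]
# Approximate per-frame figures quoted by the sites:
ps11_rendered = 250_000   # polys processed on the PS1.1 path
ps14_rendered = 150_000   # polys processed on the PS1.4 path

visible = 60_000          # hypothetical count of unique polys displayed onscreen

print(ps11_rendered / ps14_rendered)   # ~1.67, i.e. roughly 5:3
print(ps11_rendered / visible)         # ~4.17 average passes per poly (PS1.1)
print(ps14_rendered / visible)         # 2.5 average passes per poly (PS1.4)
[/code]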

People seem to be upset with 3DM2K3 b/c the difference in performance it shows between cards does not correspond with performance in current games. I suppose that's why the company is called Futuremark. It's good in that it forces companies to not only polish their drivers but also to release fully-compatible hardware (thus helping eliminate cards like GF4MX), and it's bad because it takes driver developers' time away from optimizing real games (and thus maximizing customer satisfaction).

Whatever. Enjoy it for the pretty pictures, I say.

But review sites can be smart and, like CGW/ET (Computer Gaming World/ExtremeTech), create a GameGauge and present an overall number based on the cumulative scores of real games. Benchmarking, like 3D gaming on the PC, is still evolving. We're getting there. I find it funny how more and more reviews now post the scores of individual 3DM2K1 tests, rather than the overall score. This is, to me, a backward approach to benchmarking. CGW's GameGauge is a more sensible approach--based on real games--provided the scores for each individual game are also shown and explained (for IQ, odd numbers, etc.).
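For what it's worth, here is one plausible way such a composite game-based index could be put together, sketched in Python (my own illustration; I don't know the exact formula CGW uses for GameGauge, and the frame rates below are made up): normalize each real game's result against a baseline card, take the geometric mean, and still publish the per-game numbers alongside it.

[code]
# Sketch of a composite "game index" built from per-game results (hypothetical data).
from math import prod

review   = {"UT2003": 110.0, "Serious Sam SE": 95.0, "Comanche 4": 42.0}   # fps, card under review
baseline = {"UT2003": 100.0, "Serious Sam SE": 100.0, "Comanche 4": 40.0}  # fps, reference card

ratios = [review[g] / baseline[g] for g in review]
index = prod(ratios) ** (1 / len(ratios)) * 100   # geometric mean, baseline card = 100

print(f"composite game index: {index:.1f}")
for g in review:
    print(f"  {g}: {review[g]:.0f} fps vs {baseline[g]:.0f} fps baseline")
[/code]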

That said, I'm more (most?) interested in Doom 3 as a benchmark, simply because I've been waiting for real-time shadows for a hell of a long time. I can't wait for Counter-Strike 2, and using shadows to see some guy creeping around a corner... :) I'm following Unreal 2 to a much lesser extent, maybe because I don't know as much about it as I should--which is a shame, as I think Deus Ex 2 will use the engine.
 
I think after my last post, I am going to check out of the whole debate. I am far too emotional about it right now, and I am probably not being very objective.

I'm going to give it a week before I post anything else.
 
So where would we be if we used "game x" as a benchmarking tool for video card reviews? In the same place we are now. Video card manufacturers would develop their drivers specifically to make that game better (like they do now). And what if this game is using video-card-specific optimizations (like they do now)? How is this any more fair of a benchmark?

People keep cheering Kyle on while he is saying reviewers should use the UT2k3 benchmark. Wasn't it Epic who had a 9700 but never bothered to test their game on it? (I could be wrong on this, it may have been another game developer.) How would this be any better for benchmarking? What if I don't play UT2k3? (Which I don't.) How does that benchmark benefit me?

3DMark03 isn't perfect, but at least it gave all the video card companies a chance to provide input on its development (for a price of course, which I think sucks). But I think Nvidia is just crying sour grapes because they couldn't get their way. Upset because it wasn't created using Cg? I'm sure they would have no problems with 3DMark03 if it was created with Cg. They would be heralding it as the second coming of Christ. Next thing you know they will be pushing websites to use Gunmetal in their reviews or risk not getting that shiny new card to test.

It just seems to me that lately Nvidia is trying to be a mini Microsoft, only they don't have as much muscle to get away with it. They want everyone to do things their way and get pissed when they don't. They need to realize they are not the only high performance 3D card company out there anymore. Going from big dog on the porch to second banana hasn't been a very good transition for them. They are not taking it very well and are making themselves look rather silly in my opinion.
 
Err, HellBinder, re-read my statement about "death of cool benchmark programs" after you cool down.

I simply called for a transition to more transparency in benchmarking, and that may mean OpenSource(tm) benchmarks, not written by a commercial entity, but maintained by net hobbyists. Open Source software isn't usually the most polished or slick in terms of graphics or user interface, so all my remark means is that if we switch to using benchmarks that provide source, it may mean having to deal with benchmarks that aren't as dazzling as the 3dMark ones.

In no way am I denigrating FutureMark. I love the graphics in 3dMark03. I wish there were any games that looked as good. The Demo2 looks almost like Halo2. Point is, I would vastly prefer source along with benchmarks over binary only benchmarks.

Moreover, a philosophical question: Does a 3d benchmark actually have to "look good" artistically?
 
Moreover, a philosophical question: Does a 3d benchmark actually have to "look good" artistically?

Yes. We don't have $400 videocards to play solitaire. We have them for eyecandy.
 
Welcome back, DC. ;)

I simply called for a transition to more transparency in benchmarking, and that may mean OpenSource(tm) benchmarks, not written by a commercial entity, but maintained by net hobbyists.

Yeah, that would be cool. But there could still be the same arguments. Surely, all of the "net hobbyists" wouldn't all agree on what approaches to take, so a decision has to be made somewhere, and ultimately, the decision is apt to be to the "advantage" or "disadvantage" of one IHV or another, giving reason to "cry foul."

I do agree that even publishing the source would be a great start... though on the other hand, game companies don't publish their code immediately for IP reasons, and FutureMark is in this for money too. So I don't blame them. FutureMark does go into some detail about how they structure their tests and does provide some code in their whitepaper... I think that's pretty reasonable, though I agree not ideal.

Does a 3d benchmark actually have to "look good" artistically?

Sort of... I mean, games strive to "look good artistically", so it would make sense for 3D benchmarks to strive for the same.
 