My response to the latest HardOCP editorial on benchmarks...

It seems to me like the only thing 3DMark03 is good for is telling you how well a video card is going to do at running 3DMark03.

Applause.

Yup. And this is not a bad thing if you understand what 3DMark is doing at a high level....stressing advanced shaders in complex ways. Of course, if you just eat nVidia PR and assume that everything FutureMark does in their tests "is wrong," then you've made up your mind.

Whereas 3DMark2001SE seemingly had a little more relevance to the games of the past couple of years,

Right...it did its job. Did you hold that opinion of 3DMark2001SE when it was released?

If it's more important how fast games that aren't made yet will run, about the only thing you can do is wait for those games to be made.

Again, so in other words, you are stating that we're all just shit outta luck with trying to get a grasp on future performance. Pretty pessimistic view, I'd say.

[quote]I guess the last point I would make is, if you really want a synthetic benchmark to compare the features of current video cards, then the benchmark should test each feature separately and completely independently of each other.[/quote]

Right...because that's how GAMES work!? Who gives a rat's ass how individual features work/perform? How does THAT have any more relevance to predicting future performance than testing several features simultaneously...like...games do?

We RUN synthetic, feature-isolated tests for primarily two reasons:

1) To try and find out why there is anomalous behavior in some "real" application benchmark.

2) To try and predict how a card might perform when one subsystem is stressed over another. Which, when applied to predicting the future, means making a prediction of the characteristics of future games...

Synthetic tests...in and of themselves are wholly uninteresting for the audience of 3DMark...gamers looking to find the "best card."

By choosing game-like tests to compute the scores used to compare products, 3DMark03 inherently shows they are focused not on feature comparisons, but on gaming comparisons....

Hence, the "Gamers Benchmark" tagline, perhaps?
 
Kyle never ceases to amaze me; the truth behind the editorial comes to light. Reading the feedback on his editorial in his forums was funny as hell, one person on there making valid arguments and Kyle backpedaling out of everything, especially when confronted with why Bapco was still being used in his reviews in light of the obvious bias that was revealed.

People buy DX9 cards for DX9, and he is stating Doom 3 will be the benchmark they use, yet Doom 3 is not DX9 :LOL:

Spoon-fed drivel:

I visited NVIDIA offices last week and we discussed 3DMark03. At that time we had the benchmark for well over a week and I think NV had it for a day. At that time I had already made my mind up that 3DMark03 did not represent gaming. We KNEW NVIDIA's thoughts had merit when they came to us with their opinions.

If you say we did not prove our case as to why we do not want to use 3Dmark03's overall score, I am sorry. I am still not going to use their scoring as I am not comfortable with it.

You are really tied up on the whole NV / 3Dmark issue and I think you are losing focus of the article we posted this morning. Remember that while NVIDIA is telling you they do not like 3DMark03, they are still WINNING the benchmark

"they are still WINNING the benchmark"

Yep, with a card that's been cancelled... woohoo :LOL:
 
Remember that while NVIDIA is telling you they do not like 3DMark03, they are still WINNING the benchmark

NO, THEY ARE NOT, KYLE.

In every test with actual SHIPPING PRODUCTS in the SAME PRICE BRACKET (you know...the only tests that the vast majority of web-sites that don't personally visit nVidia offices can do), nVidia is getting stomped...and quite badly.
 
Joe DeFuria said:
In every test with actual SHIPPING PRODUCTS in the SAME PRICE BRACKET (you know...the only tests that the vast majority of web-sites that don't personally visit nVidia offices can do), nVidia is getting stomped...and quite badly.

Stop and think on this a second.

Does that really represent reality? Are all NVIDIA products in the same price range as their equivalent ATI products so much worse that they should get stomped (and quite badly)?

Or, as NVIDIA suggests, is the benchmark measuring something it shouldn't be, and that's what skews the results?
 
Does that really represent reality? Are all NVIDIA products in the same price range as their equivalent ATI products so much worse that they should get stomped (and quite badly)?

Yes, because it's a benchmark stressing advanced techniques, and the "equivalent products" (same price range) from ATI actually support more advanced techniques, by a whole DX revision.

I DON'T expect GeForce4 Ti to get stomped by Radeon 9500 Pro in Quake3, nor does it, and that's not what 3DMark03 is going after. I do expect that Radeon 9500 Pro should score much higher in tests that stress global shading techniques, and that's what 3DMark03 shows.

Just as I suspect Radeon 9500 Pro will be a significant performance leader over GeForce4 Ti in Doom3.

I also expect Radeon 9500 to absolutely slaughter the GeForce4 in all DX9 applications. ;)
 
My last post on this topic.
Crusher said:
What good does it tell you to know that one card is faster than another at rendering stencil shadows when calculating the geometry redundantly at intermediate steps in the rendering process, if no game is ever going to do that? Does it tell you how fast their vertex shaders are relative to one another? Or does it tell you how much each card stalls when throwing in redundant processing at different points? Or does it say something completely different?
If you read the release notes, Futuremark states that things were done this way to increase the workload. In other words, to add complexity without having to create a lot more content.

It's easier to redraw something n times than to create n new objects. Does it really matter that it's the same geometry? No.
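To make that concrete, here's a rough sketch of the idea (my own illustration, not Futuremark's actual code; drawMesh() below is a hypothetical stand-in for a real API draw call): re-submitting one mesh several times loads the vertex pipeline roughly like that many distinct meshes would, with zero extra content authored.

[code]
#include <cstdio>

struct Mesh { int vertexCount; };

// Hypothetical stand-in for a real API draw call (e.g. a D3D
// DrawIndexedPrimitive). In a real renderer each call pushes
// vertexCount vertices through the vertex pipeline again.
static void drawMesh(const Mesh& m)
{
    std::printf("submitted %d vertices\n", m.vertexCount);
}

int main()
{
    Mesh rock{ 5000 };
    const int passes = 3;               // redundant passes, purely to add load

    // Same geometry, re-processed every pass: more work for the card,
    // no extra art assets required.
    for (int i = 0; i < passes; ++i)
        drawMesh(rock);
}
[/code]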
The one thing that you know it doesn't say is how fast Doom 3's vertex processing will be.
I seriously doubt Doom 3 will be vertex limited in any sense. And guess what? GT2 and GT3 aren't vertex limited either! This has been covered on other web sites already.

So why would nvidia claim they were vertex limited? My opinion is that it was a smoke screen.
Does it matter how well a card can render a single textured background with quad textured models if no game is ever going to use a single textured background with quad textured models?
Forgive me for being blunt, but this argument has no value whatsoever. What you're saying is that the application has to render everything just like a game. Well, the end result may not look like a game, but it uses the same features. Many games still have single textured pixels and the test represents that. In fact, the depth complexity of these single textured pixels is generally quite low, so I'd say it's pretty well balanced.
It seems to me like the only thing 3DMark03 is good for is telling you how well a video card is going to do at running 3DMark03. Whereas 3DMark2001SE seemingly had a little more relevance to the games of the past couple of years, and a much better chance at predicting how games released after it would be rendered, I see very little hope for 3DMark03 to be able to do this.
So tell us how 3D Mark 2001 SE did a better job. Also, give us the skinny on what future games will be like.
I guess the last point I would make is, if you really want a synthetic benchmark to compare the features of current video cards, then the benchmark should test each feature separately and completely independently of each other.
This is also a valid method for testing but no more valid than what 3D Mark 2003 uses. Interaction between states is very important as well.
By choosing game-like tests to compute the scores used to compare products, 3DMark03 inherently shows they are focused not on feature comparisons, but on gaming comparisons, and at that they fail miserably. If their scored tests for vertex and pixel shaders were more like their fillrate test, then I could see some legitimacy in the results. Instead, they've tried to fix the flaws that 3DMark2001 had (dependence on other parts of the system) in a day and age when the type of benchmarking methodology that 3DMark2001 used (make it similar to how they think games will be) is inadequate.
Who says it's not gamelike? nvidia? Please, do your own analysis before making such claims.

3D Mark 2003 uses single texturing, multitexturing, stencil buffering, vertex shaders, pixel shaders and more. Since it's not a "gamelike" benchmark, I guess games don't use these features.:rolleyes:
 
First time chime!

NVidia is in damage control mode. And NVidiots are simply in denial.

FACT: At each price point, ATI products perform on par with their NVidia counterparts both in 3DMark2001 and in existing DX7/8 games. 9500+ cards actually surpass NVidia offerings when AA and AF are used, even with today's applications.

FACT: At each price point, ATI products perform much better than their NVidia counterparts in 3DMark2003. When the GFFX comes out, it will only achieve parity in this benchmark, at a higher price point. There are no DX9 games.

Given this, and ignoring the 'but the drivers still suck' rallying cry (and ignore it I will, since it is largely moot at this time), there is no reason for a consumer buying a video card today to go with NVidia's current offerings over ATI. NVidia knows this, hence the FUD.

Is 3DMark2003 going to be an accurate representation of game requirements in the future? Maybe not. But no matter what, future developer decisions will never reverse the advantage that ATI has at the moment.
 
First time, eh?

Try not to use "nvidiot" in your future posts. It'll make the time you spend here much more pleasant.
 
Ah, sorry Russ, my apologies. Won't happen again. I guess I've just been reading too many of these boards the last few days, and forgot I was posting on one of the few where civility and tolerance are actually maintained.
 
Crusher said:
I thought just about everyone here was against the idea of buying hardware because it's "futureproof".

I'll make a stand too and say that, now that I'm a mere gamer, I too would take into account a card's futureproof-ness (or should I say, lifespan). I, like most other people, would look for a card that meets my needs, be it cost, performance, features or support for specific games. As for myself, I would take into account the performance of the games I'm playing now and then use 3DMark03 as an indicator of a card's lifespan. If one card costs the same or just a little more and 3DMark03 tells me it has a longer lifespan, then I'm going for the one with the longer lifespan. Did 3DMark03 accomplish what it was designed to do? Pretty much, if you ask me.

In the end, there was no question for me. For a $0 increase I was able to get a card that was _able_ to run DirectX 8 titles. Will I ever be able to play those games at a good enough frame-rate? Probably not, but I believe I at least got a better bang for my buck, and that's all that matters in the end to me. I suspect others will agree.

Tommy McClain
 
and forgot I was posting on one of the few where civility and tolerance are actually maintained.

Yeah...for the most part. ;)

This is indeed one of the few boards where disagreements and arguments can be fleshed out (even "heated ones" at that), but the majority of the time they don't get reduced to personal insults and flames.

Welcome aboard... ;)
 
vandersl said:
Ah, sorry Russ, my apologies. Won't happen again. I guess I've just been reading too many of these boards the last few days, and forgot I was posting on one of the few where civility and tolerance are actually maintained.

Well....sort of...... :LOL:
 
Dio said:
Might be worthwhile someone putting up a price / chip comparison to argue about. :)

Hey, I tried to get Joe to help me do another price guide that would do exactly that. Maybe now there's more of a need for it. Wouldn't you say, Joe? :)

Tommy McClain
 
Calculating the same thing multiple times and calculating multiple copies of the same thing might be approximately equal if it's done at the same time, but you know as well as I do, OpenGL guy, that doing things in different places in the rendering order can have vastly different effects on performance. Especially in Direct3D, where state changes can have a large impact on performance, and are handled very differently by different drivers. If they want to test features as they will be used in games, they should test them as they will be used in games, not take the lazy way out and introduce multiple points of inefficiency at a variable and unknown performance impact. If they want to test the features themselves, they should do it independently of other features.
 
Crusher said:
Especially in Direct3D, where state changes can have a large impact on performance, and are handled very differently by different drivers.
If your driver/hardware cannot handle state changes efficiently, then you should be penalized for that. Games change states all the time. Some are better at doing so efficiently, others are not.
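As an aside, here's a generic sketch of why that matters (purely my own illustration, not 3DMark's or any driver's code): a renderer that sorts its draw calls by state pays for far fewer state changes than one that submits them in arbitrary order, and how gracefully the driver and hardware absorb the changes that remain is exactly the sort of thing that ends up being measured.

[code]
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical draw record: 'stateKey' stands in for a bundle of render
// states (texture, blend mode, shader, ...).
struct DrawCall {
    int stateKey;
    int meshId;
};

// Counts how many state changes a frame costs, optionally sorting the
// draws by state first (the usual batching trick).
static int submit(std::vector<DrawCall> draws, bool sortByState)
{
    if (sortByState)
        std::sort(draws.begin(), draws.end(),
                  [](const DrawCall& a, const DrawCall& b) { return a.stateKey < b.stateKey; });

    int stateChanges = 0;
    int currentState = -1;
    for (const DrawCall& d : draws) {
        if (d.stateKey != currentState) {   // driver/hardware pays here
            currentState = d.stateKey;
            ++stateChanges;
        }
        // ... issue the actual draw for d.meshId ...
    }
    return stateChanges;
}

int main()
{
    std::vector<DrawCall> frame = {
        {0, 1}, {2, 2}, {0, 3}, {1, 4}, {2, 5}, {1, 6}
    };
    std::printf("unsorted: %d state changes\n", submit(frame, false)); // 6
    std::printf("sorted:   %d state changes\n", submit(frame, true));  // 3
}
[/code]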
If they want to test features as they will be used in games, they should test them as they will be used in games, not take the lazy way out and introduce multiple points of inefficiency at a variable and unknown performance impact.
Inefficiency in the application should have little impact on the hardware as it will be inefficient on all platforms. Again, it's up to the driver and hardware to handle it.
If they want to test the features themselves, they should do it independently of other features.
You cannot test features in a void. If you wanted to do that why not just look at raw fillrate and be done with it?

P.S. I guess I was wrong about my previous post being my last.
 
Kyle and Tom Pabst are buddies again; they both come to the same conclusion...

http://www.tomshardware.com/column/20030213/index.html

This quote is classic:

These tests use ps1.4 for all the pixel shaders in the scenes. Fallback versions of the pixel shaders are provided in ps1.1 for hardware that doesn't support ps1.4. Conspicuously absent from these scenes, however, is any ps1.3 pixel shaders. Current DirectX 8.0 (DX8) games, such as Tiger Woods and Unreal Tournament 2003, all use ps1.1 and ps1.3 pixel shaders. Few, if any, are using ps1.4.

These shaders don't only ensure bad results for PS 1.1 cards compared to those that support PS 1.4 (they need more passes for the effect), they are also hardly used in actual 3D games. The Xbox can't run PS 1.4 code either.
 
Sigh...

You know, you would think it would be prudent for these web sites to actually read the 3DMark white paper first before writing articles on it...

And this is classic:

The Image Quality tests are only restricted to screen shots, instead of showing special scenes that could demonstrate advantages and disadvantages of a card's specific FSAA implementation.

Ok, let me get this straight.

All this criticism is about 3DMark not being representative of "real games". And here's a feature in 3D Mark, (image quality / frame grabbing) that allows you to grab ANY SPECIFIC FRAME YOU WANT, and apply AA to it, so you can examine AA effectiveness in different "real world" situations. 3DMark allows you to grab the same EXACT FRAME on multiple systems, eliminating the "well, we got these shots as close as we could get, but they're not identical" problem we've been having thus far.

But that's bad apparently, because it's not some "special scene" (artificial? Non game?) to show off technical AA quality.

:devilish:
 
If these guys would simply look at the specs, they'd see why there is no 1.3 shader: because 1.3 doesn't increase the length of the shaders, nor does it increase the number of textures. If you need more than 4 texture samples, or a program longer than 12 ops, 1.2 and 1.3 will do nothing for you. The only time you might use 1.3 is if you need to do a very specific kind of texture operation (depth replace, specific dependent lookup, etc.).

Now that 2.0 exists, I don't have much desire to code for 1.4, and if it gets supported, it will only be because the HLSL compiler generates it automatically and it fits within the specs of 1.4.

However, with respect to 1.2 and 1.3, even if I were hand coding in assembly, and even IF I wanted to specifically support 1.3, there is a very small subset of shaders that it can provide any benefit for, vs ps1.1. Basically, it's identical for all practical purposes.

1.2 and 1.3 should really be called 1.1.2 and 1.1.3
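To put the fallback argument in concrete terms, here's a rough sketch (my own illustration, not code from 3DMark or any real engine) of how a renderer might pick its pixel shader target from the major.minor version the hardware reports. 1.2 and 1.3 never earn their own path, because they allow the same 4 texture samples and essentially the same shader length as 1.1; that leaves 1.1 / 1.4 / 2.0 as the only interesting tiers.

[code]
#include <cstdio>

// Pick a pixel shader target from the major.minor version the hardware
// reports (as a caps query would give you). Purely illustrative: the point
// is that ps1.2/1.3 add no extra texture samples or shader length over
// ps1.1, so they never get a dedicated code path -- the same situation as
// 3DMark03's ps1.1 / ps1.4 fallbacks.
static const char* pickPixelShaderProfile(int major, int minor)
{
    if (major >= 2)               return "ps_2_0";  // long shaders, float precision
    if (major == 1 && minor >= 4) return "ps_1_4";  // up to 6 samples, two phases, fewer passes
    /* minor 1, 2 or 3 */         return "ps_1_1";  // 4 samples, ~8+4 ops; 1.2/1.3 add nothing useful here
}

int main()
{
    std::printf("Radeon 9700 (2.0): %s\n", pickPixelShaderProfile(2, 0));
    std::printf("Radeon 8500 (1.4): %s\n", pickPixelShaderProfile(1, 4));
    std::printf("GeForce4 Ti (1.3): %s\n", pickPixelShaderProfile(1, 3));
}
[/code]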
 