3D Mark 2001 SE Today?

well ain't that sweet.... time to find a new benchmark tool.... um any smart guys here want to make one :smile:
 
Not such a convincing argument compared to the WAY you phrased it. Looks like ATi has its CLEAR advantages as well. The stencil issue is due to the memory controller, not inferior stencil support.

ATI's cards have always treated stencil (and alpha) as second-class citizens (stencil is a render state, not a texture attribute). This is in their memory controller, yes, but it was a conscious choice on ATI's part to optimize the Radeon's rendering in cases where stencil and dest. alpha weren't affected.

You're welcome to argue for this all you want -- that doesn't change the fact that I could write a benchmark using nothing but non-textured stencil shadow passes to "prove" that the GeForce 3 Ti200 is faster than the Radeon 8500. If MadOnion wanted to give NVIDIA a real boost, they'd have included just such a test.
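For illustration only, here's a minimal sketch of what the state setup for such a non-textured stencil pass looks like in Direct3D 8. The function name and device pointer are mine, not anyone's actual benchmark code:

#include <d3d8.h>

// Hypothetical sketch: state setup for a non-textured stencil shadow pass.
// Color writes are off, so the pass stresses only the stencil/Z path --
// exactly the case where the memory-controller trade-off discussed above
// would dominate.
void SetupStencilOnlyPass(IDirect3DDevice8* device)
{
    device->SetTexture(0, NULL);                        // no texturing
    device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);  // no color output
    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);  // read-only Z

    // Increment stencil wherever the shadow volume passes the depth test.
    device->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
    device->SetRenderState(D3DRS_STENCILFAIL, D3DSTENCILOP_KEEP);
    device->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);
    device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_INCR);
}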

Bring on anisotropic filtering as a PERFORMANCE comparative test. I dare you. See what happens to NVIDIA's scores. ATi will win virtually every render-comparison test you throw at it.

And if there were a benchmark that rewarded rendering quality, ATI would certainly lose, due to their incorrect sampling/convolution method.

Mip mapping on 3D textures is a DESIGN choice. Anisotropic filtering is a DESIGN choice. As are ALL the examples you just brought up.

No, it has to do with the fact that ATI doesn't have a dedicated unit for handling MIP mapping. You can call this a "design decision" if you like -- it doesn't change the fact that cubic environment maps look awful (the amount of shimmering in their provided pixel shader demo suite, especially on the sphere+diffuse+specular, is simply disgusting). It would be completely possible (dare I say *legitimate*) for MadOnion to test this functionality (since it *is* part of DirectX8), especially if they wanted to show that NVIDIA hardware was superior.
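For reference, such a filtering feature test would only need the standard Direct3D 8 filter states; a hedged sketch follows, where stage 0 and both function names are assumptions of mine for illustration:

#include <d3d8.h>

// Hypothetical sketch: the filter states a MIP/anisotropy feature test
// would exercise on the stage sampling the (cubic) environment map.
void SetupTrilinear(IDirect3DDevice8* device)
{
    // Full trilinear: linear min/mag plus linear blending between MIP
    // levels -- the case where weak MIP selection shows up as shimmering.
    device->SetTextureStageState(0, D3DTSS_MINFILTER, D3DTEXF_LINEAR);
    device->SetTextureStageState(0, D3DTSS_MAGFILTER, D3DTEXF_LINEAR);
    device->SetTextureStageState(0, D3DTSS_MIPFILTER, D3DTEXF_LINEAR);
}

void SetupAnisotropic(IDirect3DDevice8* device, DWORD degree)
{
    // The anisotropic performance test dared above: swap the minification
    // filter and set the degree of anisotropy (capped by the device caps).
    device->SetTextureStageState(0, D3DTSS_MINFILTER, D3DTEXF_ANISOTROPIC);
    device->SetTextureStageState(0, D3DTSS_MAXANISOTROPY, degree);
}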

The whole point of my post was to debunk the notion that MadOnion's test was somehow biased for any specific company. If anything, it is designed to specifically use a common set of features that will result in the most contention for first place, and in that sense is unrepresentative of an actual game engine (which is designed to get maximum performance out of whatever hardware is present). It really wouldn't be too hard to write a benchmark that allowed a GeForce 3 Ti200 to outscore the upcoming Radeon 8500XT by 15% or more, or to design a specific ps1.4 shader that requires 4 passes to perform in ps1.1 and use that to "prove" that the Radeon 8500 is vastly superior to the GeForce 3.
 
On 2002-02-13 05:14, Teasy wrote:
Demo, your example does nothing but back up the opinion of some people that MadOnion are biased toward Nvidia. Graphics chips have supported EMBM since before 3DMark2000; did 3DMark2000 have EMBM support? No. However...

Maybe I'm forgetting something here, but what other chipset than G400 supported EMBM? S3?
 
Maybe I'm forgetting something here, but what other chipset than G400 supported EMBM? S3?

At the time 3DM2000 was released, none. Subsequently the Radeon (all), KYRO I/II and GF3 all came to support the feature.
 
Isn't 3DMark supposed to show the performance of a card?
Let's say MadOnion made the Nature scene only work on PS 1.4 cards. Both GF3 and GF4 would then lose a lot of points, making the Radeon beat the crap out of the GF3 and close hard on the GF4.
Would that show the real performance of these cards in today's games? Is the Radeon 8500 beating the crap out of the GF3 and performing similarly to the GF4?
Well, anyone with an IQ above 75 knows the answer.
 
Isn't 3DMark supposed to show the performance of a card?

Let's say MadOnion made the Nature scene only work on the GF3. All the existing video cards would then lose a lot of points, making the GF3 beat the crap out of them.

Would that show the real performance of these cards in today's games?
====
Isn't PS1.4 a feature of DX8.1 and 3DMark2001 SE a "DX8.1" performance test?

Let's say MadOnion adds a PS1.4 test but also allows the GF3/4 to run it, completely reversing their ideology by not only providing a score for it, but also providing an alternate code path so that non-PS1.4 video cards can run the test. Wouldn't this downplay PS1.4 and make a (false) case against it?

Well, anyone with an IQ above that of a small kitchen appliance knows these answers.
 
Let's say MadOnion made the Nature scene only work on PS 1.4 cards....

Let's not say that, because no one is suggesting that MadOnion eliminate PS 1.1 tests from their performance suite.

Let's say that MadOnion creates a new PERFORMANCE scene using 1.4, or re-does the Nature scene to include two different render paths. One for PS 1.4 that requires 1 pass, and one for PS 1.1 that requires 2-3 passes. Let's also assume that the two render paths generate the same exact quality (big assumption when comparing multi-pass to single pass variations).

Would that show the real performance of these cards in today's games?

No. If you want to show the "real performance" in today's games...then by all means RUN TODAY'S GAMES and benchmark them! If you want to give an idea not ONLY of how today's games run, but of how future games might perform, that's why you run a SYNTHETIC benchmark like 3DMark.

Is the Radeon 8500 beating the crap out of the GF3 and performing similarly to the GF4?

Oddly enough IT MIGHT in future games...WE DON'T HAVE ANY CLUE. And that's the problem with how 3DMark 2001 is set up. If the folks at MadOnion made a PERFORMANCE test for the PS 1.4/1.1 variants, then the benchmark would be more useful.

Nah...no one else would code two separate shading paths for their engine...oh wait...Carmack's doing exactly that.
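The dispatch side of such a dual path is trivial, for what it's worth. A hedged Direct3D 8 sketch -- the three Render* helpers are hypothetical stand-ins for the actual scene code, not anything from 3DMark or Doom III:

#include <d3d8.h>

// Hypothetical placeholders for the different versions of the scene.
void RenderSinglePass14(IDirect3DDevice8*)  { /* 1 pass, ps.1.4 */ }
void RenderMultiPass11(IDirect3DDevice8*)   { /* 2-3 passes, ps.1.1 */ }
void RenderFixedFunction(IDirect3DDevice8*) { /* DX7-class fallback */ }

// Pick the best path the hardware exposes, once, from the device caps.
void RenderScene(IDirect3DDevice8* device)
{
    D3DCAPS8 caps;
    device->GetDeviceCaps(&caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        RenderSinglePass14(device);    // Radeon 8500-class hardware
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        RenderMultiPass11(device);     // GeForce3/4-class hardware
    else
        RenderFixedFunction(device);   // everything older
}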
 
Let's look at this PS 1.4 vs PS 1.1 (or 1.3) hullabaloo in another way: When Doom III comes out, it will be a main benchmark to gauge cards against. Well ain’t it funny then that the engine is more or less optimized for both PS 1.1 and PS 1.4 (or rather the hardware functionality, as it is OpenGL after all).

So why didn't the folks behind 3DMark2001 SE do the same thing? Optimize for both PS levels? This is what any important game engine will probably include sooner or later. Real sloppy of the guys at MadOnion, real sloppy, since they had the chance with SE.

Note that I don't care whether ATI would have "won" or "lost" by doing this. But it would have made the benchmark a whole lot more interesting.

Regards, LeStoffer

Edit: Joe DeFuria already mentioned my point in the post above. You're a clever man, Joe! ;)

[ This Message was edited by: LeStoffer on 2002-02-13 16:52 ]
 
To address some of the points made here after my last post:

Is it fair to run the Nature test on the GF3 and give the GF3 a benefit from that (before any other cards got DX8 support)? Probably not, but does it give a good impression of how much faster the GF3 is than the GF2? Yes.
Are the changes from DX8 to DX8.1 so big that they should result in a new performance test benefiting a DX8.1 card? IMO no, since most of it can be done with PS 1.0-1.3. Coding Nature in both 1.4 and 1.3, on the other hand, is a good idea if you ask me.

As I see it MadOnion could have done three things:
1. Only add PS 1.4 as a feature test.
2. Remake Nature to support both PS1.4 and the old ones.
3. Make a new performance test that only showed off PS1.4.

An argument against the last solution, IMO, is that 3DMark points would no longer be comparable with version 1.0. People seem to forget that 3DMark2001 1.1 is another version, not another program; it's mainly bug fixes. DX8.1 is not some drastically new thing like the jump from DX7 to DX8.
Another problem with adding a new PS1.4 performance test is that it would actually give the impression that the Radeon 8500 is faster than it really is.


[ This Message was edited by: Galilee on 2002-02-13 18:00 ]
 
Is it fair to run the Nature test on the GF3 and give the GF3 a benefit from that (before any other cards got DX8 support)? Probably not...

I actually disagree to an extent. First of all, I DO think it's "fair" for the GeForce3 to get "additional points" for being able to run pixel shaders compared to DX7 cards.

The inherent complexity here is that with one "single 3DMark score", we are trying to capture two different things: performance and features. The single-score method is never going to change, because that's the marketing pull. Just compare two numbers and you are supposed to know which set-up is "better."

So I do believe that when a card supports some advanced "feature" (like Pixel Shaders) that when utilized can have a significant impact on performance or quality, the score should reflect that in some way.

I don't necessarily agree with how MadOnion did this with 3D Mark 2001, having a separate game test that can only be run by pixel shader hardware, but the end result was acceptable to me: GeForce3 was given "credit" for having the advanced API support.

but does it give a good impression of how much faster GF3 is than GF2? Yes.

Not IMO. Again, 3DMark is not all about speed. It's about speed and features. The fact that the score includes a test that the GF2 can't run doesn't indicate to me how much "faster" the GeForce3 is...but how much "better"...a combination of speed and feature support.

Are the changes from DX8 to DX8.1 so big that they should result in a new performance test benefiting a DX8.1 card? IMO no...Coding Nature in both 1.4 and 1.3, on the other hand, is a good idea if you ask me.

I would have been satisfied with coding Nature (or scrapping Nature for a brand-new "Advanced Pixel Shading" performance test) for both paths.

The problem is, MadOnion (for whatever reason) wanted the "new" score to be directly comparable to the "old" score. If you change a performance test, you can't do that. Every DX8 board would have to be wiped out of the database and re-run.

If they added a "new" PS 1.4 performance test that required PS 1.4 to run and get some points for...then the only boards that would have to be re-run are the ATI cards.

So, if MadOnion really has some valid reason for not creating a "new 3D Mark 2001 SE" score, option 2 is more viable.

Personally, I think MadOnion should have just called this 3DMark 2002 and started a new scoring database. Then they could have just re-written the game 4 test with two pixel shading paths. Traditionally, MadOnion has started a new benchmark score annually...why is this time different?

I stated in the old B3D boards, that I think MadOnion should re-think their game tests entirely....I'll rehash....

I think EVERY game test should be able to be run by EVERY video card at some level. There are already "low" and "high" detail levels for the game tests. "Low" detail should be DX7-style, low polygon count. "High" detail would be the same DX7 content, but with higher polygon counts / more texture passes, etc. THEN include an "Advanced" (ultra-high) detail level where pixel/vertex shader effects are used. Code each "Advanced" detail level with code paths for each major DX revision that supports pixel shaders...DX8.0, 8.1, and next DX9.

So, we might have 3 game tests, each with 3 "detail" levels. DX7 cards can run 6 of the 9 tests. MadOnion then must choose to "weight" the scores of each test (much like they do now) to come up with the final 3D Mark number.
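As a toy illustration of that weighting idea (all fps numbers and weights below are invented for the example; MadOnion's actual formula is their own):

#include <cstdio>

int main()
{
    // fps for 3 game tests x 3 detail levels; a negative entry means the
    // card couldn't run that level (e.g. a DX7 card on "Advanced").
    const double fps[3][3] = {
        { 92.4, 61.0, -1.0 },   // game 1: low, high, advanced
        { 78.1, 50.3, -1.0 },   // game 2
        { 55.7, 34.9, -1.0 },   // game 3
    };
    const double weight[3] = { 10.0, 10.0, 20.0 };  // made-up weights

    double score = 0.0;
    for (int g = 0; g < 3; ++g)
        for (int d = 0; d < 3; ++d)
            if (fps[g][d] >= 0.0)        // no points for tests you can't run
                score += fps[g][d] * weight[d];

    std::printf("final score: %.0f\n", score);
    return 0;
}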

I think that approach does three important things.

1) It's consistent. Pixel/vertex shaders are supposed to enable the addition of more detail (not necessarily a "different" game scene). Not all cards can handle "ultra-high" detail, so not all cards will get points for it.

2) It's interesting. Now you can directly compare "gaming scenes" with and without pixel shading effects, so we can see what kind of quality (and performance) difference there is. Look how much "better" the water looks! Look at the illumination on the walls! Etc...

3) It's "fair". By supporting the pixel shading paths of each major DX revision...it makes it much more of an objectively presented approach.
 
On 2002-02-13 17:41, Galilee wrote:
To address some of the points made here after my last post:


Another problem with adding a new PS1.4 performance test is that it would actually give the impression that the Radeon 8500 is faster than it really is.


[ This Message was edited by: Galilee on 2002-02-13 18:00 ]


Bahhh humbug, the entire idea behind testing new technology is to see its advantages. Carmack has already stated a 30% increase in Doom 3, so if the game uses PS 1.4 then it's not doing anything artificial but giving REAL WORLD results.
Does the game/program have to use PS 1.4 to get the speed increase? Sure it does. The same goes for pixel shaders 1.1-1.3, so I have no IDEA what you're getting at.
 
Well, this is not an easy question, actually. It's all about what is supported in games and what is not.
You said it yourself, Doom: the game has to support PS1.4 to get the supposed increase. But how many games have this support? And how many games can't do the same in PS1.3?

Shark; call it 3DMark2002? You don't think you're maybe putting too much into this DirectX revision, do you? PS1.4 is one of 1000 features. It's a very small change from DX8 to DX8.1; why should they make a new benchmark? When DX9 comes, it's time to do it IMO.
Should they make a new game test with EMBM support? Should they make a game test with Dot3 bump mapping? Should the vertex test count toward the score?

The way I understand 3DMark is that it gives you a basic idea of how fast a card will play "most" DirectX 8 games, regardless of features.
Interpreted that way, the benchmark does a pretty good job too. The fact that the Radeon performs better than the GF3 in 3DMark might not be reflected in most games, but overall it's pretty accurate.
 
On 2002-02-13 07:59, jvd wrote:
well ain't that sweet.... time to find a new benchmark tool.... um any smart guys here want to make one :smile:

Well, I've already done one :smile:
http://hem.passagen.se/emiper/index.html

It's called GL_EXT_reme; it gives useful data but ain't showing any kind of impressive graphics. I'd be very interested in how well a GF4 would do in it.
 
I love it. I bring up the concept of selective memory, and then use EMBM as a counter-example to defend MadOnion.

Then someone claims that MadOnion didn't put EMBM in 3DMark until the GF3, which supported it, came out! Hahahahha

They take what was, at the time, a Matrox-only 3DMark2000 test and "remember it" as being a pro-Nvidia bias.


All of this is shades of 3dfx, only now the *boys have attached themselves to ATI instead of 3dfx. It seems at first they attached themselves to Kyro, then BB, but given those two failures, they have now placed their bets on ATI.

I guess if you keep doubling down your bet, you'll eventually win $1.
 
This discussion has now also evoked shades of a certain type of Nvidia fan (dare I call them *boys?) who seem to believe that no criticism of Nvidia -- or, more to the point, of the often inexplicable free pass that Nvidia is given by the powers that be in the online hardware community -- could ever possibly be based in anything other than an irrational hatred of Nvidia, let alone objectively justified.
 
100% agreed. I share DemoCoder's feeling. It seems to me that everything MadOnion does can be used as proof of them being biased toward Nvidia. C'mon guys..
(btw..Democoder..you have a private message :smile:)

ciao,
Marco
 
On 2002-02-13 19:30, DemoCoder wrote:

I love it. I bring up the concept of selective memory, and then use EMBM as a counter-example to defend MadOnion.

Then someone claims that MadOnion didn't put EMBM in 3DMark until the GF3, which supported it, came out! Hahahahha

They take what was, at the time, a Matrox-only 3DMark2000 test and "remember it" as being a pro-Nvidia bias.


All of this is shades of 3dfx, only now the *boys have attached themselves to ATI instead of 3dfx. It seems at first they attached themselves to Kyro, then BB, but given those two failures, they have now placed their bets on ATI.

I guess if you keep doubling down your bet, you'll eventually win $1.

Finally the true colors show themselves. If you can't understand the concept of being objective and FAIR, then fine. Your EMBM example was totally unrelated; we're talking about tests that affect the overall SCORE here, since everyone looks at the 3DMark number, and EMBM does nothing for the score. Nature DOES affect the score; is that too hard to understand?
 
I never understood how people gauge 3DMark's performance... what's the real-world performance of it?

And everything tested adds up to the score.
 
On 2002-02-13 19:53, muted wrote:
I never understood how people gauge 3DMark's performance... what's the real-world performance of it?

And everything tested adds up to the score.

Test Methodology
All tests (except for the image quality tests) measure the average frame rate (rendered frames / second), as described in the list below. Some tests give a result in a different unit (fill rate test: million texels / second), but even in these tests the frame rate is measured before the result is converted to the given unit.

All performance tests are run the following way:

1. Initialize the scene.
2. Render 1-3 frames (depending on the test) for "warm-up", to download the necessary textures to the texture memory of the graphics card.
3. Start the timer.
4. Render as many frames as possible within n seconds.
5. When n seconds are reached, draw a black triangle after all the frames in the pipeline, to ensure that all frames have been displayed.
6. Lock the frame buffer.
7. Stop the timer.
8. Unlock the frame buffer.
9. Calculate the result as "NumberOfFramesRendered / Time".
10. De-initialize the scene.
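In other words, the core of each performance test is just a timed render loop. A minimal self-contained sketch of that loop (RenderFrame is a stub standing in for the real scene, and the black-triangle/frame-buffer-lock flush from step 5 is omitted, since it needs API-specific synchronization):

#include <chrono>
#include <cstdio>

// Stand-in for "render one frame of the scene"; a real benchmark would
// issue its draw calls and present the back buffer here.
static void RenderFrame() {}

// Average fps, measured the way the list above describes (minus the GPU
// pipeline flush).
static double MeasureAverageFps(int warmupFrames, double seconds)
{
    for (int i = 0; i < warmupFrames; ++i)
        RenderFrame();                      // warm-up: get textures resident

    using Clock = std::chrono::steady_clock;
    const auto start = Clock::now();
    long frames = 0;
    while (std::chrono::duration<double>(Clock::now() - start).count() < seconds) {
        RenderFrame();                      // render as many frames as possible
        ++frames;
    }
    const double elapsed =
        std::chrono::duration<double>(Clock::now() - start).count();
    return frames / elapsed;                // NumberOfFramesRendered / Time
}

int main()
{
    std::printf("average fps: %.1f\n", MeasureAverageFps(3, 1.0));
    return 0;
}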


3DMark®2001 contains a number of tests divided into different categories. There are game tests, theoretical tests, feature tests and image quality tests (image quality tests are available only in the Pro version of 3DMark2001).

Game Tests
The game tests give the actual 3DMark score, which is the standard measurement of PCs' 3D game performance. When running the game tests, 3DMark2001 keeps track of the frame rate and calculates an average frame rate after the test is run.

The 3DMark overall score is based on the four game tests only. For more information, see the benchmark results section.

The game tests are short runs of a number of different types of scenes that simulate future games. The game tests produce the same workload for your PC as real 3D games do, because they use a real 3D game engine. One of the game tests is made even more game-like by using real-time physics and artificial intelligence. With additional effort, all of these game tests could be made into full games. Still, when running short tests and measuring the performance, we don't need a huge 3D world to play through, a game scheme and a storyline, or any game controls. However, to prove to our users that all these tests can be made into a game, we have added a game demo, where one of the game tests is equipped with game controls.

Theoretical and Feature Tests
Traditionally, benchmarks in our 3DMark series have also included a number of theoretical tests, designed to measure the performance of some particular quality of your PC's graphics hardware. 3DMark2001 also includes these, and we have concentrated on the most essential performance qualities of 3D hardware: fill rate and polygon throughput.

Additionally, there are some feature tests. Firstly, there are bump mapping tests demonstrating environment bump mapping and Dot product 3 bump mapping (DOT3). Then there are what we call DirectX®8 (DX8) feature tests. These show new features in DX8, which are demonstrated for people curious about the differences between DX8 and earlier versions. They are also a good way for people with DX8 graphics hardware to confirm that the hardware does what it should. The tests do measure frame rate, which shows the difference in speed between hardware acceleration and software emulation. These frame rate results can also be used for comparing performance between DX8 graphics hardware, but our main goal when developing these tests was to demonstrate these new DX8 features.

FROM MADONION
 