3D Mark 2001 SE Today?

The claims of bias are so tired now it beggars belief at times. Hey, aren't all games biased too? After all, nearly every single one of them only runs on one operating system; can't use them for benchmarks in reviews then.

Amen!

BTW I don't believe including some level of PS1.4 support can do anything to hurt ATi, since the technology is exclusive to them ATM. If anything, it sounds like MadOnion is helping ATi out.

I mean do you think NVIDIA pushed for this update? Geez...

MO should have just left the benchmark as is, and saved the new stuff for the next major revision.
 
>>"Perhaps they should be chided for artistic unoriginality, but bias? Give me a break."<<

Well, they should be chided for wildly and constantly changing their standards/policies 180 degrees at each and every juncture of advancing their product line. Inconsistency is the mother of all evils.

You mentioned the Nature test yourself. Obviously this test has the same "fall-back" mode of operation, yet there is no way to get an avg. fps or result from such a run. I have no qualms with "fall-backs" in demos, since there is no final score for comparison; they are, of course, apples-and-oranges comparisons. But a 1.0/1.1 PS test that runs in fallback mode is now allowed to run, and a score/result is reported for (mis)handling and (mis)comparison.

Time and time again, MadOnion changes their own rules and standards for whatever reason. It's the pattern of reasoning that shrouds these flips in policy that raises the question... why?

Some may consider the outrageous 180 degree flips in policy as "coincidence"- but when you stack them all up and discover the trend is always 100% in favor of one particular chipset, it will of course cause some to judge this as bias.

The question is- how many more of these "coincidental" changes in policy/standards are going to occur, and will a single case ever *not* be in the favor of an nvXX PR tool?

Cheers,
-Shark

[ This Message was edited by: Sharkfood on 2002-02-13 01:53 ]
 
Selective sampling in your worldview. I already provided an example of the converse: EMBM in 3DMark. It wasn't supported in earlier GeForce chipsets. Also, when they originally included anti-aliasing support in the benchmark, Nvidia's AA performance sucked rocks and only made NVidia's AA look very bad compared to 3dfx.


Once you come to a conclusion, you can interpret any slight evidence to enforce that worldview, and ignore any contrary data. The conspiracy nuts have been doing it for years, whether it's UFOs, FreeMasons, Bilderbergers, grand-oil-conspiracy, etc


If MadOnion had a coding bug in 3dmark that made it run poorly on brand X, it will be interpreted as being biased against brand X, even if they never intended it to be so, and was merely an unintentional coding error. If MadOnion is lazy fixing a bug, or refuses to waste time writing specialized code for a card with a small market share (e.g. Kyro), it will be interpreted as bias. If they write such code as a fallback for the vast majority of the market, it is interpreted as bias.


Look, writing something that takes specific advantage of PS1.4 and SHOWS AN ADVANTAGE is hard. It's hard to pull off an effect that is immediately and dramatically better looking *AND* also guarantee that it will win in performance. In most cases, a PS1.1 version can be made to look just as good.


Carmack has already shown that sometimes paradoxical results happen. Sometimes 2 passes aren't naturally slower than 1. What if Carmack's "fallback" for the GF4 showed that the GF4 multipass beat the ATI PS1.4?
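To put some rough numbers behind that paradox, here is a tiny back-of-the-envelope cost model in Python (purely illustrative; the pixel rate, bandwidth and byte counts are invented assumptions, not measurements of any GF3/GF4/8500). When the shader work is the bottleneck, splitting the same work across two passes costs almost nothing; when framebuffer traffic is the bottleneck, the single-pass path wins, which is the PS1.4 argument.

# Toy cost model for an N-pass full-screen effect. Every constant is invented
# for illustration; none of them describes real hardware.
def frame_ms(passes, instructions_per_pass,
             pixels=1024 * 768,     # pixels shaded per frame
             pixel_rate=800e6,      # shader throughput, pixel-instructions/sec (assumed)
             bandwidth=8e9,         # memory bandwidth, bytes/sec (assumed)
             bytes_per_pass=8):     # colour read + write per pixel, per pass (assumed)
    shade_s = passes * instructions_per_pass * pixels / pixel_rate
    mem_s = passes * bytes_per_pass * pixels / bandwidth
    return 1000 * max(shade_s, mem_s)   # whichever resource is the bottleneck dominates

# Same total shader work, one long pass vs. two short ones:
print(frame_ms(1, 8), frame_ms(2, 4))            # shader-bound: effectively identical
print(frame_ms(1, 2, bandwidth=2e9),
      frame_ms(2, 1, bandwidth=2e9))             # bandwidth-bound: the single pass wins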

Are you going to cry it is unfair that Carmack wrote a multipass fallback, even if it looks 100% identical?


Isn't it the look and performance of the final result that matters, not how it is done? Gamers hardly care about the elegance of collapsing passes, say, in Doom3. They care only about how good it looks and how fast it runs. If ATI's PS1.4 can't beat a GF4 doing identical output with multipass, it's not bias. It's factual data that shows that PS1.4 has no advantages, and perhaps we should move on to something better, like PS2.0 or OpenGL2.0.
 
BTW I don't believe including some level of PS1.4 support can do anything to hurt ATi, since the technology is exclusive to them ATM. If anything, it sounds like MadOnion is helping ATi out.

Livecoma,

That has gotta be the strangest way of thinking I've ever read. Let me get this straight: one year ago I was sitting with a Radeon Vivo. 3DMark 2001 is released, I download it and run it, and it gets to Nature:

NO HARDWARE SUPPORT..Skipping

ok fine ..next test

Pixel Shader

NO HARDWARE SUPPORT..Skipping

Huh.. WTF... the Radeon has a pixel shader.. jump on Rage3D and already there are a thousand posts all over.. ATI sucks, can't even get the pixel shader right.. yada yada.
Now we come to find out the Radeon supports Pixel Shader 1.0, not 1.1; at the last minute MS changed it.. nice.

So here is all these Radeon users unable to view this test because of a hardware limitation. There was no 'fallback option' offered to us Radeon users, we simply could NOT run these tests, like the GF2.

Meanwhile the ORB is DOMINATED by Geforce 3's with the help of Nature and Pixel Shader support.

Now here we are 1 year later and the tables have turned: we have an 8500 able to do more advanced effects, and faster, yet this doesn't count, does it. Now this FEATURE, which is DX 8.1, counts for nothing but a stupid DEMO.

Yeah, let's not utilize these advanced features until the OTHER company can do them too; then it will be ok.

Sad.
 
I already provided an example of the converse: EMBM in 3DMark. It wasn't supported in earlier GeForce chipsets.

But it was supported by a multitude of others, not just a single vendor.

Also, when they originally included anti-aliasing support in the benchmark, Nvidia's AA performance sucked rocks and only made NVidia's AA look very bad compared to 3dfx.

I must have missed that 3dfx product. From day one, a GTS with its add-on 6.xx drivers using AA was still clobbering even a V5 in final scores due to the HW T&L advantage- even more so on the processors of the time.

Once you come to a conclusion, you can interpret any slight evidence to enforce that worldview, and ignore any contrary data. The conspiracy nuts have been doing it for years, whether it's UFOs, FreeMasons, Bilderbergers, grand-oil-conspiracy, etc

Or alternatively, you can turn a blind eye towards any visible or extensive accumulation of evidence and simply discount it *because* it conflicts with one's world view.

If someone is set upon one set of principles- they will simply try to rationalize away any evidence of the contrary. Same thing but in reverse.

If MadOnion had a coding bug in 3dmark that made it run poorly on brand X, it will be interpreted as being biased against brand X, even if they never intended it to be so, and was merely an unintentional coding error.

There isn't an example of the contrary, so this theory stands to reason. If there were a code bug that made the product run poorly on Brand Y.. well, that would never happen if Brand Y was the designated platform it was designed around from square one.

If MadOnion is lazy fixing a bug, or refuses to waste time writing specialized code for a card with a small market share (e.g. Kyro), it will be interpreted as bias.

Understandable hypothesis, but again not consistent with reality. V1.1 of 3DMark2000 came out almost immediately once it was determined a race condition occurred under one specific chipset (guess which) when running tests in series. In other words, on one particular chipset, when running all the tests in sequence, one score was adversely affected by some ramification of the previous test. This was fixed and patched almost instantly. I have never seen any such issue reproduced on a 3dfx or ATI Radeon of the time, only the GeForce/GTS cards.

If they write such code as a fallback for the vast majority of the market, it is interpreted as bias.

Here is another case where the theory is sound, but the real-world inconsistency doesn't match.

If "fallback" comparisons are to become the status quo, then this is something totally new for MadOnion. Again, simply reference the Nature test- which not only did not offer a fall-back execution mode for lesser featuresets but also attributed positively to the final "score" from which they use to make hardware upgrade "recommendations." Much like their XLR8R or System Analyzers, which would certainly make bogus recommendations based on erroneous at best data.

Look, writing something that takes specific advantage of PS1.4 and SHOWS AN ADVANTAGE is hard.

What's so hard about showing what multi-pass means to performance? I'm sure the same could be said about the original Banshee versus the Voodoo2. Hell, as long as you didn't perform any multitexturing, both were single pass in operation. Too bad there weren't any companies making benchmarks to specifically isolate optimal, emulated, single-pass tests back then, else the Banshees would likely have outsold the V2s.

It's hard to pull off an effect that is immediately and dramatically better looking *AND* also guarantee that it will win in performance. In most cases, a PS1.1 version can be made to look just as good.

Your stipulation, not mine. The only lack of "guarantee" would be due to driver or hardware issues from ATI and not the basic principle of PS1.4 vs. 1.1.

Obviously, through selective example coding you can make a PS 1.1 "tech" demo look every bit as good as a PS 1.4 demo. You can do the same with and without HW T&L too, and not "guarantee" any improvement, if you approach it with that goal in mind. That is still no testament that HW T&L is of no major benefit.

Carmack has already shown that sometimes paradoxical results happen. Sometimes 2 passes aren't naturally slower than 1. What if Carmack's "fallback" for the GF4 showed that the GF4 multipass beat the ATI PS1.4?

That would be a fine result, but it will never be given the objective light of day. Instead, the angle will be the one being taken now- simply discount/discredit any *possible* gains in the complete absence of any real data. That seems to be the rallying cry here anyway- and at MadOnion.


Isn't it the look and performance of the final result that matters, not how it is done? Gamers hardly care about the elegance of collapsing passes, say, in Doom3. They care only about how good it looks and how fast it runs. If ATI's PS1.4 can't beat a GF4 doing identical output with multipass, it's not bias. It's factual data that shows that PS1.4 has no advantages, and perhaps we should move on to something better, like PS2.0 or OpenGL2.0.

I agree entirely here. Unless there is remarkably different resultant IQ, a gamer truly doesn't care how it was arrived at- as long as it simply looks as good as Leading Brand X and does so without any cost in dropped effects, detail, IQ or likewise.

But this isn't the purpose of 3DMark2001. It has never been a tool to provide "look and feel"- at least not until now, with its (yet again) complete change in policy. If this is a new evolutionary step in the benchmark, I'd be all for it. But somehow, I doubt there will be any "fallbacks" or custom optimized code paths for alternate hardware in any DX9.0 or above version that may come in the future. Just a gut feeling.

Cheers,
-Shark
 
This is what I mean by Inconsistent:

Perhaps the wording in the documents could have been clearer maybe but the idea behind the APS test is to show the performance difference that PS1.4 has over earlier revisions by having the 2 phases per pass - this is what makes it "advanced".

Huh? Maybe that's the idea? Perhaps the documentation could have been clearer? The documentation pretty clearly says exactly 100% the opposite of what you are proposing. The documentation specifically says this is not about performance at all, but about "compatibility." It's CLEAR from the FAQ that the test is NOT designed to be used to "compare performance" from one PS version to another.

If that was the "goal", then wouldn't it make sense for MO to have put in the option for PS 1.4 boards to run in EITHER PS 1.1 mode or PS 1.4? That's the only TRUE way you can gauge the effectiveness of saving passes. If you have to use two different architectures to do the compare, it's not an apples-to-apples comparison.
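Just to sketch what that option would look like in Python (hypothetical: 3DMark exposes no such switch, and run_test() below is an invented stand-in, which is exactly the complaint):

# Hypothetical harness: benchmark the SAME board down both shader paths, so any
# delta is attributable to PS1.4's pass collapsing rather than to the architecture.
def ps14_gain_percent(run_test, card):
    fps_11 = run_test(card, shader_path="ps_1_1")   # multi-pass code path
    fps_14 = run_test(card, shader_path="ps_1_4")   # collapsed-pass code path
    return 100.0 * (fps_14 - fps_11) / fps_11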

Look, writing something that takes specific advantage of PS1.4 and SHOWS AN ADVANTAGE is hard. It's hard to pull off an effect that is immediately and dramatically better looking *AND* also guarantee that it will win in performance. In most cases, a PS1.1 version can be made to look just as good.

Really? Based on what? My point is, MadOnion did not appear to make an attempt to do this. They specifically focused on "compatibility" and not performance.

Of course, MadOnion could be lying in their documentation...

Are you going to cry it is unfair that Carmack wrote a multipass fallback, even if it looks 100% identical?

Of course not. But
1) Are they 100% identical? If not, what are the differences?

2) The effects Carmack is using are all very GENERIC. You don't need pixel shaders at all to do them.

3) The 3D Mark "Advanced Pixel Shader" test is not a performance test! It is a FEATURE TEST.

Let me make one thing clear. I would have LIKED to have seen MadOnion put in an "Advanced Shader" PERFORMANCE test and make it part of the 3D Mark score. That would have made the MOST sense to me from the beginning. (Search the old D3B boards.) I even suggested that a "fall-back" mode for performance testing is viable... as long as there is some "penalty" assigned if the fall-back does not equal the quality of the "standard."

What MO did is completely useless. It's either a "feature test" that is not designed to show what PS 1.4 can do in terms of graphical features, or a "performance test" that does nothing for the 3D Mark score, nor provides any means to arrive at an apples-to-apples comparison.
 
On 2002-02-13 03:55, DemoCoder wrote:

Selective sampling in your worldview. I already provided an example of the converse: EMBM in 3DMark. It wasn't supported in earlier GeForce chipsets. Also, when they originally included anti-aliasing support in the benchmark, Nvidia's AA performance sucked rocks and only made NVidia's AA look very bad compared to 3dfx.


Once you come to a conclusion, you can interpret any slight evidence to enforce that worldview, and ignore any contrary data. The conspiracy nuts have been doing it for years, whether it's UFOs, FreeMasons, Bilderbergers, grand-oil-conspiracy, etc


If MadOnion had a coding bug in 3dmark that made it run poorly on brand X, it will be interpreted as being biased against brand X, even if they never intended it to be so, and was merely an unintentional coding error. If MadOnion is lazy fixing a bug, or refuses to waste time writing specialized code for a card with a small market share (e.g. Kyro), it will be interpreted as bias. If they write such code as a fallback for the vast majority of the market, it is interpreted as bias.


Look, writing something that takes specific advantage of PS1.4 and SHOWS AN ADVANTAGE is hard. It's hard to pull off an effect that is immediately and dramatically better looking *AND* also guarantee that it will win in performance. In most cases, a PS1.1 version can be made to look just as good.


Carmack has already shown that sometimes paradoxical results happen. Sometimes 2 passes aren't naturally slower than 1. What if Carmack's "fallback" for the GF4 showed that the GF4 multipass beat the ATI PS1.4?

Are you going to cry it is unfair that Carmack wrote a multipass fallback, even if it looks 100% identical?


Isn't it the look and performance of the final result that matters, not how it is done? Gamers hardly care about the elegance of collapsing passes, say, in Doom3. They care only about how good it looks and how fast it runs. If ATI's PS1.4 can't beat a GF4 doing identical output with multipass, it's not bias. It's factual data that shows that PS1.4 has no advantages, and perhaps we should move on to something better, like PS2.0 or OpenGL2.0.

EMBM doesn't affect your score, NATURE does, so your comparison is moot. If a Pixel Shader 1.4-optimized path theoretically gave a 30% boost, then the total 3DMark would also be raised.. not supporting EMBM does not affect the score at all.

Game1 + Game2 + Game3 + Game4(Nature)

Based off what Carmack says:

A test of light interaction speed initially had the 8500 significantly slower
than the GF3, which was shocking due to the difference in pass count. ATI
identified some driver issues, and the speed came around so that the 8500 was
faster in all combinations of texture attributes, in some cases 30+% more.
This was about what I expected, given the large savings in memory traffic by
doing everything in a single pass.

The 8500 could have a nice speed increase if implemented properly, which in turn would give a higher OVERALL 3Dmark score...gee I wonder why that didn't happen :rollseyes:
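To make the structural point concrete, a rough Python sketch (the weights are placeholders, not MadOnion's published formula, and the fps figures are invented): a test that only one vendor's hardware can complete contributes its full weight to that vendor's total and nothing to anyone else's.

# Placeholder weighting - NOT the real 3DMark2001 formula.
def total_score(game_fps, weights=(10, 10, 10, 20)):
    return sum(w * fps for w, fps in zip(weights, game_fps))

geforce3 = total_score([60, 45, 50, 30])    # runs Game1-3 plus Game4 (Nature)
radeon   = total_score([60, 45, 50, 0])     # Nature skipped: contributes nothing
print(geforce3, radeon, geforce3 - radeon)  # the whole gap comes from the one test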
 
3DMark 2001 SE is supposed to be a performance test of DX8.1. Pixel Shader 1.4 is a feature of DX8.1, much like Pixel Shader 1.1 was a feature of DX8. If the Pixel Shader 1.1 test counts toward your score, why wouldn't 1.4 count? Because Nvidia doesn't have any true DX8.1 cards. It is a complete double standard.

I get tired of hearing people say "Pixel Shader 1.4 is an ATI-only thing." It isn't. It's a DirectX 8.1 thing, and 2001 SE is supposed to be a DX8.1 performance analyzer. Kid yourself all you want, but this is completely biased. When Nvidia had the only cards that did PS 1.1, those tests were included in the score, and rightfully so. You should get more points for being able to do all of the features of DX. MadOnion had it right the first time: scoring for all of the tests. Now that they're on DX8.1 they should do the same thing with PS 1.4. Being able to do all of the features of DX8.1 should still count for something. Why don't they remove the scoring for the Nature test if this is the route they're going to take?
 
More importantly-

Should we start the wagering now?

A month or two from now once the GF4's start shipping, *which* website will be the first to revisit their p/reviews with 3DMark2001 SE "Advanced Shader" scores and wrongfully apply the results to build an argument for PS1.4 vs PS1.1?

Should we start a pool? 5 bucks a square?
 
Selective sampling in your worldview. I already provided an example of the converse: EMBM in 3DMark.

Demo, your example does nothing but back up the opinion of some people that MadOnion is biased toward Nvidia. Graphics chips had supported EMBM since before 3DMark2000; did 3DMark2000 have EMBM support? No. However, it did support another bump mapping technique that the top-of-the-line Nvidia products of that time just happened to support (Dot3). Then, as soon as Nvidia finally decided to support EMBM in the GeForce 3, MadOnion decided that EMBM was a worthwhile feature... isn't that odd? Matrox cards, the Radeon, the Kyro all support EMBM, and yet EMBM was overlooked until Nvidia's top product had support for it... that's not a sign of bias?

Once you come to a conclusion, you can interpret any slight evidence to enforce that worldview, and ignore any contrary data.

Which is what you seem to be doing.

The conspiracy nuts have been doing it for years, whether it's UFOs, FreeMasons, Bilderbergers, grand-oil-conspiracy, etc

Oh yeah, because the "MadOnion is biased towards Nvidia" conspiracy theory is right up there with UFOs, isn't it :rollseyes:

If MadOnion had a coding bug in 3dmark that made it run poorly on brand X, it will be interpreted as being biased against brand X, even if they never intended it to be so, and was merely an unintentional coding error. If MadOnion is lazy fixing a bug, or refuses to waste time writing specialized code for a card with a small market share (e.g. Kyro), it will be interpreted as bias. If they write such code as a fallback for the vast majority of the market, it is interpreted as bias.

People did not say MadOnion was biased when the Kyro bug was found (some did, but most people didn't), because it was equally MS's fault, not just MadOnion's; it was a genuine mistake on MadOnion's part.

However, people started to get a little suspicious of some bias when it took MadOnion 6+ months to fix the bug (especially since they fixed a bug for another chip almost instantly). Then people got more than suspicious when MadOnion sold a "performance analyzer" to Nvidia that used data they knew was incorrect (the Kyro scores) in order to sell people Nvidia cards. Did they include a disclaimer telling any Kyro owner using the analyzer that its advice could not be taken seriously for their particular card? Nope, they just went right ahead and advised people to buy GeForce 2 MX200s as upgrades for Kyro IIs. This wasn't a mistake; they knew the data was incorrect when they sold the analyzer to Nvidia, so basically they knew they were selling lies. But they still did it. That's when a lot of people really started to believe the Nvidia bias theories.

Isn't it the look and performance of the final result that matters, not how it is done? Gamers hardly care about the elegance of collapsing passes, say, in Doom3. They care only about how good it looks and how fast it runs. If ATI's PS1.4 can't beat a GF4 doing identical output with multipass, it's not bias. It's factual data that shows that PS1.4 has no advantages, and perhaps we should move on to something better, like PS2.0 or OpenGL2.0.

Then why doesn't MadOnion do this for other tests? Why not allow the pixel shader test (the one with the ocean) to be done with multi-texturing in DX7?.. don't say it couldn't be done, because that test isn't even very good looking. This is the point you're missing: they only decide to allow a fallback like this when it suits Nvidia.

[ This Message was edited by: Teasy on 2002-02-13 05:27 ]
 
Doesn't it just boil down to what it was back in 2000? NV is evil, 3dfx..er..ATI is good, gang up on NV. :rollseyes:

Seriously folks, it gets old, and this is from a Voodoo5 owner.
 
Seriously, some people can't see double standards even when they smack them right in the face.
There is a double standard here.. live with it.
 
The MadOnion guys get propositioned by every hardware manufacturer to "include feature X" or "ignore feature Y."

The way several things are done in 3DMark 2001 is inefficient on NVIDIA cards, and wouldn't be done in an actual game engine (or at least a good game engine).

Similarly, features like destination alpha or stencil support (which have traditionally been *very* weak on ATI parts) or MIP mapping on cubic or 3D textures haven't been tested. There's also a whole host of render accuracy tests that could be performed (AA lines (or lines in general), anisotropic filtering, etc.) that would make the NVIDIA part look better in comparison.

If the MadOnion guys were truly biased, they'd be out of business. People buy video cards based on 3D Mark scores -- if one graphics company mopped the floor in 3D Mark, the other companies would slowly go out of business... and once there is only one company left, there isn't any use for a benchmark.

If anything, MadOnion specializes in generating extra competition. If they wanted to create a test that favored NVIDIA, it wouldn't be terribly difficult to put together a generic test suite where a GeForce 3 Ti200 absolutely destroys a Radeon 8500 (see JC's comments about rendering in the stencil buffer as an example). Similarly, you could architect a test that runs fine on R8500, but sucks on the GeForce 3.
 
It really does not matter what kind or how many sound arguments and examples you give.

Some people are totally unwilling to see it. They simply dont want to, and they never will.

The only thing that irritates me is when they try to turn it around like you're the idiot.

Oh well, that's life...
 
In most cases, a PS1.1 version can be made to look just as good.

Actually, within the limits of precision (ps1.0-1.3 is only 9-bit precision/component, while ps1.4 is 12-bit precision), this can be proven.

If you treat pixel shaders as vector spaces, you can show that any effect possible in ps1.4 is also possible in ps1.3, but might require additional passes, or clever data manipulation (and, with the exception of depth replace (which is possible on NV20, but not in ps1.1), every effect supported by ps1.4 is also supported by ps1.1).

ps1.4 doesn't really offer anything new effects-wise, it just offers a more convenient (and potentially more efficient) method of representing the same things.
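As a toy illustration of that "same effect, more passes, within the limits of precision" point (this is plain Python, not shader code; the bit depths reuse the 9-bit/12-bit figures above, and the "effect" itself is arbitrary):

def quantize(x, bits):
    levels = (1 << bits) - 1
    return round(x * levels) / levels

def effect(base, bump):
    return min(1.0, (base * bump) ** 1.5)    # some arbitrary two-step effect

def single_pass(base, bump, internal_bits=12):
    # the whole computation stays at the shader's internal precision (ps1.4-style)
    return quantize(effect(base, bump), internal_bits)

def two_pass(base, bump, internal_bits=9, framebuffer_bits=8):
    # pass 1 computes the product and writes it to an 8-bit render target...
    intermediate = quantize(quantize(base * bump, internal_bits), framebuffer_bits)
    # ...pass 2 reads it back and applies the rest of the effect (ps1.1-style fallback)
    return quantize(min(1.0, intermediate ** 1.5), internal_bits)

print(single_pass(0.73, 0.41), two_pass(0.73, 0.41))   # same effect, tiny precision gap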

Why not allow the pixel shader test (the one with the ocean) to be done with multi-texturing in DX7?

Looking good and being possible are two entirely different things. The big thing pixel shaders add is dependent texture accesses. The *only* dependent texture address mode supported in DX7 is EMBM. If the ocean (ugly as it may be) uses dependent texture reads, it *can not* be done in DX7.

That's not to say that there isn't a different way to render an ocean without using dependent texture reads that looks better, though.
 
Similarly, features like destination alpha or stencil support (which have traditionally been *very* weak on ATI parts) or MIP mapping on cubic or 3D textures haven't been tested. There's also a whole host of render accuracy tests that could be performed (AA lines (or lines in general), anisotropic filtering, etc.) that would make the NVIDIA part look better in comparison.

Mip mapping on 3D textures is a DESIGN choice. Anisotropic filtering is a design choice. As are ALL the examples you just brought up.

Bring on anisotropic filtering as a PERFORMANCE comparative test. I dare you. See what happens to Nvidia's scores. ATi will win virtually every render-compare test you throw at it.

Some John Carmack, you say??
A test of the non-textured stencil shadow speed showed a GF3 about 20% faster
than the 8500. I believe that Nvidia has a slightly higher performance memory
architecture.

[And later.....]

ATI identified some driver issues, and the speed came around so that the 8500 was
faster in all combinations of texture attributes, in some cases 30+% more.
This was about what I expected, given the large savings in memory traffic by
doing everything in a single pass.

Not such a convincing argument compared to the WAY you phrased it. Looks like ATi has its CLEAR advantages as well. The stencil issue is due to the memory controller, not inferior stencil support.

These are more prime examples of the ABJECT BIAS you Nvidia people have. It makes me ill. If it's not done the way Nvidia does it, then it's WRONG.

Your ENTIRE post proves the MadOnion bias beyond question. WHY? Because you automatically assume that the MadOnion way will equal the Nvidia way and that ATi performance will suffer.

Need anyone say any more? I am really struggling to stay cool over this. It is insulting that you guys bring this drivel to the table as a sound "UNBIASED" argument. What, you think we are all stupid?






[ This Message was edited by: Hellbinder[CE] on 2002-02-13 07:13 ]

[ This Message was edited by: Hellbinder[CE] on 2002-02-13 07:18 ]
 
I don't understand this one bit... Okay, I have a Radeon that scores 3,400 points. I have a Radeon 8500 that gets 7,500 points. If I disable Nature and the pixel shaders I get 6,500 points; that's a loss of 1,000 3DMarks... If we can assume the same drop for the GeForce 3, wouldn't that put the GeForce 3 very close to the Radeon and GeForce 2 cards? Also, if what Carmack says about the Radeon 8500 being 30% +/- over the GeForce 3 Ti 500 holds, would it be safe to assume that with Pixel Shader 1.4 the Radeon would get a score much closer to, if not higher than, that of a GeForce 4, if they had added such a game test to the new 3DMark?
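Spelling out that arithmetic in Python (the 8500 scores are the numbers above; the GF3 figure and the assumption that it would lose a similar chunk are hypothetical, and the 30% is a reading of Carmack's comment, not a measured 3DMark result):

r8500_full  = 7500        # 8500 with Nature + pixel shader tests counted
r8500_no_ps = 6500        # same card with those tests disabled
ps_chunk = r8500_full - r8500_no_ps      # ~1000 3DMarks riding on those tests

gf3_full = 7200                          # hypothetical GeForce 3 Ti score
print(ps_chunk, gf3_full - ps_chunk)     # a GF3 minus a similar chunk lands near DX7-class scores
# A PS1.4 path running Nature ~30% faster would push the 8500's total the other way;
# by how much depends entirely on how MadOnion weights the Nature fps.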
 
On 2002-02-13 05:14, Teasy wrote:

Demo, your example does nothing but back up the opinion of some people that MadOnion is biased toward Nvidia. Graphics chips had supported EMBM since before 3DMark2000; did 3DMark2000 have EMBM support? No. However, it did support another bump mapping technique that the top-of-the-line Nvidia products of that time just happened to support (Dot3). Then, as soon as Nvidia finally decided to support EMBM in the GeForce 3, MadOnion decided that EMBM was a worthwhile feature... isn't that odd? Matrox cards, the Radeon, the Kyro all support EMBM, and yet EMBM was overlooked until Nvidia's top product had support for it... that's not a sign of bias?

Except that you are wrong. 3DMark2000 has the following bump mapping tests:

Emboss 3-pass
Emboss 2-pass
Emboss 1-pass
Environmental

So it is EXACTLY the opposite of what you are saying: NO Dot3, YES EMBM. BTW, it takes only 30 seconds to look that up.

[ This Message was edited by: Geeforcer on 2002-02-13 07:28 ]
 
On 2002-02-13 07:22, jvd wrote:
I don't understand this one bit... Okay, I have a Radeon that scores 3,400 points. I have a Radeon 8500 that gets 7,500 points. If I disable Nature and the pixel shaders I get 6,500 points; that's a loss of 1,000 3DMarks... If we can assume the same drop for the GeForce 3, wouldn't that put the GeForce 3 very close to the Radeon and GeForce 2 cards? Also, if what Carmack says about the Radeon 8500 being 30% +/- over the GeForce 3 Ti 500 holds, would it be safe to assume that with Pixel Shader 1.4 the Radeon would get a score much closer to, if not higher than, that of a GeForce 4, if they had added such a game test to the new 3DMark?

The 30% is just an estimate based on Carmack's comments, but basically what you're saying is correct.

One year ago 3DMark 2001 was released, and the ONLY card that could run all the tests was the GeForce 3. The GeForce 3 could not be knocked out of the top spot because it was the only card scoring points in Nature (which requires pixel and vertex shaders), and increasing the Nature score gives a big return in 3DMarks. Now ATI's PS 1.4 can do Nature in a single pass, yet MadOnion chose not to optimize Nature for Pixel Shader 1.4, EVEN THOUGH Pixel Shader 1.4 is superior, is DX 8.1, and 3DMark 2001 SE is supposed to be a DX 8.1 benchmark.
 