Late, noisy, HUGE!!! - makes 17k Mark...

Status
Not open for further replies.
Sharkfood said:
There are also at least a few logical thinking people here that will stop and scratch their head at a PS1.4 benchmark that not only runs perfectly on non-DX8.1 compliant hardware, but in most cases equals or exceeds it in performance.. on lesser performing hardware. :)
Could you provide me with a PS1.4 benchmark, please? I haven't seen one. I have seen a benchmark that supports(!) DX8.1, and I have seen a benchmark with a test that utilizes PS1.4(!) (which, by the way, is called the "Advanced PixelShader Test" and not the "PixelShader 1.4 Test", FYI).

Not sure about you, but I'm losing interest in this. You are as stubborn as a... whatever, and you don't seem to listen to reason.

Be ignorant and stay in your little world of conspiracy. It's your call.
 
nAo said:
And about single pass PS1.4 shaders being faster in the general case versus multipassed PS1.1 shaders..IIRC John Carmack doesn't seem to agree on this. But I can't find the quote at the moment..

ciao,
Marco

I found two quotes, albeit not entirely with the same message: :?

John Carmack on Gamespy, May 23, 2002:
(http://www.gamespy.com/e32002/pc/carmack/index2.shtml)

Based on the feature set, the Radeon 8500 should be a faster card for Doom than the GF4, because it can do the seven texture accesses that I need in a single pass, while it takes two or three passes (depending on details) on the GF4. However, in practice, the GF4 consistently runs faster due to a highly efficient implementation. For programmers, the 8500 has a much nicer fragment path than the GF4, with more general features and increased precision, but the driver quality is still quite a ways from Nvidia's, so I would be a little hesitant to use it as a primary research platform.

John Carmack, February 11 2002, his plan update:
(http://www.shacknews.com/finger/?fid=johnc@idsoftware.com)

A test of light interaction speed initially had the 8500 significantly slower than the GF3, which was shocking due to the difference in pass count. ATI identified some driver issues, and the speed came around so that the 8500 was faster in all combinations of texture attributes, in some cases 30+% more. This was about what I expected, given the large savings in memory traffic by doing everything in a single pass.
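The pass-count arithmetic behind both quotes can be sketched as a toy calculation. The per-pass sample budgets below are illustrative assumptions, not the cards' real limits; the point is only that fewer passes means less framebuffer traffic:

```python
import math

def passes_needed(texture_accesses, samples_per_pass):
    """Rendering passes needed to fetch all texture samples for one pixel."""
    return math.ceil(texture_accesses / samples_per_pass)

# Doom's lighting path needs 7 texture accesses per pixel (per the quotes).
print(passes_needed(7, 8))  # a part that can sample 8x per pass: 1 pass
print(passes_needed(7, 4))  # a part limited to 4x per pass: 2 passes
```

Every extra pass rereads and rewrites the framebuffer, which is the "large savings in memory traffic" Carmack refers to.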
 
Sharkfood said:
So, now what's next? Worm and NVidiacoder roll eyes, self declare that all this is imagined/made up then spin fingers in the air? Well, at least the "air" part is right. :)
I missed this..

Can't resist :rolleyes:
 
If we look at the interview dates, it seems Carmack is contradicting himself, but maybe the Gamespy interview was conducted before JC wrote those comments in his .plan and was published later.

EDIT: sorry, I missed the fact that JC is talking about the GF4 in one comment and about the GF3 in the other, so he's not contradicting himself

ciao,
Marco
 
There has been no valid reason given why the future gametest could not have been done the same way as the Advanced Pixel Shader test in SE... there is none besides laziness.
The future gametest would run whatever the highest pixel shader version that card supports:

GeForce 3, PS 1.1 = X FPS
GeForce 4, PS 1.3 = X FPS
Radeon 8500, PS 1.4 = X FPS

Now how would that "screw up the ORB scores"... it still makes no sense.
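The fallback scheme described here amounts to a simple highest-supported-version dispatch. A minimal sketch (the shader names and version table are hypothetical, purely to illustrate the idea):

```python
# Hypothetical shader variants per PS version; names invented for illustration.
SHADER_VARIANTS = {1.4: "nature_ps14", 1.3: "nature_ps13", 1.1: "nature_ps11"}

def pick_shader(card_max_ps_version):
    """Return the variant for the highest PS version the card supports."""
    for version in sorted(SHADER_VARIANTS, reverse=True):
        if card_max_ps_version >= version:
            return SHADER_VARIANTS[version]
    raise ValueError("card does not support pixel shaders")

print(pick_shader(1.4))  # Radeon 8500 -> nature_ps14
print(pick_shader(1.3))  # GeForce 4   -> nature_ps13
print(pick_shader(1.1))  # GeForce 3   -> nature_ps11
```

Each card then reports an FPS figure for the best path its hardware offers, which is exactly the table above.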
 
Doomtrooper said:
There has been no valid reason given why the future gametest could not have been done the same way as the Advanced Pixel Shader test in SE... there is none besides laziness.
The future gametest would run whatever the highest pixel shader version that card supports:

GeForce 3, PS 1.1 = X FPS
GeForce 4, PS 1.3 = X FPS
Radeon 8500, PS 1.4 = X FPS

Now how would that "screw up the ORB scores"... it still makes no sense.

I think having all video cards run the same code as much as possible is what you want in a benchmark. Having one card do 1.1 and another 1.4 and trying to discern some meaning from the results would be rather difficult. It would be the same as if 3DMark99 had had a game test with bump mapping where all cards but the G400 were forced to use multiple passes compared to one pass for the G400. Once you go down that road, with each card running a different code path optimized for its specific features, you would end up with endless complaints of favouritism. Erm, wait... :)

What they could have done, though, is have options in the settings for enabling a 1.4 mode or any other extra features for various cards. This would be separate from the base standard test, but fans of the various cards would be able to show how their favourite is superior, pleasing everyone. :)
 
This is not showing favoritism in any way; it shows the merits of each and every revision... which is what this 'benchmark' was supposed to do: demonstrate DX8 technology. Showing improvements in shader code between revisions is no different than comparing DX9 PS 2.0 to DX8.1 PS 1.4... will fans argue bias then??

IMO this benchmark easily could have done this (Codecreatures supports all pixel shader versions, and MadOnion took the time to write a completely different shader test instead of patching their existing one WITH fallback options)

How can anyone say supporting PS 1.4 is showing bias? It is part of the DX 8.1 spec.
 
Doomtrooper said:
IMO this benchmark easily could have done this (codereatures supports all pixel shader versions and Madonion took the time to write a completly different shader test vs patching their existing one WITH fallback options)
Please.. How old is 3DMark2001, and how old is Codecreatures? :rolleyes:

We do also support all pixel shader versions. Up to 1.4 that is. ;)

Doom,

you seem to be glued to one position: ignorance. I (and some others) have tried and tried to get through to you that changing the game tests to use PS1.4 (if possible) might have changed the whole test in one way or another, throwing the score system off track. I'm running out of ideas for how to present this simple little thing to you.

If we had done a completely new benchmark in 2002 (= not comparable or compatible with 2001), we would most probably have used PS1.4 in the game test(s). SE is an update to 2001, introducing DX8.1. How can this be so hard to understand?
 
The only problem with that, Doomtrooper, is that when you are running different code on different platforms, there is always the problem of "less than optimal" code, as well as possibly lower image quality on one implementation.

Perhaps the best way to do something like this would be to make use of an HLSL to compile to the different pixel shader versions (Microsoft's can do this, can't it?). This is probably the best way to make the benchmarking as even as possible between different architectures.
 
Worm,

What example do you have that including the nature gametest would throw off the scoring? The game test was initially designed against DX 8.0, so what would cause such a huge problem with adding support in SE for DX 8.1??

It's not ignorance; you have given no legitimate example of how this would have affected scoring. It's very simple: Futuremark has no issue with fallback options on the advanced test (which has no impact on scoring)... so what exactly are you trying to protect here?

Supporting PS 1.4 in scoring was left out for a reason, and so far all I've seen is
might have changed the whole test in a way or another, causing the score system to be of track
yet you have no problem with the advanced test and its results... why is that? Where does "might" come from? Based on what data?
 
Doomtrooper said:
What example do you have that including the Future gametest would throw off the scoring? The game test was initially designed against DX 8.0, so what would cause such a huge problem with adding support in SE for DX 8.1??
Ok.. Finally I start to see what you mean by the "Future gametest". You are referring to Game Test 4 (Nature), right? Let's stick to GT4, Game Test 4, or Nature, ok? There is no test called "Future". :)

Back to the point. If we had started to implement support for PS1.4 in the Nature test, it could have caused the score for GT4 to fluctuate. We didn't want to go that route, so we decided to make it a feature test. Making something work on PS1.4 instead of PS1.1 isn't just "a click away". It's not that simple. Of course, there might be other reasons too that I am not aware of (technical issues).

Doomtrooper said:
Its not ignorance, you have given no legit example how this would have affected scoring...its very simple as Futuremark has no issues with fallback options on the advanced test (that has no impact on scoring)..so what exactly are you trying to protect here.
Hehe.. I'm not protecting anything. We are very open about what we do, as you _should_ see. I have explained why, how and when several times to you and to the others.

This discussion really sounds like an old record stuck on one track. Let's move on, ok? If you don't believe me, or us, that's your call. I hope a mod will close this thread, as it seems this is going nowhere.

Now off to test some new and crazy stuff! ;)
 
you seem to be glued to one position: ignorance. I (and some others) have tried and tried to get through to you that changing the game tests to use PS1.4 (if possible) might have changed the whole test in one way or another, throwing the score system off track. I'm running out of ideas for how to present this simple little thing to you.

The only thing I read here is: "Would have possibly made ATI cards faster, thus we did not use it because we get a lot of cash from Nvidia." Score system off track indeed, like that has ever mattered to you guys before. Just look at how much GF2 MXs outscore Kyros. There is no score balancing of any kind going on there, even though YOUR benchmark does not come ANYWHERE even CLOSE to reflecting the real-world performance difference between the two.

It is patently ridiculous, IMO, to try to make the case that adding PS 1.4 support for cards detected with it would "change the nature of the scoring"... please.. :rolleyes:

But, having said that, I will *try* to reserve any future judgement where Futuremark is concerned until 3DMark 2003. I am looking forward to seeing which features you support on whose hardware, and how it scores. Especially considering the rather convenient timing of its release alongside a certain Nvidia product.
 
I think I understand where worm is coming from. Introducing new features into the game benchmarks would obviously have changed scores, meaning the earlier database entries from before SE would have had to be removed from the database. Otherwise the comparison would be inaccurate, with the same video card getting different scores. Obviously something that Futuremark did not want.

Not too hard to understand IMO.
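The database concern can be made concrete with a toy example (the FPS numbers are invented): if SE's Nature test had switched some cards to PS1.4, the same card could appear in the ORB with two different scores under otherwise comparable settings.

```python
# Hypothetical ORB entries; the FPS figures are made up for illustration.
orb = [
    {"card": "Radeon 8500", "build": "3DMark2001",    "nature_fps": 30},
    {"card": "Radeon 8500", "build": "3DMark2001 SE", "nature_fps": 38},  # PS1.4 path
]

scores = {e["nature_fps"] for e in orb if e["card"] == "Radeon 8500"}
print(len(scores) > 1)  # True: same card, conflicting database entries
```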
 
I personally don't care what MadOnion would have to do to the database; what I care about is accurate data in Nature using the most advanced version of pixel shaders.
Maybe if I worked for the "MadOnion Database Team" I would care. As a consumer looking for an unbiased and accurate benchmark, having to modify the ORB database doesn't cut it as an excuse.
 
Slides said:
I think I understand where worm is coming from. Introducing new features into the game benchmarks would obviously have changed scores, meaning the earlier database entries from before SE would have had to be removed from the database. Otherwise the comparison would be inaccurate, with the same video card getting different scores. Obviously something that Futuremark did not want.

Not too hard to understand IMO.

They had no problem changing the database, or not having relevant comparisons, when going from 3DMark99 to 3DMark2000, or again to 3DMark2001.

Since there was no "3DMark2002" I don't know why having direct comparability to tests performed two years ago was such a big concern. They had already made major changes once a year in the past, why not this time? Why couldn't 3DMark2001SE have been the "2002" version?

Because for once it wouldn't favor a certain card? I think it's at least a valid question to ask. worm's explanations seem like a lot of excuses to me.
 
Question:

Can you take code written for 1.0 or 1.1 and make it twice as fast using 1.4? In all cases?

I can see going from 1.4 to 1.1 or something; it would probably do quite a lot visually that would simply take more passes, or a larger set of instructions, to do with a more limited instruction set. Not sure the results would be identical, though; you could be losing precision by having to reread values. It seems that emulating 1.4 would always be slower; I'm not sure you can replace 1.1 instructions with 1.4 versions and always get something faster.

Trying to visualize it: (no idea about shader crap)

From 1.4 to 1.1:

PS 1.4: a = 2 + 2 + 2 + 2;
PS 1.1: a = 2; a += 2; a += 2; a += 2;

From 1.1 to 1.4:

PS 1.1: a = 2 + 2;
PS 1.4: a = 4??

If you wonder where I am going with this: should 3DMark have been "updated" to totally rewrite the Nature test to use 1.4 with a fallback to 1.1? Why stop with 1.4? The 9700 is out now; shouldn't it be updated again to use 2.0? When the GFX comes out, should it be updated to do whatever extra its hardware is capable of beyond 2.0? Maybe the best performance would come from really long shaders; you can always fall back to multiple shaders for older cards. :)
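The intuition in this post, that a long shader can always be chopped into several short passes but each split costs a framebuffer round trip, can be sketched as follows; the per-pass instruction limits are made-up numbers, not real PS1.1/PS1.4 limits:

```python
PS11_MAX_OPS = 8    # assumed per-pass instruction limit for the short path
PS14_MAX_OPS = 16   # assumed larger limit for the long path

def split_into_passes(ops, max_ops):
    """Split a flat list of shader ops into passes of at most max_ops each."""
    return [ops[i:i + max_ops] for i in range(0, len(ops), max_ops)]

long_shader = ["op%d" % i for i in range(12)]
print(len(split_into_passes(long_shader, PS14_MAX_OPS)))  # 1 pass
print(len(split_into_passes(long_shader, PS11_MAX_OPS)))  # 2 passes
```

Going the other way, fusing two short passes into one long pass, is only possible when the hardware's limits allow it, which is why the 1.1-to-1.4 direction is the interesting one.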
 
As I understand it, Futuremark's business model revolves around its performance database, on which they base their other products.

http://www.futuremark.com/products/dataservices/
It might not be such a trivial issue for them.

Perhaps they wanted a larger and thus more accurate performance database. Having a newer version of 3DMark in 2002 would have meant that their older benchmark database would have had to be retired. Obviously you cannot compare scores taken from two different 3DMark versions. Why they really skipped a new 3DMark in 2002 is something I cannot answer; it could be a legitimate matter of available resources, or something else. If they do the same in 2004, with an SE revision instead of a new version, this would simply indicate a change in their 3DMark timetable and hopefully quash the conspiracies. If, however, a completely new version is released in 2004, all you conspiracy buffs would have a stronger point.

All this is speculation right now.
 
Personally, I think it's obvious why a new benchmark was not released in 2002. DX9 wasn't released until just last month. 3DMark2001 was updated for PS 1.4, but it would have been rather silly to wipe their whole database just for one video card. There just wasn't enough of a change in hardware to justify the release of a new benchmark. Now that DX9 is out, the next version should be out before too long.

As a side note, for those who don't know, I think all of the 3DMark benchmark programs are very poor benchmarks. While they may have some use for their synthetic tests, they have no place alongside real game benchmarks. They're synthetic, and that's it.
 