Futuremark: 3DMark06

boltneck said:
So you are telling me that I just spent a ton of money on a card that is missing basic features of the new generation???

Can you explain why this is acceptable?

How big an impact will not having Fetch4 have on me in the future?

My question, too. That's what my PM to Jawed was asking about. I could still cancel the card ... :) But I think this is off-topic here.
 
After reading through these posts, it does seem a reasonable conclusion that for some reason, behind the scenes, Futuremark partnered with Nvidia on this project. It does not look like "coincidence" at all. Specific choices were made over several months that had specific impacts on different vendors.

Choices were made that frankly favor Nvidia at this time. How can having the best SM3 processing hardware not be allowed to have an impact in a supposedly "future-looking" benchmark?

Once the X1900XT gets released things are going to change in a hurry though. Looks like this one might put a beat down on the competition.
 
Jawed said:
It's not the fetching or filtering that's causing the significant performance difference, though (as far as I can tell).

The problem is that it's taking the ATI cards 3x bandwidth to create the shadow maps compared with the NVidia cards.

With hardware shadow mapping disabled in 3DMark06, I believe it also reverts to using R32F depth maps on all hardware, as well as removing the use of either PCF or Fetch4.
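To make that concrete, here is roughly what the capability checks involved might look like in D3D9 terms. This is only my own sketch of the logic, not Futuremark's actual code; the DF24 format and the GET4/GET1 Fetch4 toggle come from ATI's developer documentation, and the D24X8-as-texture check is the usual way NVidia's depth-stencil texture (and thus PCF) support is detected.

Code:
// Hypothetical sketch of 3DMark06-style shadow map path selection (Direct3D 9).
// Not Futuremark's actual code - just one plausible arrangement of the checks.
#include <d3d9.h>

const D3DFORMAT FOURCC_DF24 = (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4'); // ATI 24-bit depth texture
const DWORD FETCH4_ENABLE   = MAKEFOURCC('G', 'E', 'T', '4');            // ATI Fetch4 on
const DWORD FETCH4_DISABLE  = MAKEFOURCC('G', 'E', 'T', '1');            // ATI Fetch4 off

enum ShadowPath { PATH_DST_PCF, PATH_DF24_FETCH4, PATH_R32F_MANUAL };

ShadowPath SelectShadowPath(IDirect3D9* d3d, UINT adapter, D3DDEVTYPE devType,
                            D3DFORMAT displayFmt, bool hwShadowsEnabled)
{
    if (hwShadowsEnabled) {
        // NVidia path: D24X8 depth-stencil usable as a texture, sampled with PCF.
        if (SUCCEEDED(d3d->CheckDeviceFormat(adapter, devType, displayFmt,
                D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, D3DFMT_D24X8)))
            return PATH_DST_PCF;

        // ATI path: DF24 depth texture, filtered via Fetch4 in the shader.
        // Fetch4 itself is toggled per sampler, e.g.:
        //   device->SetSamplerState(stage, D3DSAMP_MIPMAPLODBIAS, FETCH4_ENABLE);
        if (SUCCEEDED(d3d->CheckDeviceFormat(adapter, devType, displayFmt,
                D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, FOURCC_DF24)))
            return PATH_DF24_FETCH4;
    }
    // Fallback (or hardware shadow mapping disabled): write depth to an R32F
    // colour target and do the depth compares manually in the pixel shader.
    return PATH_R32F_MANUAL;
}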
 
boltneck said:
After reading through these posts, it does seem a reasonable conclusion that for some reason, behind the scenes, Futuremark partnered with Nvidia on this project. It does not look like "coincidence" at all. Specific choices were made over several months that had specific impacts on different vendors.

Choices were made that frankly favor Nvidia at this time. How can having the best SM3 processing hardware not be allowed to have an impact in a supposedly "future-looking" benchmark?

Once the X1900XT gets released things are going to change in a hurry though. Looks like this one might put a beat down on the competition.

I am not so sure about the X1900XT suddenly ruling all in this benchmark unless it is far, far better than the 7800GTX 512.
 
Cowboy X said:
Are those snippets of soon-to-be-revealed common knowledge, or just hope? :)
Yes.
 
Hanners said:
With hardware shadow mapping disabled in 3DMark06, I believe it also reverts to using R32F depth maps on all hardware, as well as removing the use of either PCF or Fetch4.

This being the case, I suppose that (in theory at least) we can roughly extrapolate the performance hit on the X1800 by seeing what the hit is on NVidia cards when hardware shadow mapping is disabled. I'm pretty sure that I saw some NVidia benchmarks with/without hardware shadowing on EB's website. Unfortunately, their site is borked at the moment so I can't check this!
 
Mariner said:
This being the case, I suppose that (in theory at least) we can roughly extrapolate the performance hit on the X1800 by seeing what the hit is on NVidia cards when hardware shadow mapping is disabled. I'm pretty sure that I saw some NVidia benchmarks with/without hardware shadowing on EB's website. Unfortunately, their site is borked at the moment so I can't check this!

Elite Bastards is working fine for me, although it is playing its bi-weekly game of "whore's drawers" because I dared to post some content. :rolleyes: ;)

Anyhow, from looking at the charts I can tell you that a 7800GT (on an Athlon64 3500+ system with 1GB of RAM) is losing just under 300 points in the Shader Model 2.0 tests. This equates to about 2 frames per second lost each in graphics tests one and two.
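For what it's worth, that tallies roughly with the scoring formula, assuming I'm remembering the whitepaper right: the SM2.0 score is 120 times the average fps of the two SM2.0 graphics tests, so around 2 fps lost in each test works out to roughly 240 points. A trivial sanity check:

Code:
// Sanity-checking the points-per-fps relationship, assuming the whitepaper
// formula is SM2.0 score = 120 * average fps of graphics tests 1 and 2.
#include <cstdio>

int main() {
    const double gt1 = 20.0, gt2 = 18.0;  // made-up baseline fps figures
    const double drop = 2.0;              // ~2 fps lost in each test
    double before = 120.0 * (gt1 + gt2) / 2.0;
    double after  = 120.0 * ((gt1 - drop) + (gt2 - drop)) / 2.0;
    std::printf("SM2.0 score %.0f -> %.0f (down %.0f points)\n",
                before, after, before - after);  // down 240 points
    return 0;
}

So "just under 300 points" would suggest the actual per-test drop was a touch over 2 fps.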
 
Hanners said:
Elite Bastards is working fine for me, although it is playing its bi-weekly game of "whore's drawers" because I dared to post some content. :rolleyes: ;)
DAMN JOU!!! DAMN JOU TO HELL!!!!!!

I keep telling you, if you keep it up like this the place will be getting all touristy. :rolleyes:

;)
 
Hanners said:
With hardware shadow mapping disabled in 3DMark06, I believe it also reverts to using R32F depth maps on all hardware, as well as removing the use of either PCF or Fetch4.
Ah, ok. I think I'm getting more confused now. I shoulda stuck with my attitude from yesterday: hold my tongue until someone writes an in-depth article on the intricacies of 3DMk06 shadowing.

Jawed
 
Nick[FM] said:
And what would be the difference if we had physics & AI in the graphics tests, eliminating pure GPU benchmarking? :???: We now have 4 graphics tests and 2 CPU tests (which do use lots of physics, AI etc.), and we are able to output a 3DMark score based on your system's gaming performance, plus sub-scores for pure graphics & CPU benchmarking. I'm sorry, but I don't see the logic in your post, since that's what we did. The only difference is that we separated those two aspects (CPU & GPU) in order for people to do more in-depth benchmarking.

Cheers,

Nick
Exactly.
You took the graphics card completely out of the equation, whereas a regular game test (1280x1024) that just hit the CPU much harder than the other tests would be more realistic.
Well, actually, since some of the tests are already heavily CPU-limited, why didn't you make them hit the GPU more?
It seems the harder you guys try to create a good benchmarking tool the harder you guys fall.
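For reference, the graphics card isn't entirely out of the final equation either. If I have the whitepaper formula right (the constants here are from memory, so treat them as approximate), the overall score is a weighted harmonic mean that leans heavily towards the graphics subscores:

Code:
// Rough sketch of the 3DMark06 overall score composition - constants are
// from memory of the whitepaper and may be slightly off.
#include <cmath>
#include <cstdio>

double OverallScore(double sm2, double sm3, double cpu) {
    double gs = std::sqrt(sm2 * sm3);     // combined graphics score
    return 2.5 / (1.7 / gs + 0.3 / cpu);  // GPU-weighted harmonic mean
}

int main() {
    // Hypothetical subscores, purely to show the weighting.
    std::printf("%.0f\n", OverallScore(2500.0, 2400.0, 900.0));  // ~2430
    return 0;
}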
 
Jawed said:
It's not the fetching or filtering that's causing the significant performance difference, though (as far as I can tell).

The problem is that it's taking the ATI cards 3x bandwidth to create the shadow maps compared with the NVidia cards.

Whereas the 16-bit fallback that ATI are supposedly suggesting would work equivalently for both IHVs.

Hope I've got that right.

Jawed


It doesn't even look like that; that would mean the X1600 performs noticeably worse than other cards in its category while in the SM3.0 path, since from what Nick has said these features aren't available in the SM3.0 tests. It sure doesn't seem like it's getting hurt much. Actually, it's doing very, very well.
 
radeonic2 said:
Exactly.
You took the graphics card completely out of the equation, whereas a regular game test (1280x1024) that just hit the CPU much harder than the other tests would be more realistic.
It seems the harder you guys try to create a good benchmarking tool the harder you guys fall.


Well, they didn't fail; it's a synthetic test, and it's good for analysis of the weak and strong points of GPU tech. For true benchmark tests, games are the only way to go. Let's say there was only one benchmark out there and it was the game FEAR; would it be a great benchmarking tool? Well, yeah, for that one game, but Doom 3 would be different, Far Cry would be different, etc., etc.
 
Razor1 said:
It doesn't even look like that; that would mean the X1600 performs noticeably worse than other cards in its category while in the SM3.0 path, since from what Nick has said these features aren't available in the SM3.0 tests. It sure doesn't seem like it's getting hurt much. Actually, it's doing very, very well.
I'll be honest, I've reached a sort of indefensible muddle as to what 3DMk06 is doing, when.

I'm just gonna wait now for some in-depth analysis. Should anyone be bothered.

Jawed
 
Jawed said:
The problem is that it's taking the ATI cards 3x bandwidth to create the shadow maps compared with the NVidia cards.
Why would it require 3 times the bandwidth?

Whereas the 16-bit fallback that ATI are supposedly suggesting would work equivalently for both IHVs.
Maybe. Or maybe there would be graphical anomalies.
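For what it's worth, one back-of-envelope way to arrive at a 3x figure (purely my own speculation, not ATI's stated reasoning): an NVidia card can render the shadow map depth-only, one 32-bit depth write per texel, while the R32F path writes a 32-bit colour value on top of the depth buffer traffic, and the Z-test itself can be a read-modify-write. Ignoring compression and caching entirely:

Code:
// Speculative per-pass bandwidth arithmetic for a 2048x2048 shadow map -
// my own back-of-envelope numbers, not ATI's published accounting.
#include <cstdio>

int main() {
    const long long texels = 2048LL * 2048LL;
    const long long nv  = texels * 4;            // depth-only: one 32-bit depth write
    const long long ati = texels * (4 + 4 + 4);  // R32F write + depth read + depth write
    std::printf("depth-only: %lld MB, R32F path: %lld MB (%.1fx)\n",
                nv >> 20, ati >> 20, double(ati) / double(nv));  // 16 MB vs 48 MB, 3.0x
    return 0;
}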
 
Chalnoth said:
Why would it require 3 times the bandwidth?
Ask ATI - that's what they're saying ;)

Maybe. Or maybe there would be graphical anomalies.
If you read Nick's qualification of why 16-bits are not enough, you'll realise that his answer is not satisfactory. ;)

The winks, by the way, are because you apparently haven't read the thread.

Jawed
 
Cowboy X said:
I am not so sure about the X1900XT suddenly ruling all in this benchmark unless it is far, far better than the 7800GTX 512.

My reason is its 48 pixel shader units and the fact that it has Fetch4.
 
The more I look at this, the more wrong it seems.

http://www.anandtech.com/video/showdoc.aspx?i=2675&p=3

There is just no way that the X1800XT should be slower than a GTX in a benchmark that is supposedly heavy on SM2 and SM3.

To me this is as blatantly fixed as you can get. It's on the same level as the Troy Polamalu INT in the Steelers/Colts game that was suddenly an "incomplete pass".

What's the point in going out of your way to make a card that is completely geared towards shader rendering if companies, because of money or whatever, just find ways to work around it so you still lose?

The X1800XT should be within a few percent of the GTX 512. Anything that shows differently is not worth the hard drive space it's saved on.

I am now officially on the "Futuremark needs to go the way of the dodo" bandwagon. They need to disappear from the scene, and now, before they do any more damage.
 