3D Mark 2003 Release date

Mine too; by rough counting, each number on the counter is about 1/200th of a second.
But what can we talk about in such a thread in the meantime? :-? Disputes over the math involved in converting the countdown number into a date? ;)
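(For anyone who actually wants to dispute it, the math is trivial. Here's a throwaway Python sketch, using the 1/200th-of-a-second tick rate from above; the counter value N is a made-up placeholder, not whatever the page currently shows.)

```python
from datetime import datetime, timedelta

TICKS_PER_SECOND = 200     # rough estimate from counting, per the post above
N = 120_000_000            # placeholder: not the real counter value

# Each tick is 1/200th of a second, so the countdown ends N / 200 seconds from now.
release = datetime.now() + timedelta(seconds=N / TICKS_PER_SECOND)
print(release.strftime("%Y-%m-%d %H:%M"))
```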

Hmmm...methinks this should just be a news post or something...except that would bump the John Carmack interview off the top. :(
 
We can guess whether or not there will be another round of "Is Futuremark part of some IHV conspiracy?" theorizing after the first batch of 3D Mark scores comes in on DX9 parts. (9700 vs. NV30)

My prediction: yes. ;)

And who will be doing the finger pointing? (ATI fans or nVidia fans) That's easy...whoever comes out on the short end of the stick. :D
 
Wouldn't it be cool if they allowed the user to choose the precision they desired for the pixel calculations? :D

If I saw such an option, I would fall off my chair. :oops:
 
Well, there could be a DX9 game test using FP16, FP24, or FP32.

I'd assume there has to be a DX9 game test...what would be the point otherwise? ;)

I'd say there will be at least one DX7, one DX8, and one DX9 game test, given that the min spec is a DX7 level card.

Though I suppose they could eliminate a DX7 test, and just have some DX8 test that "falls back" to some lower quality mode on DX7 cards. I don't see that as likely though.

I see two possible "issues for debate" coming up.

The obvious one is the DX9 floating point shader test, due to the fact that NV30 and R-300 have different possible precisions. Will nVidia be "legally" able to run the DX9 test in FP16 mode? Can nVidia run it in fp16 mode in DirectX at all? Is there some debit or weight to the scoring based on actual precision in the pipeline? In other words, would fp16 scores be weighted lower than fp24, which is weighted lower than fp32?
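To put rough numbers on that gap, here's a back-of-the-envelope sketch (mine, not Futuremark's) using the commonly cited bit layouts: FP16 as 1 sign / 5 exponent / 10 mantissa bits, R300-style FP24 as 1 / 7 / 16, and IEEE-style FP32 as 1 / 8 / 23.

```python
import math

# Mantissa width drives relative precision: epsilon ~ 2^-mantissa_bits,
# and each mantissa bit buys ~log10(2) ~ 0.30 decimal digits.
formats = {"fp16": 10, "fp24": 16, "fp32": 23}

for name, mant_bits in formats.items():
    eps = 2.0 ** -mant_bits
    digits = mant_bits * math.log10(2)
    print(f"{name}: epsilon ~ {eps:.1e}, ~{digits:.1f} decimal digits")
```

So fp24 carries roughly two decimal digits more than fp16, and fp32 roughly two more than fp24 — which is the whole weighting question in a nutshell.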

The other issue that might not be so obvious is the DX 8 test. We could have a repeat of 3DMark 2001 SE. Will the test be DX 8.0? DX 8.1? DX 8.1 with a DX 8.0 Fallback?
 
Joe DeFuria said:
I'd assume there has to be a DX9 game test...what would be the point otherwise? [...] The other issue that might not be so obvious is the DX 8 test. We could have a repeat of 3DMark 2001 SE. Will the test be DX 8.0? DX 8.1? DX 8.1 with a DX 8.0 Fallback?
I see those issues too, and it will be interesting to see if we can get a definitive answer about NV30's FP capabilities and its implementation of PS1.4.

edit: messed up quoting
 
Joe DeFuria said:
Is there some debit or weight to the scoring based on actual precision in the pipeline? In other words, would fp16 scores be weighted lower than fp24, which is weighted lower than fp32?

I think that this would be a nightmare. And should we care if one card uses fp16 vs fp24 or fp24 vs fp32 unless we could notice a difference between them?

Carmack on the subject:

There is no discernable quality difference, because everything is going into an 8 bit per component framebuffer. Few graphics calculations really need 32 bit accuracy. I would have been happy to have just 16 bit, but some texture calculations have already been done in 24 bit, so it would have been sort of a step back in some cases. Going to full 32 bit will allow sharing the functional units between the vertex and pixel hardware in future generations, which will be a good thing.
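A quick way to see his point (my own sketch, not Carmack's): run the same color values through an fp16 round-trip and an fp32 path, quantize both into an 8-bit-per-component framebuffer, and count how often the final bytes actually differ.

```python
import numpy as np

rng = np.random.default_rng(0)
colors = rng.random(100_000, dtype=np.float32)   # color values in [0, 1)

# fp32 path straight to an 8-bit framebuffer...
fb32 = np.round(colors * 255).astype(np.uint8)
# ...vs. the same values rounded through fp16 first.
fb16 = np.round(colors.astype(np.float16).astype(np.float32) * 255).astype(np.uint8)

diff = np.count_nonzero(fb32 != fb16)
print(f"{diff} of {colors.size} bytes differ after 8-bit quantization")
```

The few mismatches that do show up are off by a single step out of 255 — exactly the "no discernable quality difference" Carmack describes.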
 
I think that this would be a nightmare. And should we care if one card uses fp16 vs fp24 or fp24 vs fp32 unless we could notice a difference between them?

I agree that trying to "weight" scores would be a nightmare. (And to date, 3D Mark has not weighted scores based on image quality.) On the other hand, if FP32 is no different than FP16...then why does nVidia support FP32 in the first place?

I see those issues too, and it will be interesting to see if we can get a definitive answer about NV30's FP capabilities and its implementation of PS1.4.

Wow...that makes it really interesting, especially if NV30 does PS 1.4 via floating point, and it's much slower than "integer" PS 1.1-1.3.

What if there is a PS 1.4 test with a PS 1.1 fallback? Does the GeForceFX run it in PS 1.4 mode, which is potentially slow, or does it fall back to PS 1.1-1.3, which won't be as clearly apples-to-apples with other cards like the Radeon 8500 and Radeon 9700?

Will cards be "allowed" to fall back to lower PS versions, if they support the higher PS version?

Of course, if there's no PS 1.4 support in 3D Mark 2003, the point is moot. ;) But then, all the conspiracy theorists will get more ammo....
 
And should we care if one card uses fp16 vs fp24 or fp24 vs fp32 unless we could notice a difference between them?

What's the point of having higher precision then? We've seen this argument before with 16-bit and 32-bit, and there was a difference...

It sounds like it's going to be another late-'90s argument all over again...

There should be a DX9 minimum for precision stated somewhere...and I thought 96 bit was it.
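(For reference on where 96 comes from: that's the total across a four-component RGBA pixel, i.e. 96 / 4 = 24 bits per component, which is FP24.)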
 
Joe DeFuria said:
I agree that trying to "weight" scores would be a nightmare. (And to date, 3D Mark has not weighted scores based on image quality.) On the other hand, if FP32 is no different than FP16...then why does nVidia support FP32 in the first place?

Well, maybe it does make a difference when you start doing offline rendering?

What's the point of having higher precision then? We've seen this argument before with 16-bit and 32-bit, and there was a difference...

Carmack is only talking about the current batch of cards with their "limited" framebuffer. And both the R300 and NV30 are aimed at the professional market, where the difference in quality might matter much more.
 
There should be a DX9 minimum for precision stated somewhere...and I thought 96 bit was it.

Can anyone verify? I don't know what the minimum precision spec is for DX9 floating point pipes...fp16? fp24?
 
Bjorn said:
I think that this would be a nightmare. And should we care if one card uses fp16 vs fp24 or fp24 vs fp32 unless we could notice a difference between them?

Agreed. If there's no discernible difference in image quality, the card using the lower precision is simply doing the same job more efficiently. This is something that ATI really should allow for in future hardware.
 
From B3D's Nv30 vs 9700 Comparison:

The CineFX architecture also allows for an alternative lower precision FP16 format, which allows for 16-bits per component, thus giving a lower overall overhead in performance. This lower precision 64-bit format is compliant with the minimum level of DirectX9 compatibility
 
Doomtrooper said:
From B3D's Nv30 vs 9700 Comparison:

The CineFX architecture also allows for an alternative lower precision FP16 format, which allows for 16-bits per component, thus giving a lower overall overhead in performance. This lower precision 64-bit format is compliant with the minimum level of DirectX9 compatibility

That makes even more sense when you consider the precision hint available in PS2.0, which allows individual instructions to use a lesser precision if directed to do so.
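For the curious, here's a toy model of what that hint buys (my sketch; the `mad` helper is illustrative Python, not actual PS2.0 syntax): an instruction flagged for partial precision may round its result to fp16, while the rest of the shader keeps full fp32.

```python
import numpy as np

def mad(a, b, c, partial_precision=False):
    """Multiply-add; partial_precision mimics a per-instruction lower-precision hint."""
    result = np.float32(a) * np.float32(b) + np.float32(c)
    # A hinted instruction is allowed to round its result to fp16.
    return np.float32(np.float16(result)) if partial_precision else result

print(mad(1 / 3, 3.0, 1e-4))                          # full precision keeps the 1e-4 term
print(mad(1 / 3, 3.0, 1e-4, partial_precision=True))  # fp16 rounding discards it
```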
 