3DMark03 Mother Nature IQ comparison b/w FX & R300

Sharkfood said:
It's difficult to put one's finger on the difference between the two images..

One thing for certain though- frame 1669 is NOT the same frame between the two cards, which is odd. The blades of grass and falling leaves are in different locations, which makes it difficult to try and do exact comparisons here.
They are the same in the 4x AA comparisons.
 
OpenGL guy said:
They are the same in the 4x AA comparisons.

That still doesn't give me the warm and fuzzies. If in several examples the scene geometry is at different locations, who's to say that in the 4xAA examples we just got "lucky" with the leaves and whatnot, and that there isn't instead a small variance in light source position?

I'd also like to see noAA/noAF shots in the same position, as it almost *looks* like some of this is attributable to the difference in AA approaches. Obviously rotated grid + gamma correction is yielding much better looking leaves in the trees.. and this might partially explain the yellow grass blades. The sky then is the only thing left which could also be due to some form of color vibrance setting for a particular hue (i.e. a bit more "blue" ramp on ATI cards versus possibly some "green" ramping on NVIDIA cards, etc.etc.).

I think precise NoAA/NoAF comparisons might help.. then again they might lead to more questions as well. :)
 
Notice that in all the shots the butterfly is in the same location, as are the rocks, some of the trees and the basic ground, so it is natural to conclude that the camera is in the same spot and hence it is very much the same frame. I still stand by my speculations above as to why, but I am definitely looking forward to hearing back from worm and the crew about their take on this.
 
Sharkfood said:
The sky then is the only thing left which could also be due to some form of color vibrance setting for a particular hue (i.e. a bit more "blue" ramp on ATI cards versus possibly some "green" ramping on NVIDIA cards, etc.etc.).
Not possible.

If the application is doing something like "mad a,b,c" then the results on card A and card B should be identical, up to their respective precision. Any gamma ramping in the RAMDAC won't have an effect on the captured result. Since the scene is being rendered in 32-bit color, the output from the GeForce FX and 9700 should be very nearly identical, where it's apparent they are not.
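
To make that concrete, here is a minimal sketch (Python/NumPy, purely illustrative; float16 and float32 stand in for two shader precisions, not for either card's actual pipeline) of a single mad evaluated at two precisions, showing the "identical up to precision" behavior being described:

Code:
import numpy as np

def mad(a, b, c, dtype):
    """Multiply-add a*b + c with every operand and result rounded to dtype."""
    a, b, c = (np.asarray(x, dtype=dtype) for x in (a, b, c))
    return np.asarray(a * b + c, dtype=dtype)

# The same shader-style inputs evaluated at two precisions.
a, b, c = 0.3141592, 1.7320508, 0.5772157
lo = mad(a, b, c, np.float16)   # roughly what an FP16 pipeline would keep
hi = mad(a, b, c, np.float32)   # closer to what an FP24/FP32 pipeline would keep

print(f"float16 result: {float(lo):.7f}")
print(f"float32 result: {float(hi):.7f}")
print(f"difference    : {float(hi) - float(lo):.7f}")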
 
Hrm, just took a closer look at that.

It really looks like there are some very slight depth discrepancies in the images (look at the rocks by the stream). As for the brightness, I don't really know. It would be useful to compare images with different drivers, and, if possible, the reference rasterizer.
 
Chalnoth said:
It really looks like there are some very slight depth discrepancies in the images (look at the rocks by the stream).
One-pixel differences there are hardly worth mentioning. Especially when you consider that 4x AA is enabled and the cards use different sample patterns.
 
I honestly could not tell much difference between those shots. Yeah, the R9700 was a little brighter and more colorful, but ATI has had better image quality for years. Honestly, outside of price/performance considerations, this is why I haven't used an Nvidia card since the GF256. Some fanboi at Nvnews just finally realized the image quality difference wasn't imaginary, after all. :oops: :oops:
 
From what I understand, that is more an issue of DACs, though. Also, some NVIDIA-based cards have color that rivals ATI's; I have never seen one quite there, but really close. Then again, you can't really see brighter colors in a screenshot when your own video card makes it look washed out, so that kind of comparison is hard to do. But there is definitely more going on in that shot than just a little color richness; it really does look like a rainy day in the FX shots.
 
Right now I very much want to see a reference image from FutureMark...

From what I've read, the GFFX uses its own way (in the driver) to run 3DMark, as the standard 3DMark code is inefficient (according to NVIDIA)? Is that right? Sorry, I forgot the technical words to describe that :p
 
Nagorak said:
Some fanboi at Nvnews just finally realized the image quality difference wasn't imaginary, after all. :oops: :oops:

I resent that remark! ;) :)

I was actually just looking for tangible differences in the FSAA quality of the two shots when I noticed the sky difference. I posted the original thread at Nvnews because I knew there were people there who have GFFX's and could try capturing the same shot on different drivers.

I, and everybody else, are still waiting in that regard. I'm also curious if the GFFX 42.63 driver (unenhanced) will have a more vibrant sky than the ATI. Of course 24bit FP is ~256 times more precision than 16bit FP, so I expect any difference to be really minor, but it still might be visible.
 
Could the GeForce FX be skipping re-calculating the geometry for some of the frames? That would explain the image discrepancy and the mysterious performance improvements.
 
OpenGL guy said:
Not possible.

If the application is doing something like "mad a,b,c" then the results on card A and card B should be identical, up to their respective precision. Any gamma ramping in the RAMDAC won't have an effect on the captured result. Since the scene is being rendered in 32-bit color, the output from the GeForce FX and 9700 should be very nearly identical, where it's apparent they are not.

The key here is whether or not color correction is being implemented in the DAC or elsewhere (not specifically only gamma). If any form of color correction is being done through the drivers rather than the RAMDAC, variances in driver default settings for color correction might yield more blue saturation versus green or what have you, and such changes would affect the final screenshot buffer.

I'm pretty sure that at least NVIDIA's Digital Vibrance is indeed RAMDAC/post-render, but I'm not entirely sure how all the individual color corrections/adjustments are handled by either ATI or NVIDIA... and who knows what's going on in the drivers for the NV30.
 
Sharkfood: AFAIK, the Digital Vibrance settings for GeForce cards also work with DVI displays, which I'd assume means it must be happening pre-DAC?

Speaking of which, has anyone looked at the 2X and Quincunx modes on the FX using the DVI out?

Nite_Hawk
 
Toasty said:
I, and everybody else, are still waiting in that regard. I'm also curious if the GFFX 42.63 driver (unenhanced) will have a more vibrant sky than the ATI. Of course 24bit FP is ~256 times more precision than 16bit FP, so I expect any difference to be really minor, but it still might be visible.

FP24's mantissa is only 5 bits larger than FP16 (32x more precision), but even FP16 seems to be precise enough for a scene like this. The bigger impact would be in the exponent. FP24 has the same exponent as FP32, which is why they are so compatible, whereas FP16 has a much smaller exponent.

This is what would cause the reduced brightness. FP16's exponent range is 1/8 the size of FP24's, and that seems to be the issue here in the sky and the grass.
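
To put rough numbers on that, here is a small sketch (NumPy's float16 standing in for the GFFX's FP16, and float32 standing in for FP24 since the two share the same 8-bit exponent; this is an assumption for illustration, not a claim about what the drivers actually do):

Code:
import numpy as np

# A bright intermediate value, e.g. sky radiance scaled up inside a shader.
bright = 70000.0

print(np.float32(bright))        # 70000.0 (fits easily in an 8-bit exponent: FP24/FP32)
print(np.float16(bright))        # inf (past FP16's limit of ~65504, i.e. the 2^16 ceiling)
print(np.finfo(np.float16).max)  # 65504.0
print(np.finfo(np.float32).max)  # ~3.4e38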

So, is anyone up to the task of taking a screenshot with older drivers, as mentioned in this thread?
 
Mintmaster said:
Toasty said:
I, and everybody else, are still waiting in that regard. I'm also curious if the GFFX 42.63 driver (unenhanced) will have a more vibrant sky than the ATI. Of course 24bit FP is ~256 times more precision than 16bit FP, so I expect any difference to be really minor, but it still might be visible.

FP24's mantissa is only 5 bits larger than FP16 (32x more precision), but even FP16 seems to be precise enough for a scene like this. The bigger impact would be in the exponent. FP24 has the same exponent as FP32, which is why they are so compatible, whereas FP16 has a much smaller exponent.

This is what would cause the reduced brightness. FP16's exponent range is 1/8 the size of FP24's, and that seems to be the issue here in the sky and the grass.

So, is anyone up to the task of taking a screenshot with older drivers, as mentioned in this thread?

Do you have a link about the 16/24/32/64+ bit FP standards that describes the mantissa and exponent sizes?
I tried Google with no success.
Are the 16/24-bit FP formats IEEE standards?
 
16 and 24-bit FP types are not IEEE standards, whereas 32 and 64-bit are. The exponent, mantissa and range of each type are:
  • FP16 (as used in GFFX): 5-bit exponent, 11-bit mantissa, range 2^-24 to 2^16 (as found here; supports denormalized numbers; search for 'fp16' in the document)
  • FP24 (as used in R300): 8-bit exponent, 16-bit mantissa, range 2^-126 to 2^128 (IIRC, it is essentially FP32 with the lowest 8 mantissa bits ripped off and no support for denormalized numbers. Cannot remember the source of this information, though)
  • FP32 ('single precision' IEEE-754 standard, look here or google for 'IEEE 754'): 8-bit exponent, 24-bit mantissa, range 2^-149 to 2^128.
  • FP64 ('double precision' IEEE-754 standard): 11-bit exponent, 53-bit mantissa, range 2^-1074 to 2^1024.
  • FP80 ('extended precision' quasi-standard, supported by x86 and 68k series processors): 15-bit exponent, 64-bit mantissa, range 2^-16445 to 2^16384
  • FP128 ('quadruple precision' IEEE-754 standard): 15-bit exponent, 113-bit mantissa, range 2^-16494 to 2^16384. Rarely used.
Except FP80, the most significant bit of the mantissa is not stored explicitly (as it is always 1). All formats also have an extra sign bit.
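
For anyone who wants to re-derive those figures rather than take them on faith, here is a small sketch (the bit counts come from the list above; the bias and denormal conventions are the usual IEEE-style ones and are an assumption here, notably for FP24, which reportedly has no denormals):

Code:
# Rough range calculator for the formats listed above. man_bits counts the
# implicit leading 1 for every format except FP80, which stores it explicitly.
FORMATS = {
    "FP16":  (5,  11),
    "FP24":  (8,  16),
    "FP32":  (8,  24),
    "FP64":  (11, 53),
    "FP80":  (15, 64),
    "FP128": (15, 113),
}

for name, (exp_bits, man_bits) in FORMATS.items():
    bias = 2 ** (exp_bits - 1) - 1             # IEEE-style bias, e.g. 127 for 8 bits
    max_exp = (2 ** exp_bits - 2) - bias       # all-ones exponent is reserved for inf/NaN
    min_normal_exp = 1 - bias                  # all-zeros exponent is reserved for denormals
    min_denorm_exp = min_normal_exp - (man_bits - 1)  # smallest denormal, where supported
    print(f"{name:6} largest < 2^{max_exp + 1}, smallest normal = 2^{min_normal_exp}, "
          f"smallest denormal = 2^{min_denorm_exp}")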
 
arjan de lumens said:
16 and 24-bit FP types are not IEEE standards, whereas 32 and 64-bit are. The exponent, mantissa and range of each type are:
  • FP16 (as used in GFFX): 5-bit exponent, 11-bit mantissa, range 2^-24 to 2^16 (as found here; supports denormalized numbers; search for 'fp16' in the document)
  • FP24 (as used in R300): 8-bit exponent, 16-bit mantissa, range 2^-126 to 2^128 (IIRC, it is essentially FP32 with the lowest 8 mantissa bits ripped off and no support for denormalized numbers. Cannot remember the source of this information, though)
  • FP32 ('single precision' IEEE-754 standard, look here or google for 'IEEE 754'): 8-bit exponent, 24-bit mantissa, range 2^-149 to 2^128.
  • FP64 ('double precision' IEEE-754 standard): 11-bit exponent, 53-bit mantissa, range 2^-1074 to 2^1024.
  • FP80 ('extended precision' quasi-standard, supported by x86 and 68k series processors): 15-bit exponent, 64-bit mantissa, range 2^-16445 to 2^16384
  • FP128 ('quadruple precision' IEEE-754 standard): 15-bit exponent, 113-bit mantissa, range 2^-16494 to 2^16384. Rarely used.
Except FP80, the most significant bit of the mantissa is not stored explicitly (as it is always 1). All formats also have an extra sign bit.

To clarify - the sign bit is not 'extra' - it is part of the 32 bits (otherwise the alignment issues with accessing floats would be horrific), with the space made by the implicit mantissa bit.

So in terms of storage -

32 bit is S23E8 - Sign bit, 23 mantissa bits, 8 exponent bits
64 bits is S52E11 - Sign bit, 52 mantissa bits, 11 exponent bits

I believe that nVidia's half format is S10E5

[edit] I misunderstood the wording in the initial response [/edit]
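
For anyone who wants to see that S23E8 layout directly, here is a quick Python sketch pulling the three fields out of an IEEE single (the field widths are the ones given above; the example value 1.5 is just an arbitrary choice):

Code:
import struct

def decode_float32(x):
    """Split an IEEE single into its sign, exponent and stored-mantissa fields."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    sign     = bits >> 31               # 1 sign bit
    exponent = (bits >> 23) & 0xFF      # 8 exponent bits (biased by 127)
    mantissa = bits & 0x7FFFFF          # 23 stored mantissa bits (leading 1 is implicit)
    return sign, exponent, mantissa

print(decode_float32(1.5))   # (0, 127, 4194304) -> +1.5 * 2^(127 - 127)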
 
andypski said:
arjan de lumens said:
16 and 24-bit FP types are not IEEE standards, whereas 32 and 64-bit are. The exponent, mantissa and range of each type are:
  • FP16 (as used in GFFX): 5-bit exponent, 11-bit mantissa, range 2^-24 to 2^16 (as found here; supports denormalized numbers; search for 'fp16' in the document)
  • FP24 (as used in R300): 8-bit exponent, 16-bit mantissa, range 2^-126 to 2^128 (IIRC, it is essentially FP32 with the lowest 8 mantissa bits ripped off and no support for denormalized numbers. Cannot remember the source of this information, though)
  • FP32 ('single precision' IEEE-754 standard, look here or google for 'IEEE 754'): 8-bit exponent, 24-bit mantissa, range 2^-149 to 2^128.
  • FP64 ('double precision' IEEE-754 standard): 11-bit exponent, 53-bit mantissa, range 2^-1074 to 2^1024.
  • FP80 ('extended precision' quasi-standard, supported by x86 and 68k series processors): 15-bit exponent, 64-bit mantissa, range 2^-16445 to 2^16384
  • FP128 ('quadruple precision' IEEE-754 standard): 15-bit exponent, 113-bit mantissa, range 2^-16494 to 2^16384. Rarely used.
Except FP80, the most significant bit of the mantissa is not stored explicitly (as it is always 1). All formats also have an extra sign bit.

To clarify - the sign bit is not 'extra' - it is part of the 32 bits (otherwise the alignment issues with accessing floats would be horrific), with the space made by the implicit mantissa bit.

So in terms of storage -

32 bit is S23E8 - Sign bit, 23 mantissa bits, 8 exponent bits
64 bits is S52E11 - Sign bit, 52 mantissa bits, 11 exponent bits

I believe that nVidia's half format is S10E5

[edit] I misunderstood the wording in the initial response [/edit]

arjan, in all IEEE FP standards and the like, the exponent is stored as an unsigned field and then biased by roughly half its range (e.g. an 8-bit exponent is interpreted as unsigned_exp - 127), and the lowest and highest exponent values signify special numbers (denormals/zero and inf/NaN). So the ranges should be more along the following lines:
  • FP16 - range 2^-14 to 2^16
  • FP24 - range 2^-126 to 2^128 (that was correct)
  • FP32 - range 2^-126 to 2^128.
  • FP64 - range 2^-1022 to 2^1024.
  • FP80 - range 2^-16382 to 2^16384
  • FP128 - range 2^-16382 to 2^16384
Also, keep in mind that those ranges should be interpreted as open at both ends (i.e. (a, b)), as the complete exponential formula for an FP number is M * 2^E, where M lies in (-2, 2), so the range is actually (-2^(max_E + 1), 2^(max_E + 1)).

Edit: OK, the range of the mantissa is more like (-2, -1] U [1, 2), but that doesn't change the ranges of the FP formats.
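
A short sketch of the biasing being described here, using FP32 and its bias of 127 as the example (the test values are just picked to hit the bottom, middle and top of the normal range):

Code:
import struct

def stored_exponent(x):
    """Biased exponent field of an IEEE single, per the S23E8 layout above."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return (bits >> 23) & 0xFF

fp32_max = (2 - 2**-23) * 2.0**127      # largest finite single, just under 2^128

for x in (2.0**-126, 1.0, fp32_max):
    e = stored_exponent(x)
    print(f"x = {x:<12.6g} stored exponent = {e:3d}, effective = {e - 127:4d}")

# Every finite single has magnitude strictly below 2^128, so the range really
# is open at both ends, as described above.
print(fp32_max < 2.0**128)   # True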
 
Hmmm, I just learned today that M is between 1 and 2, with a sign bit (for FP32 anyway). So is my teacher wrong?
 