Use of Mandelbrot demos in reviews

Neeyik

I've noticed an increased use of Mandelbrot demos to point out rendering precision differences between the various cards on the basis of visual quality. Specifically, the NV30/31/34 have been accused of running at FX12 in such tests, again on the basis of visual differences between those cards and the Radeon 9500/9700s.

For example, in this review:

http://www.hardware.fr/articles/468/page4.html

we see two such images taken from Humus' MSR demo for D3D. The 5600 produces nasty, blocky-looking images, unlike the 9500, which is all very svelte and pretty. Ergo, the reviewer makes the point that the 5600 is using FX12 for PS2.0 shaders; how very evil and underhand.

However, somebody over at the FM forums has used the OGL version, which presumably runs the 5600 at FP32 unless the ARB_nicest_thingy hint is set, and came up with these images:

5600: 56ogl.jpg

9700: 97ogl.jpg


Now, at face value, you'd probably think "Well, so what? FX12 is worse than FP24, and FP24 is worse than FP32." What I'm actually wondering, though, is whether such visual differences are sufficient evidence for the claim that FX12 is being used; after all, the differences between FP24 and FP32 are similar to those claimed between FX12 and FP24.

Given that pcchen's precision tester hasn't quite cleared up this malarkey with the default PS2.0 shader precision in the NV30/31 (i.e. it's 10 bits, but is that FX12 10 bits or FP16 10 bits?) and the (to my mind at least) inconclusive MSR results, are we ever likely to get to the bottom of this issue, short of coshing an NVIDIA employee over the head at the forthcoming GDC/ECTS in London and blackmailing Kirk into telling us the truth, lest we send bits of the employee back via airmail?
 
Hi

These are my screenshots :D

To see the difference you have to zoom in a looooooot.

Unfortunately there is no tool that shows you the degree of zoom.
 
As for the performance side, there is something interesting with the OpenGL version. In the initial shader, we have:

#RSQ curr0.x, tmp.x;
#MUL tmp, tmp, curr0.x;

If you enable just one of them, or both, on ATI there is very little change in fps, but on NVIDIA performance is divided by a factor of three. Any idea?
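
(Going by the standard ARB_fragment_program semantics, RSQ writes the reciprocal square root of a scalar and MUL is a componentwise multiply, so together these two lines just scale tmp by 1/sqrt(tmp.x). For illustration only, a rough CPU-side equivalent in C:)

#include <math.h>

typedef struct { float x, y, z, w; } vec4;

/* Rough CPU-side equivalent of the two instructions above (illustration only). */
static vec4 rsq_then_mul(vec4 tmp)
{
    float curr0_x = 1.0f / sqrtf(tmp.x);  /* RSQ curr0.x, tmp.x;                    */
    tmp.x *= curr0_x;                     /* MUL tmp, tmp, curr0.x;                 */
    tmp.y *= curr0_x;                     /* (the .x swizzle broadcasts the scalar  */
    tmp.z *= curr0_x;                     /*  across all four components)           */
    tmp.w *= curr0_x;
    return tmp;
}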
 
Neeyik said:
Given that pcchen's precision tester hasn't quite cleared up this malarkey with the default PS2.0 shader precision in the NV30/31 (i.e. it's 10 bits, but is that FX12 10 bits or FP16 10 bits?) and the (to my mind at least) inconclusive MSR results, are we ever likely to get to the bottom of this issue, short of coshing an NVIDIA employee over the head at the forthcoming GDC/ECTS in London and blackmailing Kirk into telling us the truth, lest we send bits of the employee back via airmail?

It is very unlikely that running that accuracy test shader at FX12 would give a 10-bit result. However, I'll try to write some code to check for fixed-point formats. Then I hope it can clear up the whole mess :)
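
The basic idea would be something like this (a minimal CPU-side sketch of the principle, not the actual tester): a fixed-point format has the same step size at every magnitude, while a float format's step shrinks as the value gets smaller, so comparing the smallest visible nudge near 1.0 with the one near a much smaller value tells the two apart.

#include <math.h>
#include <stdio.h>

/* FX12 as discussed here: 12 bits covering [-2, 2), i.e. a constant step of 2^-10. */
static double quant_fx12(double x)
{
    return floor(x * 1024.0 + 0.5) / 1024.0;
}

/* FP16-style quantisation: keep a 10-bit mantissa and let the exponent float
   (denormals and clamping ignored; just enough for the comparison). */
static double quant_fp16(double x)
{
    int e;
    double m;
    if (x == 0.0) return 0.0;
    m = frexp(x, &e);                     /* x = m * 2^e, m in [0.5, 1)          */
    m = floor(m * 2048.0 + 0.5) / 2048.0; /* 10 stored bits + implicit leading 1 */
    return ldexp(m, e);
}

/* Returns a nudge d that changes the quantised value near x while d/2 does not. */
static double nudge_near(double (*q)(double), double x)
{
    double d = 1.0;
    while (q(x + d * 0.5) != q(x)) d *= 0.5;
    return d;
}

int main(void)
{
    printf("FX12 nudge near 1.0:  %g\n", nudge_near(quant_fx12, 1.0));       /* same...   */
    printf("FX12 nudge near 1/64: %g\n", nudge_near(quant_fx12, 1.0 / 64.0)); /* ...as this */
    printf("FP16 nudge near 1.0:  %g\n", nudge_near(quant_fp16, 1.0));        /* larger... */
    printf("FP16 nudge near 1/64: %g\n", nudge_near(quant_fp16, 1.0 / 64.0)); /* ...than this */
    return 0;
}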
 
Neeyik-

In case Marc's response wasn't clear: with the Mandelbrot demo, the precision artifacts get worse the more you zoom in; put another way, the higher the precision, the further you can zoom before things start to look wrong.

The pair of screenshots you posted is zoomed in much more than the pair at hardware.fr showing the differences between FP16 and FP24. Even those shots are zoomed in quite a bit from the initial view.
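
To put some very rough numbers on that (back-of-the-envelope only, with an assumed ~3-unit-wide view rendered at 640 pixels across, not anything measured from the demo): the per-pixel coordinate step is about view_width / (pixels * zoom), and blockiness sets in around the point where that step approaches the smallest representable difference near coordinates of order 1.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double view_width = 3.0;   /* assumed initial width of the view */
    const double pixels     = 640.0; /* assumed horizontal resolution     */
    /* FX12 has a constant 2^-10 step; the floats have 10, 16 and 23 stored mantissa bits. */
    const char *name[] = { "FX12", "FP16", "FP24", "FP32" };
    const int   bits[] = { 10, 10, 16, 23 };
    int i;
    for (i = 0; i < 4; ++i) {
        double step     = pow(2.0, -bits[i]);           /* representable step near 1.0      */
        double max_zoom = view_width / (pixels * step); /* zoom where pixel step ~ that step */
        printf("%s: blocky at very roughly %.0fx zoom\n", name[i], max_zoom);
    }
    return 0;
}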
 
pcchen said:
Neeyik said:
Given that pcchen's precision tester hasn't quite cleared up this malarkey with the default PS2.0 shader precision in the NV30/31 (i.e. it's 10 bits, but is that FX12 10 bits or FP16 10 bits?) and the (to my mind at least) inconclusive MSR results, are we ever likely to get to the bottom of this issue, short of coshing an NVIDIA employee over the head at the forthcoming GDC/ECTS in London and blackmailing Kirk into telling us the truth, lest we send bits of the employee back via airmail?
It is very unlikely that running that accuracy test shader at FX12 would give a 10-bit result. However, I'll try to write some code to check for fixed-point formats. Then I hope it can clear up the whole mess :)
Well, FX12 is 12 bits, but it is clamped to [-2, 2], so there's only 10 bits of data in the displayed range [0, 1] (the rest being used for just a little bit of dynamic range). It wouldn't really surprise me if it registered as having 10 bits of precision.
 
Chalnoth said:
Well, FX12 is 12 bits, but it is clamped to [-2, 2], so there's only 10 bits of data in the displayed range [0, 1] (the rest being used for just a little bit of dynamic range). It wouldn't really surprise me if it registered as having 10 bits of precision.
With FX12, you have ~1024 numbers in the range [0, 1], evenly distributed. With floats, you have about a quarter of all representable numbers in [0, 1] (depending on the exact format), which is ~16384 for FP16; about 1024 of those are in [0.5, 1).
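
Those counts are easy to sanity-check by brute force, assuming an IEEE-style half float with 1 sign, 5 exponent and 10 mantissa bits (a quick sketch, nothing GPU-specific):

#include <math.h>
#include <stdio.h>

/* Decode a 16-bit half-float pattern to double (zero, denormals, normals, inf/NaN). */
static double half_to_double(unsigned h)
{
    unsigned exp  = (h >> 10) & 0x1F;
    unsigned mant = h & 0x3FF;
    if (exp == 0)  return ldexp((double)mant, -24);       /* zero / denormal */
    if (exp == 31) return mant ? NAN : INFINITY;          /* NaN / infinity  */
    return ldexp(1.0 + mant / 1024.0, (int)exp - 15);     /* normal          */
}

int main(void)
{
    int in_unit = 0, in_upper_half = 0;
    unsigned h;
    for (h = 0; h < 0x8000; ++h) {                        /* sign bit clear  */
        double v = half_to_double(h);
        if (v >= 0.0 && v <= 1.0) ++in_unit;
        if (v >= 0.5 && v < 1.0)  ++in_upper_half;
    }
    printf("FP16 values in [0, 1]:   %d\n", in_unit);        /* prints 15361 */
    printf("FP16 values in [0.5, 1): %d\n", in_upper_half);  /* prints 1024  */
    return 0;
}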
 
Neeyik said:
Now, at face value, you'd probably think "Well, so what? FX12 is worse than FP24, and FP24 is worse than FP32." What I'm actually wondering, though, is whether such visual differences are sufficient evidence for the claim that FX12 is being used; after all, the differences between FP24 and FP32 are similar to those claimed between FX12 and FP24.

I don't think it's FX12; that would probably not work at all. Though FP16 is kinda likely.
 
Marc said:
As for the performance side, there is something interesting with the OpenGL version. In the initial shader, we have:

#RSQ curr0.x, tmp.x;
#MUL tmp, tmp, curr0.x;

If you enable just one of them, or both, on ATI there is very little change in fps, but on NVIDIA performance is divided by a factor of three. Any idea?

That is very odd. Sounds like a shader replacement thingy, but that feels kinda unlikely. Would they go as far as going after my demos? :? Though, on the other hand, there was a fair deal of talk about shader precision and so on when I released it, so chances are they figured somebody would use it in a review to highlight the difference.
 
Humus said:
Marc said:
As for the performance side, there is something interesting with the OpenGL version. In the initial shader, we have:

#RSQ curr0.x, tmp.x;
#MUL tmp, tmp, curr0.x;

If you enable just one of them, or both, on ATI there is very little change in fps, but on NVIDIA performance is divided by a factor of three. Any idea?

That is very odd. Sounds like a shader replacement thingy, but that feels kinda unlikely. Would they go as far as going after my demos? :? Though, on the other hand, there was a fair deal of talk about shader precision and so on when I released it, so chances are they figured somebody would use it in a review to highlight the difference.

Maybe someone could try increasing/decreasing the number of iterations in the Mandelbrot calculation a little? This could indicate whether or not NVIDIA is modifying the shader.

I tried it on a Radeon 9800 Pro and it decreases/increases the framerate a little. But I don't have a GeForce FX to try this on.
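
For reference, this is the kind of knob I mean (a generic Mandelbrot inner loop in C, not Humus' actual shader): nudging MAX_ITER up or down changes the per-pixel work, so the framerate should move with it; if it doesn't on one vendor's driver, that would hint at the shader being replaced.

#include <stdio.h>

#define MAX_ITER 64  /* the iteration count to nudge up or down */

/* Generic Mandelbrot inner loop: iterate z = z^2 + c until escape or MAX_ITER. */
static int mandel_iters(double cx, double cy)
{
    double x = 0.0, y = 0.0;
    int i;
    for (i = 0; i < MAX_ITER && x * x + y * y < 4.0; ++i) {
        double xt = x * x - y * y + cx; /* real part of z^2 + c      */
        y = 2.0 * x * y + cy;           /* imaginary part of z^2 + c */
        x = xt;
    }
    return i;
}

int main(void)
{
    printf("iterations at (-0.75, 0.1): %d\n", mandel_iters(-0.75, 0.1));
    return 0;
}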
 
Yes... FP24 vs FP32 at extreme zoom... using the 44.10 drivers the 5800 Ultra is FP32; on the 'hacked' 44.03 drivers the 5800 is using low precision.
 
Doomtrooper said:
Yes... FP24 vs FP32 at extreme zoom... using the 44.10 drivers the 5800 Ultra is FP32; on the 'hacked' 44.03 drivers the 5800 is using low precision.

So the 9700 is supposed to be blocky? (Sorry, I'm running low on sleep, so my comprehension is broken.)
 
Humus said:
That is very odd. Sounds like a shader replacement thingy, but that feels kinda unlikely. Would they go as far as going after my demos? :? Though, on the other hand, there was a fair deal of talk about shader precision and so on when I released it, so chances are they figured somebody would use it in a review to highlight the difference.

So many cheats, so little time. Maybe this is the real reason the NV3x was delayed so much. ;)
 
nelg said:
So many cheats, so little time. Maybe this is the real reason the NV3x was delayed so much. ;)

Actually, I think it is just that the driver guys had too much time on their hands while waiting for the hardware and got inventive :)
 