Which Future Hardware?

Chalnoth said:
Anyway, if it is true that this is a series of 1-bit errors, then what we are seeing is a total lack of error-correction hardware in the blenders. Apparently ATI's engineers didn't feel there was any need to have more than 1-2 stages of blending. Simply making the last bit pseudo-random would fix the 1-bit errors from accumulating (making the last bit random could be as simple as using a flip flop circuit that is swapped every clock time a blended pixel is written, but would obviously look better with a more chaotic function).

No - making the last bit 'pseudo-random' as you suggest would introduce noise, and also result in blending output being non-deterministic between two identical input states by introducing a dependency on the starting state of the hardware - such a dependency is not anticipated by any API, and might also break GL conformance which generally requires exact repeatability. This is not a desirable situation and would also not necessarily look better at all.
 
Chalnoth said:
Lars obviously didn't have the same explanation for why there was a difference. He had another explanation that seemed completely plausible to me: the drivers were automatically using an alpha test in conjunction with an alpha blend that totally removes from the pipeline pixels whose alpha is below a set value, pixels that would presumably be too dim to see any difference with one or two levels of transparency, but appear when multiple levels of transparency were used.

This seemed perfectly plausible to me, given the fact that not only was the smoke dimmer in the ATI shot, but the brightest/dimmest parts of the center of the smoke were not the brightest/dimmest parts in the nVidia shot. That and the already dimmer (and smaller) smoke plume in the background was *much* dimmer in the ATI shot. If this smoke plume in the background used fewer levels of transparency than the foreground one (which might be a good optimization technique), then the alpha test enabling would make sense.

The overdraw visualisation has shown that R3xx and NV3x are both drawing the same number of layers.

What you describe is something I've seen happening to varying degrees on my 8500, and when I asked about it here quite a while ago at least one person said that GF3/4s had a similar thing.
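For reference, the alpha-test trick described in the quote would amount to roughly the following in fixed-function OpenGL terms. This is only a sketch of the idea - the 0.05 cut-off is an assumed value for illustration, not a known driver setting.

Code:
#include <GL/gl.h>

/* Sketch of the alpha-test-plus-blend setup described above.
 * Fragments whose alpha falls below the threshold are rejected
 * before blending, so very faint smoke layers never reach the
 * framebuffer at all - cheaper, but the plume comes out dimmer. */
static void enable_cheap_smoke_blend(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.05f);   /* assumed cut-off, illustration only */
}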
 
andypski said:
No - making the last bit 'pseudo-random' as you suggest would introduce noise, and also result in blending output being non-deterministic between two identical input states by introducing a dependency on the starting state of the hardware - such a dependency is not anticipated by any API, and might also break GL conformance which generally requires exact repeatability. This is not a desirable situation and would also not necessarily look better at all.
pseudo-random numbers are deterministic by their very nature. 1-bit noise is probably the best way to remove errors, unless you want to go in and actually figure out how errors accumulate for a particular algorithm, and re-center them about zero. I'm not sure that's possible for an arbitrary algorithm.

Regardless, a more conservative approach would be to simply use rounding instead of truncating.
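As a rough illustration of that last point (my own sketch, assuming an 8-bit fixed-point blender and the usual SRC_ALPHA/ONE_MINUS_SRC_ALPHA term - not any vendor's actual logic):

Code:
#include <stdint.h>

/* One 8-bit blend term: (src*a + dst*(255-a)) / 255.
 * Plain truncation always rounds toward zero, so every blend comes
 * out slightly dark; adding half the divisor before dividing gives
 * round-to-nearest and removes most of that bias. */
static uint8_t blend_trunc(uint8_t src, uint8_t dst, uint8_t a)
{
    return (uint8_t)(((unsigned)src * a + (unsigned)dst * (255 - a)) / 255);
}

static uint8_t blend_round(uint8_t src, uint8_t dst, uint8_t a)
{
    return (uint8_t)(((unsigned)src * a + (unsigned)dst * (255 - a) + 127) / 255);
}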
 
I don't know exactly how far OpenGL goes with its repeatability demands, but it's rather far.

A PRNG is deterministic and repeatable when you feed it the same seed.

The only things you could use to seed the PRNG are things that already give OpenGL reason to loosen its repeatability requirement. There's not much that qualifies.

I wouldn't be surprised if nothing but the usual dithering is allowed.

Btw
AFAIK ATi is the only company that actually tried to introduce pseudo-randomness (that depends on anything but pixel position) into the pixel pipe to even out error patterns. At some point (Rage128?) I heard that they changed the dithering pattern over time. They did however remove that feature since users preferred a fixed dither pattern over the "noisy" look. I'd guess that OpenGL compliance would have been a problem, but that was back when one could be happy if a card was "rather OpenGLish".
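Changing the pattern over time could be as simple as offsetting an ordered-dither lookup by a frame counter - purely my own sketch of the idea, not the actual Rage128 circuit:

Code:
#include <stdint.h>

/* Classic 4x4 Bayer thresholds, 0..15. */
static const uint8_t bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Quantise an 8-bit channel to 5 bits (16-bit colour style), adding a
 * position-dependent bias of 0..7 before dropping the low three bits.
 * Offsetting the lookup by the frame counter makes the pattern crawl
 * over time, so the error also averages out temporally instead of
 * sitting in a fixed "screen door" pattern. */
static uint8_t dither_to_5bit(uint8_t v, int x, int y, int frame)
{
    int bias = bayer4[(y + frame) & 3][(x + frame) & 3] >> 1;  /* 0..7 */
    int q = (v + bias) >> 3;
    return (uint8_t)(q > 31 ? 31 : q);
}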
 
well then, my questions are now these:

1. assuming it's R500 and it is producing images that are almost identical to refrast, can we safely assume (safely :p ) that it uses FP32?

2. does Refrast use _pp if it is specified?

3. does AM3 use _pp?

If the answers to all three are yes... I'm suddenly a lot more curious about R500.
 
The Baron said:
well then, my questions are now these:

1. assuming it's R500 and it is producing images that are almost identical to refrast, can we safely assume (safely :p ) that it uses FP32?

2. does Refrast use _pp if it is specified?

3. does AM3 use _pp?

If the answers to all three are yes... I'm suddenly a lot more curious about R500.

1. I don't know if those assumptions are valid.

2. Dunno - I'm sure someone can answer.

3. Do the vast majority of alpha textures get pixel shaders applied? I'd assume not for smoke etc., but I could be wrong.
 
Chalnoth said:
pseudo-random numbers are deterministic by their very nature. 1-bit noise is probably the best way to remove errors, unless you want to go in and actually figure out how errors accumulate for a particular algorithm, and re-center them about zero. I'm not sure that's possible for an arbitrary algorithm.

Regardless, a more conservative approach would be to simply use rounding instead of truncating.

They are deterministic in the sense that they produce the same result after the same number of iterations from initialisation with a given seed - however, they are not necessarily deterministic in a multi-tasking 3D environment. It would be possible to seed the random generator at certain points to try to ensure that you get repeatable behaviour for any given application, but you could still be undone by unpredictable elements such as task switches to other 3D apps, and then the same random number state might not be recoverable unless you put additional hardware in to allow storage of the state. Any way you look at it, this is not a good solution - you want to avoid anything that introduces potentially unpredictable behaviour as far as possible.

1-bit noise doesn't actually 'remove' any errors - you are instead introducing errors in the hopes that they balance out some inherent bias.
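To put rough numbers on that bias argument, here's a small simulation of my own - made-up values and a crude alternating LSB standing in for real dither, not a model of any particular chip:

Code:
#include <stdio.h>

/* Blend a dim, mostly transparent "smoke" layer over a framebuffer
 * value many times. Truncation loses up to one LSB per pass, always
 * in the same direction, so the error only becomes visible once many
 * layers are stacked. The "dithered" path alternates between rounding
 * down and rounding up, turning the systematic drift into roughly
 * zero-mean noise instead of removing it. */
int main(void)
{
    const unsigned src = 40, a = 32;        /* illustrative values only */
    unsigned dst_trunc = 10, dst_dith = 10;
    double   dst_exact = 10.0;

    for (int pass = 1; pass <= 16; ++pass) {
        dst_trunc = (src * a + dst_trunc * (255 - a)) / 255;
        dst_dith  = (src * a + dst_dith  * (255 - a) + (pass & 1 ? 254 : 0)) / 255;
        dst_exact = (src * a + dst_exact * (255 - a)) / 255.0;
    }
    printf("exact %.2f  truncated %u  dithered %u\n",
           dst_exact, dst_trunc, dst_dith);
    return 0;
}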
 
Basic said:
AFAIK ATi is the only company that actually tried to introduce pseudo-randomness (that depends on anything but pixel position) into the pixel pipe to even out error patterns. At some point (Rage128?) I heard that they changed the dithering pattern over time. They did however remove that feature since users preferred a fixed dither pattern over the "noisy" look. I'd guess that OpenGL compliance would have been a problem, but that was back when one could be happy if a card was "rather OpenGLish".

The 8500 does randomised dithering in 16-bit in D3D. In OpenGL I'm sure I remember it doing the same on earlier drivers, but it switched to a fixed pattern (which looked worse IMO) in later drivers.
 
A one-bit difference could be caused by something as trivial as a difference in the rounding logic and has absolutely nothing to do with pixel shader precision as far as I can see. If the blending stage of the pixel pipe used the same precision as the pixel shader we'd surely have blending on floating-point buffers by now. Even if it would be slow and eat up lots of transistors...
 
Thanks for the correction Bambers.
So they still do it in D3D? That means that the real cause probably is OpenGL conformance.
 
andypski said:
They are deterministic in the sense that they produce the same result after the same amount of iterations from initialisation from a given seed - however they are not necessarily deterministic in a multi-tasking 3D environment.
Obviously the seed, whatever it may be, would need to be set at the beginning of each frame, and the algorithm would need to be such that it wasn't always the same for each pixel.

Anyway, you're thinking too narrowly. I'm not talking about the standard random number generator that works like:

1. Take seed.
2. Generate number from seed, store new seed.
3. Repeat 2 for each random number requested.

It could, for example, just select the value of a bit that is different for each pixel and highly random in most 3D rendering.

And yes, inserting random noise is an attempt to introduce errors in order to remove some inherent bias. Obviously ATI's card has some inherent bias when alpha blending. This is one way to do it. Another way may be simple rounding, but that may produce undesirable results for nonlinear functions.
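Something like the following, perhaps - my own reading of what "a bit that is different for each pixel" could mean in practice, assuming screen position and a frame counter are the only cheap inputs available at the blender; the hash constants are an arbitrary illustration, not any vendor's circuit.

Code:
#include <stdint.h>

/* Derive the blend's rounding direction from a cheap per-pixel bit
 * rather than a stateful PRNG: there is no generator state to save
 * across context switches, and the same (x, y, frame) always yields
 * the same bit, so a given frame remains repeatable. */
static uint8_t dither_bit(uint16_t x, uint16_t y, uint16_t frame)
{
    uint32_t h = (uint32_t)x * 0x9E3779B1u
               ^ ((uint32_t)y << 16)
               ^ ((uint32_t)frame * 0x85EBCA6Bu);
    h ^= h >> 13;
    return (uint8_t)(h & 1u);
}

/* 8-bit blend whose last bit comes from that per-pixel dither bit:
 * round down or up depending on the bit instead of always down. */
static uint8_t blend_dithered(uint8_t src, uint8_t dst, uint8_t a,
                              uint8_t bit)
{
    uint32_t acc = (uint32_t)src * a + (uint32_t)dst * (255 - a);
    return (uint8_t)((acc + (bit ? 254u : 0u)) / 255u);
}

Whether mixing the frame counter in is acceptable depends on exactly how strict the repeatability requirements discussed above are; dropping it leaves a fixed spatial pattern, which is the safer choice.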
 