About Dave's comment at nVnews

K.I.L.E.R

Originally posted by DaveBaumann
R300 doesn't have separate integer and FP pipes; it only has FP pipes. Regardless of the operation being carried out, it is always done at FP24 (minimum) precision - PS1.1, PS1.2, etc. up to PS2.0 are all done in float.

So older PS titles should look better because of higher accuracy over earlier non-FP chipsets like NV20?
 
Except that, when using standard fixed-point buffers, all data is stored in standard 16/32bpp pixel formats. The fact that the chip may use 24-bit floating point internally won't give any noticeable quality difference: errors are introduced when reading/writing low-precision fixed-point buffers multiple times, while (visible) errors don't crop up inside the pipelines themselves, since these already use higher-than-32-bit precision in pre-DX9 class 3D accelerators...
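
To make that concrete, here's a toy sketch (made-up numbers, not from any actual game or driver) of how a small per-pass contribution can get rounded away entirely when the running total has to pass through an 8-bit buffer between passes:

[code]
#include <math.h>
#include <stdio.h>

/* Quantize a [0,1] colour value to 8 bits, round-to-nearest,
 * the way a fixed-point framebuffer stores it. */
static double store_8bit(double c)
{
    return floor(c * 255.0 + 0.5) / 255.0;
}

int main(void)
{
    double exact = 0.0;     /* running light total at full precision */
    double buffered = 0.0;  /* same total, written to / re-read from an 8-bit buffer */
    const double contribution = 0.0015;  /* per-pass light: ~0.38 of one 8-bit step */

    for (int pass = 0; pass < 100; pass++) {
        exact += contribution;
        /* each pass reads the buffer, adds its bit of light, writes it back;
         * the contribution is below half an LSB, so it rounds away every time */
        buffered = store_8bit(buffered + contribution);
    }
    printf("full precision:   %.4f\n", exact);    /* 0.1500 */
    printf("via 8-bit buffer: %.4f\n", buffered); /* 0.0000 - the light vanished */
    return 0;
}
[/code]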

*G*
 
Ilfirin said:
Can you name any older PS titles that go noticeably over the max precision of 32-bit?

Unfortunately, no. :)

I was hoping maybe Morrowind could if you have a DX9-capable card. I know it's a DirectX 8 game, though some games have options for future hardware and I was hoping MW did.
 
I've seen significant quality improvements on the 9700 vs 4600 as a result of more precision in colour interpolants, particularly when using them as normals. No idea if this is a direct consequence of the 9700's FP24 support...

John.
 
Maybe Dave is a syndicate of about 50 people. Would certainly explain how he gets so much done in so little time... :LOL:

MuFu.
 
JohnH said:
I've seen significant quality improvements on the 9700 vs 4600 as a result of more precision in colour interpolants, particularly when using them as normals. No idea if this is a direct consequence of the 9700's FP24 support...

John.
Perhaps Soldier of Fortune II would be a good game for testing this question of higher quality? I've read it shows banding even at 32-bit.
 
MuFu said:
Maybe Dave is a syndicate of about 50 people. Would certainly explain how he gets so much done in so little time... :LOL:

MuFu.
Yeah... I can only imagine how his girlfriend feels whenever she sees the courier guy appearing... "Oh no! Not AGAIN!"
 
Pete said:
JohnH said:
I've seen significant quality improvements on the 9700 vs 4600 as a result of more precision in colour interpolants, particularly when using them as normals. No idea if this is a direct consequence of the 9700's FP24 support...

John.
Perhaps Soldier of Fortune II would be a good game for testing this question of higher quality? I've read it shows banding even at 32-bit.

Not exactly sure if you know what he meant by 24 bits.

SOF2 uses 32-bit colour, which is 8x4 (8 bits in each of 4 channels, giving 32 bits per pixel).
He means 24 bits per channel, which is 24*4 = 96bpp. According to Dave, ATI's R300 line of chipsets has 24-bit FP (floating point, roughly five significant decimal digits) accuracy, versus the 8-bit accuracy of non-DX9 hardware, which allows for higher-precision rendering.
The need for such higher-precision rendering comes from the precision limits of video cards: with multipass rendering, data gets lost between passes, and scenes therefore exhibit certain deficiencies, e.g. banding.

HOWEVER, ATI only gets this 24-bit accuracy inside the shaders, where it applies automatically, without software devs needing to support higher accuracy in their games. I hope this makes sense.

I hope this is a valid explanation. :)
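
For a rough sense of scale, here's a small sketch comparing the smallest representable step near 1.0 for an 8-bit fixed-point channel against FP24. The 1-sign/7-exponent/16-mantissa layout assumed below is the one commonly attributed to R300's FP24, so treat it as an assumption:

[code]
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* 8-bit fixed point: 256 levels across [0,1] */
    double step_fixed8 = 1.0 / 255.0;
    /* FP24 with a 16-bit mantissa (assumed): ulp just below 1.0 is 2^-16 */
    double step_fp24 = pow(2.0, -16);

    printf("8-bit fixed step near 1.0: %.8f\n", step_fixed8); /* ~0.00392157 */
    printf("FP24 step near 1.0:        %.8f\n", step_fp24);   /* ~0.00001526 */
    printf("FP24 is ~%.0fx finer\n", step_fixed8 / step_fp24);
    return 0;
}
[/code]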
 
the banding in sof2 is simply due to the way they do their fog and there is nothing that can be done about it outside of rewriting the code. it is ugly as all hell though, i found it absurd to find such an issue in a modern game. :?
 
kyleb said:
the banding in sof2 is simply due to the way they do their fog and there is nothing that can be done about it outside of rewriting the code. it is ugly as all hell though, i found it absurd to find such an issue in a modern game. :?

I have seen a few games with light and fog banding, not just SOF2. Of course I have seen games without the banding.
 
Yep, I was a bit fast and loose with my bits, KILER. I already knew what you posted (due to spending too much time in these fora), but I wasn't clear in my question. Thank you for the review, though. :)

I'm under the impression that the banding is caused by rounding errors in multipass rendering. I thought that if the R300 rendered internally at a higher precision (96bpp vs. 32bpp), the banding might be reduced or eliminated. kyleb's post reminded me that the fog is probably fixed-function rather than done in shaders, so it would see no improvement on the R300. Is the fog calculated in the GPU or the CPU? I'm guessing GPU (due to the framerate hit I take in CS with fog). So would it be a trivial matter to send fog code to the shaders?

(I'm obviously _not_ a 3D coder, though I'm going to try picking up OGL with NeHe's tutorials. I'm amazed his examples run on my integrated Mach64, though I'll probably want to find a GF/Radeon before I progress much further.)
 
If you use multi-pass rendering, the precision will be limited by the frame buffer (i.e. 8 bits per channel), no matter how good your internal precision is.

For example, my per-pixel point lighting program (using cube map for normalization) shows the same problem on Radeon 9700 as on a GF3/GF4, although Radeon 9700 has higher internal precision.
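
A toy illustration of the effect (hypothetical numbers, not the actual program above): the normal is quantized by the 8-bit cube map before the lighting math ever sees it, so the dot product moves in coarse steps no matter how precise the pipeline is:

[code]
#include <stdio.h>
#include <math.h>

/* Store one component of a [-1,1] normal in an 8-bit texture channel. */
static double q8_signed(double x)
{
    double biased = (x + 1.0) * 0.5;  /* map [-1,1] to [0,1] */
    return floor(biased * 255.0 + 0.5) / 255.0 * 2.0 - 1.0;
}

int main(void)
{
    /* sweep the normal through tiny angle steps around 45 degrees;
     * with L = (0,0,1), N.L is just the z component of the normal */
    for (int i = 0; i <= 5; i++) {
        double a = 0.785 + 0.002 * i;
        double exact = cos(a);            /* N.L from the exact normal */
        double quant = q8_signed(cos(a)); /* N.L from the 8-bit cube map */
        /* exact drifts smoothly; quant sits on plateaus - the visible bands */
        printf("exact N.L %.6f   8-bit N.L %.6f\n", exact, quant);
    }
    return 0;
}
[/code]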
 
Doing all calculations internally at 24-bit cannot help with any title that does not support DX9 (or the equivalent OpenGL extensions, as usual). Here's why:

1. Texture filtering is still done at 8-bit. This means that pretty much all older games (those that only use basic texturing) cannot show any benefit from the increased accuracy of the 9700, as they don't make use of it (except for gamma-correct FSAA). (A toy numeric sketch of this point follows below.)

2. DX8 hardware is designed with enough precision to handle the longest shaders possible for that hardware. There just aren't significant rounding errors in pretty much any pixel shader you make for DX8 hardware, as the primary limitation is that the source and destination formats are all 8-bit, and all shaders are short.

3. The source and destination formats are all 8-bit. You just can't feed in data that has higher accuracy, so the 24-bit precision goes unused.

As a side note, I think some parts of the GeForce4 and Radeon 8500 pipelines actually run around 24-32 bit accuracy (texturing ops), which is why the "texdepth" operation actually gives acceptable results on the GF4.
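
Here's the sketch of point 1 promised above (hypothetical texels and weight, not taken from any real chip): with 8-bit texels in and an 8-bit buffer out, extra precision in the filter arithmetic is typically invisible:

[code]
#include <stdio.h>
#include <math.h>

static double q8(double c) { return floor(c * 255.0 + 0.5) / 255.0; }

int main(void)
{
    double t0 = 100.0 / 255.0, t1 = 180.0 / 255.0; /* two 8-bit texels */
    double w  = 0.3721;                    /* exact bilinear weight */
    double w8 = floor(w * 256.0) / 256.0;  /* weight snapped to 8 bits */

    double full = t0 + (t1 - t0) * w;   /* high-precision lerp */
    double low  = t0 + (t1 - t0) * w8;  /* lerp with 8-bit weight */

    /* both land on the same 8-bit value once written out */
    printf("full precision: %.6f -> stored %.0f/255\n", full, q8(full) * 255.0);
    printf("8-bit weights:  %.6f -> stored %.0f/255\n", low,  q8(low) * 255.0);
    return 0;
}
[/code]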
 
Chalnoth said:
1. Texture filtering is still done at 8-bit.
Not necessarily. You could easily interpolate at higher precision (that's texture coordinates and texture samples as well as colors).

3. The source and destination formats are all 8-bit. You just can't make use of data that has higher accuracy, so you can't make use of the 24-bit accuracy.
Not true. Many calculations can profit from higher internal precision even if the input and output values are only 8-bit. Just do some multiplications and you'll see the difference.
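
A quick sketch of the kind of multiplication Xmas means (made-up values; the *4 scale stands in for a ps1.x-style _x4 destination modifier): rounding the intermediate result to 8 bits changes the final 8-bit output:

[code]
#include <stdio.h>
#include <math.h>

static double q8(double c) { return floor(c * 255.0 + 0.5) / 255.0; }

int main(void)
{
    double a = 30.0 / 255.0, b = 30.0 / 255.0;  /* two dark 8-bit colours */

    /* fixed-point pipeline: the product is rounded to 8 bits
     * before the _x4 scale sees it */
    double fixed = q8(q8(a * b) * 4.0);

    /* float pipeline: the scale sees the unrounded product;
     * only the final result is rounded for the 8-bit buffer */
    double fp = q8(a * b * 4.0);

    printf("8-bit intermediates: %3.0f/255\n", fixed * 255.0); /* 16/255 */
    printf("float intermediates: %3.0f/255\n", fp * 255.0);    /* 14/255 */
    return 0;
}
[/code]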
 
Sure, 8-bit buffers will lower quality, but Kyro and Voodoo3's "16"- and "22"-bit output show that higher internal precision can yield better quality output. I'm sure we've all seen the Q3 reviews showing 32/16-bit texture/bitdepth looks nicer than 16/16-bit. Perhaps storing temp results in 8-bit format too often per pixel will make higher internal precision moot, though.
 
Pete said:
Sure, 8-bit buffers will lower quality, but Kyro and Voodoo3's "16"- and "22"-bit output show that higher internal precision can yield better quality output.

those are different things from the topic at hand (i.e. 'internal precision').
kyro could do multi-pass preserving the maximal tile precision thanks to the per-tile loopback (i.e. no errors introduced by passing data through the framebuffer's lower precision). voodoo's "22"-bit was mere framebuffer dithering - one could think of it as a post-processing effect (though it wasn't). luckily, the voodoo had the capability to 'undo' the "dither filter" precisely for multi-pass operations, as dithered "22-bit" usually made things worse when multi-passing.
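
for illustration, a toy version of the dither-then-filter idea (purely schematic, not 3dfx's actual filter): dither a value into a 5-bit channel with a 2x2 ordered pattern, then average the four neighbours on output to recover some of the lost precision:

[code]
#include <stdio.h>
#include <math.h>

int main(void)
{
    double c = 0.4137;  /* ideal channel value in [0,1] */
    /* 2x2 ordered-dither thresholds, in units of one 5-bit step */
    const double bayer[4] = { 0.0 / 4, 2.0 / 4, 3.0 / 4, 1.0 / 4 };

    double sum = 0.0;
    for (int i = 0; i < 4; i++) {
        /* store the value into a 5-bit channel, dithered per pixel */
        double stored = floor(c * 31.0 + bayer[i]) / 31.0;
        sum += stored;
    }
    double filtered = sum / 4.0;  /* box filter on readout "undoes" the dither */

    printf("plain 5-bit:   %.4f\n", floor(c * 31.0 + 0.5) / 31.0); /* 0.4194 */
    printf("dither+filter: %.4f  (ideal %.4f)\n", filtered, c);    /* 0.4113 */
    return 0;
}
[/code]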
 