Two new (conflicting) Rumors

Althornin said:
and far worse than ANYONE'S 32bit implementation.
what's your point? that not all 16bit implementations sucked as badly?
Ok, so what?
None of 'em are as good as 32bit...
Uhm, you obviously never talked to 3dfx fans during that time ("I don't need no stinkin' 32bit")... ;)
 
Althornin said:
and far worse than ANYONE'S 32bit implementation.
what's your point? that not all 16bit implementations sucked as badly?
Ok, so what?
None of 'em are as good as 32bit...

Yes. My point was that not all 16bit implementations were as ugly.

This was also to pose a question, maybe it should be in another thread... The techniques that 3dfx used for its effective-22bit output, are they usable at higher bit-depths? If so, is there any possibility of seeing Nvidia (or other companies) employing this to improve 32bit color output? Or, as DC pointed out, is it worthless because a game that employs 64bit or 128bit color is doing so in combination with multi-pass?

Sometimes it came down to a decision where the user had to trade off speed (22bit) against color quality (32bit). Is it possible we'll see the same thing happen in the future? Or is that sort of thing much harder now that the program is more in control of the card with the DX9 APIs? IE: Do you foresee a "force 64bit/128bit" rendering option in driver control panels for legacy DX8-level games?

Does anyone know the speed-hit incurred for 64bit or 128bit rendering? Or is that something that has to be kept under wraps until DX9 is released?

--|BRiT|

BTW: Gollum, people choosing 22bit instead of 32bit for the extra speed it gave never seemed as insane to me as people turning off all graphical options and running a Quake game at 512x384, making everything look asstastic...
 
Aye, I always felt I had a choice with my V5: no AA in 32bit, or AA in 22bit. I invariably chose 22bit/2xAA.
 
BRiT,

Voodoo still computed everything in 16 bit and then upscaled to 22 bits on output. This means that if a game relied on alpha or did many passes, the scene would still look much worse than at 32 bit. Kyro is a different story, since it does all the calculations in 32 bit and then clamps the result down to 16 bit (so only the low bits are cut).
Radeon 9700 actually does all the calculations in 24 bits per component (96 bits total) and then, depending on the render target, outputs either 64 bits or 128 bits. The only thing you pay is memory bandwidth.
There will not be such a thing as "force 64/128bit" rendering, since you won't actually see 64/128 bits on your monitor. It would be a problem to display these 64bit and 128bit images on screen anyway, because the colors can range above 1.0.
Tricks like "3dfx's 22 bits" on Voodoo can't be done in 64/128bit modes, since the extra precision only matters for calculations (+textures and render targets) and you won't actually see such high color depths.
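To illustrate that last point, here's a minimal C sketch (my own illustration, not any driver's or vendor's actual code; the struct names and the simple Reinhard-style curve are assumptions on my part): a 128bit float pixel has to be clamped or tone-mapped back down to 8 bits per channel before it can be scanned out, which is why a "force 64/128bit" switch for the final image wouldn't really mean anything.

Code:
/* Minimal sketch: why a float render target can't be "forced" onto the
 * screen directly. Values may exceed 1.0, so they have to be mapped back
 * into the 0..255 range the RAMDAC understands. */
#include <stdio.h>

typedef struct { float r, g, b, a; } PixelF128;       /* 4 x 32-bit float      */
typedef struct { unsigned char r, g, b, a; } Pixel32; /* displayable 8:8:8:8   */

static unsigned char to_byte(float c)
{
    /* simple Reinhard-style compression, then scale to 0..255;
     * a plain clamp would also work but throws away everything above 1.0 */
    float mapped = c / (1.0f + c);
    return (unsigned char)(mapped * 255.0f + 0.5f);
}

static Pixel32 resolve_for_display(PixelF128 hdr)
{
    Pixel32 out = { to_byte(hdr.r), to_byte(hdr.g), to_byte(hdr.b), 255 };
    return out;
}

int main(void)
{
    PixelF128 bright = { 3.5f, 1.0f, 0.25f, 1.0f };  /* red is well above 1.0 */
    Pixel32 shown = resolve_for_display(bright);
    printf("displayed as %u %u %u\n", shown.r, shown.g, shown.b);
    return 0;
}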
 
MDolenc said:
Voodoo still computed everything in 16 bit and then upscaled to 22 bits on output. This means that if a game relied on alpha or did many passes, the scene would still look much worse than at 32 bit. Kyro is a different story, since it does all the calculations in 32 bit and then clamps the result down to 16 bit (so only the low bits are cut).

Voodoo's pipe computed everything in true color, but reduced color to 16 bit (w/ ordered dithering) at framebuffer writes, and eventually postfiltered to ~22 bit. Still, there was an option for destination alpha, which was kept in a separate alpha buffer in place of the depth buffer, IIRC.
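For anyone wondering what that combination of write-time ordered dithering and scan-out postfiltering looks like in principle, here's a toy C sketch (a rough approximation of the general idea, not 3dfx's actual hardware filter; the 4x4 Bayer matrix and the 2x2 averaging window are assumptions on my part):

Code:
/* Toy sketch: dither a true-color value down to RGB565 at the framebuffer
 * write, then average a small neighbourhood on scan-out to win back a bit
 * or two of effective precision (~22bit look). */
#include <stdint.h>

/* 4x4 Bayer matrix, values 0..15, used as a per-pixel threshold */
static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Ordered dither of one 8-bit channel down to 'bits' bits */
static uint8_t dither_channel(uint8_t c, int bits, int x, int y)
{
    int shift = 8 - bits;
    int step = 1 << shift;                       /* size of one quantization step */
    int t = (bayer4[y & 3][x & 3] * step) / 16;  /* threshold inside that step    */
    int v = c + t;
    if (v > 255) v = 255;
    return (uint8_t)(v >> shift);                /* truncate to 'bits' bits       */
}

/* Pack a true-color pixel into RGB565 at framebuffer-write time */
static uint16_t write_fb_565(uint8_t r, uint8_t g, uint8_t b, int x, int y)
{
    uint16_t r5 = dither_channel(r, 5, x, y);
    uint16_t g6 = dither_channel(g, 6, x, y);
    uint16_t b5 = dither_channel(b, 5, x, y);
    return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}

/* On scan-out, average 2x2 dithered pixels per channel; the dither noise
 * largely cancels and the average carries extra effective precision. */
static uint32_t postfilter_2x2(const uint16_t px[4])
{
    int r = 0, g = 0, b = 0, i;
    for (i = 0; i < 4; i++) {
        r += (px[i] >> 11) & 0x1f;
        g += (px[i] >> 5) & 0x3f;
        b += px[i] & 0x1f;
    }
    /* scale the four-pixel sums back to 8 bits per channel */
    r = (r * 255) / (31 * 4);
    g = (g * 255) / (63 * 4);
    b = (b * 255) / (31 * 4);
    return (uint32_t)((r << 16) | (g << 8) | b);
}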
 