HDR+AA...Possible with the R520?

Actually, I find FP10 quite interesting. As far as I can see, HDR is more about gaining contrast and precision than about effects.
For example, if you use lightmaps at standard 32-bit precision, your result has poor contrast. The same is probably true for any multitexture/multipass technique.
So if FP10 is good enough to increase contrast for some of those techniques, it is very interesting, even if the image quality is too poor for more sophisticated HDR effects.
I mean, if FP10 is good enough to allow high-contrast lightmaps/multitexturing, then in some cases it might be the ideal choice, because it is basically free.

But one probably important question I have: will it be possible to convert an FP10 framebuffer to standard 32-bit at a reasonable performance cost?

If yes, any particle/post effects can still be rendered in 32-bit and the loss of alpha precision would be meaningless.

If not, then FP10 would always mean that you can't use all those fancy alpha blending effects (without losing quality), which are basically standard in modern games, making FP10 a lot less useful.
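
To put the alpha concern in concrete terms, here is a minimal sketch of what a 10/10/10/2 layout implies for stored alpha. The field order and exact layout are my assumption, not anything confirmed for R520:

Code:
#include <cstdint>

// Hypothetical 32-bit FP10 render target: three 10-bit float colour
// channels plus a 2-bit alpha (field order is an assumption).
uint32_t PackFP10(uint32_t r10, uint32_t g10, uint32_t b10, uint32_t a2)
{
    return (a2 & 0x3u) << 30 | (b10 & 0x3FFu) << 20
         | (g10 & 0x3FFu) << 10 | (r10 & 0x3FFu);
}

// Two bits of destination alpha can only hold four levels: 0, 1/3,
// 2/3, 1. Any effect that stores alpha in the framebuffer collapses
// to those four values.
float UnpackAlpha2(uint32_t packed)
{
    return float((packed >> 30) & 0x3u) / 3.0f;
}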
 
DotProduct said:
If not, then FP10 would always mean that you can't use all those fancy alpha blending effects (without losing quality), which are basically standard in modern games, making FP10 a lot less useful.
Destination alpha (a stored alpha value) is rarely used, you don't need it for the usual transparency stuff.

DaveBaumann said:
An "FP10" mode for the PC probably would require support from MS for it to be implemented in DX.
No, it wouldn't. IHVs are free to introduce new rendertarget and texture formats.
 
Destination alpha (a stored alpha value) is rarely used, you don't need it for the usual transparency stuff.

Yes. But I do not think that you can use a standard 32-bit source with an FP10 target. If I am wrong here, then this would be an extreme plus for FP10.
 
DotProduct said:
Destination alpha (a stored alpha value) is rarely used, you don't need it for the usual transparency stuff.

Yes. But I do not think that you can use a standard 32-bit source with an FP10 target. If I am wrong here, then this would be an extreme plus for FP10.

It might be possible, similar to how ATi does 32-bit precision and outputs to 24-bit?
 
Destination alpha (a stored alpha value) is rarely used, you don't need it for the usual transparency stuff.
You might as well use fp11/fp11/fp10 then, and grab the extra mantissa bits on the red and green channels (which, arguably, need them).
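
For illustration, here is a minimal decoder for such a format. The bit split is my assumption (all channels unsigned, a 5-bit exponent everywhere, with 6 mantissa bits for the 11-bit channels and 5 for the 10-bit one), and I am ignoring the all-ones exponent that IEEE-style formats reserve:

Code:
#include <cmath>
#include <cstdint>

// Decode one unsigned small-float channel with a 5-bit exponent and
// 'mantissaBits' of mantissa (6 for fp11, 5 for fp10 in this sketch).
float DecodeSmallFloat(uint32_t bits, int mantissaBits)
{
    const int bias = 15;                      // as in IEEE half precision
    uint32_t m = bits & ((1u << mantissaBits) - 1u);
    uint32_t e = bits >> mantissaBits;        // 5-bit exponent field
    if (e == 0)                               // denormal range
        return std::ldexp(float(m), 1 - bias - mantissaBits);
    return std::ldexp(1.0f + float(m) / float(1u << mantissaBits),
                      int(e) - bias);
}

// Relative step size is roughly 2^-mantissaBits: about 1.6% per step
// for the fp11 channels versus 3.1% for the fp10 one.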
 
DotProduct said:
Yes. But I do not think that you can use a standard 32-bit source with an FP10 target. If I am wrong here, then this would be an extreme plus for FP10.
Every texture format gets converted to shader precision (FP24/FP32) after sampling. And the shader result is converted to the rendertarget format again on output.

Bob said:
You might as well use fp11/fp11/fp10 then, and grab the extra mantissa bits on the red and green channels (which, arguably, need them).
Chalnoth said:
It'd probably be even more useful to use a shared exponent for the three channels.
Agreed with both. But the more options, the better (if it's cheap to implement in hardware).
 
It'd probably be even more useful to use a shared exponent for the three channels.

Shared exponent formats are great for textures, but less good for framebuffers (or RTT applications). You can't easily mask particular channels, and blending is a lot more complicated (and may not give you the desired results).
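
To sketch why, here is a minimal shared-exponent encoder; the 9/9/9/5 split is an illustrative assumption, not any particular hardware format:

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>

// Pack three channels against one shared exponent. The exponent is
// dictated by the brightest channel.
uint32_t EncodeSharedExp(float r, float g, float b)
{
    const int bias = 15, mantissaBits = 9;
    int e;
    std::frexp(std::max(r, std::max(g, b)), &e);
    e = std::min(std::max(e, 1 - bias), 30 - bias);
    float scale = std::ldexp(1.0f, mantissaBits - e);
    auto quant = [&](float c) { return std::min(uint32_t(c * scale), 511u); };
    return uint32_t(e + bias) << 27
         | quant(b) << 18 | quant(g) << 9 | quant(r);
}

// Blending such a target per channel is not straightforward: the
// hardware must decode all three channels, blend, then pick a new
// shared exponent, and per-channel write masks make little sense.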
 
Well, any FP format that fits into 32 bits total is going to be bad with some measure of blending. Blending is only going to get bad in this case if you're doing a large number of blends on a relatively bright background.
 
But we've still got this basic question of how many blends are done at FP32 precision in a shader (e.g. multi-FP16-texturing plus lighting), versus the number of blends that are done in FP16 (or FP10) against the backbuffer.

I'm still intrigued to know which specific rendering techniques actually use enough backbuffer blending to introduce banding.

Jawed
 
Well, any FP format that fits into 32 bits total is going to be bad with some measure of blending
But fp11/fp11/fp10 is much better behaved in that regard than shared exponent. With a shared exponent, you can get unexpected smearing across color channels. At least with fp11/fp11/fp10, the banding problem is well understood and can, to a certain extent, be mitigated. That becomes a lot harder once you share the exponent.

Developers would likely hate it if their games had weird single-channel banding or shimmering issues that weren't caught during QA. Banding, although ugly, isn't distracting. Channel smearing typically is. If that smearing is unpredictable (due to the dynamics of the game, for instance), then not all cases may be testable/fixable.
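
A quick worked example of that smearing, assuming a hypothetical 9-bit-mantissa shared-exponent format: a very bright red drags the quantization step of a dim green up with it.

Code:
#include <cmath>
#include <cstdio>

// Quantize one channel against a shared exponent chosen by the
// brightest channel (9 mantissa bits, purely for illustration).
float QuantizeShared(float c, float brightest, int mantissaBits = 9)
{
    int e;
    std::frexp(brightest, &e);
    float step = std::ldexp(1.0f, e - mantissaBits);
    return step * std::floor(c / step);
}

int main()
{
    printf("%f\n", QuantizeShared(0.30f, 100.0f)); // 0.25: visibly off
    printf("%f\n", QuantizeShared(0.30f, 1.0f));   // ~0.2969: fine
}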

But we've still got this basic question of how many blends are done at FP32 precision in a shader (e.g. multi-FP16-texturing plus lighting), versus the number of blends that are done in FP16 (or FP10) against the backbuffer.
Those are typically two very different operations. You blend in the backbuffer to simulate translucency or other effects, where you need to affect the existing contents of the framebuffer. You blend in the shader to perform lighting on your current fragment.
 
Jawed said:
But we've still got this basic question of how many blends are done at FP32 precision in a shader (e.g. multi-FP16-texturing plus lighting), versus the number of blends that are done in FP16 (or FP10) against the backbuffer.
Oh, I'm certain you can do very many framebuffer blends at FP16 precision before you start to notice banding. But, as Bob said, this matters for translucency (as in grass and particle effects), not for normal texturing.

With FP10, well, it's already below FX8 precision in terms of the mantissa, so we're talking only a couple of blends before you start to notice issues, I'd be willing to bet.
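
One back-of-the-envelope way to test that bet is to simulate repeated blends, rounding the result to a given mantissa width each time. This is a crude model of the blender; the actual rounding behaviour is an assumption:

Code:
#include <cmath>
#include <cstdio>
#include <initializer_list>

// Round x to 'mantissaBits' of mantissa: a stand-in for storing the
// blend result back into a small-float render target.
float RoundToMantissa(float x, int mantissaBits)
{
    if (x == 0.0f) return 0.0f;
    int e;
    std::frexp(x, &e);
    float step = std::ldexp(1.0f, e - 1 - mantissaBits);
    return step * std::nearbyint(x / step);
}

int main()
{
    // Blend 20 layers (src over dst, src = 0.5, alpha = 0.25) at an
    // FP16-like 10-bit mantissa versus an FP10-like 6-bit one.
    for (int mbits : {10, 6}) {
        float dst = 0.8f;                 // bright-ish background
        for (int i = 0; i < 20; ++i)
            dst = RoundToMantissa(0.25f * 0.5f + 0.75f * dst, mbits);
        printf("%2d mantissa bits -> %f\n", mbits, dst);
    }
}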
 
Bob said:
You blend in the backbuffer to simulate translucency or other effects, where you need to affect the existing contents of the framebuffer.

How many times would this blend take place per pixel? That's what I'm trying to get at.

Are we talking about HDR lighting within smoke as the apocryphal case with dozens or hundreds of blends?

Jawed
 
It really depends on the scene, and the rendering algorithm.

If, for example, you are playing Doom3, each pixel is blended as many times as there are lights that hit it. So average number of blends per pixel is likely to be 3-5.

If, however, you are playing FarCry with the SM2b or SM3 paths, most pixels on the screen can be fully-shaded in one pass.

But translucent objects really throw a wrinkle in all this. If you're now playing UT2004 in a level with a lot of grass, all that grass uses alpha blending. So, you could have 10+ levels of transparency.

Then, beyond that, you could possibly have a game that uses particle effects for fire. Here you could be approaching upwards of 50 levels of transparency on some pixels (though you definitely wouldn't want such a fire to encompass a large portion of the screen).

So, in the end, it varies dramatically. It varies based on the rendering algorithm, and it varies further based upon the scene in question.
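
For a feel for the numbers: after N blended layers of opacity a, only (1-a)^N of the original background survives, so the deepest stacks are exactly where each layer contributes least and where coarse rounding bites hardest. A quick check with illustrative values:

Code:
#include <cmath>
#include <cstdio>

int main()
{
    printf("%f\n", std::pow(1.0 - 0.3, 10)); // ~0.028 left after 10 grass layers
    printf("%f\n", std::pow(1.0 - 0.1, 50)); // ~0.005 left after 50 smoke layers
}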
 
Chalnoth said:
It really depends on the scene, and the rendering algorithm.

If, for example, you are playing Doom3, each pixel is blended as many times as there are lights that hit it. So average number of blends per pixel is likely to be 3-5.

I presume that's an int8 framebuffer, and no-one's complained about banding in D3, have they?...

If, however, you are playing FarCry with the SM2b or SM3 paths, most pixels on the screen can be fully-shaded in one pass.

This is the case I posited earlier, with most "blending" being performed in-shader at FP32 precision.

But translucent objects really throw a wrinkle in all this. If you're now playing UT2004 in a level with a lot of grass, all that grass uses alpha blending. So, you could have 10+ levels of transparency.

But a texel in a grass texture is either transparent (nothing to blend) or opaque (nothing to blend), isn't it?

Then, beyond that, you could possibly have a game that uses particle effects for fire. Here you could be approaching upwards of 50 levels of transparency on some pixels (though you definitely wouldn't want such a fire to encompass a large portion of the screen).

Well I'm struggling to find any discussion of game-specific rendering techniques for HDR fire/smoke...

Jawed
 
Jawed said:
I presume that's an int8 framebuffer, and no-one's complained about banding in D3, have they?...
Right, and so a D3-like scenario might be one where this sort of rendering could work. However, where ATI's s6e3 format will fall down is in brighter scenes, or brighter parts of scenes.

But a texel in a grass texture is either transparent (nothing to blend) or opaque (nothing to blend) isn't it?
Actually, I don't think UT2004 uses a plain alpha test for any of these surfaces, be they grass or fencing. If you look closely, you will notice that the edges are slightly transparent. The game uses a combination of an alpha blend (to reduce aliasing) and an alpha test (so that it may go unnoticed if the surface is rendered in the wrong order).

So no, the grass isn't simply 100% transparent or 100% opaque.
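
For reference, that blend-plus-test combination is just two pieces of pipeline state. A minimal D3D9-style sketch, with the threshold and blend factors chosen for illustration rather than taken from UT2004:

Code:
#include <d3d9.h>

// Alpha test discards nearly transparent texels, so sorting errors go
// unnoticed; alpha blend keeps the edge texels soft.
void SetFoliageStates(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
    device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);
    device->SetRenderState(D3DRS_ALPHAREF, 0x20);   // drop alpha < 32/255
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
}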
 