FP blending in R420?

The only things that are directly destroyed by me are mine personally ;) Great locale, love having the mountains to hike / bike / ski in, I prefer the dryness, and I love the weather: it's a mile above sea level, so it never gets too hot here, but it's far enough south that it never gets freezing cold either.

Economy blows, driving is bad (if you're in a car accident, there's a >50% chance of the driver being uninsured and a >35% chance of them being under the influence of something). Make sure when you move here that you've got a good-paying job already lined up, because you'll never "run into one" ;)

If you've got your TS and an engineering degree, the sky is the limit :)
 
FUDie said:
Chalnoth said:
Um, because it won't be realtime?

If you want to do that much blending, you'd find a way to do it all within the pixel shader, to avoid what would become an extreme memory bandwidth penalty.
If you need to combine multiple objects via blending, what choice do you have? Doing it in the pixel shader won't help you as it means you're reading the destination as a texture anyway. The pixel shader can't combine the results of multiple fragments before writing to the framebuffer.

-FUDie
The key word there was "game."

If you're going to be blending multiple objects, performance constraints will stop you long before you run into the numerical limitations of FP16 for color data.

If you have a situation in a game where you're tempted to do some blending hundreds of times per pixel, you'd be much better off finding another approximation of the same effect that accesses the framebuffer much less. An example might be a particle effect to simulate fire or smoke. If you want to do a good job of simulating this, you may be better off doing the integration entirely in the pixel shader using some representation of the smoke/fire other than simple particles. One thing you might use instead is a procedurally-animated 3D texture, which you then integrate in the pixel shader.
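
As a rough illustration of that last idea (a CPU-side C++ stand-in rather than actual shader code; the procedural density function, step count, and colours are all made-up placeholders), the in-shader integration might look something like this:

Code:
// Sketch: integrating a procedural "3D texture" along a view ray entirely in
// the shader, instead of blending hundreds of particle quads into the
// framebuffer. CPU-side C++ stand-in for a pixel shader; the density field,
// step count and colours are placeholders. Assumes the ray segment through
// the volume has unit length.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Placeholder procedural density field (stands in for an animated 3D texture).
static float smokeDensity(const Vec3& p, float time) {
    float swirl = std::sin(6.0f * p.x + time) * std::cos(5.0f * p.z - time);
    float falloff = std::exp(-4.0f * ((p.x - 0.5f) * (p.x - 0.5f) +
                                      (p.z - 0.5f) * (p.z - 0.5f)));
    return std::fmax(0.0f, 0.5f * (swirl + 1.0f)) * falloff;
}

// March from rayStart to rayEnd, accumulating colour front to back in registers.
static Vec3 integrateSmoke(Vec3 rayStart, Vec3 rayEnd, float time,
                           int steps, float& transmittance) {
    Vec3 colour = {0.0f, 0.0f, 0.0f};
    transmittance = 1.0f;                            // how much background shows through
    const Vec3 smokeColour = {0.7f, 0.7f, 0.75f};    // assumed smoke albedo
    const float stepLen = 1.0f / steps;
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) * stepLen;
        Vec3 p = {rayStart.x + t * (rayEnd.x - rayStart.x),
                  rayStart.y + t * (rayEnd.y - rayStart.y),
                  rayStart.z + t * (rayEnd.z - rayStart.z)};
        float sigma = 8.0f * smokeDensity(p, time);      // extinction at this sample
        float alpha = 1.0f - std::exp(-sigma * stepLen); // opacity of this slab
        colour.x += transmittance * alpha * smokeColour.x;
        colour.y += transmittance * alpha * smokeColour.y;
        colour.z += transmittance * alpha * smokeColour.z;
        transmittance *= 1.0f - alpha;
    }
    return colour;
}

int main() {
    float trans;
    Vec3 c = integrateSmoke({0.5f, 0.0f, 0.5f}, {0.5f, 1.0f, 0.5f}, 0.0f, 64, trans);
    std::printf("smoke colour (%.3f %.3f %.3f), transmittance %.3f\n",
                c.x, c.y, c.z, trans);
    return 0;
}

The whole sum stays in registers and the destination is written once at the end, which is exactly what avoids the bandwidth penalty of blending hundreds of passes into the framebuffer.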
 
Albuquerque said:
If you've got your TS and an engineering degree, the sky is the limit :)

I'm probably too much of a monster for TS work. :D Really I think I'd need more education to work at Sandia. But one of these days I'll have to go see what Albuquerque's like myself.
 
Briareus said:
Really I think I'd need more education to work at Sandia. But one of these days I'll have to go see what Albuquerque's like myself.
Most national laboratories don't even require a bachelor's, but if you work at one, you are expected to want to go on to graduate school at that point. I'm pretty sure that the vast majority of permanent positions are for those with doctorates, but that doesn't mean you can't get a job while you're doing your education (and even then, the pay is great, so it's a good way to pay for it).
 
Different divisions of the government call it different things; to the Department of Energy, a top secret clearance is a TS. :)
 
Chalnoth said:
If you have a situation in a game where you're tempted to do some blending hundreds of times per pixel, you'd be much better off finding another approximation of the same effect that accesses the framebuffer much less.

There are already games that blend to the same pixel hundreds of times. Lots of simple things are often better than a few complex things.
In your example of smoke, particles are a very good technique. The problem with manual integration is that you have to handle things like z-testing yourself to get good results.

A pixel shader smoke would likely render incorrectly in many cases. Take a fire in a fireplace. For the pixel shader smoke to be correctly z-buffered, it has to be rendered as a plane at closest approach (the closest the smoke gets to the viewer). But halfway up the fireplace there is a chimney that should occlude the smoke from the viewer's position. The plane of closest approach is in front of the chimney, so unless the manual integration can access the z-buffer (which a pixel shader can't without extra work), the smoke will leak in front of the chimney.

It's the same reason decent fog is hard to do in a pixel shader (you end up writing a ray tracer with a simplified model of the world) but virtually trivial via alpha planes and polygons.
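
For what it's worth, one sketch of what that "extra work" could look like: clamp the integration interval against the scene depth for the pixel, so an occluder like the chimney cuts the smoke off. The names and numbers below are assumptions, not anything specific anyone here has proposed.

Code:
// Clamp the ray-march interval against the scene depth so opaque geometry
// (the chimney) correctly occludes the smoke. Distances are along the view
// ray; sceneDepth is what a depth-buffer read would give you.
#include <algorithm>
#include <cstdio>

struct Interval { float tNear, tFar; bool empty; };

// volumeEntry/volumeExit: where the ray enters and leaves the smoke volume.
static Interval clampToScene(float volumeEntry, float volumeExit, float sceneDepth) {
    Interval out;
    out.tNear = volumeEntry;
    out.tFar  = std::min(volumeExit, sceneDepth);  // stop integrating at the occluder
    out.empty = (out.tFar <= out.tNear);           // volume fully hidden behind it
    return out;
}

int main() {
    // A chimney at distance 4.0 trims the smoke interval [3.0, 6.0] to [3.0, 4.0].
    Interval a = clampToScene(3.0f, 6.0f, 4.0f);
    std::printf("integrate %.1f to %.1f (empty: %d)\n", a.tNear, a.tFar, (int)a.empty);

    // A wall at distance 2.0 occludes the smoke completely.
    Interval b = clampToScene(3.0f, 6.0f, 2.0f);
    std::printf("integrate %.1f to %.1f (empty: %d)\n", b.tNear, b.tFar, (int)b.empty);
    return 0;
}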
 
DeanoC said:
A pixel shader smoke would likely render incorrectly in many cases. Take a fire in a fireplace. For the pixel shader smoke to be correctly z-buffered, it has to be rendered as a plane at closest approach (the closest the smoke gets to the viewer). But halfway up the fireplace there is a chimney that should occlude the smoke from the viewer's position. The plane of closest approach is in front of the chimney, so unless the manual integration can access the z-buffer (which a pixel shader can't without extra work), the smoke will leak in front of the chimney.
To every problem, there is a solution. If you want to solve this problem completely, what you may do is use a polygonal volumetric fog rendering technique to figure out how much of the 3D texture you want to render. Using this technique, you'd basically render to another buffer the distance into the material you want to render. If the fire is occluded, this distance would be zero. If the fire has some object in it, the distance would be from the edge of the 3D texture to the object. This would be input into the shader as a variable in the integration.
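
A rough sketch of that bookkeeping, as a CPU stand-in for what would really be a pair of additive render passes (the names below are assumptions): the per-pixel distance through the material is the sum of the volume's back-face depths minus the sum of its front-face depths, each clamped against the scene depth.

Code:
// Per-pixel "distance into the material" for polygonal volume fog:
// thickness = sum(min(backFaceDepth, sceneDepth)) - sum(min(frontFaceDepth, sceneDepth)).
#include <algorithm>
#include <cstdio>
#include <vector>

static float fogThickness(const std::vector<float>& frontFaces,
                          const std::vector<float>& backFaces,
                          float sceneDepth) {
    float sumBack = 0.0f, sumFront = 0.0f;
    for (float d : backFaces)  sumBack  += std::min(d, sceneDepth);
    for (float d : frontFaces) sumFront += std::min(d, sceneDepth);
    return std::max(0.0f, sumBack - sumFront);  // 0 where the volume is fully occluded
}

int main() {
    // Unobstructed: the full 3 units of material between depths 5 and 8.
    std::printf("open:      %.2f\n", fogThickness({5.0f}, {8.0f}, 100.0f));
    // An object at depth 6 inside the volume: only 1 unit of material in front of it.
    std::printf("object in: %.2f\n", fogThickness({5.0f}, {8.0f}, 6.0f));
    // An occluder at depth 4 in front of the volume: thickness collapses to zero.
    std::printf("occluded:  %.2f\n", fogThickness({5.0f}, {8.0f}, 4.0f));
    return 0;
}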

And I still claim that particle effects in games will be using closer to dozens of blends per pixel in normal situations. If you wanted very high particle density, you'd be better off doing something else.
 
DeanoC said:
... access the z-buffer (pixel shader can't without extra work)

You may find that extra work worth it - even for simple traditional particle effects.
It looks very nice - finally getting rid of the problem of particles clipping with geometry.
I've always found that problem annoying in most of the games I played.
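
For illustration, a minimal sketch of that idea: fade each particle fragment's alpha by its distance to the opaque geometry behind it, using the depth read back from the z-buffer. The fade range and names below are arbitrary assumptions.

Code:
// Fade particle alpha near opaque geometry so particles stop visibly
// clipping into walls and floors.
#include <algorithm>
#include <cstdio>

// particleDepth: view-space depth of the particle fragment.
// sceneDepth:    view-space depth read back from the depth buffer.
// fadeRange:     distance over which the particle fades out (arbitrary).
static float softParticleAlpha(float baseAlpha, float particleDepth,
                               float sceneDepth, float fadeRange = 0.5f) {
    float gap = sceneDepth - particleDepth;        // distance to the geometry behind
    if (gap <= 0.0f) return 0.0f;                  // fragment is behind the surface
    float fade = std::min(1.0f, gap / fadeRange);  // ramp up over fadeRange units
    return baseAlpha * fade;
}

int main() {
    std::printf("far from the wall: %.2f\n", softParticleAlpha(0.8f, 10.0f, 20.0f));
    std::printf("grazing the wall:  %.2f\n", softParticleAlpha(0.8f, 10.0f, 10.1f));
    std::printf("behind the wall:   %.2f\n", softParticleAlpha(0.8f, 10.0f, 9.5f));
    return 0;
}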
 
Chalnoth said:
I seriously doubt that FP16 will ever prove inadequate for framebuffer blending of color data
Chalnoth is very right on this point, guys. Forget the precision arguments for things like smoke and stuff. FP16 has a few more bits than 8-bit fixed point even in the worst case.

Let's face it: FP blending, even FP16, is a very useful feature of NV40. R4xx users will miss out on some HDR effects, but not all.

What I find amusing, Chalnoth, is your recommendation of volume fog. Weren't you telling me how it's not very good compared to alpha fog? I'm sure you agree with me now that it can be a very useful and good-looking technique when used in the right way. For the fog depth calculation, FP16 will very likely be inadequate (which is why I emphasized "color"), but dithering will help for now, and you can always encode into multiple channels if you really need to.
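
A quick back-of-the-envelope check of both of those points (FP16's worst-case step size versus 8-bit fixed point, and packing a value across more than one channel); the numbers and the toy two-channel scheme are purely illustrative.

Code:
#include <cmath>
#include <cstdio>

int main() {
    // FP16 keeps a 10-bit mantissa (11 effective bits), so for colour values
    // in [0.5, 1.0) the representable step is 2^-11 -- still finer than the
    // 1/255 step of 8-bit fixed point, i.e. "a few more bits" in the worst case.
    double fp16Step   = std::pow(2.0, -11);
    double fixed8Step = 1.0 / 255.0;
    std::printf("FP16 step near 1.0: %.6f\n", fp16Step);
    std::printf("8-bit fixed step:   %.6f\n", fixed8Step);

    // Toy example of encoding a [0,1) value across two 8-bit channels and
    // decoding it again, one way to squeeze out extra precision per value.
    double value = 0.123456;
    unsigned hi = static_cast<unsigned>(value * 255.0);
    unsigned lo = static_cast<unsigned>((value * 255.0 - hi) * 255.0);
    double decoded = hi / 255.0 + lo / (255.0 * 255.0);
    std::printf("encoded (%u, %u) -> decoded %.6f (error %.1e)\n",
                hi, lo, decoded, std::fabs(decoded - value));
    return 0;
}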
 
If you'll look, Mintmaster, you'll notice that this application is completely different. The volume fog approach would simply give an outer bound to the fog volume, with the actual color not given by depth, but by the integration through the 3D texture. For this application, FP16 should also be enough (since the fog depth doesn't directly affect color).
 
Can't disagree that FP16 will be enough for a while for colour data. Except for stupid corner cases (like flying at the sun and then in deep space), that's enough range. To be honest, we would probably need spectral rendering (>3 colour channels) before we need >FP16.

It's like 32-bit depth: you can construct some cases that want more, but in reality it's a good enough balance for 95% of cases.
 
Chalnoth said:
If you'll look, Mintmaster, you'll notice that this application is completely different. The volume fog approach would simply give an outer bound to the fog volume, with the actual color not given by depth, but by the integration through the 3D texture. For this application, FP16 should also be enough (since the fog depth doesn't directly affect color).
How is that so different? The other application uses a constant color fog, so the integration is simply proportional to the depth (or a 1-exp(-d) sort of function for better looking absorbance). In your application you have a texture, but the result of integration will still depend as much on depth of integration as on the data being integrated.
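
For reference, a minimal sketch of that constant-colour fog blend with the 1 - exp(-d) absorbance curve; the extinction coefficient and colours are arbitrary.

Code:
#include <cmath>
#include <cstdio>

struct Rgb { float r, g, b; };

// Blend a surface colour toward the fog colour; opacity follows
// 1 - exp(-sigma * d), where d is the distance travelled through the fog.
static Rgb applyFog(Rgb surface, Rgb fogColour, float depthInFog, float sigma) {
    float f = 1.0f - std::exp(-sigma * depthInFog);  // 0 = clear, 1 = fully fogged
    return { surface.r + f * (fogColour.r - surface.r),
             surface.g + f * (fogColour.g - surface.g),
             surface.b + f * (fogColour.b - surface.b) };
}

int main() {
    const Rgb wall = {0.2f, 0.3f, 0.1f};
    const Rgb fog  = {0.6f, 0.6f, 0.7f};
    const float depths[] = {0.0f, 2.0f, 10.0f};  // fog depths for three pixels
    for (float d : depths) {
        Rgb c = applyFog(wall, fog, d, 0.25f);
        std::printf("depth %4.1f -> (%.3f %.3f %.3f)\n", d, c.r, c.g, c.b);
    }
    return 0;
}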
 
Except that now you can get around any precision problems by limiting the amount of material that is near other objects in the scene.
 
Chalnoth said:
If the fire has some object in it...
That doesn't sound like you have any control over how close particles in the 3D texture are to the objects in the scene, especially if they can move. Fire and smoke particles move through the volume, I assume by transforming texture coordinates for the 3D texture. You can easily have a dense portion of the 3D volume wind up near the endpoints of the integration path despite your best efforts to avoid it.

You're either not making any sense, or you're not being very clear about what you're proposing.
 
DeanoC said:
Can't disagree that FP16 will be enough for a while for colour data. Except for stupid corner cases (like flying at the sun and then in deep space), that's enough range. To be honest, we would probably need spectral rendering (>3 colour channels) before we need >FP16.

It's like 32-bit depth: you can construct some cases that want more, but in reality it's a good enough balance for 95% of cases.

This is what I suspected, but it is always nice to get your 'facts' from a developer who looks at things from a practical point of view. Thanks Deano.
 
Mintmaster said:
That doesn't sound like you have any control over how close particles in the 3D texture are to the objects in the scene, especially if they can move. Fire and smoke particles move through the volume, I assume by transforming texture coordinates for the 3D texture. You can easily have a dense portion of the 3D volume wind up near the endpoints of the integration path despite your best efforts to avoid it.

You're either not making any sense, or you're not being very clear about what you're proposing.
The idea I had in mind was that your animation technique (whatever that might be) would be dependent upon world geometry. Think Humus' fire demo (though that used particles; I don't see why one couldn't do the animation in a 3D texture). If this is the case, then it's a simple matter of collision detection to keep things a little bit away from the surface (if the precision issue can't be managed in other ways).
 
So now you're rendering 3D particles into a 3D texture dynamically? That'll be extremely slow. You have to go layer by layer through the 3D texture, drawing the appropriate slice from each individual particle.

You might as well just render all these particles straight to the scene. You want to do collision detection, render into a 3D texture, then do volume fog, and use the depth now to integrate the 3D texture? I don't see what you're getting out of doing this.

FP16 blending will be just fine for smoke particles. We're not doing so badly with 8-bit, so FP16 will be great. It's just not enough for non-colour data, as NVidia themselves mention many times.
 
Mintmaster said:
So now you're rendering 3D particles into a 3D texture dynamically? That'll be extremely slow. You have to go layer by layer through the 3D texture, drawing the appropriate slice from each individual particle.
No, certainly not. You'd be using a completely different algorithm for the animation of the fire/smoke/whatever. If you're using a 3D texture, you'd use some sort of procedural algorithm to determine how that texture changes in time. You could do this a couple of different ways:

1. Have a base texture, a time parameter, and a way to go from the base texture to an arbitrary time. This would require that your time evolution has high locality over large times (a rough sketch of this approach follows the list).

2. Iterate through the texture at each timestep with some simple algorithm that animates the texture.
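
A minimal sketch of the first option, assuming the base texture can be stood in for by a cheap analytic function; the names, constants, and the scrolling/warping scheme are all made up for illustration.

Code:
#include <cmath>
#include <cstdio>

// Stand-in for sampling a static base 3D texture at (x, y, z).
static float baseField(float x, float y, float z) {
    return 0.5f + 0.5f * std::sin(7.0f * x) * std::sin(5.0f * y) * std::sin(9.0f * z);
}

// Density at an arbitrary time t: scroll the lookup along +y (smoke rising)
// and add a small time-varying sideways warp. No state is carried between
// frames, so any time can be evaluated directly -- this is the "high
// locality over large times" requirement.
static float animatedDensity(float x, float y, float z, float t) {
    float risenY  = y - 0.3f * t;
    float warpedX = x + 0.05f * std::sin(3.0f * y + t);
    return baseField(warpedX, risenY, z);
}

int main() {
    const float times[] = {0.0f, 1.0f, 10.0f};
    for (float t : times)   // no per-frame iteration needed to reach t = 10
        std::printf("t = %4.1f  density = %.3f\n", t,
                    animatedDensity(0.5f, 0.5f, 0.5f, t));
    return 0;
}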

Anyway, yes, if FP16 is enough for the particular particles you're rendering, then there would be no reason not to use particles.
 