Why does smoke slow down rendering so much?

digitalwanderer said:
Smoke doesn't really slow everything down, it just makes everything seem to slow down.

Happens to me all the time when I'm baked, just relax and enjoy it. :cool:

LOL:smile:
 
While TBDRs can alleviate the massive bandwidth penalty of piling lots of layers of alpha-blended smoke on top of each other, they are still slammed with large fillrate requirements.

As for the traffic being read-modify-write, I don't think that is really that much of a problem in itself (other than the raw bandwidth); you can always batch up large amounts of writes and that way drive down the effects of DRAM bus turnaround as far as you want.
 
Razor1 said:
Z-sort should never be used to begin with; it has an exponential calculation cost depending on the number of objects on screen that use it. An algo based on radix would be a much better option.
What are you talking about here? Common sorting algorithms have a worst-case complexity of n^2, which is definitely not exponential, but polynomial. Radix, of course, has linear complexity, but it's inefficient unless you have very large datasets.
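For what it's worth, radix sorting particle depths is straightforward if you remap the float bits to an ordered integer key first. A minimal C++ sketch (all names here are mine, not from any particular engine):

Code:
#include <cstdint>
#include <cstring>
#include <vector>

// Map an IEEE-754 float to an unsigned key with the same ordering.
static uint32_t FloatToKey(float f)
{
    uint32_t u;
    std::memcpy(&u, &f, sizeof(u));
    // Negatives: flip all bits; positives: flip just the sign bit.
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

// LSD radix sort of particle indices by view-space depth, one byte
// per pass: four linear passes total, versus O(n log n) comparisons.
void SortParticlesByDepth(const std::vector<float>& depth,
                          std::vector<uint32_t>& index)
{
    const size_t n = index.size();
    std::vector<uint32_t> temp(n);
    for (int shift = 0; shift < 32; shift += 8)
    {
        uint32_t count[257] = {};
        for (size_t i = 0; i < n; ++i)
            ++count[((FloatToKey(depth[index[i]]) >> shift) & 0xFF) + 1];
        for (int b = 0; b < 256; ++b)
            count[b + 1] += count[b];
        for (size_t i = 0; i < n; ++i)
            temp[count[(FloatToKey(depth[index[i]]) >> shift) & 0xFF]++] = index[i];
        index.swap(temp);
    }
}

That gives front-to-back order; walk the result in reverse (or invert the key) for back-to-front blending.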
 
Mate Kovacs said:
What are you talking about here? Common sorting algorithms have a worst-case complexity of n^2, which is definitely not exponential, but polynomial. Radix, of course, has linear complexity, but it's inefficient unless you have very large datasets.
I think he is talking about "exponential fog", where the transparency falls off as an exponential function of the number of layers drawn. Exponential fog can be done with multiple alpha layers, with no requirement for any pre-sorting - provided that all the fog has the same color.
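To make the "exponential" part concrete: with fog colour c and per-layer opacity a, each layer blends dst' = a*c + (1 - a)*dst. Applying that n times over a background colour b gives

dst_n = (1 - (1 - a)^n)*c + (1 - a)^n * b

Every layer scales the background by the same (1 - a) and adds the same colour, so the draw order is irrelevant, and the background's visibility falls off exponentially in the layer count.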
 
The problem of smoke (and similar transparency effects) ruining performance in games isn't a hardware problem, it's a software problem. Or rather I should say it isn't a problem that can be solved in hardware, so it's up to software developers to think of clever ways to eliminate it.

The main problem is that for smoke, fire, and explosions to look good, you need to display multiple layers of transparency. But you can't display many layers of transparency across an entire frame without dramatically dropping the framerate (note that you always draw the transparent objects last, so this is added rendering time atop normal rendering).

The only way to really combat this problem would be to reduce the amount of overdraw in the transparent effect as the effect takes up more of the screen, or move to an entirely different method of generating the fire, smoke, explosion, etc.
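One crude way to do the former, as a sketch (every constant and name here is made up for illustration, not from any shipped engine):

Code:
#include <algorithm>
#include <cmath>

// Overdraw throttle: shrink the particle count as the effect fills
// more of the screen, keeping blended area near a fixed pixel budget.
int ParticleBudget(float emitterRadius, float distanceToCamera,
                   float screenW, float screenH, float fovY,
                   int maxParticles)
{
    // Approximate on-screen radius of the whole effect, in pixels.
    float pxRadius = (emitterRadius / std::max(distanceToCamera, 0.001f))
                   * (screenH / (2.0f * std::tan(fovY * 0.5f)));

    // Assume each particle covers ~5% of the effect's screen area.
    float areaPerParticle = 3.14159f * pxRadius * pxRadius * 0.05f;

    // Allow roughly eight fullscreen layers' worth of blended pixels.
    float pixelBudget = 8.0f * screenW * screenH;

    int n = (int)(pixelBudget / std::max(areaPerParticle, 1.0f));
    return std::min(n, maxParticles);
}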
 
I'm not even sure if this makes sense, but can you output one alpha-blended smoke poly and "fake" detail in it akin to parallax occlusion mapping bricks? Rather than multiple alpha layers, just one in front that fakes what's behind it according to the camera.

I guess screen res--actually, DPI--is still too low to simulate smoke with particles?

I've just indulged in too much caffeine and sugar, so don't laugh too hard.
 
It's all about the CPU. CPU - CPU - CPU. :)

For the time being we'll ignore the abso-f'in-lutely amazing all-GPU D3D10 particle system and focus on current stuff.

Simulating many particles tends to get pulled onto the CPU - even the simple stuff like direction/velocity etc., let alone any funky distortion or colouring effects (fire!). Now, as a general rule of thumb, for optimum performance you want to separate the CPU and GPU - make the most of parallelism.

So, if the CPU has to remain involved then you're tying the CPU to the GPU much more tightly than for "other" rendering - which is one reason for a slow-down.

The alpha blending itself isn't so much of an issue. Most developers I know just do a simple bucket sort to split the dependent passes, then disable Z-write and throw a bunch of particles at the GPU. If you follow the mathematics through, splitting opaque and transparent objects is all that's necessary.
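In code, that split is about as mundane as it sounds. A minimal D3D9-flavoured sketch (the opaqueBucket/transparentBucket containers and Draw methods are placeholders):

Code:
// Pass 1: opaque geometry. Z-test and Z-write on, no blending.
device->SetRenderState(D3DRS_ZENABLE, TRUE);
device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
for (Mesh* m : opaqueBucket)           // one bucket test, no full sort
    m->Draw(device);

// Pass 2: transparent particles. Z-test stays on (smoke still hides
// behind walls) but Z-write goes off, so particles don't occlude each
// other. Strictly order-independent only for commutative blend modes
// (additive etc.); plain alpha blend gets away with rough ordering.
device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
for (ParticleSystem* p : transparentBucket)
    p->Draw(device);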

That said - even though you've freed the CPU from delivering particles in projection-depth order (ouch!!), you're still going to be submitting a huge number of pixels for rendering. Even your "fill rate monsters" are likely to choke on that ;)

hth
Jack
 
Pete said:
I'm not even sure if this makes sense, but can you output one alpha-blended smoke poly and "fake" detail in it akin to parallax occlusion mapping bricks? Rather than multiple alpha layers, just one in front that fakes what's behind it according to the camera.
More details please?

I really don't understand how mixing in offset/parallax mapping helps in this scenario - typically it's used to mimic detail that doesn't exist on a *surface*. Combining it with fresnel reflection/refraction might look pretty damn cool... but show me a GPU that can handle that in real-time ;)

Jack
 
fellix said:
I remember that it is possible in PS3.0 to "cancel" a group of pixels/fragments, so that the heavy blending duty is minimized. Anyone know more about this?

There's nothing ps3.0 can do to help the standard "bunch of quads" smoke technique. Basically you take a bunch of textured quads and blend them; the quads are typically just textured and nothing else. To save anything you'd have to be able to skip sampling the texture, but you need its alpha for blending anyway.
 
digitalwanderer said:
How many transparent layers are we talking about for most fog/smoke?

Maybe 5-20 normally. For a small particle system it's not a problem, but when a grenade explodes close to the character and throws up a dozen fullscreen layers, the framerate is going to drop.
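For scale: a dozen fullscreen layers at 1280x1024 is 12 x ~1.3M ≈ 16M blended pixels per frame, and at 60 fps that's on the order of a billion read-modify-write blend operations per second, before the rest of the scene is drawn.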
 
Pete said:
I'm not even sure if this makes sense, but can you output one alpha-blended smoke poly and "fake" detail in it akin to parallax occlusion mapping bricks? Rather than multiple alpha layers, just one in front that fakes what's behind it according to the camera.

I guess screen res--actually, DPI--is still too low to simulate smoke with particles?

I've just indulged in too much caffeine and sugar, so don't laugh too hard.

Actually, that's not bad at all. I'd like more developers to look into alternatives to this aged technique. It's of course more intuitive to come up with techniques for real surfaces than for phenomena that happen in volumes, but it's certainly possible. See for instance my volumetric lighting demo; the volumetric look doesn't rely on any screen-aligned quads or anything like that, but projects the light onto the ray between the surface and the eye. For a smoke effect you could probably do something similar, where you project the smoke onto the surfaces behind it.
 
Humus said:
Maybe 5-20 normally. For a small particle system it's not a problem, but when a grenade explodes close to the character and throws up a dozen fullscreen layers, the framerate is going to drop.

I've seen effects that exceed 30x overdraw in worst case scenarios (basically standing almost on the emitter).
 
There was a thread a while back in which someone with access to a PS2 performance tool stated the most overdraw they'd seen was around 40. I believe it was one of The Lord of the Rings games. I suspected The Return of the King since there's a lot of fog in one level.
 
JHoxley said:
More details please?

I really don't understand how mixing in offset/parallax mapping helps in this scenario - typically it's used to mimic detail that doesn't exist on a *surface*. Combining it with fresnel reflection/refraction might look pretty damn cool... but show me a GPU that can handle that in real-time ;)
TBH, Jack, neither do I. I'm a bit out of my depth, for the moment. I was just guessing that newer, more ALU-heavy GPUs might prefer (to use Humus' terms, which I'm only loosely confident using) a single screen-aligned alpha-blended quad onto which you project what's happening inside the fog volume, rather than just layering a bunch of alphas and maybe getting bottlenecked in other, "older" ways (fillrate? memory bandwidth?). Perhaps this is stretching the term "surface detail." :)

Humus, thanks, I'll look into that demo.
 
Well, one method of fog rendering that's relatively easy on fillrate is to measure the thickness of the fog analytically. One way to do this would be as follows:

1. First, bound the fog volume with geometry.
2. Render the depth values of the front-facing polygons to a texture (this sets the depth to the nearest portion of the fog).
3. Render the back-facing fog polygons and the front-facing world polygons to a second texture (this sets the depth to the furthest).
4. When rendering the full scene, use the difference between these two textures to calculate the thickness of the fog along the line of sight at every pixel on the screen. One would clamp this value to be non-negative (a negative value would mean that the front of the fog is occluded by some foreground object).

This technique would do very well for fog, or any transparent object with relatively well-defined edges, that doesn't overlap itself. There may be a way to do a variation of this method that works well both with world objects intersecting the fog and with fog that overlaps itself. It'd be easy to handle overlapping transparent layers, but not intersections, by incrementing and decrementing the rendered texture; I'm just not clear on how well that would work with solid objects intersecting transparent ones.
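In code, step 4 boils down to a per-pixel evaluation like this (a C++ sketch of the shader math; kDensity is a made-up artist constant):

Code:
#include <algorithm>
#include <cmath>

// frontDepth/backDepth are the per-pixel values rendered in steps 2-3.
float FogFactor(float frontDepth, float backDepth, float kDensity)
{
    // Thickness of fog along the view ray at this pixel. Clamped to
    // zero: negative means the fog's front face is behind an occluder.
    float thickness = std::max(backDepth - frontDepth, 0.0f);

    // Beer-Lambert falloff: transmittance drops exponentially with
    // the distance travelled through the medium.
    return 1.0f - std::exp(-kDensity * thickness);
}

// Composite: finalColor = lerp(sceneColor, fogColor, fogFactor).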
 
Chalnoth, if you look it up, you'll see that method is pretty robust if you have the precision.
http://download.nvidia.com/develope...ogPolygonVolumes3/docs/FogPolygonVolumes3.pdf

In a separate texture, render the depth of each object potentially in the fog. When doing the adding and subtracting of the front faces and back faces, always take the min of the depth in that texture and the depth of your fog. This handles overlapping faces as well, so you don't need a convex fog shape. You can even have inverted geometry inside your fog to represent "holes".

This technique is pretty neat, but it's not exactly light on bandwidth either, especially if you want more complicated volumes. But hopefully it will let you cap your overdraw at a reasonable amount, eliminating the extreme cases ERP and 3dcgi mentioned.
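The clamping trick reads roughly like this per fog-hull face (a sketch of the idea, not the paper's actual shader; in the sample the sums land in an accumulation buffer via additive blending):

Code:
#include <algorithm>

// For each pixel a face covers: add back-face depths, subtract
// front-face depths, with every face clamped to the opaque scene
// depth so geometry poking into the fog truncates the thickness.
float FaceContribution(float faceDepth, float sceneDepth, bool backFacing)
{
    float d = std::min(faceDepth, sceneDepth);
    return backFacing ? d : -d;
}

// Fog thickness at a pixel = sum of FaceContribution over all faces
// (concave volumes and "holes" fall out of the same sum); feed that
// through the Beer-Lambert falloff from Chalnoth's step 4.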
 