Why does smoke slow down rendering so much?

arjan de lumens said:
I think he is talking about "exponential fog", where the transparency falls off as an exponential function of the number of layers drawn. Exponential fog can be done with multiple alpha layers, with no requirement for any pre-sorting - provided that all the fog has the same color.
Exponential fog used to be done "for free".
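(A minimal sketch of why the layers work out to an exponential, assuming every layer has the same colour and alpha; the function names are illustrative, not any real API:)

```cpp
#include <cmath>

// Each layer removes the same fraction `layerAlpha` of the remaining
// light, so after n layers the transmittance is (1 - a)^n -- exponential
// in the layer count, and order-independent because every layer has the
// same colour.
float fogTransmittanceLayers(int layers, float layerAlpha)
{
    return std::pow(1.0f - layerAlpha, static_cast<float>(layers));
}

// The continuous form of the same falloff: T = exp(-density * distance).
float fogTransmittanceDistance(float density, float distance)
{
    return std::exp(-density * distance);
}
```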
 
Humus said:
...
For a smoke effect you could probably do something similar, where you project the smoke onto the surfaces behind it.

Which is AFAIR exactly what the original Unreal did.
(Compute distance inside fog volume for a given surface.)
 
Is there a simple way to determine how much memory bandwidth is needed? (other than infinite ;) )

JHoxley said:
It's all about the CPU. CPU - CPU - CPU. :)

I dunno... it seems to be about:
Developers-Developers-Developers-Developers-Developers-Developers-Developers-Developers-Developers.

Developers-Developers-Developers-Developers-Developers...
 
Alstrong said:
Is there a simple way to determine how much memory bandwidth is needed? (other than infinite ;) )
Well you can't easily guess how much bandwidth will *actually* be used, but you can make a prediction based on a worst-case scenario. Effectively call it "Big-Oh" for bandwidth...

Various performance tools might be able to give you a clearer picture of real usage.
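To make the "Big-Oh" idea concrete, here is a back-of-envelope worst case; every number below is hypothetical, chosen only to show the shape of the estimate:

```cpp
#include <cstdio>

// Worst-case estimate: every pixel of every smoke layer reads the
// destination colour, writes it back, and reads Z -- no compression,
// no early rejection. All figures are made-up illustrations.
int main()
{
    const double width  = 1600;
    const double height = 1200;
    const double layers = 50;                // overlapping smoke quads
    const double bytesPerPixel = 4 + 4 + 4;  // dst read + dst write + Z read
    const double fps = 60;

    const double gbPerSec =
        width * height * layers * bytesPerPixel * fps / 1.0e9;
    std::printf("worst-case blend bandwidth: %.1f GB/s\n", gbPerSec);
    return 0;
}
```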

Alstrong said:
it seems to be about:
Developers-Developers-Developers-Developers-Developers-Developers-Developers-Developers-Developers.

Developers-Developers-Developers-Developers-Developers...
LOL. SteveB changed it to "MVP's - MVP's - MVP's - MVP's" during his summit keynote. He does seem to enjoy repeating words over-and-over(-and-over-and-over-and-over).

Jack
 
Ingenu said:
Which is AFAIR exactly what the original Unreal did.
(Compute distance inside fog volume for a given surface.)

Yes, they did it per vertex though (there wasn't much of an alternative back then), but it looked pretty good. There were some cases where you could see the intensity shift as you moved around, due to the linear interpolation, but I'm amazed they got a per-vertex implementation to look that good even in that low-poly environment.
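For illustration, a minimal sketch of such a per-vertex scheme, with a sphere standing in for the fog volume; the shape, names, and parameters here are assumptions, not Unreal's actual code:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Intersect the eye->vertex ray with a spherical fog volume, clip the
// interval to [eye, vertex], and turn the covered distance into an
// exponential fog factor. The rasterizer then interpolates this factor
// linearly across the triangle -- which is exactly where the intensity
// shifts mentioned above come from.
float fogAmountAtVertex(Vec3 eye, Vec3 vertex,
                        Vec3 center, float radius, float density)
{
    Vec3 d = { vertex.x - eye.x, vertex.y - eye.y, vertex.z - eye.z };
    float len = std::sqrt(dot(d, d));
    if (len < 1e-6f) return 0.0f;
    Vec3 dir = { d.x / len, d.y / len, d.z / len };
    Vec3 oc  = { eye.x - center.x, eye.y - center.y, eye.z - center.z };

    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc <= 0.0f) return 0.0f;            // ray misses the volume

    float s  = std::sqrt(disc);
    float t0 = std::max(-b - s, 0.0f);        // clip to in front of the eye
    float t1 = std::min(-b + s, len);         // clip to the vertex
    float inside = std::max(t1 - t0, 0.0f);   // distance travelled in fog

    return 1.0f - std::exp(-density * inside);
}
```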
 
Smoke is relatively bland and featureless. It will look pretty much the same rendered at 640x480 and scaled up bilinearly to 2048x1536 as it would if rendered at 2048x1536 from the beginning.

Why can't you cheat by rendering all the geometry that contributes to depth values first, then using a minified version of the depth buffer to render all your big sprites at 640x480? Upscale to the desired resolution and render as a single overlay. Obviously this means a slight interpenetration into walls where the fog shouldn't really be visible (especially with high-frequency variations in depth, as from a fence done with alpha test), but is that really so unacceptable for the ability to easily do several times as much smoke as before?
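Roughly, the pass structure being proposed would look like this; RenderTarget and all the function names below are stand-ins, not any real API:

```cpp
struct RenderTarget { int width, height; };

// Engine-specific stubs, sketched only to show the pass structure.
void renderOpaqueGeometry(RenderTarget&, RenderTarget&) {}
void downsampleDepth(const RenderTarget&, RenderTarget&) {}
void renderSmokeSprites(RenderTarget&, const RenderTarget&) {}
void compositeUpscaled(const RenderTarget&, RenderTarget&) {}

void renderFrame(RenderTarget& fullResColor, RenderTarget& fullResDepth,
                 RenderTarget& lowResSmoke,  RenderTarget& lowResDepth)
{
    // 1. Full-resolution opaque pass (e.g. 2048x1536).
    renderOpaqueGeometry(fullResColor, fullResDepth);

    // 2. Minify the depth buffer to the sprite resolution (e.g. 640x480).
    downsampleDepth(fullResDepth, lowResDepth);

    // 3. Render the big smoke sprites into the small target, depth-testing
    //    against the minified buffer. This approximate test is where the
    //    slight interpenetration into walls comes from.
    renderSmokeSprites(lowResSmoke, lowResDepth);

    // 4. Upscale bilinearly and blend over the scene as a single overlay.
    compositeUpscaled(lowResSmoke, fullResColor);
}
```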
 
The whole up-sampling/down-sampling type thing has a nasty habit of introducing visual artifacts. Even with linear filtering you can still get a very "blocky" appearance.

The HDRI pipeline sample I wrote recently has an aggressive 1/8th downsampling for the post-processing, and I had to do a lot of "work around" hacks to get it looking nice ;)

I could quite easily see two things happening with your idea:
1. It looks rubbish and people complain about your "old skool" graphics
2. It looks rubbish and you implement work-arounds to get it looking nice. It ends up looking a little less rubbish but still costs about as much as the original version - you've just moved the work elsewhere.

hth
Jack
 
JHoxley said:
The whole up-sampling/down-sampling type thing has a nasty habit of introducing visual artifacts. Even with linear filtering you can still get a very "blocky" appearance.

The HDRI pipeline sample I wrote recently has an aggressive 1/8th downsampling for the post-processing, and I had to do a lot of "work around" hacks to get it looking nice ;)

I could quite easily see two things happening with your idea:
1. It looks rubbish and people complain about your "old skool" graphics
2. It looks rubbish and you implement work-arounds to get it looking nice. It ends up looking a little less rubbish but still costs about as much as the original version - you've just moved the work elsewhere.

hth
Jack

What if you only do it on smoke in a small bounding box around the player? That tends to just look like a grey mess anyway, and that's the bit that really hurts from a performance perspective...
 
soylent said:
What if you only do it on smoke in a small bounding box around the player? That tends to just look like a grey mess anyway, and that's the bit that really hurts from a performance perspective...
I don't see how this solves anything. The trouble with smoke is how many pixels it affects, not how far from the camera it is.
 
OpenGL guy said:
I don't see how this solves anything. The trouble with smoke is how many pixels it affects, not how far from the camera it is.

But that is usually equivalent. It affects more pixels when it is close to the camera and covering most of your screen. The performance issues from smoke, in my experience, come from being close to a smoke source and having lots of almost-fullscreen quads.
 
JHoxley said:
For the time being we'll ignore the abso-f'in-lutely amazing all-GPU D3D10 particle system and focus on current stuff.
I think that's a terrible idea. Please, tell us what will be so amazing regarding DX10 and smoke effects. Is this something that will work on DX9 level hardware?
 
soylent said:
But that is usually equivalent. It affects more pixels when it is close to the camera and covering most of your screen. The performance issues from smoke, in my experience, come from being close to a smoke source and having lots of almost-fullscreen quads.
But in your "bounding box" description, I don't see what you are saving. Also, it doesn't sound like objects in the smoke will be correctly colored.
 
OpenGL guy said:
But in your "bounding box" description, I don't see what you are saving.

The intention was to take a bunch of large smoke sprites that are close to the viewer (within a bounding box around the viewer, not a skin-tight one - say 4 meter sides or something), render them at a low resolution (depth-testing against a down-sampled z-buffer of the scene's original geometry, so pixels that would not have been rendered into the original buffer are still rejected correctly), and upscale the result so you don't have to render a bunch of fullscreen quads at, say, 1600x1200. The reason for the bounding box is to avoid blocky fog and artifacts in the distance. Only when the fog is up close would you get artifacts, and I think those would be less ugly than distant fog penetrating a few pixels into walls that are close to the viewer.

This is because most of the issues I've seen with smoke in games happen when you are almost inside it.

OpenGL guy said:
Also, it doesn't sound like objects in the smoke will be correctly colored.

Ah right, each pixel in the small buffer has no memory of what alpha the fog should have. Could you give the low-res buffer an alpha channel as well, and use a pixel shader to accumulate the correct alpha as the layers are drawn?
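One possible answer-sketch, assuming premultiplied alpha (struct and function names illustrative): accumulate the layers with an "over" blend so the alpha channel keeps track of total coverage, then apply the same blend once for the full-resolution composite.

```cpp
struct RGBA { float r, g, b, a; };

// Back-to-front "over" blend with premultiplied colour: the alpha channel
// accumulates total coverage, which is the memory the small buffer needs.
RGBA accumulateLayer(RGBA dst, RGBA layer)  // layer.rgb is premultiplied
{
    RGBA out;
    out.r = layer.r + dst.r * (1.0f - layer.a);
    out.g = layer.g + dst.g * (1.0f - layer.a);
    out.b = layer.b + dst.b * (1.0f - layer.a);
    out.a = layer.a + dst.a * (1.0f - layer.a);
    return out;
}

// Final composite: the upscaled buffer is already premultiplied, so a
// single blend puts it over the full-resolution scene.
RGBA compositeOverScene(RGBA scene, RGBA smoke)
{
    RGBA out;
    out.r = smoke.r + scene.r * (1.0f - smoke.a);
    out.g = smoke.g + scene.g * (1.0f - smoke.a);
    out.b = smoke.b + scene.b * (1.0f - smoke.a);
    out.a = 1.0f;
    return out;
}
```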
 
Let me just de-lurk for a moment with a question...

A lot of things that need to be done in a time-saving manner in non-realtime CG are done with post effects. And if I'm not mistaken, that term (post effects) has also been mentioned in conjunction with consumer graphics hardware since the advent of programmable shaders. Wouldn't smoke be a candidate for being done that way?

Chris
 