Fluid Demo Movie (RapidMind)

Andrew Lauritzen

Hey guys,

Just a quick post to link a demo that I wrote a few months back doing some simple 2.5D fluid simulation. It's a basic iterative Kass-Miller-like algorithm similar to what they do in this paper.
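For readers unfamiliar with the technique, here's a minimal sketch of an iterative height-field update in that spirit (Python/NumPy with periodic boundaries; the parameter names and constants are made up for illustration, not taken from the demo):

```python
import numpy as np

def step_height_field(h, v, dt=0.1, c=1.0, damping=0.999):
    """One iteration of a wave-equation-style height-field update.

    h: (N, N) water heights; v: (N, N) vertical velocities.
    Each column is accelerated toward the mean of its neighbours
    (a discrete Laplacian), which is the core idea behind
    Kass-Miller-style 2.5D water.
    """
    # 5-point Laplacian; np.roll gives periodic boundaries
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
    v = damping * (v + dt * c * lap)  # accelerate, with a little damping
    h = h + dt * v                    # integrate heights
    return h, v
```

Running several such iterations per frame over a large grid is cheap on a GPU since every cell updates independently.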

All of the physics and shading are done on a single GPU using the RapidMind development platform (through the OpenGL backend in this case). Incidentally, it looks like there's a picture of the demo up on the main RapidMind site now as well (we demoed it at SIGGRAPH and Supercomputing this year) :)

Some features:
  • 1024x1024 fluid height-field, several physics iterations per frame
  • High-resolution normals generated dynamically from this height field
  • Geometry optionally down-sampled, but this actually isn't enabled in the video below since the G80 can rip through 2mil dynamic polygons per frame in addition to all of the rest of the physics and shading calculations :)
  • Variance shadow maps (of course!)
  • Ambient occlusion generated at startup (takes about 1 second) for static geometry using VSMs (similar to GPU Gems 2 method)
  • Reflection via cube map
  • Refraction via parallax-mapping-like two-planes approximation
  • Distance desaturation (fog-like)
  • Light and surface shaders are completely disjoint and combined dynamically at runtime using RapidMind shader algebra. This makes things like the AO generation code trivial (two lines long) given the already-written VSM light shader.
  • Terrain and water rendered as a single mesh
  • All physics written from scratch in 2 days
  • Rendering written in about 2 weeks
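As an aside, generating normals dynamically from a height field (the second bullet) typically comes down to central differences; here's a hypothetical NumPy sketch of that idea, not the demo's actual code:

```python
import numpy as np

def height_field_normals(h, cell_size=1.0):
    """Per-texel unit normals from a height field via central differences.

    h: (N, N) array of heights. Returns an (N, N, 3) array of normals.
    np.roll wraps at the edges, so border texels assume a periodic field.
    """
    dhdx = (np.roll(h, -1, 1) - np.roll(h, 1, 1)) / (2.0 * cell_size)
    dhdy = (np.roll(h, -1, 0) - np.roll(h, 1, 0)) / (2.0 * cell_size)
    # normal of z = h(x, y) is (-dh/dx, -dh/dy, 1), normalized
    n = np.stack([-dhdx, -dhdy, np.ones_like(h)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```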

Anyways here's the video link: Fluids Video (~130MB divx IIRC).

Do note that there are a few visible artifacts due to the refraction approximation (most noticeably at water edges, which will sometimes appear too dark), which are easily addressed by using something fancier like Parallax Occlusion Mapping, Relief Mapping, Cone Step Mapping, Per-Pixel Displacement Mapping, etc.

Unfortunately I can't post the binary right now as it is dependent on the RapidMind platform DLLs, for which I'm still discussing some licensing details. Hopefully I'll have something worked out soon, but I figured I'd post the video in the meantime.

For now, you'll have to just take my word that it runs at 60+fps on a single 8800GTX at 1680x1050, 4xAA with the full 2mil polygons. The video above was captured live using FRAPS.

Enjoy!
 
Wow, great looking demo. I didn't even think much of the ripples until you switched on the wireframe. Any idea how much time is spent just on the river calculations?
 
Do you know what the main bottleneck in the scene is? Fluid calculations? Rendering that many polygons? Or something else?
 
Do you know what the main bottleneck in the scene is? Fluid calculations? Rendering that many polygons? Or something else?
Definitely rendering that many polygons! It wouldn't be hard to set up a good LOD system (or like I said, just downsample the geometry - it looks almost the same) but since the G80 still runs it quite fast, I just didn't bother :)

The fluid calculations take almost no time... if you hit "f" in the demo, you can turn on a "fast mode" where it does 10x the physics updates per frame and you can see that there's no bottleneck there :) One could easily use a higher-resolution grid or an even smaller timestep without affecting performance too much.
 
Nice job Andy. I especially like how the water retreats and leaves wet soil marks behind (especially noticeable at about 3:00). Is your next project going to include splashing in the water :D
 
Fantastic water simulation, makes all other (real-time) water simulations look pale in comparison. And the framerate and the resolution, as well as the wet soil marks! :oops:
 
So, if you wrote the physics backend in two days, I'm guessing you really like RapidMind as a dev platform? well I mean I guess you work there so of course you like it in SOME way :p
 
The amplitude of the water seems unrealistic (as if the current is way faster than it actually is, see around 4:00), although I should say it looks wonderful.

It looks like it has the exact movement of fire licking at the ceiling because it is contained inside a building. Just a thought ;)
 
Yes, great video. The edges look more like fog than they do water. :D I'm sure you know the limitations of the demo so far. Hope we can see a finished one soon. :)
 
Nice demo Andy! :)

Could you elaborate a bit on this point:
Finding out which texel to fetch in the refraction map is much like finding which texel to fetch in parallax mapping. You're trying to estimate the first intersection of a ray into a height field, and in its most basic form parallax mapping uses a "two planes" approximation.

The primary difference is that parallax mapping uses the same ray direction as the eye vector, whereas refraction bends the ray by an amount that depends on the surface normal. The good thing is that the artifacts of ordinary parallax mapping aren't visible, since we can't judge correct refraction with the naked eye anyway.
 
Hey all, thanks for the feedback! Let me see if I can respond to each of the questions....

Nice job Andy. I especially like how the water retreats and leaves wet soil marks behind (especially noticable at about 3:00). Is your next project going to include splashing in the water :D
Yes, the "wetness" is a very simple effect, but it adds a lot to the visual appeal of the water flow. I also added some "drying/absorption" as you note to get the waves-on-the-beach effect that you can see in the small lagoon.

Regarding splashing water, that would be a lot of fun. It should be noted that you can actually fairly easily couple a particle system to the fluid simulation model that I'm using here, although making a convincing switch from grid->particles for things like waterfalls could be a challenge.

I've been meaning to mess around with some SPH stuff as well, as it's quite doable on modern graphics cards (as Simon Green has shown, among others). You still couldn't do anything with this large an area, but graphics cards will keep getting faster.

So, if you wrote the physics backend in two days, I'm guessing you really like RapidMind as a dev platform? well I mean I guess you work there so of course you like it in SOME way :p
Indeed, it makes stuff like this really easy and fun to do, and once you get used to the fact that you have full dynamic code generation at your disposal, you can implement much more complex data structures and algorithms than would otherwise be possible. If you read Aaron Lefohn's thesis on Glift, you really get a sense of how powerful these sorts of systems can be, and I fully expect them to become more and more common/useful as code gets more complex.

That said I can certainly see the current weaknesses of the RapidMind platform quite clearly as I work to get rid of them part-time as you note :) The current development focus isn't really on graphics, unfortunately, but I expect that graphics-specific features will continue to be supported and implemented as long as we continue to target graphics hardware (which is certainly for the foreseeable future).

The other fun thing that I've been meaning to try: now that we have a multicore x86 backend, I could just as easily run the fluid simulation there to compare speeds and take the burden off lower-end graphics cards; the disadvantage, of course, is that the height field would then have to be streamed to the GPU for rendering. Nevertheless, it's something that is very easy to try with RapidMind and it's on the TODO list!

The amplitude of the water seems unrealistic (as if the current is way faster than it actual is, see around 4:00), although I shall say it looks wonderful.
Yeah, the fluid model certainly is a huge simplification and some of the parameters aren't really physically based. In particular the friction parameter falls into this category. Much of what you're seeing can be improved with better parameter choices, but I didn't have a lot of time to tweak :)

That said, it's not hard to make the simulation more physically accurate either. I had a version running that also supported full advection (via Jos Stam's neat gather trick applied in 2D), but there were some issues with boundary conditions that I didn't have time to fully work through.
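For reference, the "gather" form of semi-Lagrangian advection that Stam describes looks roughly like this in 2D (an illustrative NumPy sketch with clamped boundaries; the version I had running isn't shown here):

```python
import numpy as np

def advect(q, u, v, dt):
    """Semi-Lagrangian 'gather' advection in the spirit of Stam's
    stable fluids: each cell traces backward along the velocity
    field and bilinearly samples the advected quantity there.

    q: (N, N) advected quantity; u, v: (N, N) velocity components
    in grid cells per unit time (u moves columns, v moves rows).
    """
    n = q.shape[0]
    j, i = np.meshgrid(np.arange(n), np.arange(n))  # i: row, j: column
    # departure points, clamped to the grid (crude boundary handling)
    x = np.clip(i - dt * v, 0, n - 1)
    y = np.clip(j - dt * u, 0, n - 1)
    i0 = np.floor(x).astype(int); i1 = np.minimum(i0 + 1, n - 1)
    j0 = np.floor(y).astype(int); j1 = np.minimum(j0 + 1, n - 1)
    s, t = x - i0, y - j0
    # bilinear interpolation of the four surrounding samples
    return ((1 - s) * (1 - t) * q[i0, j0] + (1 - s) * t * q[i0, j1] +
            s * (1 - t) * q[i1, j0] + s * t * q[i1, j1])
```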

Could you elaborate a bit on this point:
It's pretty much what Mintmaster said:
- refract the eye ray using the water normal
- compute the intersection of the resulting ray with a plane passing through the terrain position at the given grid location, with the plane normal given by the terrain normal at that point
- project back into the height field space, read the terrain normal/height texture and shade
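Those three steps can be sketched as follows (hypothetical Python; eta is roughly 1/1.33 for air-to-water, and all the names here are illustrative, not the demo's actual code):

```python
import numpy as np

def refract(d, n, eta):
    """Snell refraction of unit direction d through unit normal n.
    eta = n1/n2. Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def two_planes_refraction(p, d, n_water, terrain_p, terrain_n,
                          eta=1.0 / 1.33):
    """The two-planes approximation: refract the eye ray at the water
    surface (p, n_water), intersect it with the plane through the
    terrain point under this grid cell (terrain_p, terrain_n), then
    project the hit back into height-field (x, y) space for lookup."""
    r = refract(d, n_water, eta)
    if r is None:
        return None
    denom = np.dot(r, terrain_n)
    if abs(denom) < 1e-8:
        return None  # refracted ray parallel to the terrain plane
    t = np.dot(terrain_p - p, terrain_n) / denom
    hit = p + t * r
    return hit[:2]  # (x, y) coordinates to fetch from the height field
```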

It's quite a rough approximation, but it works well in areas with low curvature. Additionally as Mintmaster noted, it's kind of hard to tell whether refraction is actually "correct", so rough approximations are often appropriate.

The case where it really breaks down is when the camera is low, looking at water edges where the ground may have significant curvature. In these cases the "real" refracted ray intersection may be quite close to the primary ray intersection (i.e. shallow water near the water's edge), but the approximation may greatly overestimate the depth (since it has no knowledge that the ground is curving upwards sharply). The most noticeable result is way too much "absorption" in the water (since it believes the water is deeper than it is), so you get some dark blue "fog" near the water edges where it should be pretty transparent.

As I alluded to, this can be solved by using a more accurate ray-height-field intersection method such as parallax occlusion mapping, relief mapping or cone step mapping... or even just taking a few steps through the height-field. Even Newton iteration might be reasonable using the first approximation computed using the two-planes intersection.
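The "few steps through the height-field" fallback is just a linear ray march; a crude illustrative sketch (relief mapping and cone step mapping refine the same idea with smarter step sizes and a refinement pass):

```python
import numpy as np

def march_height_field(h, p, d, step=0.5, max_steps=64):
    """Brute-force linear march of a ray through a height field.

    h: (N, N) heights; p: (x, y, z) start point above the surface;
    d: unit direction with d[2] < 0. Returns the first sample point
    at or below the surface, or None if the ray misses or exits.
    """
    n = h.shape[0]
    pos = np.asarray(p, dtype=float)
    for _ in range(max_steps):
        pos = pos + step * np.asarray(d)
        ix, iy = int(pos[0]), int(pos[1])
        if not (0 <= ix < n and 0 <= iy < n):
            return None      # ray left the grid
        if pos[2] <= h[iy, ix]:
            return pos       # first sample below the surface
    return None
```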

Deferred rendering would also give you access to the depth buffer which would give a better approximation of the "proper" refracted ray length in eye space, assuming not *too* extreme indices of refraction.

That was a bit of brain dump... if anything didn't make sense, please don't hesitate to ask for clarification.

The edges look more like fog than they do water. :D
Good call: see above for the explanation of why that is.

Thanks again all for the feedback! I'll see what I can do about getting the water edge artifacts cleaned up and a binary released!
 
The other fun thing that I've been meaning to try: now that we have a multicore x86 backend, I could just as easily run the fluid simulation there to compare speeds and take the burden off lower-end graphics cards; the disadvantage, of course, is that the height field would then have to be streamed to the GPU for rendering. Nevertheless, it's something that is very easy to try with RapidMind and it's on the TODO list!
Can you tell us how well it scales yet? I'm quite curious :)
 
Can you tell us how well it scales yet? I'm quite curious :)
Do you mean specifically for the fluid example? I expect it to scale very well, but I'll certainly post results when I get to messing around with it :)

If you mean the x86 backend in general the results are often very good so far... there are a few benchmarks posted on the site IIRC, but it's not uncommon to see something like 2x speedup on one core over typical C++ code, and 10+x speedups on 8 cores.
 
If you mean the x86 backend in general the results are often very good so far... there are a few benchmarks posted on the site IIRC, but it's not uncommon to see something like 2x speedup on one core over typical C++ code, and 10+x speedups on 8 cores.
Wait, what? How does that work? What kind of overhead is in there in the first place, then? (unless I'm missing something obvious, like a whole bunch of rendering code and other stuff on the CPU)
 
Wait, what? How does that work? What kind of overhead is in there in the first place, then? (unless I'm missing something obvious, like a whole bunch of rendering code and other stuff on the CPU)
I'm talking about general RapidMind applications now, as I mentioned. The reasons we can see speedups like that (2x on one core and more) are numerous:
1) Dynamic code gen can often rip out a lot of the overhead of typical C++ code
2) The optimizer can be more aggressive than a C++ compiler, which has to deal with pointer aliasing: a real showstopper
3) Use of x86 SIMD instructions
4) Clever memory organization and data access patterns
5) "Auto-tuning"

Certainly there's nothing that RapidMind does that anyone else couldn't do, but there are some jobs that are better suited for a compiler.

Anyways I'll hopefully be able to let you know how the fluids demo runs on x86 soonish, but I have some projects coming up that I should work on first ;)
 
if anything didn't make sense, please don't hesitate to ask for clarification.

Makes sense now. Didn't cut through all the fancy words in the first iteration. But with the ground as a height map, it's not hard to see the connection to parallax mapping for refraction.
 
I've been meaning to mess around with some SPH stuff as well as it's quite doable on modern graphics cards (as Simon Green has shown, among others). You still couldn't do anything with this large of an area, but graphics cards will keep getting faster.

Nice demo! I agree with you that particle-based methods like SPH aren't well-suited to large volumes of water, but you can still get some good results on today's GPUs.

Here's a video of the latest particles demo from the CUDA SDK:
http://www.youtube.com/watch?v=RqduA7myZok

It's not quite as pretty as yours, but it gives you an idea of the performance possible.

I think the best fluid simulation for games is going to be achieved by combining the strengths of these and other techniques.
 
Nice demo! I agree with you that particle-based methods like SPH aren't well-suited to large volumes of water, but you can still get some good results on todays' GPUs.
Indeed! I was inspired by your demo at NVIDIA U to research the topic further :) Unfortunately I have to wait until CUDA supports Vista to run the demo.

It's not quite as pretty as yours, but it gives you an idea of the performance possible.
Meh, prettiness is secondary to cool physics for such demos - you should have seen the first version of this one before RapidMind decided to make it a booth demo ;) Looks very cool though and SPH is the sort of idea that will scale really well in the future IMHO.

I think the best fluid simulation for games is going to be achieved by combining the strengths of these and other techniques.
Agreed. Some of the most promising work that I've seen is with hybrid methods, such as doing a 2.5D simulation for the "underwater parts" (like in this demo) and a full 3D/particle simulation for the water surface. These sorts of simulations can be done efficiently and don't really look any different from doing it fully in 3D.

In any case it'll be fun to see what people do in the area in the future. Thanks for the link to the CUDA SPH demo video - I'll check out the "real thing" tomorrow at work (my work machine is XP).
 