New metaballs demo

Zengar said:
Metaballs are a mathematical concept and don't rely on the 3D card at all. This demo uses the OpenGL Shading Language for the reflections, hence the need for a recent graphics card :)

I'm guessing that's why changing AA or AF settings doesn't seem to affect framerate in this demo?
 
Right, the isosurface generation is completely CPU-bound. You may notice that changing the resolution doesn't affect the framerate either.
 
To make it more GPU-bound, you can reduce the grid resolution from 40x40x40 to something lower, like 16x16x16, by editing this line in Main.cpp and recompiling the app:

metaBalls.setSize(vec3(-150, -150, -150), vec3(150, 150, 150), 40, 40, 40);
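For example, dropping to a 16x16x16 grid while keeping the same bounding box would look something like this (going by the call above, the last three arguments are the grid dimensions):

metaBalls.setSize(vec3(-150, -150, -150), vec3(150, 150, 150), 16, 16, 16);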

Performance scales pretty linearly with the number of grid cells, so reducing it to 32x32x32 (roughly half the cells of 40x40x40) roughly doubles performance. Quality will of course be worse, though. At 16x16x16 I see some fillrate dependency (running at ~1000 fps), but the surface gets pretty angular. It would probably be a good idea to use a lower-resolution grid and then use something like N-patches to smooth it.
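To make the scaling argument concrete, here is a minimal, self-contained sketch of evaluating a metaball field over an NxNxN grid. This is not the demo's actual code; the ball positions, strengths, and field function are made up for illustration. The work grows with N^3 times the number of balls (40^3 is 64,000 samples versus roughly 33,000 at 32^3), which is why halving the cell count roughly doubles the speed of the CPU-side isosurface pass:

#include <cstdio>
#include <vector>

struct Ball { float x, y, z, strength; };

int main()
{
    const int N = 40;                          // grid resolution per axis (40, 32, 16, ...)
    const float minB = -150.0f, maxB = 150.0f; // same bounding box as the setSize() call above
    const float step = (maxB - minB) / (N - 1);

    // Made-up ball positions/strengths, purely for illustration
    std::vector<Ball> balls = { {  0.0f,  0.0f,   0.0f, 4000.0f },
                                { 50.0f, 20.0f, -30.0f, 3000.0f } };

    std::vector<float> field(N * N * N);

    // O(N^3 * balls) work: every grid point samples every ball
    for (int k = 0; k < N; k++)
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
            {
                float px = minB + i * step;
                float py = minB + j * step;
                float pz = minB + k * step;

                float sum = 0.0f;
                for (const Ball &b : balls)
                {
                    float dx = px - b.x, dy = py - b.y, dz = pz - b.z;
                    sum += b.strength / (dx * dx + dy * dy + dz * dz + 1.0f);
                }
                field[(k * N + j) * N + i] = sum;
            }

    std::printf("Evaluated %d field samples\n", N * N * N);
    return 0;
}

The triangle extraction (e.g. marching cubes) that follows a pass like this scales with the same cell count, which is where the near-linear behaviour comes from.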
 
I read it as meatballs as well... what in the world does that mean, dyslexia or something else? Maybe we were hungry :p
 
Iron Tiger said:
rwolf said:
Still waiting for Humusmark
But it would *obviously* be ATi biased. At least that's what we'd hear from nVidiots. (With no disrespect to sensible NV owners.)

He does work for ATi; it would be stupid to assume otherwise.

That is like waiting with bated breath for davidkirkmark... of course not really :p
 
Iron Tiger said:
But it would *obviously* be ATi biased. At least that's what we'd hear from nVidiots. (With no disrespect to sensible NV owners.)
More to the point, it'd be developed and tested on ATI hardware only, since Humus, last I heard, doesn't own any nVidia hardware. I do respect Humus, though, and I'm sure that if any such program ended up ATI-biased (for example, as compared to an average over modern games that use advanced shaders), it wouldn't be on purpose on his part, but rather due to his inability to properly optimize for nVidia hardware without having said hardware in front of him.
 
There'd only be a really dramatic increase in performance if it sends a lot of data back from the GPU to the CPU.
 
Chalnoth said:
More to the point, it'd be developed and tested on ATI hardware only, since Humus, last I heard, doesn't own any nVidia hardware.

Still true.
Yeah, I'm not going to write any benchmarks simply because I'm not the right guy to do it. The results would get heavily disputed. That said, there is a benchmark mode included in the framework, so all demos can be benched with results written to a file.
 