Ilfirin said:3.0 is a big tease - you can *almost* do a lot of really cool stuff, but not quite. 4.0 addresses these issues
Ilfirin said:Not particularly interesting, but really the uses are pretty minimal with 3.0 since you still can't realistically do any silhouette extractions on the GPU (and hence any modifications you make to the vertex positions will produce incorrect shadowing on anything that requires silhouettes).
Ilfirin said:And displacing a pre-tessellated model with a static displacement map is just plain dumb (why not just displace it once at load-time and forget about it?).
krychek said:Ilfirin said:And displacing a pre-tessellated model with a static displacement map is just plain dumb (why not just displace it once at load-time and forget about it?).
How about LOD in large terrains? You could have a single flat patch of vertices which are displaced based on the heightmap. Would be helpful if you are bandwidth restricted (during large movements across terrain). Storing as heightmaps also has the advantage that they can be compressed. Noise textures may also be blended in when the resolution of the heightmap texture becomes too low for the geometry.
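A rough CPU-side sketch of the idea krychek describes: a single flat grid of vertices displaced by sampling a heightmap, so only a byte or two of height data per vertex needs to travel, instead of a full 12-20 byte position. All names here are illustrative, not from any real engine; a real version would do this lookup in the vertex shader.

```python
def displace_patch(grid_size, heightmap, height_scale):
    """Return (x, y, z) vertices for a flat patch displaced by a heightmap.

    heightmap is a grid_size x grid_size list of byte values (0-255),
    standing in for the texture a vertex shader would sample per vertex.
    """
    vertices = []
    for z in range(grid_size):
        for x in range(grid_size):
            h = heightmap[z][x] / 255.0 * height_scale  # sample the "texture"
            vertices.append((float(x), h, float(z)))
    return vertices

# A 4x4 patch with a single raised texel in one corner.
hm = [[0] * 4 for _ in range(4)]
hm[0][0] = 255
patch = displace_patch(4, hm, 10.0)
print(patch[0])   # corner vertex lifted to full height: (0.0, 10.0, 0.0)
print(patch[5])   # interior vertex stays flat: (1.0, 0.0, 1.0)
```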
DemoCoder said:My point, Ilfirin, is that your criticism applies to the very notion of vertex shaders. If dynamic stencil volumes are the absolute priority, then NVIDIA and ATI should produce cards without vertex shaders, since they cannot be used at all for vertex transformations.
Basically, the 'T' part of T&L can't be used.
I think that's too extreme. There is a balance that can be struck between using stencil shadow volumes and using vertex shaders (including displacements in vertex shaders). For example, large static sections of the world can have static shadow volumes calculated, whereas dynamic models or displaced surfaces can have volumes extruded on the GPU. Or, some game engines will just use shadow buffers.
Ilfirin said:That'd be fine, if we had hardware tessellators, but we don't.. (and hence you're going to end up building the LODs on the host anyway.. so why not do the displacements while tessellating? Then you're only displacing once, rather than every single pass, on every single frame)
What do you mean by hardware tessellators? Why would we need that? My point was that storing as heightmaps saves a lot of space: 1-2 bytes compared to 12-20 bytes per vertex. The x-z info is the same for terrain anyway. Granted, it'll be unnecessarily displacing the same patch in subsequent passes, but with this method much more geometry (as heightmaps) can be passed than before, so moving through detailed terrain, or rather panning a zoomed view over large terrain, should be pretty smooth. Maybe SM4.0 will allow displacing once and saving the result.
Ilfirin said:Dynamic displacement maps (as, I'm sure, has been mentioned a hundred times) - evaluate some fluid dynamics in the pixel shader, save the result to a texture. Do a per-vertex texture lookup in the next pass and, tada, totally GPU-based 3D water.
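A toy sketch of the "fluid in a texture" loop Ilfirin mentions: one explicit step of the 2D wave equation over a height field (here a plain Python grid standing in for a floating-point render target), whose output would then be fetched per-vertex to displace a water mesh. Purely illustrative; the real version runs this update in a pixel shader, and the constants and names are made up.

```python
def wave_step(h, h_prev, c2=0.25, damping=0.99):
    """One explicit step of u_tt = c^2 * laplacian(u), fixed (zero) edges."""
    n = len(h)
    h_next = [[0.0] * n for _ in range(n)]
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            lap = (h[y][x - 1] + h[y][x + 1] + h[y - 1][x] + h[y + 1][x]
                   - 4.0 * h[y][x])
            h_next[y][x] = damping * (2.0 * h[y][x] - h_prev[y][x] + c2 * lap)
    return h_next

n = 8
h_prev = [[0.0] * n for _ in range(n)]
h = [[0.0] * n for _ in range(n)]
h[4][4] = 1.0  # drop a "raindrop" in the middle of the height field
for _ in range(3):
    h, h_prev = wave_step(h, h_prev), h
# The initial spike has collapsed and spread into neighbouring cells.
print(h[4][4], h[4][3])
```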
Guden Oden said:Sorry for replying to such an old post, heh, but it seems to me such an approach would just lead to waves that bob straight through anything placed on them - such as boats, for example, since the GPU isn't running the overall physics model as well...
That's not true in the general case.
Guden Oden said:But imagine a game with several speedboats leaving wakes that interact with each other and other boats, or on choppy seas. The boats would have inertia, center of gravity, they would have to react to player input, surface drag, possible impacts with other boats or buoys or such, or even land/underwater objects.
It's entirely possible, even if at this time one has to use some trick (like rendering into a texture or vertex buffer, or stuff like that).
Guden Oden said:How would you do all this on the GPU?
A wave operator can be evaluated very easily on a GPU, even with weird dynamic boundary conditions.
Guden Oden said:Would the game ever know the precise location of a wave if the wave is being rendered entirely by the GPU?
The real question is: does the game want to know where the wave or the boat is? Maybe.. it depends on the game.
Guden Oden said:Humus' waving flag demo clips through itself sometimes, wouldn't similar things happen on a routine basis if we put too much work on the GPU?
Humus's flag demo shows interpenetration, like tons of other cloth simulation demos that run on common CPUs.
krychek said:What do you mean by hardware tessellators? Why would we need that? My point was that storing as heightmaps saves a lot of space: 1-2 bytes compared to 12-20 bytes per vertex. The x-z info is the same for terrain anyway. Granted, it'll be unnecessarily displacing the same patch in subsequent passes, but with this method much more geometry (as heightmaps) can be passed than before, so moving through detailed terrain, or rather panning a zoomed view over large terrain, should be pretty smooth. Maybe SM4.0 will allow displacing once and saving the result.
If any complicated lighting is used for the terrain, the vertex shader will be idle anyway, so we lose nothing by displacing every frame. Of course this is a very specific use; I agree with your points in general, but I felt it does have some advantages in this case.
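A back-of-the-envelope version of krychek's storage argument, using an assumed 1024x1024 terrain patch: full positions at three 4-byte floats per vertex versus one byte of height per sample (the shared x-z grid positions don't need to be sent per vertex at all).

```python
# Assumed patch dimensions, purely for illustration.
side = 1024
samples = side * side

full_vertex_bytes = samples * 12   # 3 floats (x, y, z), 4 bytes each
heightmap_bytes = samples * 1      # one byte of height per sample

ratio = full_vertex_bytes // heightmap_bytes
print(ratio)  # 12x less data, before any heightmap compression
```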
Guden Oden said:Sorry for replying to such an old post, heh, but it seems to me such an approach would just lead to waves that bob straight through anything placed on them - such as boats, for example, since the GPU isn't running the overall physics model as well...
nAo said:It's entirely possible.
Guden Oden said:But imagine a game with several speedboats leaving wakes that interact with each other and other boats, or on choppy seas. The boats would have inertia, center of gravity, they would have to react to player input, surface drag, possible impacts with other boats or buoys or such, or even land/underwater objects.
The real question is: does the game want to know where the wave or the boat is? Maybe.. it depends on the game.
The GPU could render all the data the game-logic has to know in a (very small) buffer and the CPU can read it back a frame later.
Humus's flag demo shows interpenetration, like tons of other cloth simulation demos that run on common CPUs.
Can a GPU handle a more correct simulation? The answer is: yes!
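A minimal sketch of the readback scheme nAo describes above: the GPU reduces the full water simulation to only the few values the game logic cares about (say, the height under each boat), writes them into a very small buffer, and the CPU picks the result up a frame later. The class and names are hypothetical; the one-frame latency is modeled with a simple double buffer.

```python
class SimulatedReadback:
    """Models GPU-to-CPU readback where results arrive one frame late."""

    def __init__(self):
        self.in_flight = None  # "GPU" result not yet visible to the CPU

    def submit(self, height_field, boat_positions):
        # The GPU pass would reduce the whole field to a handful of samples.
        result = [height_field[y][x] for (x, y) in boat_positions]
        ready, self.in_flight = self.in_flight, result
        return ready  # last frame's samples, or None on the very first frame

field = [[0.0] * 4 for _ in range(4)]
field[2][1] = 3.5                     # a wave crest under the boat at (1, 2)
rb = SimulatedReadback()
frame1 = rb.submit(field, [(1, 2)])   # nothing ready yet
frame2 = rb.submit(field, [(1, 2)])   # now frame 1's result arrives
print(frame1, frame2)                 # None [3.5]
```

The game-visible state lags the graphics by one frame, which is exactly the tradeoff debated in the posts below.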
Guden Oden said:It's entirely possible to do a realistic water physics model for several boats (including accurate collision detection and the application of forces from waves coming in different directions) entirely on the GPU at playable frame rates?
Very weird question.. actually GPUs are MUCH better than CPUs at this kind of task.
Guden Oden said:What game would not want to know?
Many games wouldn't want to. A game can do fast physics simulations on the GPU for objects that don't directly interact with gameplay.
Guden Oden said:You'd want to hear splashes and such of course and thunks when you crash into stuff.
That's why I wrote that it depends upon the application.
Guden Oden said:Furthermore, it might be a multiplayer title, and then it would be beneficial if all players had the same waves on their screen, and indeed if the boats are in the same positions.
Actually, all multiplayer games have those kinds of problems; physics simulations on the GPU don't add an additional burden regarding this particular problem (except for different precision support on different hardware running on different platforms...).
Guden Oden said:Hm, is it really realistic to expect the game and the player to run a frame *behind* the graphics? That just seems pretty backwards to me.
What?! Actually, a lot of games run a frame 'behind' the graphics!
Guden Oden said:But how much of a performance penalty will that mean? The flag must do collision detection on itself on a per-polygon basis.
I don't understand your point. What's basically wrong with that?
Guden Oden said:Now, on a flag it probably doesn't matter, I know of no game where there are like millions of flags waving around, but on clothes for example it could have a somewhat larger performance impact.
Like in every game, a developer should know how to make tradeoffs.. how to balance calculations between the CPU and GPU.
Ilfirin said:3.0 is a big tease - you can *almost* do a lot of really cool stuff, but not quite. 4.0 addresses these issues
Do you have the 4.0 specifications?