What is the coolest thing to do with geometry shader + streamout?

As per title.

So far we have some pretty lame examples of geometry shader usage (like shadow volume extrusion, when shadow volumes are already out of favour). We have displacement mapping, but we know mass tessellation is not going to be feasible.

What are the really cool and feasible shaders that will come out of GS + SO?
 
One usage scenario is render-to-cubemap: the geometry shader emits each triangle six times, transforming it and setting the face index (and viewport) appropriately.
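A minimal sketch of that in D3D10 HLSL, assuming a per-face view-projection array (g_mViewProjCube and the struct names are illustrative, not from any particular sample):

```hlsl
// Hypothetical sketch: re-emit each input triangle once per cube face and
// route it to that face's render-target slice via SV_RenderTargetArrayIndex.
// (SV_ViewportArrayIndex could be set the same way if per-face viewports
// are needed.)

cbuffer PerCubeMap
{
    float4x4 g_mViewProjCube[6]; // assumed: one view-proj matrix per face
};

struct GS_IN  { float4 wpos : POSITION; }; // world-space position from the VS
struct GS_OUT
{
    float4 pos  : SV_Position;
    uint   face : SV_RenderTargetArrayIndex; // selects the cube-map face
};

[maxvertexcount(18)] // 6 faces * 3 vertices
void RenderCubeMapGS(triangle GS_IN input[3], inout TriangleStream<GS_OUT> stream)
{
    for (uint f = 0; f < 6; ++f)
    {
        GS_OUT o;
        o.face = f;
        [unroll]
        for (uint v = 0; v < 3; ++v)
        {
            o.pos = mul(input[v].wpos, g_mViewProjCube[f]);
            stream.Append(o);
        }
        stream.RestartStrip();
    }
}
```

The cube map itself is bound as a six-slice render-target array, so a single draw call fills all six faces.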

With regard to Stream Output:

* Output from Geometry Shader (or pre-tessellation vertex shader) can be saved for re-use
* Data entirely resident in video memory
* Example uses: character skinning, multi-pass lighting with the resultant mesh (see the sketch below)
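As a concrete sketch of that last point (Direct3D 10, C++; pDevice, pGSBlob, pSkinnedVB and srcVertexCount are assumed to exist, and the buffer needs both D3D10_BIND_VERTEX_BUFFER and D3D10_BIND_STREAM_OUTPUT):

```cpp
// Skin once, light many times: stream the skinned mesh out to a buffer,
// then redraw that buffer for each lighting pass.

D3D10_SO_DECLARATION_ENTRY soDecl[] =
{
    // SemanticName, SemanticIndex, StartComponent, ComponentCount, OutputSlot
    { "POSITION", 0, 0, 3, 0 },
    { "NORMAL",   0, 0, 3, 0 },
};
UINT stride = 6 * sizeof(float); // float3 position + float3 normal

ID3D10GeometryShader* pSkinSO = NULL;
pDevice->CreateGeometryShaderWithStreamOutput(
    pGSBlob->GetBufferPointer(), pGSBlob->GetBufferSize(),
    soDecl, 2, stride, &pSkinSO);

// Pass 1: run skinning once, capturing the result to video memory.
// (Bind a NULL pixel shader if this pass shouldn't rasterize anything.)
UINT offset = 0;
pDevice->GSSetShader(pSkinSO);
pDevice->SOSetTargets(1, &pSkinnedVB, &offset);
pDevice->Draw(srcVertexCount, 0);

// Passes 2..n: unbind SO and reuse the captured mesh per light.
ID3D10Buffer* pNullBuf = NULL;
pDevice->SOSetTargets(1, &pNullBuf, &offset);
pDevice->GSSetShader(NULL);
pDevice->IASetVertexBuffers(0, 1, &pSkinnedVB, &stride, &offset);
pDevice->DrawAuto();
```

DrawAuto() is what makes this attractive: the number of vertices the GPU actually streamed out never has to round-trip to the CPU.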
 
The last three algorithm ideas I've had each stumbled on one minor technical limitation. Each was solved by the GS.

I forget one :) but the other two were pixel-shader AA filtering, and a much *much* better version of this idea I had. It would run fully on the GPU for a start.

All sorts of crazy things will become possible IMO. It's just that it's going to take a bit of out-of-the-box thinking (argh, I hate that term) for people to hack them into a working state.
 
You could implement the algorithm from "Efficient Displacement Mapping by Image Warping". (The same algorithm IBM used for the Cell terrain demo, BTW.)
 
> We have displacement mapping, but we know mass tessellation is not going to be feasible.

Yep, true for conventional displacement mapping.
Take a look at the displacement mapping demo in the DX SDK. I use a similar approach in my work introduced in this thread. In that context the GS seems quite promising.
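For concreteness, here is a hypothetical sketch of the GS side of such an approach: each base triangle is extruded along its vertex normals into a prism shell that bounds the displaced surface. (The SDK demo goes further and splits the shell into tetrahedra for the per-pixel trace; that split, and all per-pixel attributes, are omitted here. g_mViewProj and g_Extrude are assumed constants.)

```hlsl
struct VS_OUT { float3 wpos : POSITION; float3 nrm : NORMAL; };
struct GS_OUT { float4 pos  : SV_Position; };

cbuffer PerFrame
{
    float4x4 g_mViewProj;  // assumed view-projection matrix
    float    g_Extrude;    // assumed maximum displacement height
};

float4 Project(float3 p) { return mul(float4(p, 1), g_mViewProj); }

[maxvertexcount(15)] // 3 (top cap) + 3 sides * 4 (quad strips)
void ExtrudeShellGS(triangle VS_OUT t[3], inout TriangleStream<GS_OUT> stream)
{
    // Offset each vertex along its normal to form the shell's top cap.
    float3 top[3];
    [unroll] for (int i = 0; i < 3; ++i)
        top[i] = t[i].wpos + g_Extrude * normalize(t[i].nrm);

    GS_OUT o;

    // Top cap of the shell.
    [unroll] for (int j = 0; j < 3; ++j)
    {
        o.pos = Project(top[j]);
        stream.Append(o);
    }
    stream.RestartStrip();

    // Three side quads, one two-triangle strip per edge.
    // (Winding/culling details are glossed over in this sketch.)
    [unroll] for (int e = 0; e < 3; ++e)
    {
        int n = (e + 1) % 3;
        o.pos = Project(t[e].wpos); stream.Append(o);
        o.pos = Project(top[e]);    stream.Append(o);
        o.pos = Project(t[n].wpos); stream.Append(o);
        o.pos = Project(top[n]);    stream.Append(o);
        stream.RestartStrip();
    }
}
```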
 
> Yep, true for conventional displacement mapping.
> Take a look at the displacement mapping demo in the DX SDK. I use a similar approach in my work introduced in this thread. In that context the GS seems quite promising.

Yep, I did. I don't quite understand the ray-trace part, though. And does that solution produce correct lighting, since no real vertices are generated?
 
> Yep, I did. I don't quite understand the ray-trace part, though. And does that solution produce correct lighting, since no real vertices are generated?

View rays are traced and intersected with a heightfield inside a tetrahedral mesh which was generated in the GS from the base mesh. Correct lighting can be achieved by using the heightfield normal at the intersection position.
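A hedged sketch of the per-pixel side, assuming a simple fixed-step march rather than the demo's per-tetrahedron intersection (g_HeightMap, g_NormalMap and NUM_STEPS are illustrative names):

```hlsl
Texture2D    g_HeightMap;  // height in [0,1]
Texture2D    g_NormalMap;  // precomputed heightfield normals
SamplerState g_Linear;

static const int NUM_STEPS = 32;

float3 ShadeDisplaced(float3 rayStart, float3 rayDir, float3 lightDir)
{
    // Fixed-step march in texture space: advance until the ray
    // falls below the heightfield surface.
    float3 p     = rayStart;
    float3 delta = rayDir / NUM_STEPS;

    [loop] for (int i = 0; i < NUM_STEPS; ++i)
    {
        float h = g_HeightMap.SampleLevel(g_Linear, p.xy, 0).r;
        if (p.z <= h)
            break;             // approximate intersection found
        p += delta;
    }

    // The lighting question: since no real vertices exist, the normal comes
    // from the heightfield at the intersection, not from the base triangle.
    float3 n = normalize(g_NormalMap.SampleLevel(g_Linear, p.xy, 0).xyz * 2 - 1);
    return saturate(dot(n, lightDir)).xxx;
}
```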
 
> View rays are traced and intersected with a heightfield inside a tetrahedral mesh which was generated in the GS from the base mesh. Correct lighting can be achieved by using the heightfield normal at the intersection position.
It seems to me that we lose multisampled rendering this way...
 
> It seems to me that we lose multisampled rendering this way...

Hm, yes, probably. Aliasing can be a problem with these kinds of techniques. Maybe a different kind of filtering can be applied. I haven't thought about it yet, but I might be working on the problem in the near future.
 