Geometry Shaders

rwolf

Can someone describe exactly how they are used in the pipeline? Any good reference material?
 
This is just too easy to resist....

1. Open up your favourite web browser.
2. Go to http://www.google.com
3. Type in Geometry Shader or D3D10 Geometry Shader into the search field.
4. Start clicking and reading.

:p ;)
 
Just in case Neeyik's suggestion is too difficult and time consuming...

rwolf said:
Can someone describe exactly how they are used in the pipeline? Any good reference material?
Point your browser at this page and watch the PDC'05 presentation. Most threads discussing the GS end up quoting/referencing from that slide deck :smile:

Jack
 
Neeyik said:
This is just too easy to resist....

1. Open up your favourite web browser.
2. Go to http://www.google.com
3. Type in Geometry Shader or D3D10 Geometry Shader into the search field.
4. Start clicking and reading.

:p ;)

Yes, I deserved that. Thank you very much. :) I knew I was going to get nailed when I posted the thread.

I did google the topic by the way and there wasn't much to go on.
 
JHoxley said:
Just in case Neeyik's suggestion is too difficult and time consuming...


Point your browser at this page and watch the PDC'05 presentation. Most threads discussing the GS end up quoting/referencing from that slide deck :smile:

Jack

I already watched the presentation. That is actually what caused me to spawn this thread. There hasn't been a whole lot of discussion about what you can actually do with these things, and the potential looks interesting.
 
I've been working on some D3D10 code featuring geometry shaders in between browsing forums today...

I'm not looking at creating cutting-edge graphics yet - I'm more interested in the greater level of information available throughout the pipeline. My current GSs have just been computing properties of the geometry and using them to influence rendering/rasterization.
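A minimal sketch of that kind of per-primitive computation - derive a face normal inside the GS and forward it as a flat attribute to the pixel shader. All structure and semantic names here are illustrative, not taken from any actual code mentioned in the thread:

```hlsl
struct GSIn  { float4 pos : SV_Position; float3 wpos : WORLDPOS; };
struct GSOut { float4 pos : SV_Position; float3 faceNrm : FACE_NORMAL; };

[maxvertexcount(3)]
void FaceNormalGS( triangle GSIn tri[3], inout TriangleStream<GSOut> stream )
{
    // Plane normal from the triangle's two edge vectors -
    // the GS sees all three vertices, so this needs no precomputed data.
    float3 n = normalize( cross( tri[1].wpos - tri[0].wpos,
                                 tri[2].wpos - tri[0].wpos ) );
    for (int i = 0; i < 3; ++i)
    {
        GSOut o;
        o.pos = tri[i].pos;
        o.faceNrm = n;   // same value for all three vertices -> per-primitive
        stream.Append(o);
    }
}
```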

You could always check out the SDK for examples of fancy GS usage. The object-space motion blurring sample (new in February) is very neat - and its documentation has a lot of detail.

hth
Jack
 
MipMap said:
I tried to discuss it a few times in different threads but didn't get any bites...

Nobody seems to want to talk about it. Weird for a forum like this.
 
I'll try to discuss this topic anyway...

I was just looking at an Nvidia demo for the GeForce 7800 called Luna. The demo contains an extremely realistic-looking female character with animated facial features. When you switch wireframe mode on, you can see that hundreds of thousands of polygons are required in the facial area to achieve this. However, with geometry shading, most of these polygons could be generated on the fly.

A big problem for realistic character animation (especially facial expressions) comes when the radius of a curved surface changes - in current animation this causes the surface to deform unrealistically due to the fixed polygon count used. With geometry shading you could adjust the number of polygons generated for the surface to compensate for this.

In summary, we can expect to see much more realistic character animation using geometry shading.
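As a rough illustration of that kind of on-the-fly refinement, a GS can split each incoming triangle into four by emitting the edge midpoints (one level of subdivision). This is a hypothetical sketch, not code from any of the demos discussed - the vertex layout and the idea of displacing midpoints along the normal are assumptions:

```hlsl
struct GSVert { float4 pos : SV_Position; float3 nrm : NORMAL; };

[maxvertexcount(12)]
void SubdivideGS( triangle GSVert tri[3],
                  inout TriangleStream<GSVert> stream )
{
    GSVert v[6];
    // Original corners.
    for (int i = 0; i < 3; ++i)
        v[i] = tri[i];
    // Edge midpoints (these could be displaced along the averaged
    // normal to better approximate a curved surface).
    for (int e = 0; e < 3; ++e)
    {
        v[3 + e].pos = 0.5f * (tri[e].pos + tri[(e + 1) % 3].pos);
        v[3 + e].nrm = normalize(tri[e].nrm + tri[(e + 1) % 3].nrm);
    }
    // Emit the four sub-triangles, each as its own strip.
    int idx[12] = { 0,3,5,  3,1,4,  5,4,2,  3,4,5 };
    for (int t = 0; t < 4; ++t)
    {
        stream.Append(v[idx[3*t + 0]]);
        stream.Append(v[idx[3*t + 1]]);
        stream.Append(v[idx[3*t + 2]]);
        stream.RestartStrip();
    }
}
```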
 
Yes, I suppose that will happen...

BUT as I mentioned before, the general consensus seems to be that the GS won't be best suited as a programmable tessellator for higher-order surfaces. Yes, it can do that, but no, "first gen" hardware probably won't have the performance for it. Rather, the GS allows topological information to be either retrieved or generated - something that is much less "wow" but still very cool.

Then again, I suppose we'll have to wait for some real hardware samples to see :D

Jack
 
Microsoft WFG 2.0 Presentation said:
Geometry shader stage
- "Sees" entire primitive (3 vertices of triangle)
  - Can have adjacent vertices too (6 vertices total)
- Limited amplification
  - Extrude edges, expand points, generate shells, ...
- Per-primitive processing
  - Generate extra per-primitive constant data for pixel shaders
    - Constant colors, normals, etc.
    - Compute plane equations, barycentric parameters, etc.
- Combine with stream output or arrayed resources
  - Render to cube map
  - Render multiple shadow maps

Looks like shadow maps can be produced with the shader. That should really speed things up.
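The "render to cube map" bullet above boils down to the GS replicating each triangle six times, once per face, and routing each copy to a render-target array slice via SV_RenderTargetArrayIndex. A hedged sketch, assuming a constant buffer of per-face view-projection matrices (g_CubeViewProj is a made-up name, and the mul convention depends on how the matrices are laid out):

```hlsl
cbuffer PerCube { float4x4 g_CubeViewProj[6]; };

struct GSIn  { float4 wpos : POSITION; };
struct GSOut
{
    float4 pos  : SV_Position;
    uint   face : SV_RenderTargetArrayIndex; // selects the cube face
};

[maxvertexcount(18)]
void CubeMapGS( triangle GSIn tri[3], inout TriangleStream<GSOut> stream )
{
    for (uint f = 0; f < 6; ++f)
    {
        GSOut o;
        o.face = f;
        for (int v = 0; v < 3; ++v)
        {
            o.pos = mul(tri[v].wpos, g_CubeViewProj[f]);
            stream.Append(o);
        }
        stream.RestartStrip();
    }
}
```

The same pattern covers the "render multiple shadow maps" case: swap the cube-face matrices for per-light matrices and emit into a texture array of depth maps.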
 
http://www.gamedev.net/reference/programming/features/d3d10overview/

An interesting part of the new attributes that the IA generates as well as the GS's ability to work at the triangle level is that of GPU-selectable properties. It is quite conceivable that most (if not all) of a material system can be executed directly on the GPU. Consider a case where each triangle is given a Primitive ID by the IA which is used by the GS or PS to look up a set of attributes from an array provided as a set of constants that determines how the pixels are finally rendered. Whether this eliminates the need for material-based sorting in the application won't be known until developers get their hands on some real Direct3D 10 hardware – but it definitely opens up the possibilities.

Looks like the geometry shader is going to be more significant than I thought.
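In D3D10 the Primitive ID idea from the article doesn't even require GS work: SV_PrimitiveID can be read directly in the pixel shader and used to index a constant array of material records. A conceptual sketch - g_Materials and MAX_MATERIALS are assumed names for illustration, and a real material record would hold more than a colour:

```hlsl
#define MAX_MATERIALS 64
cbuffer Materials { float4 g_Materials[MAX_MATERIALS]; }; // rgb + shininess, say

float4 MaterialPS( float4 pos : SV_Position,
                   uint   pid : SV_PrimitiveID ) : SV_Target
{
    // Pick this triangle's attributes on the GPU -
    // no CPU-side material sorting needed for the lookup itself.
    float4 mat = g_Materials[pid % MAX_MATERIALS];
    return float4(mat.rgb, 1.0f);
}
```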
 
rwolf said:
Looks like shadow maps can be produced with the shader. That should really speed things up.
Well, the single-pass cube map rendering should be both an interesting quality improvement and an obvious performance increase.

It's very common for cubemapped reflections in games to be rendered at a much lower resolution, with less detail, and less often (e.g. every 3rd frame) - partly just because you can (and people won't notice), and partly to offset the pain of re-rendering the scene six times.

So far I've seen a lot of people talk about single-pass cubemap rendering as a neat performance optimization. I wonder if it'll also translate into some improved visuals/reflections due to it being a less painful effect...

Generate extra per-primitive constant data for pixel shaders
This line (from the WGF 2.0 list) is the same thing I was referring to in my GDNet article about GSs (potentially) being able to do on-the-fly material setup.

Cheers,
Jack
 