Next gen graphics and Vista.

Jawed said:
I'm still trying to get a detailed description of what the Geometry Shader is supposed to do - any hints would be appreciated.
Jawed

I always thought the geometry shader is the only shader that is allowed to create or destroy vertices and triangle lists. But that is just a stab in the dark.

*stab* *stab*
 
My understanding is that tessellation generates vertices. The patent describes, amongst other things, using a tessellation factor.

I presume, for example, that in adaptive tessellation the first pass computes the tessellation factor based on the "screen size" of the poly/mesh/quad (whatever). On the second pass a different shader program takes as input the group of relevant vertex data, the tessellation factor and other gubbins, and proceeds to generate the final vertices/triangles.
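That two-pass idea can be caricatured in plain Python (a hedged sketch only — all function names, the pixel target, and the factor cap are invented for illustration; real hardware would do this in shader code):

```python
# Toy model of adaptive tessellation: pass 1 picks a tessellation factor
# from an edge's projected screen size, pass 2 generates the vertices.
import math

def screen_edge_length(p0, p1, viewport_height, fov_y, depth):
    """Approximate projected length in pixels of an edge at a given view depth,
    for a symmetric perspective projection."""
    world_len = math.dist(p0, p1)
    scale = viewport_height / (2.0 * depth * math.tan(fov_y / 2.0))
    return world_len * scale

def tessellation_factor(edge_px, target_px=8.0, max_factor=64):
    """Pass 1: split an edge so each sub-edge covers ~target_px pixels."""
    return max(1, min(max_factor, round(edge_px / target_px)))

def tessellate_edge(p0, p1, factor):
    """Pass 2: emit the interpolated vertices along one edge."""
    return [tuple(a + (b - a) * i / factor for a, b in zip(p0, p1))
            for i in range(factor + 1)]
```

A 33-pixel edge with an 8-pixel target would be split into 4 segments (5 vertices); a sub-pixel edge still gets a factor of 1 so it is never lost entirely.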

The patent talks about higher order surfaces, too. Seemingly as a part of two-pass tessellation.

Erm...

So, vertex generation isn't just a feature of the Geometry Shader.

Someone recently described the GS as a "triangle shader" rather than a vertex shader. I'm not sure what that means though...

Jawed
 
Jawed said:
Courtesy of nAo:
I'm still trying to get a detailed description of what the Geometry Shader is supposed to do - any hints would be appreciated.
Sadly, just to be really boring, I'd guess the information is still under NDA. I read a comment from the DX team that they would either be revealing some stuff at PDC, or that a knock-on of the PDC was that they would be able to reveal more stuff afterwards.

As much as I'd love to say something I'm not going to take any chances with breaking NDA's ;)

I haven't had a chance to look at those PDC slides yet, so maybe there's something in them :)

From the public diagrams I've seen (a couple have been posted here recently), the GS exists further "up" the pipeline from the VS, that is, the GS feeds into the VS. As a consequence it is most likely to operate at the tessellation level, allowing things like higher-order primitives. In fact, it'd be pretty silly if you couldn't generate a HOS :)

HOS were shunned a bit (e.g. ATI TruForm) because they didn't work well with shadow volumes (the other big thing at the time). Although shadow mapping is more favoured these days, a HOS created in/via the GS would get around some (if not all) of the issues with the original DX8 HOS and shadow volumes.

hth
Jack
 
JHoxley said:
I haven't had a chance to look at those PDC slides yet, so maybe there's something in them :)

You will find this in the 2 DX Presentations:

Geometry Shader (Per-Primitive Operations)

Operates on entire primitives [with adjacency]
Material selection/setup to reduce # of draw() calls
Set up barycentrics to exceed # of interpolators
Compute edge lengths
Compute plane equations
Compute silhouette edges
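Several of those bullets are just per-triangle vector arithmetic. A minimal plain-Python sketch of three of them (names invented; a real GS would express this in HLSL, with adjacency supplied by the pipeline):

```python
# Illustrative per-primitive operations from the slide: edge lengths,
# the triangle's plane equation, and a silhouette-edge test via adjacency.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def edge_lengths(v0, v1, v2):
    return (math.dist(v0, v1), math.dist(v1, v2), math.dist(v2, v0))

def plane_equation(v0, v1, v2):
    """Return (nx, ny, nz, d) such that n.v + d = 0 for points on the triangle."""
    n = cross(sub(v1, v0), sub(v2, v0))
    d = -sum(a * b for a, b in zip(n, v0))
    return (*n, d)

def is_silhouette_edge(tri, adj_tri, view_dir):
    """The shared edge is on the silhouette when this triangle faces the
    viewer and its adjacent triangle (known via GS adjacency) faces away."""
    dot = lambda n: sum(a * b for a, b in zip(n, view_dir))
    n0 = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
    n1 = cross(sub(adj_tri[1], adj_tri[0]), sub(adj_tri[2], adj_tri[0]))
    return dot(n0) < 0 <= dot(n1)
```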

Geometry Shader
Geometry Amplification/De-Amplification

Emits new primitives of a specified output type
Limited data amplification/de-amplification: Output 0-1024 values per invocation

No more 1 vertex in, 1 vertex out limit
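The amplification/de-amplification point can be modelled in a few lines of Python (a toy sketch, not real driver behaviour — the function names are invented, and only the 0-1024 output limit comes from the slides):

```python
# Toy model of GS amplification/de-amplification: each invocation takes one
# primitive and may emit anywhere from zero vertices up to the cap.
MAX_OUTPUT_VALUES = 1024

def geometry_stage(primitives, gs_func):
    """Run gs_func once per input primitive; concatenate whatever it emits."""
    out = []
    for prim in primitives:
        emitted = list(gs_func(prim))
        if len(emitted) > MAX_OUTPUT_VALUES:
            raise ValueError("GS invocation exceeded its output limit")
        out.extend(emitted)            # may be empty: de-amplification
    return out

def cull_or_split(tri):
    """Example GS program: drop degenerate triangles (0 outputs), split the
    rest into three triangles around the centroid (amplification)."""
    v0, v1, v2 = tri
    if v0 == v1 or v1 == v2 or v0 == v2:
        return []                      # emit nothing
    c = tuple(sum(axis) / 3.0 for axis in zip(v0, v1, v2))
    return [(v0, v1, c), (v1, v2, c), (v2, v0, c)]
```

The contrast with the old 1-in/1-out vertex shader is exactly that `gs_func` may return a list of any length (within the cap), not a single transformed vertex.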
 
From:

http://216.55.183.63/pdc2005/slides/PRS311_Balaz.ppt

[attached slide images: b3d31.jpg, b3d32.jpg]


Jawed
 
I think I spoke too soon :)

I read through those PDC slides from the above link after writing that post.

Prior to that the only information I had was a fairly dense tech spec - and it's not on this machine atm, so I couldn't check it. Oh well.

Jack
 
My impression from this documentation is that the Geometry Shader in DX10 is a logical step in the GPU pipeline, rather than an explicitly physical step.

Though the documentation doesn't relate the VS->GS->PS pipeline to a unified architecture, it seems to me that with both VS and PS unified, GS functions themselves would also run on the same hardware.

The input to the GS seems to consist of a vertex stream and the output seems to consist of one or more vertex streams, with some new datatypes.

Jawed
 
To be nitpicky the input is a primitive stream and the output is a vertex stream. You are correct about the new primitive types as the picture clearly shows them.
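The distinction being nitpicked here - a flat vertex stream versus the assembled primitives the GS actually sees - can be caricatured in plain Python (toy example, invented names):

```python
# Primitive assembly: group an indexed vertex stream into 3-vertex
# triangles, which is the granularity a per-primitive stage operates on.
def assemble_triangles(vertices, indices):
    """Return a list of triangles, each a tuple of three vertices."""
    return [tuple(vertices[i] for i in indices[k:k + 3])
            for k in range(0, len(indices) - 2, 3)]
```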
 
Oh, it seems to me that the input to the GS is a vertex stream, but the GS interprets the vertex stream as primitives, to perform per-primitive operations :???:

Jawed
 
Ah, I think it's better to think of these input and output streams as "geometry streams", consisting of vertices, primitives and "objects".

The geometry shader works on primitives (tessellation), and I suppose (multiple) higher order surfaces are defined per object, with the geometry shader combining tessellation and HOS processing at the same time.

The initial stream, presumably, consists of vertices (position, colour), primitives (quantity, normals, lighting) and objects (set of HOS).

After the first pass through GS, objects have been translated into screen space and tessellation factors for the correct LOD/HOS have been determined. The output stream now consists of just vertices and primitives.

The second pass implements the tessellation/HOS and outputs a vertex/primitive stream for rasterisation.

Hope I'm making progress :neutral:

Jawed
 
Ideally, it would have been nice to have subdivision surfaces or at least high order Bezier triangles. Unfortunately, the geometry shader data shown above (i.e. a triangle + 3 additional vertices) may only be sufficient for quadratic Bezier triangles, which'd be worse than N-Patches. I suppose you might be able to use the surface normals of the triangle AND the other 3 points....but I have my doubts about the continuity you'd achieve.
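Why "a triangle + 3 additional vertices" maps to a quadratic Bézier triangle: 3 corner control points plus 3 edge control points are exactly the 6 controls a quadratic patch needs. A hedged evaluation sketch (names and argument layout invented for illustration):

```python
# Evaluate a quadratic Bezier triangle at barycentric (u, v, w), w = 1-u-v:
#   p = b200*u^2 + b020*v^2 + b002*w^2 + 2*b110*u*v + 2*b011*v*w + 2*b101*u*w
def quadratic_bezier_triangle(corners, edge_ctrl, u, v):
    """corners   = (b200, b020, b002)  -- the triangle's vertices
    edge_ctrl = (b110, b011, b101)  -- the 3 extra control points"""
    w = 1.0 - u - v
    b200, b020, b002 = corners
    b110, b011, b101 = edge_ctrl
    weights = (u * u, v * v, w * w, 2 * u * v, 2 * v * w, 2 * u * w)
    points = (b200, b020, b002, b110, b011, b101)
    return tuple(sum(wt * p[i] for wt, p in zip(weights, points))
                 for i in range(len(b200)))
```

With the edge controls placed at the actual edge midpoints the patch collapses to the flat triangle, which illustrates the limited expressiveness being complained about: only by pulling those 3 points off the midpoints do you get any curvature at all.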
 
Simon F said:
Ideally, it would have been nice to have subdivision surfaces or at least high order Bezier triangles. Unfortunately, the geometry shader data shown above (i.e. a triangle + 3 additional vertices) may only be sufficient for quadratic Bezier triangles, which'd be worse than N-Patches. I suppose you might be able to use the surface normals of the triangle AND the other 3 points....but I have my doubts about the continuity you'd achieve.
Well, you can get a lot more than just three additional vertices per vertex as input into the vertex shader, so it may be that the geometry shader's only purpose is tessellation: you could do the appropriate deforming of the geometry in the vertex shader.
 
3dcgi said:
Kind of makes you think Microsoft had something other than tessellation in mind doesn't it.
Oh I know some things you can do with it because it's a subset of some stuff I was designing, but it could have been better (IMHO).
 
Dave Baumann said:
Not according to David Kirk, in numerous past questionings of him (from myself and others) - IIRC it wasn't so long ago that he was saying that even under a unified shader API the balance of operations between the VS and PS was such that units dedicated and tuned to those tasks would, in their opinion, still be of more importance. Now, in more recent interviews that stance appears to have softened to the point where, IIRC again, he said unified probably would be a necessity at some point in time, which is why I think they will go that route eventually, but given the previous comments not for their first iteration of DX10 hardware.

fwiw the last thing i saw from nv regarding this (taken from AT E3 interview):

Their stance continues to be that at every GPU generation they design and test features like unified shader model, embedded DRAM, RDRAM, tiling rendering architectures, etc... and evaluate their usefulness. They have apparently done a unified shader model design and the performance just didn't make sense for their architecture.

NVIDIA isn't saying that a unified shader architecture doesn't make sense, but at this point in time, for NVIDIA GPUs, it isn't the best call. From NVIDIA's standpoint, a unified shader architecture offers higher peak performance (e.g. all pixel instructions, or all vertex instructions) but getting good performance in more balanced scenarios is more difficult. The other issue is that the instruction mixes for pixel and vertex shaders are very different, so the optimal functional units required for each are going to be different. The final issue is that a unified shader architecture, from NVIDIA's standpoint, requires a much more complex design, which will in turn increase die area.

NVIDIA stated that they will eventually do a unified shader GPU, but before then there are a number of other GPU enhancements that they are looking to implement. Potentially things like a programmable ROP, programmable rasterization, programmable texturing, etc...
 
I remember Kirk's maybe most famous, or rather "infamous", comment about the R300 having a 256-bit bus. It was something in line with "Ahh, but how does 1000MHz DDR2 sound to you?"... Google it up, Jawed... And then some months later, FX5900... :)
 
Sadly (cos the drama looks like it was fun!) I only caught mere glimmers of the whole 2002/03 tussle between ATI/NV as it was happening :cry:

But yeah, I guess 1000MHz 128-bit DDR2 didn't save the day, did it? Hahahaha.

Though the last laugh would appear to be 6600GT which manages quite happily with what is roughly 5800U memory.

Edit: I know what you're saying, though, about NV doing an about turn on the memory, going with 256-bit.

Jawed
 