Softening edges automatically

OpenGL guy said:
The vertex shader only works on a single vertex at a time, so it can't be used since it doesn't know anything about edges.

The pixel shader works on a single pixel at a time and also has no concept of edge.

That's not 100% true; in fact, for several years a variety of demos and articles have accessed edge and primitive data per vertex (and potentially per pixel, but I haven't tried it yet).

It involves altering the way you represent your data to the GPU, but it works. I'll have to remember to send my ShaderX2 article to Dave this weekend, so I can just point you at that in future.

In a nutshell: upload your edge data and/or primitive data into constant RAM (or a texture, if you have texture access), and then send indices and barycentric coordinates in via the vertex streams. Then, per vertex, use those indices and barycentric coordinates to reconstruct your current primitive/edge etc. and do whatever you like with the data.

The main limitation is batch size, but with geometry texturing on the way, that limitation may be reduced as well.
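The scheme above can be sketched on the CPU side (plain Python rather than real shader code, and all names here are illustrative, not from the article): per-primitive data sits in a constant table, each vertex stream element carries a primitive index plus barycentric coordinates, and the "vertex shader" reconstructs its position by weighting the primitive's corners.

```python
# Constant table: one entry per triangle, holding its three corner positions.
# In a real DX9-era shader this would sit in constant registers or a texture.
primitives = [
    ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),  # triangle 0
    ((1.0, 0.0), (1.0, 1.0), (0.0, 1.0)),  # triangle 1
]

# Vertex stream: (primitive index, barycentric coordinates u, v, w).
stream = [
    (0, (1.0, 0.0, 0.0)),  # corner 0 of triangle 0
    (0, (0.0, 1.0, 0.0)),  # corner 1 of triangle 0
    (0, (0.0, 0.0, 1.0)),  # corner 2 of triangle 0
    (1, (0.5, 0.5, 0.0)),  # midpoint of an edge of triangle 1
]

def reconstruct(prim_index, bary):
    """Weight the triangle's corners by the barycentric coordinates."""
    (ax, ay), (bx, by), (cx, cy) = primitives[prim_index]
    u, v, w = bary
    return (u * ax + v * bx + w * cx, u * ay + v * by + w * cy)

positions = [reconstruct(i, b) for i, b in stream]
```

Because the whole primitive is addressable from every vertex, each vertex can see the edges it belongs to, which is what makes the per-edge tricks possible.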
 
DeanoC said:
OpenGL guy said:
The vertex shader only works on a single vertex at a time, so it can't be used since it doesn't know anything about edges.

The pixel shader works on a single pixel at a time and also has no concept of edge.

That's not 100% true; in fact, for several years a variety of demos and articles have accessed edge and primitive data per vertex (and potentially per pixel, but I haven't tried it yet).

It involves altering the way you represent your data to the GPU, but it works. I'll have to remember to send my ShaderX2 article to Dave this weekend, so I can just point you at that in future.

In a nutshell: upload your edge data and/or primitive data into constant RAM (or a texture, if you have texture access), and then send indices and barycentric coordinates in via the vertex streams. Then, per vertex, use those indices and barycentric coordinates to reconstruct your current primitive/edge etc. and do whatever you like with the data.

The main limitation is batch size, but with geometry texturing on the way, that limitation may be reduced as well.

I only understand about half of what you say. But I don't let that bother me too much.

:D

I also saw some demos that use edges, like the NPRHatching demo from ATi, which shows very nicely what they do. And I understand that you know how it can be done.

Two other questions:

- Could you make such a shader that can be activated for games that don't use DX9, to improve the looks?

- When I activate 16*AF with some games, the 'textures' that show dynamic effects like lightning (probably done with shader programs) become very ugly. Is that because the driver tries to use a texture that isn't present? Or is there another reason why that doesn't seem to work?

The last question is meant to see if the same thing might happen when smoothing edges.
 
Dave B(TotalVR) said:
Let's not forget about DX9 displacement mapping.....

Ohh go on, what if I say please :)

Displacement mapping is dead, long live Geometry Texturing.

Displacement mapping is just one very boring aspect of the most important thing to happen to vertex units since the move from software to hardware.

Combined with unified buffers (texture/render-target/vertex/index buffers), a whole new world will be opened. If we had been distracted by displacement mapping technology, it might have delayed the revolution by several years.
 
DiGuru said:
I only understand about half of what you say. But I don't let that bother me too much.

:D

Don't worry, I get that a lot; the only problem is I think it's because I talk a lot of rubbish :D

DiGuru said:
I also saw some demos that use edges, like the NPRHatching demo from ATi, which shows very nicely what they do. And I understand that you know how it can be done.
Oddly enough, the NPRHatching demo doesn't work on edges in the way you probably think. In fact, NPRHatching doesn't work on edges at all; it works on pixels... It's a post-process that performs operations to extract edges, in a similar manner to Photoshop edge detection. The actual polygon edges aren't used in any special way at all (that's not strictly true: by using object colouring they improve the edge detection rate, but that's not really relevant).
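The kind of image-space edge detection being described here can be sketched with a Sobel filter (a minimal plain-Python sketch, not the demo's actual shader; the function name and test image are illustrative):

```python
def sobel_edges(img):
    """Return per-pixel gradient magnitude; high values mark edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel kernels over the 3x3 neighbourhood.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A 4x4 grayscale image with a vertical brightness step down the middle;
# the filter responds strongly along the step and not in the flat regions.
img = [[0, 0, 1, 1]] * 4
edges = sobel_edges(img)
```

On a GPU this runs per pixel over the rendered frame, which is why it needs no knowledge of the underlying polygon edges.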

DiGuru said:
- Could you make such a shader that can be activated for games that don't use DX9, to improve the looks?
The problem with any 'automatic' technique is that it will always make some art look worse (classic examples are N-Patches on things that are meant to be sharp or flat but whose normals aren't).
But ignoring that issue, then in theory yes, though it would probably use an awesome amount of GPU and/or CPU power. If we are talking pure theory, it should be possible to capture the vertex and index streams, upload them to constant memory (or a geometry texture), and then run a subdivision-surface vertex shader on the model. Skinning might be tricky... but you could then choose how far you wanted to push the model towards the limit surface.
Whether it would look good enough to be worth it I couldn't say. It's the kind of thing I might consider doing if I were Sony, for PS1 games on PS3 though...
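The "capture and subdivide" idea can be sketched on the CPU (a hedged plain-Python illustration, not real shader code): split each captured triangle into four via its edge midpoints. A real subdivision-surface scheme would also smooth the vertex positions toward the limit surface; this sketch shows only the uniform refinement step.

```python
def midpoint(a, b):
    """Midpoint of two points given as coordinate tuples."""
    return tuple((ac + bc) / 2.0 for ac, bc in zip(a, b))

def subdivide(triangles):
    """One subdivision level: each triangle becomes four smaller ones."""
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Three corner triangles plus the central one.
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

mesh = [((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))]
refined = subdivide(mesh)  # apply repeatedly for finer levels
```

Choosing how many levels to apply corresponds to choosing how far to push the model towards the limit surface.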

DiGuru said:
- When I activate 16*AF with some games, the 'textures' that show dynamic effects like lightning (probably done with shader programs) become very ugly. Is that because the driver tries to use a texture that isn't present? Or is there another reason why that doesn't seem to work?

The last question is meant to see if the same thing might happen when smoothing edges.
I'm not sure what the error is that you're seeing with 16*AF, but any 'auto' improvement system usually has a few cases where it actually makes the art look worse.
 