First Killzone screenshot/details? So says USAToday..

The problem with most of the existing algorithms I've seen is that they don't look good in motion. Small perturbations in object position or camera position cause instability and result in lots of popping in and out of detail. One moment, the left hand of the character has fingers, but the right hand is just a box, the next moment, the left hand shifts to a box and the right hand gets fingers. :)

Exactly! The algorithms in graphics books are not necessarily well suited to gaming. Has any game to date done this successfully? As far as I know, most games still use the traditional methods. Personally, I think this is one of those features that's begging to be done in hardware.

V3 said:
With the soccer ball can't you just billboard that geometry with distributed polygons and just rotate the maps around it to simulate its rotations with respect of the camera.

Well, I picked a ball because it was an easy example to make a point. You can imagine the same principle with a human arm, a statue, a car, etc...
 
Exactly! The algorithms in graphics books are not necessarily well suited to gaming. Has any game to date done this successfully? As far as I know, most games still use the traditional methods. Personally, I think this is one of those features that's begging to be done in hardware.

This is so interesting, how would you suggest supporting this in hardware? Through a pre-tessellator or by killing primitives in a geometry-shader-like unit?

I remember an article by Deano about storing connectivity info in the mesh data sent down to the GPU: you could send a high poly mesh and kill a primitive in a geometry shader if it's not a silhouette poly based on some heuristics, then use this connectivity info to fill that gap somehow, effectively rendering fewer polygons on flat regions of the mesh pointing towards the camera.
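
Very roughly, the kind of classification I have in mind (untested CPU-side C++ sketch of what the geometry shader would do; the struct layout and names are made up for illustration):

Code:
#include <vector>

struct Vec3 { float x, y, z; };
Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Triangle {
    Vec3 centroid;     // precomputed
    Vec3 faceNormal;   // precomputed, unit length
    int  neighbour[3]; // the "connectivity info": adjacent triangle per edge, -1 if open
};

// Keep a triangle at full detail only if it touches the silhouette, i.e. the
// facing direction flips between it and one of its neighbours (or the edge is open).
bool touchesSilhouette(const std::vector<Triangle>& mesh, int i, Vec3 eye)
{
    const Triangle& t = mesh[i];
    bool facing = dot(t.faceNormal, eye - t.centroid) > 0.0f;
    for (int e = 0; e < 3; ++e) {
        int n = t.neighbour[e];
        if (n < 0) return true;                                              // open edge
        bool nFacing = dot(mesh[n].faceNormal, eye - mesh[n].centroid) > 0.0f;
        if (facing != nFacing) return true;                                  // silhouette edge
    }
    return false;
}

Everything that comes back false on the flat, camera-facing regions is a candidate for killing; the "fill that gap somehow" step is the part I'm hand-waving.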

I wonder if it'd play well with the hardware, and whether you'd end up wasting too much bandwidth sending down the high poly mesh plus the extra info. Hmmm... have to try it some day.
Could also play well with Cell, and with memexport on the 360, but there are several memory footprint issues in storing those big meshes.

And it's a bit OT :D
 
Note that I know really rather little about 3D graphics hardware and such so if this is silly then just ignore this post.

Would it be possible to create hardware that can draw curved surfaces efficiently? Couldn't you add angle values and such to your polys' coordinates that indicate to the hardware with what curves the surface of the polygon should be drawn (similar to how arcs are defined in vector graphics software)? In a sense you could even introduce shader-like formulas and texture-like patterns that create complex surface structures.
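
To make the "angle values" idea concrete: with just each vertex's position and normal you can already bend an edge into a curve. A rough C++ sketch (I believe this is roughly what the old N-Patches/TruForm scheme did, but the formula is from memory, so treat the details with suspicion):

Code:
struct Vec3 { float x, y, z; };
Vec3  operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator-(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b)        { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Curved edge between two vertices, built from nothing but their positions and
// unit normals: the 1/3 and 2/3 points of the straight edge are pulled into the
// tangent plane of the nearest endpoint, then the result is evaluated as a
// cubic Bezier curve. t = 0 gives p0, t = 1 gives p1.
Vec3 curvedEdgePoint(Vec3 p0, Vec3 n0, Vec3 p1, Vec3 n1, float t)
{
    Vec3 c0 = (p0 * 2.0f + p1 - n0 * dot(p1 - p0, n0)) * (1.0f / 3.0f);
    Vec3 c1 = (p1 * 2.0f + p0 - n1 * dot(p0 - p1, n1)) * (1.0f / 3.0f);
    float s = 1.0f - t;
    return p0 * (s*s*s) + c0 * (3*s*s*t) + c1 * (3*s*t*t) + p1 * (t*t*t);
}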

I'm sure this has been tested already, and it doesn't necessarily make rendering graphics less work to calculate; it probably doesn't make shadows and lighting in general any easier either (you'd still probably need something close to raytracing or raycasting, though maybe you could assume an approximation and only calculate roughly the light that reaches that particular poly and then take that as a fixed value for the local calculations).

But one problem it could solve is model complexity: it could greatly reduce the complexity of the polygon models, and it could maybe combine with the idea mentioned above of focusing on the outline of a shape for rendering.

Anyway, I'm sure this is probably a well discussed thing, but if it's an interesting discussion I'm sure we can split off this thread.
 
...you could send a high poly mesh and kill a primitive in a geometry shader if it's not a silhouette poly based on some heuristics, then use this connectivity info to fill that gap somehow...

Could you briefly describe what an algorithm looks like which determines whether a polygon is a silhouette polygon?
 
Could you briefly describe what an algorithm looks like which determines whether a polygon is a silhouette polygon?
Check the angle (you can do this with a dot product) between its normal and the view vector; this angle is close to pi/2 for silhouette polygons.
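
In code it's basically this (quick sketch; the normal is assumed to be unit length, and the threshold is however wide a band around the silhouette you want to keep at full detail):

Code:
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Angle between the face normal and the direction towards the eye is close to
// pi/2 exactly when the cosine (the dot product with unit vectors) is close to 0.
bool nearSilhouette(Vec3 faceNormal, Vec3 faceCenter, Vec3 eye, float threshold = 0.2f)
{
    Vec3 v = eye - faceCenter;
    float cosAngle = dot(faceNormal, v) / std::sqrt(dot(v, v));
    return std::fabs(cosAngle) < threshold;
}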
 
Check the angle (you can do this with a dot product) between its normal and the view vector; this angle is close to pi/2 for silhouette polygons.

This mostly works, but not always. Well, it works for objects with volume, but it can fail for objects that don't have volume. For example, foliage. Imagine the camera is looking right at a palm tree leaf. There are no 'side polys' to this leaf, so the above algorithm would miss this case because the silhouette polys on this leaf have normals looking right at the camera. Although you could just add some geometric 'volume' to the leaf edges to make it work again.
 
This mostly works, but not always. Well, it works for objects with volume, but it can fail for objects that don't have volume. For example, foliage. Imagine the camera is looking right at a palm tree leaf. There are no 'side polys' to this leaf, so the above algorithm would miss this case because the silhouette polys on this leaf have normals looking right at the camera. Although you could just add some geometric 'volume' to the leaf edges to make it work again.

In the case of leaves I would just not use this idea; I find leaves are their own special breed of beast that need their own care and love anyway.
Thanks mate, your earlier post simply gave me so many ideas; I'm now toying with this idea of adding geometry on the silhouette using a geometry shader and morphing/blending/do-whatever into POM per-fragment. Silhouette problem solved. It's just a matter of implementing it now... ehm.
 
...it works for objects with volume, but it can fail for objects that don't have volume. For example, foliage...
Usually that stuff is rendered as billboards; in this case you don't have a polygonal edge to begin with, as it's probably being 'killed' by alpha test/alpha blend or whatever you use to remove the billboards' outer areas. Moreover, you can fake a volume appearance with a simple billboarded normal map.
 
This mostly works, but not always...
Thanks, that was my thought too, checking the angle is quite a naive way to approach this matter.
 
The problem with most of the existing algorithms I've seen is that they don't look good in motion. Small perturbations in object position or camera position cause instability and result in lots of popping in and out of detail. One moment, the left hand of the character has fingers, but the right hand is just a box, the next moment, the left hand shifts to a box and the right hand gets fingers. :)
This is not particularly hard, actually. There was a demo on the Geforce3 that did continuous deformation that looked pretty decent, and Black and White was a game that did continuous dynamic deformation (though they kept the poly counts a little too low, making the transitions rather obvious).

Basically, at a certain threshold before more detail is necessary (using whatever metric you can), subdivide a triangle by keeping all new vertices inside the original plane. Then continuously displace them as your metric tells you more detail is necessary.
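
Something along these lines (a rough, untested sketch: a 1-to-4 split whose new vertices start at the edge midpoints, still in the original plane, and then slide towards the full-detail positions as the LOD metric ramps up, so there's never a pop):

Code:
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Tri { Vec3 v[3]; };

// target[k] is where the k-th new vertex should end up at full detail
// (from a displacement map, subdivision rule, whatever). lod goes 0 -> 1.
std::vector<Tri> subdivideContinuous(const Tri& t, const Vec3 target[3], float lod)
{
    // New vertices start exactly on the edge midpoints, i.e. in the original
    // plane, so the moment of subdivision itself changes nothing visually.
    Vec3 m01 = (t.v[0] + t.v[1]) * 0.5f;
    Vec3 m12 = (t.v[1] + t.v[2]) * 0.5f;
    Vec3 m20 = (t.v[2] + t.v[0]) * 0.5f;

    // Then displace them continuously as the metric asks for more detail.
    m01 = m01 + (target[0] - m01) * lod;
    m12 = m12 + (target[1] - m12) * lod;
    m20 = m20 + (target[2] - m20) * lod;

    return { Tri{{t.v[0], m01, m20}}, Tri{{m01, t.v[1], m12}},
             Tri{{m20, m12, t.v[2]}}, Tri{{m01, m12, m20}} };
}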

The popping problem is something that affects a lot of algorithms, though. Most papers that attempt soft shadowing (smoothies, penumbra wedges, etc.) ignore this artifact, and there you don't have an easy fix.
 
This is so interesting, how would you suggest supporting this in hardware?

I'm kind of against supplying 'connecting info' for two reasons. First, it's error-prone if it's human generated. And second, it can be extremely labor intensive, more so if geometry changes over the course of the project. It would require 'locking down' geometry at some point, otherwise some poor shmuck will have to redo the connecting info. I remember at PS3Devcon one of the speakers describing progressive meshes and saying how an artist would have to flag important edges. Wow, I don't want to be that poor artist! Now if someone has an algorithm that can create all this connecting info correctly, then we're in business.

As a first step, I'd be plenty happy with a future gpu that just discards triangles that it deems unnecessary and restitches the rest together. So when you feed it a triangle list, it assumes these triangles are all related and can be tossed/tweaked amongst each other. This doesn't really solve the silhouette issue since you'd have to crank up the triangle count art side and hence kill bandwidth ;( But, it would be useful just as a means to get the triangle count dramatically down with existing poly counts.
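
For the "deems unnecessary" part, I'm picturing something as dumb as a flatness test (made-up sketch with thresholds pulled out of thin air; the restitching is the genuinely hard bit and isn't shown):

Code:
#include <cmath>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// A triangle is a discard candidate if it's nearly coplanar with all of its
// neighbours: tossing it and restitching barely moves the surface.
bool isDiscardCandidate(Vec3 faceNormal, const Vec3* neighbourNormals, int neighbourCount,
                        float maxAngleRadians = 0.02f)  // ~1 degree; unit normals assumed
{
    float cosLimit = std::cos(maxAngleRadians);
    for (int i = 0; i < neighbourCount; ++i)
        if (dot(faceNormal, neighbourNormals[i]) < cosLimit)
            return false;
    return true;
}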

As a second step though, I think the only real way is for the gpu to generate the filler verts it needs on the silhouette on the fly, that way you don't have the bandwidth penalty. I mean, you can do all this cpu side as well, but I don't think thats a good solution because you'd still need uber high poly models that would need to be created/loaded/processed, and hence you'd be using memory and eating system bandwidth anyways.

In my ideal pipe dream world, our geometry can all be of a certain minimum density, and the detail triangles are all generated right at draw time by the gpu. So less memory used, faster load times, less system bandwidth, and artists that don't want to kill you.
 
Thanks, that was my thought too, checking the angle is quite a naive way to approach this matter.
Checking the angle is the only meaningful way to do it; foliage is a completely different matter, as it does NOT have a polygonal edge to begin with. If you are trying to enhance polygonal detail on the silhouette, who cares about foliage?
I mean... if you still render foliage as non-billboarded geometry... well, good luck with that.
 
Guys, you're making it much more complicated than it is: multiresolution representations were invented ages ago, and what you need is to use those representations plus adaptive tessellation. Normal maps are still usable, but you have to tweak your tessellation metric to account for the extra detail given by the normal map on facing or quasi-facing triangles.
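
To make the normal map part concrete, the kind of metric tweak I mean is roughly this (sketch, every constant made up): tessellate towards a target screen-space edge length, but damp the factor on triangles that face the camera, since the normal map already provides the detail there, and keep it high near the silhouette.

Code:
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3  operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator-(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b)        { return a.x*b.x + a.y*b.y + a.z*b.z; }
float length(Vec3 a)             { return std::sqrt(dot(a, a)); }

// Tessellation factor for one patch edge: aim for a target on-screen edge length,
// then spend fewer triangles where the patch faces the camera (the normal map
// carries that detail) and more near the silhouette. patchNormal is unit length.
float edgeTessFactor(Vec3 p0, Vec3 p1, Vec3 patchNormal, Vec3 eye,
                     float pixelsPerUnitAtUnitDistance = 800.0f,
                     float targetEdgePixels = 12.0f)
{
    Vec3  mid       = (p0 + p1) * 0.5f;
    Vec3  toEye     = eye - mid;
    float dist      = length(toEye);
    float screenLen = length(p1 - p0) * pixelsPerUnitAtUnitDistance / dist; // crude projection

    float facing = std::fabs(dot(patchNormal, toEye)) / dist; // 1 = head on, 0 = silhouette
    float damp   = 1.0f - 0.75f * facing;                     // up to 4x fewer on facing areas

    return std::max(1.0f, screenLen * damp / targetEdgePixels);
}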
edit: fixed typos
 
Usually that stuff is rendered as billboards; in this case you don't have a polygonal edge to begin with, as it's probably being 'killed' by alpha test/alpha blend or whatever you use to remove the billboards' outer areas. Moreover, you can fake a volume appearance with a simple billboarded normal map.

Whoa, billboards are evil! Well, maybe not evil, but kinda old gen. I'm thinking more of Crysis-type foliage, which is 3D and can move, twist, be shot up and fall to the ground realistically.
 
Checking the angle is the only meaningful way to do it; foliage is a completely different matter, as it does NOT have a polygonal edge to begin with. If you are trying to enhance polygonal detail on the silhouette, who cares about foliage?
I mean... if you still render foliage as non-billboarded geometry... well, good luck with that.

While the angle would be close to 90° on high poly models, it may look completely different on objects with not so many polys; imagine checking it for a cube. I was hoping for a more general algorithm.
 
Whoa, billboards are evil! Well, maybe not evil, but kinda old gen. I'm thinking more of Crysis-type foliage, which is 3D and can move, twist, be shot up and fall to the ground realistically.
Emh... this really does not change things that much. If you do tessellation you just build a metric that accounts for that, and you don't need to compute where your silhouette is, as a non-manifold mesh like that would have its non-shared edges pre-tagged anyway to get tessellation working properly.
Have you ever tried to tessellate anything like that?
If your tessellation algorithm only works on manifolds, then your foliage has volume anyway ;)
 
While the angle would be close to 90° on high poly models, it may look completely different on objects with not so many polys; imagine checking it for a cube. I was hoping for a more general algorithm.
What are you talking about? That algorithm works perfectly for a cube.
 
Emh... this really does not change things that much. If you do tessellation you just build a metric that accounts for that, and you don't need to compute where your silhouette is, as a non-manifold mesh like that would have its non-shared edges pre-tagged anyway to get tessellation working properly.
Have you ever tried to tessellate anything like that?
If your tessellation algorithm only works on manifolds, then your foliage has volume anyway ;)

We haven't here as our title has little to no foliage in it at all. Makes that easy ;) Here's the problem though. There's always talk of "hey, just account for that in your tessellation code", but I have yet to see this done on a shipping game. Even in all the latest and greatest console games, take a peek at screen shots and you still see edges on silhouettes. This leads me to believe that:

a) no one has figured out how to smooth silhouettes
b) people have figured it out, but can't get it working in real time
c) people have figured it out and shipped a game with it, but their technique is ineffective

I mean, if it's really that easy and/or solved in books everywhere, you'd think it would have made its way into a shipped title by now! Given that it hasn't leads me to believe that it's not a trivial problem to properly solve. If there is a game out there that does successfully do this though, I'd love to see it in action! Anyone know of any?
 
What are you talking about? That algorithm works perfectly for a cube.
A cube is probably a bad example. It doesn't need subdivision.

I think he's referring to something like, say, a cylinder. When you view it from the end, you'll see polygon edges even when the angle between the vertex normal and view vector is far from 90 degrees. Of course, there isn't any way of looking at a cylinder that avoids its round edges, so angle-dependent subdivision is pretty useless there anyway.

For anything that can benefit from angle-dependent subdivision, your idea is fine IMO. The harder part is subdividing effectively.
 
...So when you feed it a triangle list, it assumes these triangles are all related and can be tossed/tweaked amongst each other.
It might be worth looking at this from a lateral POV, rather than following on from the status quo. Perhaps there are a number of places to handle the matter differently? For example, the idea of 'triangle lists'. If GPUs did away with triangle lists and every triangle was submitted as three vertices, or some other representation, you wouldn't have to worry about building usable lists from processed meshes. Yes, that has other issues associated with it, but every idea should be considered if you're trying something new - even 'unthinkable' ideas! How about a HOS silhouette mesh, or even just hardware HOS implementations that render per-pixel without tessellation?

This could be a good area for PS3 Cell-rendering experimentation. There's no hardware limit to 3D representation, so people can try a whole host of model representation and rendering strategies.
 