Subdivision surfaces for characters, what's the hurdle?

V3

Veteran
So here we are in '08 with multicore CPUs like Cell; I thought subdiv surfaces for characters would be more widespread. Why are subdivision surfaces not used at least for cinematic models? What's stopping developers?

NVIDIA has demonstrated subdivision surfaces on GPU with the Timbury demo in their NVDemo engine. Why are developers not using this? What's the shortcoming of this technique?

In this old '05 thread here, nAo hinted that Cell would work well with subdiv, so what's wrong with Cell that hinders the adoption of subdiv surfaces?

If subdiv isn't possible this gen, what's needed in next-gen hardware for the adoption of subdiv? Or are low-polygon characters with normal mapping as good as it's going to get?
 
Laa-Yosh is going to chime in any minute now about art asset workflow... Those damned artists! :)
 
Subdivs are pretty ideal for us, practically every pipeline has been tailored around them for years... Geri's Game and Toy Story 2 were the beginning at Pixar, Weta transitioned on Two Towers, even ILM replaced NURBS a few years ago.

The thing is, why would subdivs be better on their own, without displacement? All they can do is smooth out curves, creating unnaturally smooth surfaces and silhouettes, and they usually add to the modeling time because of extra requirements. Also, tiny polygons will ruin the efficiency of the quad pipelines, and subdivs can't provide continuous LOD for terrain and such either.
So it's not a clear win-win situation, and that's probably the reason why they aren't really used yet. Once we have adaptive tessellation and displacement, they'll probably pick up some momentum, but it needs another large increase in processing power and memory to get there...
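For anyone who hasn't seen what the smoothing step actually does, here's a minimal 1D analogue: Chaikin corner cutting on a closed polyline, the same refine-and-average idea that Catmull-Clark applies to a quad mesh. The square control cage and the three levels are purely illustrative, not from any real pipeline.

[code]
#include <cstdio>
#include <vector>

// Minimal 1D analogue of surface subdivision: Chaikin corner cutting on a
// closed polyline. Each pass roughly doubles the point count and smooths the
// curve, which is the same refine-and-average idea subdivision surfaces
// apply to a mesh.
struct Vec2 { float x, y; };

static std::vector<Vec2> chaikin(const std::vector<Vec2>& in)
{
    std::vector<Vec2> out;
    out.reserve(in.size() * 2);
    for (size_t i = 0; i < in.size(); ++i)
    {
        const Vec2& a = in[i];
        const Vec2& b = in[(i + 1) % in.size()];   // closed loop wraps around
        out.push_back({ 0.75f * a.x + 0.25f * b.x, 0.75f * a.y + 0.25f * b.y });
        out.push_back({ 0.25f * a.x + 0.75f * b.x, 0.25f * a.y + 0.75f * b.y });
    }
    return out;
}

int main()
{
    // Coarse "control cage": a square that rounds off with every pass.
    std::vector<Vec2> pts = { {0,0}, {1,0}, {1,1}, {0,1} };
    for (int level = 1; level <= 3; ++level)
    {
        pts = chaikin(pts);
        std::printf("level %d: %zu points\n", level, pts.size());
    }
    return 0;
}
[/code]

Note that all it can ever produce is the smooth limit shape of the cage; any surface detail beyond that has to come from displacement, which is the point above.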
 
Once we have adaptive tessellation and displacement, they'll probably pick up some momentum, but it needs another large increase in processing power and memory to get there...

The NVDemo is already doing adaptive tessellation and displacement mapping. I am not sure if the Timbury demo uses displacement mapping, but the GPU Gems article described the option to use displacement mapping with it.
 
Short answer: Compared to just baking down to boring, static, precalculated data, generating data on the fly is usually slower and always a lot more complicated.

That said, things are getting better. As this-gen's engines mature, it becomes a lot more feasible to explore new techniques rather than "Just do the simplest possible thing so we can ship this game before the money runs out!" Also, this gen's hardware really is a lot better suited to curved surfaces than the last.

However, dynamic tessellation of curved surfaces really only helps when those curves are each fairly large on the screen. In every scheme I've seen, the vert/second rate for tessellated meshes is lower than that of equivalent static meshes for the first few splits. Remember that the poly count usually goes up by a factor of 4 for each split. So unless you really need a single curve that takes up enough screen space to reasonably utilize 64 polys, you are probably better off up-rezing the mesh offline. Tessellation does help in the cinematic close-ups, but it can't replace traditional LOD for the more common need of lower-rez models in the distance. When I turn on wireframe mode, most of our characters already look almost solid-filled from the normal gameplay camera POV. Very few of those triangles form smooth curves that could be replaced by a regular subd. Displaced subd's are a lot more interesting, but I haven't had the time to experiment with them, much less work out an art pipeline. Too busy shipping the basics (...or slightly better ;) ).
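To put numbers on that factor-of-4 growth, here's a trivial sketch (the single starting primitive is just illustrative) showing where a figure like 64 polys per curve comes from:

[code]
#include <cstdio>

// Each uniform split turns one triangle (or quad) into four, so the
// primitive count after n splits is base * 4^n.
int main()
{
    long long prims = 1;   // illustrative: one coarse primitive
    for (int splits = 0; splits <= 4; ++splits)
    {
        std::printf("splits: %d  primitives: %lld\n", splits, prims);
        prims *= 4;
    }
    return 0;
}
[/code]

Three splits of a single primitive already means 64 polys on screen, which very few gameplay-camera curves can usefully spend.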

I'm sure there are games taking advantage of the Xenos tessellation at least to do pn-triangles. I'd be surprised if there aren't already racing games on the PS3 doing tessellation. Can anyone cite with certainty specific examples?
 
It doesn't help that most of today's game art - thanks to the coming of Zbrush - has amazing amounts of geometry detail in the normal maps, which would take insane amounts of polygons to replicate in order to get accurate silhouettes and such. Simple, smooth curved surfaces are reserved for cars, pipes and all the stuff in the kitchen...
 
However, dynamic tessellation of curved surfaces really only helps when those curves are each fairly large on the screen. In every scheme I've seen, the vert/second rate for tessellated meshes is lower than that of equivalent static meshes for the first few splits. Remember that the poly count usually goes up by a factor of 4 for each split.

Why would vert/second be any slower if tessellation is done on CPU, and GPU is the real bottleneck?

It doesn't help that most of today's game art - thanks to the coming of Zbrush - has amazing amounts of geometry detail in the normal maps, which would take insane amounts of polygons to replicate in order to get accurate silhouettes and such.

Can't one use normal-maps for better adaptive subdivision around silhouette edges?
It doesn't have to be perfect, it just needs to look better than low poly edges we see in all those games.
 
What does a normal map have to do with silhouette edges??? It's a shading trick; it cannot modify the geometry...
 
Previous discussion on the 360's tessellator:

http://forum.beyond3d.com/showthread.php?t=45240&highlight=tesselator

I'm sure there are games taking advantage of the Xenos tessellation at least to do pn-triangles. I'd be surprised if there aren't already racing games on the PS3 doing tessellation. Can anyone cite with certainty specific examples?

Off-hand, the only presentation I've seen is on Viva Pinata - they use tessellation on the terrain. I would hazard a guess that the bikes in MotoGP 2006 use tessellation as well, but.. just a guess.
 
Why would vert/second be any slower if tessellation is done on CPU, and GPU is the real bottleneck?

To answer this, we can split tessellation schemes into 2 groups:

GPU-only
Can be done on the 360. Slower for the first few splits, then it catches up.

CPU making data for the GPU
Can be done on either, but it's more suited for the PS3. On the GPU side, this can't be faster and is probably slower than just having a baked mesh already sitting in RAM. On the CPU side, doing some work is always slower than not doing any work. Therefore, this is always slower. This technique is more about saving memory at the expense of time.
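To make that second bucket concrete, here's a rough sketch assuming a bicubic Bézier patch evaluated on the CPU (or an SPU) into a flat vertex array that the GPU then draws as a regular grid. The patch representation, the fixed grid resolution and the function names are all illustrative; normals, indices and buffer management are left out.

[code]
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// de Casteljau evaluation of a cubic Bezier curve from 4 control points.
static Vec3 bezier(const Vec3 c[4], float t)
{
    const Vec3 a = lerp(c[0], c[1], t), b = lerp(c[1], c[2], t), d = lerp(c[2], c[3], t);
    return lerp(lerp(a, b, t), lerp(b, d, t), t);
}

// Tessellate a 4x4 bicubic patch into an (n+1)x(n+1) grid of positions.
// In a real engine these would be written straight into a vertex buffer the
// GPU consumes next frame; here they just fill a std::vector.
std::vector<Vec3> tessellatePatch(const Vec3 cp[4][4], int n)
{
    std::vector<Vec3> verts;
    verts.reserve((n + 1) * (n + 1));
    for (int i = 0; i <= n; ++i)
    {
        const float v = float(i) / float(n);
        Vec3 row[4];
        for (int k = 0; k < 4; ++k)      // evaluate each row of control points at v
            row[k] = bezier(cp[k], v);
        for (int j = 0; j <= n; ++j)     // then sweep across in u
            verts.push_back(bezier(row, float(j) / float(n)));
    }
    return verts;
}
[/code]

At n = 8 that's 81 positions recomputed every frame instead of stored, which is exactly the memory-for-time trade described above.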

Can't one use normal-maps for better adaptive subdivision around silhouette edges?
It doesn't have to be perfect, it just needs to look better than low poly edges we see in all those games.

Normal maps can't help the silhouette, but displacement maps can. Unfortunately, neither the SPUs nor the PS3 vertex shaders are all that great at reading textures. On the SPUs, you could probably set up the displacements in a nice, linear array if you only did a fixed subdivision pattern. Jak & Daxter did something like that on the PS2. On the GPU, you might be able to carve out a silhouette using relief-mapping style pixel shader techniques.
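To illustrate the "nice, linear array" idea (my own sketch of that kind of scheme, not a claim about what Jak & Daxter actually shipped): with a fixed subdivision pattern, the i-th generated vertex always lands on the same surface location, so its displacement can be baked offline into a flat array and applied with one sequential read per vertex, no texture fetches needed.

[code]
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Push each post-subdivision vertex out along its normal by a baked scalar.
// Because the subdivision pattern is fixed, the displacement array lines up
// one-to-one with the generated vertices and can simply be streamed through,
// which suits an SPU far better than random texture reads.
void applyDisplacement(std::vector<Vec3>&        positions,     // post-subdivision verts
                       const std::vector<Vec3>&  normals,       // matching normals
                       const std::vector<float>& displacement)  // baked, one per vert
{
    const size_t n = positions.size();
    for (size_t i = 0; i < n; ++i)
    {
        const float d = displacement[i];
        positions[i].x += normals[i].x * d;
        positions[i].y += normals[i].y * d;
        positions[i].z += normals[i].z * d;
    }
}
[/code]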
 
To answer this, we can split tessellation schemes into 2 groups:

GPU-only
Can be done on the 360. Slower for the first few splits, then it catches up.

CPU making data for the GPU
Can be done on either, but it's more suited for the PS3. On the GPU side, this can't be faster and is probably slower than just having a baked mesh already sitting in RAM.
Looking at EDGE results, the additional cost of mesh processing every frame seems insignificant.
On the CPU side, doing some work is always slower than not doing any work. Therefore, this is always slower.
Not that I agree with that statement, but I'm only talking about GPU bound case.
This technique is more about saving memory at the expense of time.
For fixed tessellation maybe, but otherwise I wouldn't think so.
Normal maps can't help the silhouette, but displacement maps can.
Yeah, I meant (had in mind at least) something with depth information.
Unfortunately, neither the SPUs nor the PS3 vertex shaders are all that great at reading textures. On the SPUs, you could probably set up the displacements in a nice, linear array if you only did a fixed subdivision pattern. Jak & Daxter did something like that on the PS2. On the GPU, you might be able to carve out a silhouette using relief-mapping style pixel shader techniques.
Thanks for the info.
 
Looking at EDGE results, the additional cost of mesh processing every frame seems insignificant.

You're right. The EDGE CPU-side polygon processing can result in significantly reduced GPU work. Extending that idea to culling whole surfaces before tessellation instead of polys after tessellation might be even better. I retract my "Can't be faster" assertion.
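Here's a rough sketch of what culling whole surfaces before tessellation could look like, assuming each patch carries a bounding sphere of its control cage; the Patch and Plane types are hypothetical, and a cone-of-normals backface test could be slotted in the same way.

[code]
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };            // n . p + d >= 0 means "inside"
struct Patch { Vec3 center; float radius; };  // bounding sphere of the control cage

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reject an entire patch before any tessellation work is spent on it.
// Only the surviving patch indices go on to be subdivided and drawn.
std::vector<int> cullPatches(const std::vector<Patch>& patches,
                             const Plane frustum[6])
{
    std::vector<int> visible;
    for (int i = 0; i < (int)patches.size(); ++i)
    {
        const Patch& p = patches[i];
        bool inside = true;
        for (int f = 0; f < 6 && inside; ++f)
            inside = dot(frustum[f].n, p.center) + frustum[f].d >= -p.radius;
        if (inside)
            visible.push_back(i);
    }
    return visible;
}
[/code]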
 
Short answer: Compared to just baking down to boring, static, precalculated data, generating data on the fly is usually slower and always a lot more complicated.

Ahh, I see, so most developers are still too busy getting the hang of the new hardware to try anything new.

I understand that it's better to pre-tessellate the object for better performance. But what about characters in close-up cinematics, where tessellating the model until it looks smooth is going to make your riggers cry? Currently, most real-time cut scenes I've seen involving characters have a lot of visible poly edges, and going high definition those poly edges just stand out a lot more.

I am just frustrated to see developers go to all the trouble of building a separate model for cutscenes, only to have the low-poly model ruin it all, when subdiv would take care of the problem, plus give the option of displacement mapping for even more detail, for eye candy.

I was hoping that this gen, with the 360's tessellator and the PS3's Cell, we would at least see smooth-looking curves in close-ups. But here we are in '08, and developers will only use subdiv or another higher-order scheme if there is a performance gain or a smaller memory footprint. Not eye candy. The 360 tessellator isn't flexible enough to be used all over the place, and Cell is babysitting RSX. Really disappointed.

I am glad to hear though that subdiv might get some use for cinematic close up.

For next gen, I hope the console makers really aim for a console that's capable of running an engine that does subdiv and displacement mapping on all surfaces, just so we can see the increase in detail/eye candy that makes the jump worth it. Can that kind of console be achieved through smart architecture, or does it just require raw power?
 
smaller memory footprint. Not eye candy.
I thought that using it would give you the benefit of both. A smaller memory footprint should give you more room for textures or other data, shouldn't it, because the models themselves don't end up using as much RAM?
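For a sense of scale, a back-of-the-envelope comparison (vertex size and counts are purely made up for illustration):

[code]
#include <cstdio>

int main()
{
    const int bytesPerVert = 32;     // position + normal + UV, illustrative
    const int cageVerts    = 2000;   // coarse control cage kept in memory
    const int bakedVerts   = 32000;  // same model pre-tessellated offline
    std::printf("control cage: %d KB\n", cageVerts  * bytesPerVert / 1024);
    std::printf("baked mesh  : %d KB\n", bakedVerts * bytesPerVert / 1024);
    return 0;
}
[/code]

Whether that saving actually buys you anything depends on how many unique models are resident at once.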
 
You're right. The EDGE CPU-side polygon processing can result in significantly reduced GPU work. Extending that idea to culling whole surfaces before tessellation instead of polys after tessellation might be even better. I retract my "Can't be faster" assertion.

I'm also looking at subdivision at a finer granularity than others, I assume: not the whole mesh but around silhouette edges only, for better dynamic distribution of the GPU polygon budget in real time.
That's why I said its potential benefits go beyond a smaller memory footprint.
And the displacement maps can be of much lower resolution for this purpose, making it more SPU-friendly.
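A minimal sketch of the silhouette test that implies (just my reading of the idea, not a shipped technique): an edge is on the silhouette when its two adjacent triangles face opposite ways relative to the eye, and only those edges would get the extra subdivision budget.

[code]
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Edge
{
    int v0, v1;        // endpoint vertex indices
    int faceA, faceB;  // the two triangles sharing this edge (closed mesh assumed)
};

// Mark edges whose adjacent faces straddle front/back facing as seen from the
// eye; those edges form the silhouette and are the ones worth spending extra
// subdivision (and displacement) on.
std::vector<bool> findSilhouetteEdges(const std::vector<Edge>& edges,
                                      const std::vector<Vec3>& faceNormals,
                                      const std::vector<Vec3>& faceCenters,
                                      const Vec3&              eyePos)
{
    std::vector<bool> isSilhouette(edges.size(), false);
    for (size_t i = 0; i < edges.size(); ++i)
    {
        const Edge& e = edges[i];
        const float a = dot(faceNormals[e.faceA], sub(eyePos, faceCenters[e.faceA]));
        const float b = dot(faceNormals[e.faceB], sub(eyePos, faceCenters[e.faceB]));
        isSilhouette[i] = (a > 0.0f) != (b > 0.0f);   // one front-facing, one back-facing
    }
    return isSilhouette;
}
[/code]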
 