"Yes, but how many polygons?" An artist blog entry with interesting numbers

Those tech demo models are usually built from subdivision surfaces, a type of HOS. The trouble is that they can't be tessellated on the GPU, so they rely on the CPU to do it every frame. That's fine to do, unless you're trying to run a game at the same time...

They can be built from subdiv surfaces or other higher-order surfaces, but once you've tessellated the model into triangles, that's it. You don't need the CPU to tessellate it at run time, and certainly not every frame. Though you could if you wanted to.

Anyway, cinematic cutscenes are basically tech demos. So no, you aren't trying to run a game at the same time.

In the future I'd like to see some CPU cores dedicated to tessellating subdiv or higher-order surfaces, so we don't see poly edges on characters, objects or levels.
 
That tessellator doesn't support Catmull-Clark subdivision surfaces as far as I know.
SPEs might be OK for this, but they have a lot else to do, and tessellated mesh data takes a LOT of space. You have to include all the UV coordinates, vertex colors etc., and usually you want to tessellate after skinning, so that has to be done on the CPU as well.
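To put some rough numbers on the memory point, here's a back-of-the-envelope sketch. The vertex layout and counts are illustrative assumptions, not from any particular engine:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative vertex layout for a skinned, tessellated mesh.
// The exact attributes vary per engine; this only shows how quickly
// the per-vertex data adds up.
struct SkinnedVertex {
    float         position[3];     // 12 bytes
    float         normal[3];       // 12 bytes
    float         uv[2];           //  8 bytes
    std::uint32_t color;           //  4 bytes, packed RGBA
    float         boneWeights[4];  // 16 bytes
    std::uint8_t  boneIndices[4];  //  4 bytes
};                                 // 56 bytes per vertex

int main() {
    const double vertexCount = 200000;  // a densely tessellated character (assumed)
    printf("vertex buffer: ~%.1f MiB\n",
           vertexCount * sizeof(SkinnedVertex) / (1024.0 * 1024.0));
    return 0;
}
```

That's roughly 10 MiB for one character at one density, before index data, blendshape deltas, or any extra copies for shadow passes.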

Crikey... so the gist of it is:
  • memory
  • CPU processing
  • shader power
:?:

And of course it has to be repeated for each shadow-casting light source when using shadow buffers.

Is it compatible with deferred shading/lighting? On the other hand, the amount of memory left for anything else would dwindle even further with the use of multiple render targets anyway. :p

(1GiB framebuffers, here we go!)
 
They can be built from subdiv surfaces or other higher-order surfaces, but once you've tessellated the model into triangles, that's it. You don't need the CPU to tessellate it at run time, and certainly not every frame.

If you want to skin and animate the character, then you'd better give an untessellated mesh to your rigger, otherwise he'll be quite unhappy. You also don't want to store too much geometry, either on disk or in memory.

So you have to work with the control mesh throughout the pipeline and only tessellate for rendering. And you cannot tessellate on the GPU for most kinds of HOS.
 
Is it compatible with deferred shading/lighting?

Of course it is. If you want correct shadows then you have to render the tesselated model into the shadow buffer(s), which means a lot of vertex processing.

Each level of subdivision usually multiplies the poly count by 4. You usually want at least two levels, which is a 16x increase.
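As a concrete (hypothetical) example of that growth, starting from a ~10k-face control mesh:

```cpp
#include <cstdio>

// Each subdivision step splits every face into 4, so the face count
// grows roughly as 4^levels. The starting count is a made-up example.
int main() {
    long faces = 10000;  // control mesh
    for (int level = 1; level <= 3; ++level) {
        faces *= 4;
        printf("level %d: ~%ld faces\n", level, faces);
    }
    // prints: level 1: ~40000, level 2: ~160000, level 3: ~640000
    return 0;
}
```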

Another problem is that you get lots of very small polygons (which is what you want: poly edges should be short enough to disappear), and that ruins the efficiency of the pixel pipelines. This is another issue you don't really care about in a tech demo, where you don't have a lot of other things going on...
 
If you want to skin and animate the character, then you'd better give an untessellated mesh to your rigger, otherwise he'll be quite unhappy. You also don't want to store too much geometry, either on disk or in memory.

So you have to work with the control mesh throughout the pipeline and only tessellate for rendering. And you cannot tessellate on the GPU for most kinds of HOS.

Riggers can rig the subdiv and then convert it to polys; the weights should transfer, though riggers may need to touch it up some more. Don't let disgruntled riggers be the Achilles' heel.

Of course, it's better to handle the tessellation in-game; that way you get LOD too. Still, tessellation is the domain of the CPU. The GPU already has its hands full; it just needs to be able to handle the polys.
 
Another problem is that you get lots of very small polygons (which is what you want: poly edges should be short enough to disappear), and that ruins the efficiency of the pixel pipelines. This is another issue you don't really care about in a tech demo, where you don't have a lot of other things going on...

Shouldn't deferred rendering take care of the pixel shader pipelines' inefficiency when rendering small polygons? Because you are basically rendering just one quad.
 
Riggers can rig the subdiv and then convert it to polys; the weights should transfer, though riggers may need to touch it up some more. Don't let disgruntled riggers be the Achilles' heel.

Huh? The problem is that the rigger doesn't want to work with a few hundred thousand vertices, because 1. it's a lot more work and a lot more difficult, and 2. it'll be incredibly slow on any kind of machine.
You can't work with such detailed geometry; you have to rig the control mesh. There are reasons why these techniques became the industry standard...


Shouldn't deferred rendering take care of the pixel shader pipelines' inefficiency when rendering small polygons?

Rendering the G-buffer will still be pretty inefficient.
 
Huh? The problem is that the rigger doesn't want to work with a few hundred thousand vertices, because 1. it's a lot more work and a lot more difficult, and 2. it'll be incredibly slow on any kind of machine.
You can't work with such detailed geometry; you have to rig the control mesh. There are reasons why these techniques became the industry standard...

Rig the control mesh, then tessellate; after that the vertex weights should transfer. It won't be as good, but it will be good enough.

I am surprised that game companies haven't invested in technology to automate rigging, especially for rigging human characters.

That NV Dawn model is stored as polygons; if that artist could do it, I don't see why other artists can't.

You talk about computer slowness and stuff, but can't you work in parts? I mean, the arms aren't going to influence the legs, and the left and right sides are generally independent. The face is independent of the body. So when you break it all down, you can work in parts to speed things up.

Rendering the G-buffer will still be pretty inefficient.

Is the pixel shading inefficiency really a problem in G-buffer creation?
 
My point was about that character showing insufficient polygons in several areas despite not being that buffed. Other characters can be more buffed without diluting the point.

I'm not quite sure what you mean by this... But I think you're saying that since he has a simpler design, his polygon model should look better, right?

If a design is simpler to create with polygons, fewer polygons will be put into it.

Here are polygon counts for Virtua Fighter 1 on the Model 1 board:

Akira ~2300
Jeffrey ~2000
Pai ~2000
Sarah ~1900
Lau ~1900
Wolf ~1800
Kage ~1500
Jacky ~1500

Jacky has a much simpler design than Akira, so he gets fewer polygons.

Edit: Rereading this, I noticed I put "your" instead of "you're".
 
I'm not quite sure what you mean by this... But I think your saying that since he has a simpler design, his polygon model should look better, right?

If a design is simpler to create with polygons, fewer polygons will be put into it.

Here are polygon counts for Virtua Fighter 1 on the Model 1 board:

Akira ~2300
Jeffrey ~2000
Pai ~2000
Sarah ~1900
Lau ~1900
Wolf ~1800
Kage ~1500
Jacky ~1500

Jacky has a much simpler design than Akira, so he gets fewer polygons.
I guess they'd try to make the characters comparable in graphics and use fewer polygons for simpler designs; it didn't occur to me at the time. But it would still be a recent fighter character's polygon count.
 
Rig the control mesh, then tessellate; after that the vertex weights should transfer. It won't be as good, but it will be good enough.

No, it will look completely different, and usually worse.

I am surprised that game companies haven't invested in technology to automate rigging, especially for rigging human characters.

You can't automate the skin weighting part. I've said this many times before: no computer can replace a human's artistic sense for many more years to come.

That NV Dawn model is stored as polygons; if that artist could do it, I don't see why other artists can't.

I recall that all those Nvidia demos used subdivs, with skinning and tessellation performed by the CPU. Can you provide some links to disprove this?

You talk about computer slowness and stuff, but can't you work in parts? I mean, the arms aren't going to influence the legs, and the left and right sides are generally independent. The face is independent of the body. So when you break it all down, you can work in parts to speed things up.

You can't cut up the model, skin the individual parts and then stitch them back together seamlessly. And you don't have to either, because you can apply tessellation after the skinning.
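A minimal sketch of that ordering, with placeholder types and stubbed-out stages (the real skinning and subdivision are elided), just to show where tessellation sits relative to skinning:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical stand-ins for the two stages; real implementations would
// blend by bone weights and actually subdivide.
std::vector<Vec3> skinControlCage(const std::vector<Vec3>& cage /*, pose */) {
    return cage;  // placeholder: deform each cage vertex by its bone weights
}
std::vector<Vec3> tessellate(const std::vector<Vec3>& skinnedCage, int levels) {
    (void)levels;
    return skinnedCage;  // placeholder: subdivide the already-skinned cage
}

// The order that matters: skin the small control cage first (cheap, and it is
// the only mesh that carries weights), then tessellate the skinned result.
std::vector<Vec3> buildRenderMesh(const std::vector<Vec3>& cage, int levels) {
    return tessellate(skinControlCage(cage), levels);
}

int main() {
    std::vector<Vec3> cage = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};
    std::vector<Vec3> renderMesh = buildRenderMesh(cage, 2);
    (void)renderMesh;
    return 0;
}
```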

And as I've said, implementing this on current GPUs in any way wouldn't be too good anyway, as it'd ruin the pixel pipelines' efficiency. So it doesn't make any sense at this time.

Is the pixel shading inefficiency really a problem in G-buffer creation?

Who knows, you'd have to ask a programmer about that...
 
Rendering the G-buffer will still be pretty inefficient.

The 2x2 pixel shader units would still show their inefficiency when rendering G-buffers. However, since there's so little pixel shader load during G-buffer rendering, it shouldn't matter much; you are probably bottlenecked elsewhere during that pass, be it framebuffer bandwidth or vertex processing.
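For anyone wondering what that 2x2-quad inefficiency looks like in numbers, here's a toy estimate. The triangle sizes and quad counts below are hypothetical, just to show the trend:

```cpp
#include <cstdio>

struct Case { const char* desc; double coveredPixels; double quadsTouched; };

// Pixel shaders work on 2x2 quads: every quad a triangle touches is shaded
// in full, even if only one of its 4 pixels is actually covered.
int main() {
    const Case cases[] = {
        {"large triangle", 400.0, 120.0},  // mostly interior quads, little waste
        {"medium triangle", 20.0,  10.0},  // edge quads start to dominate
        {"tiny triangle",    1.0,   1.0},  // one covered pixel still costs a full quad
    };
    for (const Case& c : cases) {
        double shaded = 4.0 * c.quadsTouched;
        printf("%-16s %5.0f covered / %5.0f shaded = %3.0f%% useful\n",
               c.desc, c.coveredPixels, shaded, 100.0 * c.coveredPixels / shaded);
    }
    return 0;
}
```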

Now that I think of it, theoretically on Xenos you'll have enough bandwidth thanks to the EDRAM, and the GPU can reassign more ALUs to geometry processing... it all "depends", and you have to profile a real-world situation.

Which is what you probably already knew before asking a programmer :)
 
Yeah, but the 360 would have a little problem with storing the G-buffer in the EDRAM, or accessing it from the DDR memory. But I expect deferred rendering to become more common with the next gen consoles...
 
No, it will look completely different, and usually worse.

You can't automate the skin weighting part. I've said this many times before: no computer can replace a human's artistic sense for many more years to come.

I am not saying it will look better to a human's artistic sense. I am just saying it should look good enough, and the artist can touch it up if it isn't. I am talking about high poly here.

I recall that all those Nvidia demos used subdivs, with skinning and tessellation performed by the CPU. Can you provide some links to disprove this?

Looking at the mesh, I am sure that Dawn was created using subdivs, but you can read about it in this presentation:

ftp://download.nvidia.com/developer/presentations/2004/GPU_Jackpot/Secrets_of_the_Demoteam.pdf

On page 14, Intro:
* Ambient Occlusion (per-vertex data) from custom raytracing tool

On page 15, Animation Details:

* Skinning also computed in a vertex shader on the GPU

So they must have the vertex weights somehow. I don't know how they generated them, but they did.
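For reference, the vertex-shader skinning the slides mention is standard linear blend skinning: each vertex position is blended through its influencing bone matrices using its weights. Here's a CPU reference sketch of that math, with a typical budget of up to 4 influences per vertex (the types and layout are my own, not NVIDIA's):

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

// Row-major 3x4 bone matrix (rotation + translation), simplified.
struct BoneMatrix {
    float m[3][4];
    Vec3 transform(const Vec3& p) const {
        return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                 m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                 m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
    }
};

struct SkinnedVertex {
    Vec3 restPosition;
    std::array<int, 4>   boneIndices;  // up to 4 influences per vertex
    std::array<float, 4> boneWeights;  // sum to 1; unused slots hold 0
};

// Linear blend skinning: the output position is a weighted sum of the rest
// position transformed by each influencing bone. The same math runs in a
// vertex shader on the GPU; this is just a CPU reference version.
std::vector<Vec3> skin(const std::vector<SkinnedVertex>& verts,
                       const std::vector<BoneMatrix>& bones) {
    std::vector<Vec3> out;
    out.reserve(verts.size());
    for (const SkinnedVertex& v : verts) {
        Vec3 p{0.0f, 0.0f, 0.0f};
        for (int k = 0; k < 4; ++k) {
            const float w = v.boneWeights[k];
            if (w == 0.0f) continue;
            const Vec3 t = bones[v.boneIndices[k]].transform(v.restPosition);
            p.x += w * t.x;
            p.y += w * t.y;
            p.z += w * t.z;
        }
        out.push_back(p);
    }
    return out;
}
```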

You can't cut up the model, skin the individual parts and then stitch them back together seamlessly. And you don't have to either, because you can apply tessellation after the skinning.

I am not talking about cutting them up, just hiding parts to speed up the viewport if it is too slow with all the vertices. Besides, rigged meshes are normally cut up for vertex shaders anyway.

Who knows, you'd have to ask a programmer about that...

Well, from my understanding deferred shading means the pixel shading load is deferred until the end. So during the creation of the G-buffer, the pixel shaders should be lightly loaded. But when the pixel shaders are working hard going through the G-buffer, it's very efficient.
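In code terms, that two-pass structure looks roughly like this (purely schematic; every type and function name is a hypothetical placeholder):

```cpp
#include <vector>

struct Mesh {};
struct Light {};
struct GBuffer {};      // depth, normals, albedo, ... in multiple render targets
struct Framebuffer {};

// Pass 1: rasterize the scene once, writing surface attributes only.
// Pixel shader work is light here, but every triangle still gets rasterized,
// so tiny triangles and 2x2 quads stay inefficient in this pass.
void renderGBuffer(GBuffer& gbuf, const std::vector<Mesh>& scene);

// Pass 2: shade per light from the G-buffer, usually with a fullscreen quad
// or light volume, so triangle size no longer matters and the pixel shaders
// run at high utilization.
void shadeLights(Framebuffer& out, const GBuffer& gbuf,
                 const std::vector<Light>& lights);

void renderFrame(Framebuffer& out, GBuffer& gbuf,
                 const std::vector<Mesh>& scene,
                 const std::vector<Light>& lights) {
    renderGBuffer(gbuf, scene);      // geometry pass: vertex/raster heavy
    shadeLights(out, gbuf, lights);  // lighting pass: pixel shader heavy
}
```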
 
I am not saying it will look better to a human's artistic sense. I am just saying it should look good enough, and the artist can touch it up if it isn't. I am talking about high poly here.

Have you done any character rigging, skinning or that sort of work? I have, many times, and I don't think it can be automated at all...

Looking at the mesh, I am sure that Dawn was created using subdivs, but you can read about it in this presentation:
* Skinning also computed in a vertex shader on the GPU
So they must have the vertex weights somehow. I don't know how they generated them, but they did.

I'm quite curious about that, too...


I am not talking about cutting them up, just hiding parts to speed up the viewport if it is too slow with all the vertices. Besides, rigged meshes are normally cut up for vertex shaders anyway.

Unfortunately hiding won't help you, but I'd rather not get into any lengthy discussions and explanations about how the skinning workflow is implemented in current 3D packages...


Well, from my understanding deferred shading means the pixel shading load is deferred until the end. So during the creation of the G-buffer, the pixel shaders should be lightly loaded. But when the pixel shaders are working hard going through the G-buffer, it's very efficient.

Again, ask a programmer about this.
 
Have you done any character rigging, skinning or that sort of work? I have, many times, and I don't think it can be automated at all...

I did some when I was helping my friend finish his project. I was doing it roughly, though, and he created all the corrections for the joints and stuff. At the time I thought I could write code to do it roughly.

I'm quite curious about that, too...

Well, I looked it up in GPU Gems. The rigging and animation were done in Maya: 180k triangles with 98 bones, and each vertex has up to 4 influences. The face animation is from 50 blendshapes of 27k triangles.
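Out of curiosity, plugging those published numbers into a rough vertex-data estimate (the unique-vertex count and attribute sizes are my assumptions, not from GPU Gems):

```cpp
#include <cstdio>

// 180k triangles usually share vertices, so assume ~90k unique vertices
// (an assumption). Per-vertex: position (12) + normal (12) + UV (8)
// + 4 weights (16) + 4 bone indices (4) + AO term (4) = 56 bytes.
int main() {
    const double vertices     = 90000.0;
    const double bytesPerVert = 12 + 12 + 8 + 16 + 4 + 4;
    printf("base mesh: ~%.1f MiB of vertex data, before blendshape deltas\n",
           vertices * bytesPerVert / (1024.0 * 1024.0));
    return 0;
}
```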

Is there anything in Maya that would help with rigging high-poly characters, or is the artist just really skilled? After all, he was the guy that did the Final Fantasy TSW movie.

BTW, I looked through GPU Gems 2. There is an article, "Adaptive Tessellation of Subdivision Surfaces with Displacement Mapping" by M. Bunnell. He has a workaround to tessellate Catmull-Clark subdivision surfaces on the GPU, so it can be done on the GPU.
 
Skinning with a vertex shader on the GPU is really standard stuff.
--
edit: removed content, should have read the PDF

--
A lot of it (the poly stuff) is overkill for the result we see.
Dawn was probably created using proxy modeling, with the skinning done on the proxy too. Anyway, the skinning isn't really great.
 
I would have been really curious as to what EA Chicago could have done with a second generation Fight Night on these consoles. :(
 
Skinning with a vertex shader on the GPU is really standard stuff.
--
edit: removed content, should have read the PDF

--
A lot of it (the poly stuff) is overkill for the result we see.
Dawn was probably created using proxy modeling, with the skinning done on the proxy too. Anyway, the skinning isn't really great.

Well, we weren't discussing how great the skinning was. We were discussing poly edges on close-ups of low-poly characters (around 10k). Compare that to Dawn, where poly edges are harder to spot.

My question was: why don't cinematic models have poly counts like Dawn's, to remove the visible poly edges? The reason given was that the riggers would hate you. Anyway, we were arguing that high-poly models are impractical, because of the skinning, unless you resort to subdiv surfaces.

Dawn seems to be stored as polygons, with ambient occlusion, vertex weights and UVs for every vertex, instead of as subdiv surfaces like the NV Timbury demo. So we were wondering how Dawn was skinned. "With a lot of patience" is the best we came up with, but if you know a practical way of skinning 180k-poly characters, please share it with us.

Anyway, subdivs, deferred shading, and how future CPUs and GPUs will handle them is an interesting topic. I might make a thread about it.
 