Graphical effects rarely seen this gen that you expect/hope become standard next-gen

No, subdividing a cube (made from six polygons) will get you a sphere :) Which is why the base model needs to get more polygon detail to control the derived surface.

Now, DX11-compatible tessellation is a completely different issue. Complex mechanical surfaces, like guns or combat armor, require very precise curves and corners and such. Reproducing these with tessellation and displacement mapping isn't going to look good enough unless they use an insane number of polygons (subpixel-sized). So it's far more efficient to use good old modeling and normal mapping.

Look at the elbow armor piece on Marcus in this image:
[Image: gears-of-war-2-screenshot-9.jpg]


You can see some pixelation there from the normal map. It isn't really disturbing, though, because the texture resolution is pretty high - most Gears characters use two 2K maps, and even mirror as many parts as they can (so they only store textures for one arm and one leg).
Reproducing just this detail using a displacement map would then require a little more than 2K + 2K texels' worth of geometry - two 2048x2048 maps is roughly 8.4 million texels - which is already about 8 million polygons.
With fewer polygons than that, the model could become quite jagged and noisy, not looking too smooth at all.

It can be solved, though, by being careful about what to put into the geometry and what to use normal maps for, and by carefully distributing polygons on the base mesh. Basically, all the main pieces of the armor had to be modeled. There's a reason the AvP demo does not try to use displacement on the colonial marine and goes for the more organic aliens instead... So tessellation and displacement in its current form is just not fast and efficient enough to produce details like that on its own, and what little it can add on current consoles might not be worth the effort.
 
No, subdividing a cube (made from six polygons) will get you a sphere :) Which is why the base model needs to get more polygon detail to control the derived surface.
Last time I checked, it should end up as a cube with more polygons in it? :???:
 
Tuning games for minimum 30fps rather than average 30fps should be mandatory. Probably wishful thinking, unfortunately. Throw in a dynamic framebuffer to boost the pixel count in less demanding scenes if you like.
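In practice a dynamic framebuffer boils down to a feedback loop like the minimal sketch below: watch the GPU frame time and scale the render resolution to hold the budget. The constants and names here are my own illustration, not any shipping game's scheme.

```cpp
#include <algorithm>

// Minimal sketch of a dynamic-resolution controller: chase a fixed GPU
// budget (33.3 ms for 30 fps) by scaling the framebuffer. All constants
// here are illustrative, not tuned values from a shipping game.
struct DynamicResolution {
    float scale = 1.0f;  // fraction of native resolution per axis

    void update(float gpuFrameTimeMs, float budgetMs = 33.3f) {
        if (gpuFrameTimeMs > budgetMs)
            scale *= 0.95f;              // over budget: shrink the framebuffer
        else if (gpuFrameTimeMs < budgetMs * 0.9f)
            scale *= 1.02f;              // comfortably under: grow it back
        scale = std::clamp(scale, 0.5f, 1.0f);
    }

    int width(int nativeWidth)   const { return int(nativeWidth * scale); }
    int height(int nativeHeight) const { return int(nativeHeight * scale); }
};
```

Of course, this only buys frame time when the GPU is pixel-bound, which is the caveat raised below.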
 
Last time I checked, it should end up as a cube with more polygons in it? :???:

If that were the case, tessellation would have no benefit whatsoever - every polygon in the model would just end up as several smaller planar polygons :smile:

The algorithm needs to know which edges are meant to be in the model and which are only present because of the polygonal approximation - and are therefore the ones the process is designed to mitigate.
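One crude stand-in for that knowledge, as a minimal sketch (the Vec3 type, the helpers, and the 30° threshold are all my own assumptions): flag an edge as "meant to be in the model" when its two faces meet at a sharp enough dihedral angle.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// An edge whose two adjacent faces meet at a sharp dihedral angle is
// probably part of the intended shape; a near-flat edge is probably just
// polygonal approximation that the subdivision should smooth away.
bool isHardEdge(const Vec3& n0, const Vec3& n1, float thresholdDegrees = 30.0f) {
    float cosAngle = dot(normalize(n0), normalize(n1));
    return cosAngle < std::cos(thresholdDegrees * 3.14159265f / 180.0f);
}
```

In practice artists tend to mark these edges explicitly rather than trust an angle heuristic, which is exactly the extra authoring work discussed below.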
 
Tuning games for minimum 30fps rather than average 30fps should be mandatory. Probably wishful thinking, unfortunately. Throw in a dynamic framebuffer to boost the pixel count in less demanding scenes if you like.

A dynamic framebuffer helps with framerate only if you're pixel-bound, which not all games are.
 
If that were the case, tessellation would have no benefit whatsoever - every polygon in the model would just end up as several smaller planar polygons :smile:

The algorithm needs to know which edges are meant to be in the model and which are only present because of the polygonal approximation - and are therefore the ones the process is designed to mitigate.

Tessellation doesn't seem to be of any benefit on this alien's tail.

And, in what's probably the worst demonstration of tessellation, even with its sub-pixel-sized polys, it fails to increase detail in the areas my eyes are naturally drawn to.
 
If that were the case, tessellation would have no benefit whatsoever - every polygon in the model would just end up as several smaller planar polygons :smile:

The algorithm needs to know which edges are meant to be in the model and which are only present because of the polygonal approximation - and are therefore the ones the process is designed to mitigate.

In the most generic of terms he's correct, but if we assume Catmull-Clark: basically, the geometry is divided and the model is smoothed. That just gives us a balloon-like model with lots of polygons. To control what retains its shape you have to define your edges, edge loops, hard details, etcetera a little better, and that means more geometry in your base mesh (which you have to model). I suppose at some point this won't be such a PITA, but we're still a while off from micropoly, huge-base-asset stuff in realtime.
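To make the balloon effect concrete, here's a minimal sketch of the standard Catmull-Clark vertex rule; the Vec3 helpers and function names are mine, but the weights are the published ones.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

static Vec3 average(const std::vector<Vec3>& pts) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (const Vec3& p : pts) sum = add(sum, p);
    return scale(sum, 1.0f / pts.size());
}

// Catmull-Clark repositioning of an original vertex P with valence n:
//   P' = (F + 2R + (n - 3) * P) / n
// F = average of the centroids of the n adjacent faces,
// R = average of the midpoints of the n incident edges.
// Every term drags P toward its neighbourhood's average, which is why an
// unadorned cube "balloons" toward a sphere over successive subdivisions.
Vec3 smoothVertex(Vec3 P,
                  const std::vector<Vec3>& adjacentFaceCentroids,
                  const std::vector<Vec3>& incidentEdgeMidpoints) {
    float n = float(adjacentFaceCentroids.size());
    Vec3 F = average(adjacentFaceCentroids);
    Vec3 R = average(incidentEdgeMidpoints);
    return scale(add(add(F, scale(R, 2.0f)), scale(P, n - 3.0f)), 1.0f / n);
}
```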


edit:
I think Laa-Yosh beat me by a page...
 
Tessellation doesn't seem to be of any benefit on this alien's tail.

Yeah, it'll take some time before artists get the hang of this...

On 360, dynamic adaptive tessellation or tessellating animated objects necessarily means a two-pass algorithm IIRC (gotta check some papers again for specifics). If I'm not being daft at the moment, this might not be particularly bad if other parts of the rendering require multiple passes anyway... e.g. light-indexed deferred rendering a.k.a. light pre-pass, shadows, reflections... etc.

Terrain is fairly straightforward, though, because it's static.

At any rate, it would be interesting for Bungie, particularly considering Natalya Tatarchuk is now with them...




And, in what's probably the worst demonstration of tessellation, even with its sub-pixel-sized polys, it fails to increase detail in the areas my eyes are naturally drawn to.
/waits for Team Ninja to show the industry how it's done.
 
In the most generic of terms he's correct, but if we assume Catmull-Clark: basically, the geometry is divided and the model is smoothed. That just gives us a balloon-like model with lots of polygons. To control what retains its shape you have to define your edges, edge loops, hard details, etcetera a little better, and that means more geometry in your base mesh (which you have to model).
Well, it doesn't really have to be more geometry. You can also parameterise edge "hardness" -- I think most modeling software supports this these days. Of course, I don't know of any game engine designed to work with such information, or really any kind of per-edge parameter.
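A rough sketch of how such a per-edge hardness parameter could plug into the subdivision step - this mirrors the semi-sharp creases idea that OpenSubdiv later standardised; all the names below are illustrative:

```cpp
struct Vec3 { float x, y, z; };

static Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// smoothEdgePoint: the regular Catmull-Clark edge point (average of the two
// endpoints and the two adjacent face centroids). hardness = 0 gives the
// fully smoothed edge; hardness = 1 keeps the plain midpoint, i.e. a
// perfectly sharp crease; values in between round the edge only partially.
Vec3 creasedEdgePoint(Vec3 v0, Vec3 v1, Vec3 smoothEdgePoint, float hardness) {
    Vec3 midpoint = { (v0.x + v1.x) * 0.5f,
                      (v0.y + v1.y) * 0.5f,
                      (v0.z + v1.z) * 0.5f };
    return lerp(smoothEdgePoint, midpoint, hardness);
}
```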
 
I'd like to see the end of poorly-done features merely for the sake of saying you have them, but that's not going to happen. I'm playing Dead Space on my PS3, and the glitchy, jumpy self-shadowing on the character model is downright distracting at times. It looks terrible, and they should have just left the effect out IMO.
 
I too feel that the simple blacking-out of areas not receiving light, as seen in FNR4, works better than traditional self-shadowing that's full of artifacts. So far, very few games have satisfied me with the quality of their self-shadowing.
 
I'd like to see the end of poorly-done features merely for the sake of saying you have them, but that's not going to happen. I'm playing Dead Space on my PS3, and the glitchy, jumpy self-shadowing on the character model is downright distracting at times. It looks terrible, and they should have just left the effect out IMO.

Wouldn't having better self-shadowing be better than not having self-shadowing at all? Especially for a game that puts so much emphasis on light and shadow, not having self-shadowing at all would make the lighting and shadowing inconsistent, and that would break the immersion more.
 
I too feel that the simple blacking-out of areas not receiving light, as seen in FNR4, works better than traditional self-shadowing that's full of artifacts.

Finding out which areas don't receive light is exactly what shadowing algorithms do, and it's far from simple. It's like saying that simply moving through the air will work better than all that traditional, complex airplane machinery.
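For the record, the heart of a shadow-mapping algorithm is precisely that "does the light see this point" test. A minimal sketch, with an illustrative bias constant:

```cpp
struct Vec3 { float x, y, z; };

// lightSpacePos: the surface point already transformed into the shadow
// map's texture space (x, y in [0,1], z = depth from the light).
// occluderDepth: the depth stored in the shadow map at that (x, y), i.e.
// the closest surface the light actually sees along that ray.
bool isInShadow(const Vec3& lightSpacePos, float occluderDepth) {
    const float bias = 0.002f;  // guards against "shadow acne" artifacts
    return lightSpacePos.z - bias > occluderDepth;
}
```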
 
not having self-shadowing at all would make the lighting and shadowing inconsistent, and that would break the immersion more.
Not true. Glitchy effects are distracting. This is what many developers don't understand--they think any special effect they can add automatically makes the game "look better" (probably because reviewers swoon over new bells & whistles). Last gen, normal maps did not automatically make a game look better, not if too much geometry was sacrificed, or if they overdid specularity and gave everything the "wet plastic" look--yet no matter how ugly an Xbox game was, normal maps were almost an automatic 10 for graphics.

The self-shadowing in Dead Space PS3 looks terrible. It has random, jaggedy, squarish patches that constantly flicker on and off and make the game look inconsistent, especially on small ridges on your armor, where the precision is too low to draw the shadow with any kind of accuracy. If it weren't there, I probably wouldn't notice it, since the lighting on Isaac looks fine. But since it is there, and since it looks so bad, I'm noticing it all the time.

Or another example, the self-shadowing in Rogue Leader looked like garbage. There were shimmering lines of triangles all over everything. I thought it looked awful at the time, and I probably wouldn't have thought anything of it if it weren't there. Rebel Strike fixed vehicle shadows, but the character shadows were terrible. Again... they should have left the effect out rather than have random dark patches jumping all over the models.

So I would prefer that next gen, if they invent a new effect that they can't do without it looking like utter crap, they shouldn't use it.
 
Not true. Glitchy effects are distracting. This is what many developers don't understand--they think any special effect they can add automatically makes the game "look better" (probably because reviewers swoon over new bells & whistles). Last gen, normal maps did not automatically make a game look better, not if too much geometry was sacrificed, or if they overdid specularity and gave everything the "wet plastic" look--yet no matter how ugly an Xbox game was, normal maps were almost an automatic 10 for graphics.

The self-shadowing in Dead Space PS3 looks terrible. It has random, jaggedy, squarish patches that constantly flicker on and off and make the game look inconsistent, especially on small ridges on your armor, where the precision is too low to draw the shadow with any kind of accuracy. If it weren't there, I probably wouldn't notice it, since the lighting on Isaac looks fine. But since it is there, and since it looks so bad, I'm noticing it all the time.

Or another example, the self-shadowing in Rogue Leader looked like garbage. There were shimmering lines of triangles all over everything. I thought it looked awful at the time, and I probably wouldn't have thought anything of it if it weren't there. Rebel Strike fixed vehicle shadows, but the character shadows were terrible. Again... they should have left the effect out rather than have random dark patches jumping all over the models.

So I would prefer that next gen, if they invent a new effect that they can't do without it looking like utter crap, they shouldn't use it.

Implementations improve all the time; you're never going to get it perfect the first time, and developers can't really improve on something if they never try it. We ARE seeing better and better self-shadowing, and that wouldn't be possible if everybody just stayed away from it instead of trying it and learning from each other's mistakes.
 
Finding out which areas don't receive light is exactly what shadowing algorithms do, and it's far from simple. It's like saying that simply moving through the air will work better than all that traditional, complex airplane machinery.

Yes, I know about that... but the technique used in FNR4 is a bit different and more basic.
I don't know what it's called, hence I couldn't put it into words there... but it was surely not traditional self-shadowing.
 
I would like to see more image reuse in next-generation games. Subsequent frames (especially in 60 fps games) contain lots and lots of pixels that are almost identical to the previous frame's. The more pixels we can reuse, the less work we have to do every frame. If we can, for example, reuse half the pixels, we can either double the frame rate or double the graphics quality. This is an area of graphics rendering that hasn't been researched enough.
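One way to pursue that is temporal reprojection caching: reproject each pixel into the previous frame and reuse its colour when the depths agree. A minimal sketch - the matrix layout, helper names, depth convention, and tolerance are my own assumptions, not a specific engine's scheme, and a real implementation would fetch the stored depth at the computed (u, v) inside the same pass:

```cpp
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

static Vec4 transform(const Mat4& M, const Vec4& v) {
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
             M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

// Reproject a surface point into last frame's screen space. If it landed
// on screen and the depth stored there agrees, the previously shaded colour
// can be fetched at (u, v) instead of re-shading the pixel.
bool canReusePixel(const Vec4& worldPos,      // w = 1
                   const Mat4& prevViewProj,  // last frame's view-projection
                   float prevStoredDepth,     // last frame's depth at (u, v)
                   float& u, float& v) {
    Vec4 clip = transform(prevViewProj, worldPos);
    if (clip.w <= 0.0f) return false;         // behind the previous camera
    u = clip.x / clip.w * 0.5f + 0.5f;
    v = clip.y / clip.w * 0.5f + 0.5f;
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f) return false;
    // Depth tolerance guards against disocclusion: the point may have been
    // hidden behind something else last frame. (Depth-range conventions
    // differ between APIs; this assumes a [0,1]-style comparison.)
    float depth = clip.z / clip.w;
    return std::fabs(depth - prevStoredDepth) < 0.001f;
}
```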

Or another example, the self-shadowing in Rogue Leader looked like garbage. There were shimmering lines of triangles all over everything. I thought it looked awful at the time, and I probably wouldn't have thought anything of it if it weren't there. Rebel Strike fixed vehicle shadows, but the character shadows were terrible. Again... they should have left the effect out rather than have random dark patches jumping all over the models.
It's often easier to shadow the entire scene than to exclude some shadows from some objects. You have to render the character's shadow to the sunlight's shadow map, otherwise there will be no shadow on the ground or on other objects below the character. However, if you render the character to the shadow map, then the character also receives its own shadow. Naturally you can use an extra shadow map texture for each character to solve this problem, but this costs extra performance and memory. Or you can render the characters last to the shadow map, and ping-pong between two render targets until your characters have been rendered without their own shadows. Render-target switching costs performance, and if you are using a deferred renderer, doing it like this is impossible. They most likely wanted to save some performance, and thought the self-shadowing artifacts weren't critical.
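To make that ordering concrete, here's a rough sketch of the ping-pong approach, using a purely hypothetical stubbed-out renderer API (none of these names come from the post above or from a real engine):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical, stubbed-out renderer API used only to make the ordering
// concrete; these are placeholders, not real engine calls.
struct Mesh {};
struct ShadowMap {};

void renderDepth(ShadowMap&, const Mesh&) {}              // add one caster
void copy(ShadowMap&, const ShadowMap&) {}                // RT-to-RT copy
void shade(const Mesh&, const ShadowMap& /*casters*/) {}  // light one mesh

// Render the static scene once, then shade each character against a map
// holding the scene plus only the characters added before it, ping-ponging
// between two render targets. A character thus never receives its own
// shadow (though it also misses shadows from characters added after it).
void shadowWithoutSelfShadowing(const std::vector<Mesh>& scene,
                                const std::vector<Mesh>& characters) {
    ShadowMap mapA, mapB;
    for (const Mesh& m : scene) renderDepth(mapA, m);  // static casters only

    for (std::size_t i = 0; i < characters.size(); ++i) {
        shade(characters[i], mapA);    // mapA excludes characters[i] itself
        copy(mapB, mapA);              // append characters[i] for later meshes
        renderDepth(mapB, characters[i]);
        std::swap(mapA, mapB);
    }
}
```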
 