The Game Technology discussion thread *Read first post before posting*

Getting back to a previous issue, here's a very nice tech demo / benchmark that features DX11 tessellation and displacement mapping.

http://unigine.com/press-releases/091022-heaven_benchmark/

Couple of notes:

They should really dial the bloom and DOF back.

They should also note that tessellation alone is not enough for this detail; they're using displacement maps on top of it (extra memory cost! although only 16 bits per pixel are needed).

Very high quality artwork there, and the dragon was a good choice to showcase the tech. I'm not much of an expert, but the shadows look very high quality too, and it seems to use some kind of deferred lighting for the nighttime scenes; and it looks like some nice AO is there, too.

Now this runs on a DX11 PC, and I absolutely don't expect any current-gen console to have a reasonable implementation of the tech; the performance is not there IMHO. So it's more like a glimpse of the future - now that the tech finally seems to be getting ironed out, it should really become a standard for the next PS/Xbox.

I also think that the art pipeline may be able to get away with a relatively small upgrade - highres source models will still be created in Zbrush/Mudbox, textures don't need any change; it's just that in addition to the normal map (which will have to remain!) the tools will need to generate a displacement map, and the modelers will also have to get preview tools for the tessellation part.
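Since only 16 bits per texel are needed, a displacement map entry is basically a quantized height along the base-mesh normal. A minimal sketch of how a baking tool might pack and unpack it (the `range` parameter and function names are made up for illustration, not from any real tool):

```cpp
#include <algorithm>
#include <cstdint>

// Pack a signed displacement (in model units) into a 16-bit texel.
// 'range' is the max absolute displacement allowed for this mesh
// (e.g. 0.3f for 30 cm bricks); 32768 means "no displacement".
uint16_t pack_displacement(float height, float range) {
    float t = std::clamp(height / range, -1.0f, 1.0f);           // [-1, 1]
    return static_cast<uint16_t>((t * 0.5f + 0.5f) * 65535.0f + 0.5f);
}

// Inverse mapping, roughly what the GPU would apply along the normal.
float unpack_displacement(uint16_t texel, float range) {
    return ((texel / 65535.0f) * 2.0f - 1.0f) * range;
}
```

The quantization error over a 30 cm range is under 5 microns, which is why 16 bits are plenty for this kind of detail.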

Level creation might end up a bigger problem though... particularly collision detection with the scale of displacement in this demo. Sometimes those bricks move around as much as 20-30 cm; that'll be a problem. Not to mention raycasting around the scene: view-dependent methods can confuse the system, and more polygons mean more calculations for the AI. Interesting problem for sure.


Anyway, it's finally a very good demonstration of what to expect.

Nice find!

Surely, not everything in a scene is required to be tessellated though ;)

And then there's also adaptive tessellation...:devilish:
 

The scene where you approach the dragon: you see that the "spikes" grow while approaching him, which seems to be a smooth process.

Is this just to show the different possible "detail levels" of the adaptive mesh? Or is this how tessellation works in general?

Because it is kind of odd to see such large details (like the spikes) come out of nowhere while approaching the dragon. I mean, you should be able to see his spikes even from far away?

Maybe you can enlighten me :idea:
 
The guy was asking about using the Cell for alpha-blending

Yeah, but the way I interpreted it seemed to indicate lots of reads/writes into the framebuffer, causing bandwidth trouble. Using local storage for that seems... problematic to me.

Raycasting against tessellation/subdivision surfaces is actually not hard

It's not really the actual programming that might become an issue. I'm thinking about cases like, say, an FPS game where you can try to hide behind a rock. If displacement can push vertices around a lot, then it does make a difference if you have adaptive, view-dependent tessellation. Say the opponent I'm trying to snipe is not fully covered from my point of view because the displacement at that distance hasn't kicked in yet.

Actually, with an AI agent it might not be a problem, as it's probably raycasting in the same scene that I'm in. But in an online multiplayer match we might get into a case where "if I can't see you then you can't see me either" is no longer true. Might not even take something like a rock, just some displaced road or hillside, like in that DX11 demo.
Obviously you can restrict the amount of displacement artists can use, but it's still not a clear case. What about detail like the spikes on that dragon that you would never model into the base (untessellated) mesh? It's not something that seems to have a clear solution.
 
The scene where you approach the dragon: you see that the "spikes" grow while approaching him, which seems to be a smooth process.

Is this just to show the different possible "detail levels" of the adaptive mesh? Or is this how tessellation works in general?

Because it is kind of odd to see such large details (like the spikes) come out of nowhere while approaching the dragon. I mean, you should be able to see his spikes even from far away?

When you view the thing from afar, there's no tessellation at all, and the base mesh (pre-tessellation) does not have enough geometry to displace those spikes.

The first solution would be to manually model all the spikes into the base mesh, which would seriously increase the polygon count and kinda defeat the very purpose of adaptive tessellation and displacement.
One could also try to adjust the settings of the tessellation to kick in sooner, but that could cause aliasing and generate too much geometry for the GPU to handle.

Also see my comments above, regarding visibility and collision related issues.

So as exciting as it sounds on paper, it does introduce some troubles in practice, which is one of the reasons why it's not used that much yet.
 
And then there's also adaptive tessellation...:devilish:

It is using adaptive, view-dependent tessellation - look at the wireframe parts. Detail only kicks in at a certain distance, and as the object moves closer, more and more vertices are added.
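The distance dependence boils down to something like the following, written out in C++ for clarity - roughly what a DX11 hull shader's patch-constant function computes per patch (the near/far values and function name are illustrative, not the demo's actual settings):

```cpp
#include <algorithm>

// Distance-based tessellation factor: full detail (up to the DX11
// maximum of 64) close to the camera, collapsing to the plain base
// mesh (factor 1) beyond far_dist. Numbers are purely illustrative.
float tess_factor(float dist, float near_dist, float far_dist, float max_factor) {
    float t = std::clamp((dist - near_dist) / (far_dist - near_dist), 0.0f, 1.0f);
    return 1.0f + (1.0f - t) * (max_factor - 1.0f);
}
```

This is also why the spikes "grow" as you approach: beyond `far_dist` the factor is 1, so the base mesh simply has no vertices for the displacement map to push into spikes.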
 
I think this deserves its own thread on the PC forum.

We've already discussed tessellation and displacement in general here, so there's room for it. Maybe the actual demo and its related issues could move to a general 3D tech forum, but I wanted to show what we've been talking about regarding the X360's abilities.
 
The scene where you approach the dragon: you see that the "spikes" grow while approaching him, which seems to be a smooth process.

Is this just to show the different possible "detail levels" of the adaptive mesh? Or is this how tessellation works in general?

Because it is kind of odd to see such large details (like the spikes) come out of nowhere while approaching the dragon. I mean, you should be able to see his spikes even from far away?

Maybe you can enlighten me :idea:

It is because of the approach here: basically a flat texture (the displacement map) gets turned into real geometry to define the object's shape. In practice, the dragon in a game would have a low-poly mesh with spikes, and when the player camera gets closer, more geometry is added. And due to the distance, the difference between low and ultra-high mesh detail would barely be visible, just like the 'pop-in' in other games when switching LODs.
 
When you view the thing from afar, there's no tessellation at all, and the base mesh (pre-tessellation) does not have enough geometry to displace those spikes.

The first solution would be to manually model all the spikes into the base mesh, which would seriously increase the polygon count and kinda defeat the very purpose of adaptive tessellation and displacement.
One could also try to adjust the settings of the tessellation to kick in sooner, but that could cause aliasing and generate too much geometry for the GPU to handle.

Also see my comments above, regarding visibility and collision related issues.

So as exciting as it sounds on paper, it does introduce some troubles in practice, which is one of the reasons why it's not used that much yet.

Thanks! I just watched the 2nd movie... I think you can sometimes see scenes where tessellation kicks in (a rather sharp transition) and "large detail" pops in (I noticed it while watching the bricks in the wall).

So tessellation means refining the mesh, i.e. using more tris.

What about moving vertices? Without increasing the overall number of tris, you could concentrate them where they are needed by a simple linear mapping of the vertices to new positions... moving the vertices can be done "smoother" (I guess), in contrast to increasing the tris, which seems to be an inherently non-smooth process. Is something like this used in practice?
 
It is because of the approach here: basically a flat texture (the displacement map) gets turned into real geometry to define the object's shape. In practice, the dragon in a game would have a low-poly mesh with spikes, and when the player camera gets closer, more geometry is added. And due to the distance, the difference between low and ultra-high mesh detail would barely be visible, just like the 'pop-in' in other games when switching LODs.
Yes, thanks...I understand it while watching the second movie!
 
What about moving vertices? Without increasing the overall number of tris, you could concentrate them where they are needed by a simple linear mapping of the vertices to new positions... moving the vertices can be done "smoother" (I guess), in contrast to increasing the tris, which seems to be an inherently non-smooth process. Is something like this used in practice?

You mean, localize new vertices to the part of the mesh that gets more displacement?

There may be tessellation schemes that support that - I'm no expert on this one - but it seems to me that DX11 doesn't support this. Seems to be far too complicated and inefficient.
 
Here is an example of how Insomniac solves the problem of transparent objects.
R&C:ACiT runs at 60fps and most of the objects in the game are transparent.

http://img210.imageshack.us/img210/5492/dsc00848f.jpg

The game has excellent shadows too.

Does anybody know how Insomniac solves this overdraw problem?
 
Here is an example of how Insomniac solves the problem of transparent objects.
R&C:ACiT runs at 60fps and most of the objects in the game are transparent.

http://img210.imageshack.us/img210/5492/dsc00848f.jpg

There's very little transparent overdraw there, it's a very different case.

Particle effects mean hundreds of transparent quads on top of each other, which creates huge amounts of overdraw - dozens of layers - and that's what causes problems for the PS3 at full res.
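A back-of-the-envelope sketch of why the overdraw hurts - every alpha-blended layer reads and writes its destination pixel, so the blend traffic scales with resolution, average overdraw, and bytes per pixel (the function and the numbers below are illustrative, not measured figures from any game):

```cpp
#include <cstdint>

// Rough framebuffer blend traffic per frame: each blended layer costs
// one read and one write of the destination pixel.
uint64_t blend_bytes_per_frame(uint64_t width, uint64_t height,
                               uint64_t avg_layers, uint64_t bytes_per_pixel) {
    return width * height * avg_layers * bytes_per_pixel * 2; // read + write
}
```

At 1280x720 with an average of 30 layers of 32-bit pixels, that's about 221 MB of blend traffic per frame - over 13 GB/s at 60 fps - which is why quarter-res particle buffers are such a common workaround.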
 
There's very little transparent overdraw there, it's a very different case.

Particle effects mean hundreds of transparent quads on top of each other, which creates huge amounts of overdraw - dozens of layers - and that's what causes problems for the PS3 at full res.

But that is not the original question that Joker454 raised. He said that when an object is in between the camera and the main target (monster), that object becomes transparent so the player can still see the main target, thus creating the overdraw problem. Like in the Ghostbusters game.

Particle effects were and still are a problem with modern graphics processors. Christer Ericson has a long list of solutions to the problem on his blog.
 
Here is an example of how Insomniac solves the problem of transparent objects.
R&C:ACiT runs at 60fps and most of the objects in the game are transparent.

The game has excellent shadows too.

Does anybody know how Insomniac solves this overdraw problem?

You will find here a presentation about their deferred renderer (in fact a light pre-pass renderer, based on the work of W. Engel - you may want to read one of his presentations too ;) )

I should read the presentation again, but if memory serves right, transparencies are handled in a forward rendering fashion (I don't think you have a choice anyway).
 
But that is not the original question that Joker454 raised. He said that when an object is in between the camera and the main target (monster), that object becomes transparent so the player can still see the main target, thus creating the overdraw problem. Like in the Ghostbusters game.

Yeah, that compounds the problem, because on top of all the other effects there's this extra layer of transparency that affects a significant portion of the screen ("huge dudes"). So now they have to do the alpha blending on a much larger scale. It's not the isolated effect that's the issue.
 
Yeah, but the way I interpreted it seemed to indicate lots of reads/writes into the framebuffer, causing bandwidth trouble. Using local storage for that seems... problematic to me.
If your particle graphic fits nicely into the LS, bandwidth would be a matter of (in the optimal case) one full-screen read and one full-screen write, with tiles of the framebuffer being loaded and blended with particles.
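To make the tiling idea concrete, here's a sketch in plain C++ (no Cell intrinsics - the DMA transfers are only marked in comments, the blend is a simple lerp, and all types and sizes are illustrative): each framebuffer pixel is read into the tile once and written back once, no matter how many particle layers touch it.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Particle { int x, y, w, h; uint32_t color; float alpha; };

// Stream the framebuffer through small tiles (which would fit in an
// SPU's 256 KB local store) and blend all overlapping particles per tile.
void blend_tiled(std::vector<uint32_t>& fb, int fb_w, int fb_h,
                 const std::vector<Particle>& particles, int tile = 32) {
    for (int ty = 0; ty < fb_h; ty += tile)
        for (int tx = 0; tx < fb_w; tx += tile) {
            // "DMA in": on Cell this would be a transfer into local store.
            for (const Particle& p : particles) {
                int x0 = std::max(tx, p.x), x1 = std::min(tx + tile, p.x + p.w);
                int y0 = std::max(ty, p.y), y1 = std::min(ty + tile, p.y + p.h);
                for (int y = y0; y < y1; ++y)
                    for (int x = x0; x < x1; ++x) {
                        uint32_t& dst = fb[y * fb_w + x];
                        uint32_t out = 0;
                        // lerp each 8-bit channel toward the particle color
                        for (int c = 0; c < 32; c += 8) {
                            float d = (dst >> c) & 0xFF, s = (p.color >> c) & 0xFF;
                            out |= uint32_t(d + (s - d) * p.alpha + 0.5f) << c;
                        }
                        dst = out;
                    }
            }
            // "DMA out": write the finished tile back once.
        }
}
```

The payoff is exactly the bandwidth profile described above: framebuffer traffic becomes one read plus one write per pixel regardless of overdraw, at the cost of binning particles per tile.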
 
There's very little transparent overdraw there, it's a very different case.

Particle effects mean hundreds of transparent quads on top of each other, which creates huge amounts of overdraw - dozens of layers - and that's what causes problems for the PS3 at full res.

Are there any games with full-res particles on either console so far? I don't have a working 360 anymore, but I don't think Bayonetta even has any. Half the time I was distracted by the PS3 port's many other glaring issues, so when I played the 360 one, I wasn't paying attention to something like that.

Capcom's games have been great looking so far, and even the latest demo of Lost Planet 2 still has low-res particles, and that's capped at 30Hz (at least on the 360).
 
Laa-Yosh said:
Using local storage for that seems... problematic to me.
Think tiling or something analogous to it.

It's not something that seems to have a clear solution.
Raycasting should work camera-independent - you start tessellating only once you hit the "likely" candidate (e.g. the leaf bounding box in an OBB tree), so you get accurate hits.

As for the visualization part of the problem (i.e. I can "see" you, but my shots can't reach you) - that's not new; classic LOD solutions also remove/fade out objects in the distance that should sometimes still be visible. I don't think a reliable heuristic exists that can deal with it yet. But manual adjustment has worked so far - I don't think adaptive tessellation will change much there.
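The tessellate-on-hit idea can be sketched like this: test the ray against the patch's coarse bound first, padded by the maximum displacement the map can apply, and only refine a patch whose padded bound is actually hit. This is a minimal illustration assuming an AABB per base-mesh patch; names and the padding scheme are mine, not from the thread:

```cpp
#include <algorithm>
#include <cmath>

struct AABB { float min[3], max[3]; };

// Standard slab test: does the ray (orig + t*dir, t >= 0) hit the box?
// Assumes dir components are not exactly at a degenerate zero/zero case.
bool ray_hits_aabb(const float orig[3], const float dir[3], const AABB& b) {
    float t0 = 0.0f, t1 = INFINITY;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / dir[i];
        float tn = (b.min[i] - orig[i]) * inv;
        float tf = (b.max[i] - orig[i]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
    }
    return t0 <= t1;
}

// Pad the base-mesh patch bound by the maximum displacement the map can
// apply (the 20-30 cm from the demo). A ray that misses the padded bound
// can never hit the displaced surface, so that patch is never tessellated.
AABB pad_for_displacement(AABB b, float max_disp) {
    for (int i = 0; i < 3; ++i) { b.min[i] -= max_disp; b.max[i] += max_disp; }
    return b;
}
```

Only patches whose padded bound passes the test need to be refined and displaced for an exact intersection, which keeps the cost independent of the camera's tessellation level.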
 