The Game Technology discussion thread *Read first post before posting*

Capcom's games have been great-looking so far, yet even the latest demo of Lost Planet 2 still has low-res particles, and that's capped at 30 Hz (at least on the 360)

That's a fillrate issue - they use a 1/4-res buffer (check back to the original LP/Framework article). They also make use of 4xMSAA to mitigate some artifacts with the particles.
 
In practice, the dragon in a game would have a low-poly mesh with spikes, and more geometry would be added as the player camera gets closer. And due to the distance, the difference between low and ultra-high mesh detail would barely be visible, just as in other games with 'pop-in' when switching LODs.

I'm not sure you've got this right; tessellation and displacement are very different from swapping between discrete LOD models.

You have a base mesh with a fixed number of triangles. When you turn tessellation on, each of these triangles gets broken down into more triangles, but the positions of the newly created vertices cannot be freely authored; they're mathematically derived from the positions of the original 3 vertices of the triangle.

If you want to have 300 spikes on a dragon, you can tessellate the mesh enough so that you can displace them. But if the dragon gets beyond the distance where tessellation is active, then the vertices disappear as well and you won't get the spikes.
You can model 300 spikes into the base mesh, but then it'll be quite a lot of polygons and will cause a lot of edge aliasing at a distance - unless you implement discrete LOD models as well...
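
Roughly, the difference looks like this - a minimal C++ sketch with made-up types, distances, and tuning values, not anyone's actual code:

    #include <algorithm>

    struct Mesh;  // opaque handle, hypothetical

    // Discrete LOD: swap between pre-authored meshes -> visible "pop"
    // at the switch distance; the spikes vanish with the high-poly mesh.
    Mesh* selectDiscreteLod(Mesh* lods[3], float distance) {
        if (distance < 10.0f) return lods[0];  // high poly, spikes modeled in
        if (distance < 50.0f) return lods[1];  // medium
        return lods[2];                        // low poly, no spikes
    }

    // Adaptive tessellation: one base mesh; the subdivision factor falls
    // off continuously with distance, and the new vertices get pushed
    // along the normal by a displacement-map sample.
    float tessellationFactor(float distance) {
        const float maxFactor = 16.0f, range = 50.0f;  // assumed tuning values
        float t = std::min(distance / range, 1.0f);
        return std::max(1.0f, maxFactor * (1.0f - t));
    }

When tessellationFactor drops to 1, the displaced spikes go with it - which is exactly the problem described above.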
 
If your particle graphic fits nicely into the LS, bandwidth would be a matter of (in the optimal case) one full-screen read and one full-screen write, with tiles of the framebuffer being loaded and blended with the particles.

And can you render all your particles into one 'particle graphic' and blend it with the actual framebuffer only when it's all done? I don't think so, though I could be wrong.
I mean, you need to do reads and compares against the Z-buffer of the scene geometry already rendered into the framebuffer while rendering the particle system; you can't just do it in complete isolation...

And even then, how'd you render into a full-resolution buffer without running into RSX's bandwidth problems? All on the SPUs? That seems way too complicated with large textured quads: you'd have to store the code, the texture, the vertex data, and the relevant part of the existing Z + framebuffer in that 256 KB; that'd leave room for what, 32x32 tiles? This would result in a LOT of vertex processing to bin and split those large quads... sounds very inefficient to me.
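
For scale, a back-of-the-envelope on that 256 KB - every budget number below is invented just to show the squeeze:

    #include <cstdio>
    #include <cmath>

    int main() {
        const int localStore = 256 * 1024;  // SPU local store, bytes
        const int code       = 64 * 1024;   // program + stack (assumed)
        const int texture    = 64 * 1024;   // resident particle texture (assumed)
        const int vertexData = 16 * 1024;   // binned quad vertices (assumed)
        const int remaining  = localStore - code - texture - vertexData;

        // Each tile pixel needs color (4 B) + Z (4 B), double-buffered for DMA.
        const int bytesPerPixel = (4 + 4) * 2;
        const int pixels = remaining / bytesPerPixel;
        const int side   = (int)std::sqrt((double)pixels);
        std::printf("~%d pixels per tile, roughly %dx%d\n", pixels, side, side);
        return 0;
    }

Even with generous assumptions you end up with tiles under 100 pixels on a side, and at 720p every big particle quad would have to be binned and clipped against over a hundred of them per frame.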
 
Yeah, that compounds the problem because of all the other effects and this extra layer of transparency that affects a significant portion of the screen ("huge dudes"). So now they have to do the alpha blending on a much larger scale. It's not the isolated effect that's the issue.

Here's another pic with particle effects (themselves transparent) and transparent objects.
I could throw a time bomb into that scene too and fill the whole screen with transparent objects, but I've already turned off the console.

 
Think tiling or something analogous to it.

See my reply to Shifty - sounds way too complicated to work in practice.


Raycasting should work camera-independently - you start tessellating only once you hit the "likely" candidate (e.g. the leaf bounding box in an OBB tree), so you get accurate hits.

As for the visualization part of the problem (i.e. I can "see" you, but my shots can't reach you) - that's not new; classic LOD solutions also remove or fade out objects in the distance that should sometimes still be visible. I don't think a reliable heuristic exists that can deal with it yet. But manual adjustment has worked so far - I don't think adaptive tessellation will change much there.
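
On the raycast point, here's a skeletal C++ sketch of what I mean (all types and stubs hypothetical): refine only inside the leaf box the ray actually reaches, independent of whatever the camera-driven tessellation currently shows.

    struct Ray {};
    struct Hit {};
    struct Patch {};

    struct ObbNode {
        bool     leaf = true;
        ObbNode* child[2] = { nullptr, nullptr };
        Patch*   patch = nullptr;  // base-mesh patch inside a leaf box
    };

    // Stubs standing in for the real intersection routines.
    bool rayVsBox(const Ray&, const ObbNode&) { return true; }
    bool rayVsTessellatedPatch(const Ray&, const Patch&, Hit*) { return false; }

    bool raycast(const Ray& ray, const ObbNode& node, Hit* hit) {
        if (!rayVsBox(ray, node)) return false;  // cheap coarse rejection
        if (node.leaf && node.patch)             // only here do we pay for
            return rayVsTessellatedPatch(ray, *node.patch, hit);  // refinement
        return (node.child[0] && raycast(ray, *node.child[0], hit)) ||
               (node.child[1] && raycast(ray, *node.child[1], hit));
    }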

Well, I'll trust your opinion on this one ;)
 
Here's another pic with particle effects (themselves transparent) and transparent objects.
I could throw a time bomb into that scene too and fill the whole screen with transparent objects, but I've already turned off the console.


There's not much transparent overdraw here...

And btw, can you use the thumbnail img link next time? We don't need to download such big image files every time the forum page loads just to follow the rest of the discussion...
 
..., they're mathematically derived from the positions of the original 3 vertices of the triangle.
Question:
Do they really only take the positions of the 3 vertices of the tri (the one being broken down) into account?

Or do they additionally use the normals to "reconstruct" the position of the new vertex?

Or do they use (instead of the normals) additional vertices of neighboring tris to "reconstruct" the position of the new vertex?
 
There's not much transparent overdraw here...

And btw, can you use the thumbnail img link next time? We don't need to download such big image files every time the forum page loads just to follow the rest of the discussion...

If you look at the glass door on the far right, there are a lot of spaceships flying around the building; each and every spaceship seen through it should cause an overdraw issue, no?
 
Z-culling will help with opaque objects and fillrate, but I mean transparent overdraw, where Z-culling won't help at all. :) So it's the multiple layers of transparency that will be problematic.
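
To put a number on why that hurts: alpha blending is a read-modify-write per covered pixel, so every layer pays full price. Resolution and layer count below are just for illustration:

    #include <cstdio>

    int main() {
        const long long pixels = 1280LL * 720;  // 720p, assumed
        const int layers = 8;                   // screens' worth of overlap
        const int bytesPerBlend = 4 + 4;        // RGBA8 dest read + write
        const long long traffic = pixels * layers * bytesPerBlend;
        std::printf("%lld MB of blend traffic per frame\n",
                    traffic / (1024 * 1024));
        // ~56 MB/frame; ~1.6 GB/s at 30 fps, and Z-culling rejects none of it.
        return 0;
    }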
 
Question:
Do they really only take the positions of the 3 vertices of the tri (the one being broken down) into account?

Or do they additionally use the normals to "reconstruct" the position of the new vertex?

Or do they use (instead of the normals) additional vertices of neighboring tris to "reconstruct" the position of the new vertex?

Depending on the scheme they could theoretically use any of these, although for performance reasons it's probably better to only use the actual 3 vertex normals and no neighboring vertices.

The important thing is that tessellation on its own does not add any detail; it can only smooth out the existing curvature.
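
For the curious, this is roughly what the normal-based "reconstruction" looks like in one well-known scheme (ATI's PN triangles); the snippet below is a simplified C++ rendering of the cubic edge evaluated at its midpoint, not production code, and it assumes unit-length normals:

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(Vec3 b) const { return { x + b.x, y + b.y, z + b.z }; }
        Vec3 operator-(Vec3 b) const { return { x - b.x, y - b.y, z - b.z }; }
        Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
    };
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // New vertex on an edge midpoint: not just the average of the endpoints.
    // The endpoint normals n0/n1 pull it off the flat edge so the refined
    // mesh follows the curvature the normals imply - smoothing, no new detail.
    Vec3 curvedEdgeMidpoint(Vec3 p0, Vec3 n0, Vec3 p1, Vec3 n1) {
        Vec3 mid = (p0 + p1) * 0.5f;   // flat midpoint (plain tessellation)
        Vec3 d = p1 - p0;
        return mid + (n1 * dot(n1, d) - n0 * dot(n0, d)) * 0.125f;
    }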
 
Depending on the scheme they could theoretically use any of these, although for performance reasons it's probably better to only use the actual 3 vertex normals and no neighboring vertices.

The important thing is that tessellation on its own does not add any detail; it can only smooth out the existing curvature.
A little question: are displacement maps heavy, or can they be packed into reasonably sized textures?
 
A little question: are displacement maps heavy, or can they be packed into reasonably sized textures?

16-bit greyscale is the suggested quality; that should be about the same size as a normal map with only two channels stored, and most likely it can be compressed too.
Depending on the tessellation quality you may even downsize it once without any loss of detail... Assuming you keep the normal map as well, for the smaller details, but that's a no-brainer IMHO.
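
Quick size math for reference (map resolution assumed):

    #include <cstdio>

    int main() {
        const int w = 1024, h = 1024;               // assumed map resolution
        const int disp16    = w * h * 2;            // 16-bit single channel
        const int halfRes   = (w / 2) * (h / 2) * 2;  // one mip down
        const int normal2ch = w * h * 2;            // two-channel normal map
        std::printf("disp: %d KB, half-res disp: %d KB, 2ch normal: %d KB\n",
                    disp16 / 1024, halfRes / 1024, normal2ch / 1024);
        // 2048 KB vs 512 KB vs 2048 KB - and block compression cuts it further.
        return 0;
    }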
 
Depending on the scheme they could theoretically use any of these, although for performance reasons it's probably better to only use the actual 3 vertex normals and no neighboring vertices.

The important thing is that tessellation on its own does not add any detail; it can only smooth out the existing curvature.

Thank you very much for the explanation! I get it now :) No additional detail, but smoothing out the curvature via reconstruction of additional vertices, starting from a given base mesh.
 
Er, yeah, although smoothing the resulting mesh isn't always necessary as far as I know (sometimes, on very low res base meshes, it'd cause more trouble for the displacement). I may have to re-read the ATI docs ;)
 
There's not much transparent overdraw here...

And btw, can you use the thumbnail img link next time? We don't need to download such big image files every time the forum page loads just to follow the rest of the discussion...

But you can go up to those glass doors and fill the entire screen with transparency. :rolleyes:
 
One layer isn't going to be bad... I don't think you're getting the point or reading my posts...

So roll eyes to yourself, thank you and leave the attitude for other forums.
 
Does anybody know how Insomniac solves this overdraw problem?

Insomniac (like every PS3 studio) renders the transparency pass into a low-resolution buffer, then blends it back into the main scene. Look at Resistance 2: the artifacts from this process are easily visible to me, although I suspect most users won't notice them, which is why the technique is feasible. Rendering to a low-res buffer is something Sony themselves were suggesting for porting 360 games to PS3 to deal with the huge hardware bandwidth difference; they mentioned it at a PS3 DevCon many years back. It's common practice now.
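
For anyone wondering what the blend-back step amounts to, here's a CPU-side C++ sketch (buffer layouts and names invented; real versions run on the GPU with bilinear upsampling and depth-aware tricks):

    #include <vector>

    struct Rgba { float r, g, b, a; };

    // Composite a low-res particle buffer over the full-res scene.
    // Nearest-neighbour upsample for brevity; the blocky edges it produces
    // are exactly the kind of artifact mentioned above.
    void compositeLowRes(std::vector<Rgba>& scene, int sw, int sh,
                         const std::vector<Rgba>& low, int lw, int lh) {
        for (int y = 0; y < sh; ++y) {
            for (int x = 0; x < sw; ++x) {
                const Rgba& p = low[(y * lh / sh) * lw + (x * lw / sw)];
                Rgba& d = scene[y * sw + x];
                d.r = p.r + d.r * (1.0f - p.a);  // premultiplied-alpha "over"
                d.g = p.g + d.g * (1.0f - p.a);
                d.b = p.b + d.b * (1.0f - p.a);
            }
        }
    }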


And even then, how'd you render into a full-resolution buffer without running into RSX's bandwidth problems? All on the SPUs? That seems way too complicated with large textured quads: you'd have to store the code, the texture, the vertex data, and the relevant part of the existing Z + framebuffer in that 256 KB; that'd leave room for what, 32x32 tiles? This would result in a LOT of vertex processing to bin and split those large quads... sounds very inefficient to me.

I actually considered a software renderer for the transparency pass on PS3 back in 2007. Yeah, it would have been a complex task :) What ultimately made us kill the idea, though, was that we didn't expect the idling-SPU situation to last (they weren't used as much in 2007), and we figured spending SPUs on transparencies was not a good solution given everything else that was going to be thrown at them in the future. This turned out to be a good choice, as SPUs are heavily used now, even though most studios won't publicly advertise that fact. Given the hardware, I still believe the correct solution is to design the issue away when you can, and use low-res buffers everywhere else.

Incidentally, some 360 games do this as well when transparency use is extreme. I worked on one game where the transparencies were 1/4 sized on the 360 version (I took them down to 1/16 on the PS3 version), which sounds extreme, but there was so little detail in these transparencies that no one was gonna notice even when we did many screens' worth of overdraw. I think sebbbi's awesome 360 game Trials HD does it as well; I believe I can see artifacts on the explosions. More often than not though, alpha can be considered free on the 360 for 'typical' gaming alpha needs.
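
The fill savings are easy to see in numbers (720p assumed):

    #include <cstdio>

    int main() {
        const int full = 1280 * 720;  // pixels touched per layer at full res
        std::printf("full: %d, 1/4 area: %d, 1/16 area: %d\n",
                    full, full / 4, full / 16);
        // 921600 -> 230400 -> 57600: every halving of width and height cuts
        // blend fill and bandwidth 4x, so many screens of overdraw get cheap.
        return 0;
    }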
 
Just a bit on that topic, I also found Bungie's presentation on sfx quite interesting:
http://www.bungie.net/images/Inside/publications/presentations/halo3_fx.zip

For example, they're using only one grayscale texture plus an 8-bit palette for all of the weapon muzzle effects to conserve memory, and this is their approach for most of the other effects too. It doesn't talk about programming-related issues though ;)
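
The idea boils down to something like this (a hypothetical C++ sketch of the lookup, not Bungie's code):

    #include <cstdint>

    struct Rgba8 { std::uint8_t r, g, b, a; };

    // One shared greyscale intensity texture + a tiny per-effect palette
    // that maps intensity to color.
    Rgba8 shadeEffectTexel(const std::uint8_t* greyTex, int texW,
                           const Rgba8 palette[256], int u, int v) {
        std::uint8_t intensity = greyTex[v * texW + u];  // shared sample
        return palette[intensity];                       // per-effect ramp
    }
    // Memory: a 256x256 greyscale map is 64 KB shared across effects, plus
    // 1 KB of palette per effect, instead of a 256 KB RGBA map per effect.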

Yeah, even knowing that they do it, I still don't notice it at all. :LOL: Not a bad trade-off for memory there.
edit: the comparison pics in the pptx are quite telling of how effective it was for differentiation.

edit 2: One interesting bit in ODST is that they've noticeably increased the distance fall-off rate for the screen effect on explosions (from what I've seen).
 