Is tessellation possible on the PS3?

Butta

Is there any way that tessellation can be supported on the PS3 using the SPUs? As I understand it, tessellation would send more verts to the GPU (which is already vert bound), so it would not be easily supported. Is there any way, through trickery or such, to avoid having to send all of these additional verts to the GPU? Is it even worth the effort?
 
PhyreEngine uses tessellated geometry in their LOD system:
http://www.technology.scee.net/files/presentations/cedec2008/PhyreEngine_CEDEC2008Speech_e.pdf

Basic Performance

Now I’ll talk a little about our performance.
To give an idea of the scene complexity, there are typically about 20 patches visible on screen in our terrain. If we increased the scale of our terrain, we might have more patches, but all the extra ones would be in the distance, and thus of the lowest LOD and wouldn’t massively affect performance.
This gives us around 70,000 triangles on-screen. Note that we’re not including any geometry which is rendered multiple times, such as for rendering a shadow pass.
At runtime, the tessellation of the terrain takes only half of one SPU - this time including processing the terrain for the shadow-map generation. Considering that the terrain accounts for much of the visible scene and contains a lot of geometry, this is a reasonably good use of resources.
Our decompression is more variable and tends to use up whatever SPU time is left over after the geometry and other operations (such as skinning, animation, etc.) The system expects a small amount of lag in the streaming, and in practice it happens quickly enough that it isn’t visible to the player.
 
Realtime tessellation isn't meant to increase the polygon budget.

Depends on how much you use it and where...

If you have a single-polygon ground plane and tessellate it for display, then yes, it isn't meant to increase anything beyond normal. In this case you get some version of continuous LOD on the terrain.

But if you have an alien in AvP and want to use displacement mapping on the character, then no, tessellation is used exactly for that: to create enough polygons for displacement.
 
Realtime tessellation isn't meant to increase the polygon budget.

Of course it isn't; it's meant to increase the total number of polygons on a (low-complexity) model. However, the question was whether the SPEs can do tessellation, and they can. But RSX has to process all those new vertices, and it isn't well suited to a massive poly increase.

If we go on a tangent we can talk about using de-/tessellation to drop the per model poly count in the distance and increase in the foreground. Or we can talk about using SPEs for early Z cull, which I believe is already happening. Tessellation is only really useful if you aren't already hitting a vertex processing wall AND artists have created relatively simple models. Then you're at least saving I/O bandwidth with loading small models.
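To make that "poly count scaled by distance" idea concrete, here's a minimal CPU-side sketch of the sort of thing an SPU job could do - the structure names and distance thresholds are all made up for illustration, it's not anyone's real engine code:

Code:
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 mid(const Vec3& a, const Vec3& b) {
    return { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y), 0.5f * (a.z + b.z) };
}

struct Tri { Vec3 a, b, c; };

// Pick how many times to subdivide, based on distance to the camera.
// Purely illustrative thresholds.
static int subdivisionLevel(float distanceToCamera) {
    if (distanceToCamera < 10.0f) return 3;  // foreground: 64x the triangles
    if (distanceToCamera < 40.0f) return 1;  // midground: 4x
    return 0;                                // distance: leave the low-poly cage alone
}

// One level of midpoint subdivision: each triangle becomes four.
static void subdivideOnce(const std::vector<Tri>& in, std::vector<Tri>& out) {
    out.clear();
    out.reserve(in.size() * 4);
    for (const Tri& t : in) {
        Vec3 ab = mid(t.a, t.b), bc = mid(t.b, t.c), ca = mid(t.c, t.a);
        out.push_back({ t.a, ab, ca });
        out.push_back({ ab, t.b, bc });
        out.push_back({ ca, bc, t.c });
        out.push_back({ ab, bc, ca });
    }
}

// The result is what would then be handed to the GPU for vertex processing.
std::vector<Tri> tessellate(std::vector<Tri> mesh, float distanceToCamera) {
    std::vector<Tri> next;
    for (int i = 0; i < subdivisionLevel(distanceToCamera); ++i) {
        subdivideOnce(mesh, next);
        mesh.swap(next);
    }
    return mesh;
}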
 
Realtime tessellation isn't meant to increase the polygon budget.
I'm not sure I get what you mean. OK, tessellation can be seen as trading memory for computation (with the nice bonus that this is where joker's work seems to be heading), but a recent ATI video shows that (at least on their latest hardware) generating a lot of polygons can be less costly than using bump/normal/parallax displacement mapping extensively.
Clearly, to me the goal of tessellation is indeed to increase the polygon budget, but as late as possible within the graphics pipeline, so that you only pay its cost on a restricted part of the pipeline as a whole (I mean memory usage, CPU work, vertex computations, geometry shaders; it happens just before rasterisation). Its goal is to make you pay for the increased polygon budget as late as you can and in as few places as possible. I've been searching for the latest ATI videos without much success, but it seems that dealing with extra polygons instead of displacement mapping can also be a winning trade-off.
 
Yes, this (EDIT: what patsu posted) is what I thought and what I thought was missing from joker's post. You can use tessellation to keep the same amount of polygons when zooming in on an environment and vice versa (again, depending on the kind of geometry - particularly curved surfaces lend themselves to near infinite tessellation, I think). Ultimately you can't use tessellation to infinitely increase the amount of detail in a given scene - no matter the target hardware or where the first bottleneck turns up, there will be a limit to how much it can handle, or for that matter, how much you need.

In theory I have a feeling that if on a 1280x720 screen you have just under 1 million pixels (1280 × 720 = 921,600), then under the most optimised circumstances you would never need more than 1 million vertices either, and you can create a near perfect looking scene as long as you don't need it to move / zoom in, right?

So much more than being able to deal with 30 million vertices to make sure that when you zoom in the detail is still there, engines should work to optimise themselves so that they can keep showing the optimum detail for the specified resolution.

As an educational process I've been thinking about designing a pixel-based renderer that, per pixel, tries to find the intersection in the geometry, so that you always know exactly the amount of information/detail you need to inform that one pixel. I have no idea whether that's technically possible/feasible/original or whatever, but hey, I'm a beginner. ;)

I've also tried to think about setting up geometry with the minimum amount of information, so that you can calculate the details from a formula based on the pixel window and its intersection with what is defined by the (as I call them) information points. These information points also include information useful for physics, stuff that belongs to shaders, etc. The idea is that you don't inform every pixel but instead interpolate between the two points that have information, so you can create near analog transitions with only minimum data.

(Don't laugh, or at least not too hard ok? ;) )
 
As an educational process I've been thinking about designing a pixel-based renderer that, per pixel, tries to find the intersection in the geometry, so that you always know exactly the amount of information/detail you need to inform that one pixel. I have no idea whether that's technically possible/feasible/original or whatever, but hey, I'm a beginner. ;)
Ray tracing...

I've also tried to think about setting up geometry with the minimum amount of information, so that you can calculate the details from a formula based on the pixel window and its intersection with what is defined by the (as I call them) information points. These information points also include information useful for physics, stuff that belongs to shaders, etc. The idea is that you don't inform every pixel but instead interpolate between the two points that have information, so you can create near analog transitions with only minimum data.
Sounds like vertices on HOS to me. ;) The most efficient representation is CSG, and these can be interpolated/rendered to pixel and sub-pixel accuracy. An engine built solely around rendering spheres and cuboids could be jaggy-free.
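For what it's worth, the "find the intersection per pixel" idea is basically ray casting. A toy sketch, with a made-up one-sphere scene and a simple pinhole camera, just to show the shape of it:

Code:
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Nearest intersection of a ray (origin o, normalised direction d) with a sphere.
static bool raySphere(Vec3 o, Vec3 d, Vec3 center, float radius, float& t) {
    Vec3 oc = sub(o, center);
    float b = 2.0f * dot(oc, d);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * c;           // quadratic with a == 1
    if (disc < 0.0f) return false;
    t = (-b - std::sqrt(disc)) * 0.5f;
    return t > 0.0f;
}

int main() {
    const int width = 1280, height = 720;
    const Vec3 eye = { 0, 0, 0 };
    const Vec3 sphereCenter = { 0, 0, -5 };
    long hits = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // One ray per pixel through a simple pinhole camera (~90 degree vertical FOV).
            float aspect = float(width) / float(height);
            float px = ( (x + 0.5f) / width  * 2.0f - 1.0f) * aspect;
            float py = -((y + 0.5f) / height * 2.0f - 1.0f);
            Vec3 dir = { px, py, -1.0f };
            float len = std::sqrt(dot(dir, dir));
            dir = { dir.x / len, dir.y / len, dir.z / len };

            float t;
            if (raySphere(eye, dir, sphereCenter, 1.0f, t))
                ++hits;   // 't' is the nearest hit distance - exactly the info this one pixel needs
        }
    }
    std::printf("%ld of %d rays hit the sphere\n", hits, width * height);
    return 0;
}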
 
Of course there would be a limit to how many polys you could handle, but in ATI's case that limit would be on the GPU side, I think. I was under the impression that right now the limit is also on the bandwidth side. Sending a bunch of high-poly models to the GPU eats up bandwidth, but sending low-poly models that consume less bandwidth and having them subdivided into something with more geometry on the GPU sounds like an excellent solution. At least that is what I took away from one of Microsoft's presentations. If I'm wrong please correct me.
 
Yes, this (EDIT: what patsu posted) is what I thought and what I thought was missing from joker's post. You can use tessellation to keep the same amount of polygons when zooming in on an environment and vice versa (again, depending on the kind of geometry - particularly curved surfaces lend themselves to near infinite tessellation, I think). Ultimately you can't use tessellation to infinitely increase the amount of detail in a given scene - no matter the target hardware or where the first bottleneck turns up, there will be a limit to how much it can handle, or for that matter, how much you need.
If we look at the DirectX 11 pipeline (putting aside the hull and domain shader computations), the part that takes the main hit is the rasteriser, as tessellation is the last part of the pipeline that works on vertices before the pipeline moves on to fragments. (By the way, there is discussion here about the possible inclusion by ATI of a second rasterizer in their new GPU.)
In theory I have a feeling that if on a 1280x720 screen you have just under 1 million pixels (1280 × 720 = 921,600), then under the most optimised circumstances you would never need more than 1 million vertices either, and you can create a near perfect looking scene as long as you don't need it to move / zoom in, right?
So much more than being able to deal with 30 million vertices to make sure that when you zoom in the detail is still there, engines should work to optimise themselves so that they can keep showing the optimum detail for the specified resolution.
I guess it depends on the technique you use for your renderer; Epic seems to be looking at something close to REYES for their next engine, if this presentation is to be believed. That implies micropolygons, and thus more than one vertex per pixel. Right now I'm iffy, I don't have the technical background to be certain, but say you leave normal maps out of the picture: how many polygons would you need to achieve the look of, say, Epic's Gears models? Normal maps are based on super complex models.
The new ATI cards will be here soon; it will be interesting to see the perf trade-off between rasterizing a lot of triangles and relying extensively on "relief" mapping. I feel like devs (if there is wide adoption of the tech and proper content development tools catch up) will have to balance vertices against normal maps. I guess a high number of vertices won't in every case make up for normal maps based on an even higher number of polygons.
 
If we go on a tangent we can talk about using de-/tessellation to drop the per model poly count in the distance and increase in the foreground. Or we can talk about using SPEs for early Z cull, which I believe is already happening. Tessellation is only really useful if you aren't already hitting a vertex processing wall AND artists have created relatively simple models. Then you're at least saving I/O bandwidth with loading small models.
Actually, in DirectX tessellation is the last stage of the vertex processing pipeline; most of the processing is done on a non-expanded set of vertices, which makes sense.
Culling has already happened, so most likely only visible geometry is expanded/tessellated.
If ported to the PS3, SPU culling would happen before SPU tessellation.
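As a rough sketch of that ordering (made-up structures, purely illustrative): a conservative per-patch cull that could run before tessellation. Note the bounding sphere has to be padded by the maximum displacement, which ties into the objection raised a few posts further down:

Code:
struct Plane  { float nx, ny, nz, d; };        // nx*x + ny*y + nz*z + d, normal pointing into the frustum
struct Sphere { float x, y, z, radius; };

// Conservative test: is the sphere completely outside any of the six frustum planes?
static bool outsideFrustum(const Sphere& s, const Plane planes[6]) {
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].nx * s.x + planes[i].ny * s.y + planes[i].nz * s.z + planes[i].d;
        if (dist < -s.radius) return true;     // fully behind this plane
    }
    return false;
}

// Decide whether a patch is worth tessellating at all. 'maxDisplacement' pads the bound
// so vertices pushed outwards by a displacement map can't be culled away by mistake.
bool shouldTessellate(Sphere patchBounds, float maxDisplacement, const Plane frustumPlanes[6]) {
    patchBounds.radius += maxDisplacement;     // grow the bound conservatively
    return !outsideFrustum(patchBounds, frustumPlanes);
}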


Of course there would be a limit to how many polys you could handle, but in ATI's case that limit would be on the GPU side, I think. I was under the impression that right now the limit is also on the bandwidth side. Sending a bunch of high-poly models to the GPU eats up bandwidth, but sending low-poly models that consume less bandwidth and having them subdivided into something with more geometry on the GPU sounds like an excellent solution. At least that is what I took away from one of Microsoft's presentations. If I'm wrong please correct me.
Same here: tessellation happens late. I don't know the perf cost of the hull and domain shader calculations, but if I understand properly the rasterizer takes most of the hit. More vertices won't result in more fragments/pixels, so I can clearly see why some members here speculate about a second rasterizer in the HD58xx series.
 
Right now I'm iffy, I don't have the technical background to be certain, but say you leave normal maps out of the picture: how many polygons would you need to achieve the look of, say, Epic's Gears models?
More than you can happily fit in RAM! The future of tessellation is adding 3D detail through displacement maps: having a simple, RAM-light model and adding vertices, displacing them according to a map to add true 3D detail. Models at that level of detail will just be too big and cumbersome.
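A minimal sketch of that "displace according to a map" step, assuming the vertices have already been generated by the tessellation pass - the HeightMap type, the nearest-neighbour sampling and the data layout are all made up for illustration:

Code:
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3  position;
    Vec3  normal;    // assumed normalised
    float u, v;      // texture coordinates into the displacement map, in [0, 1]
};

// A tiny stand-in for a single-channel displacement texture.
struct HeightMap {
    int width = 0, height = 0;
    std::vector<float> texels;                  // height values in [0, 1]
    float sample(float u, float v) const {      // nearest-neighbour lookup, for brevity
        int x = int(u * (width  - 1) + 0.5f);
        int y = int(v * (height - 1) + 0.5f);
        return texels[std::size_t(y) * width + x];
    }
};

// Push each (already tessellated) vertex out along its normal by the mapped height.
void displace(std::vector<Vertex>& verts, const HeightMap& map, float scale) {
    for (Vertex& vert : verts) {
        float h = map.sample(vert.u, vert.v) * scale;
        vert.position.x += vert.normal.x * h;
        vert.position.y += vert.normal.y * h;
        vert.position.z += vert.normal.z * h;
    }
}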
 
Actually, in DirectX tessellation is the last stage of the vertex processing pipeline; most of the processing is done on a non-expanded set of vertices, which makes sense.
Culling has already happened, so most likely only visible geometry is expanded/tessellated.
If ported to the PS3, SPU culling would happen before SPU tessellation.

DirectX and SPE are completely different things. On a DX11 part, yes, I guess it happens post vertex pass. RSX doesn't have hardware tessellation though, so you have to do it pre-vertex pass.

I'm not sure you can do z-cull before tessellation anyway. New geometry could easily expose previously hidden polygons. Someone smarter will have to chime in on that.
 
More than you can happily fit in RAM! The future of tessellation is adding 3D detail through displacement maps: having a simple, RAM-light model and adding vertices, displacing them according to a map to add true 3D detail. Models at that level of detail will just be too big and cumbersome.
That's what I was thinking; I don't remember the figure, but normal maps are based on an impressive number of polygons.
I wonder if the tessellator could be pushed to micro/sub-pixel polygons, to avoid the memory cost and still handle transforming that many vertices. It's possible that both techniques will indeed coexist.

DirectX and SPE are completely different things. On a DX11 part, yes, I guess it happens post vertex pass. RSX doesn't have hardware tessellation though, so you have to do it pre-vertex pass.
Yes, the same SPU culling happens.
I'm not sure you can do z-cull before tessellation anyway. New geometry could easily expose previously hidden polygons. Someone smarter will have to chime in on that.
In some cases I can imagine that this could happen, but I think it's worth the trade-off; tessellating/expanding the whole, unculled geometry would increase the cost of the technique.
It's not displacement; it's about maxing out the detail, not magnifying the height difference between two vertices. For example, this from a previous ATI presentation shows tessellation versus displacement:
ploygons.jpg

This is tessellation only:
IMG0019986.jpg

It doesn't make things bumpier, but smoother.
Here too the difference between displacement and tessellation is clear:
tesselation.jpg


Actually, more insight is needed, but what we're talking about could be true with or without tessellation: if you do some displacement and have culled supposedly non-visible geometry, in the end some details that should have been displayed get lost.
 
Right now I'm iffy, I don't have the technical background to be certain, but say you leave normal maps out of the picture: how many polygons would you need to achieve the look of, say, Epic's Gears models? Normal maps are based on super complex models.

Their source models for characters are around 20-30 million polygons. Some 50-70% of it can actually be optimized away without noticeable loss of detail but it'll create very messy geometry... and you'd want to do some HOS + displacement mapping to store them anyway.

By the way I don't think tessellation would make normal maps obsolete. We regularly use both, because you'd need insane amounts of vertices to replicate the shading detail, quite a lot more than 1 per pixel (at least 4, preferably). But it's overkill... so you should go with a low-res displacement map and a high-res normal map combo, and as you move the object away from the camera, you can completely drop displacement but keep the (MIP-mapped) normal map for nice shading.
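A tiny sketch of the "drop displacement with distance, keep the normal map" part - the distances are purely illustrative:

Code:
#include <algorithm>

// Scale factor applied to the displacement as the object recedes; the (MIP-mapped)
// normal map stays on at every distance. The distances here are purely illustrative.
float displacementFade(float distanceToCamera) {
    const float fullDetailDistance = 5.0f;   // closer than this: full displacement
    const float cutoffDistance     = 25.0f;  // farther than this: displacement dropped entirely
    float t = (distanceToCamera - fullDetailDistance) /
              (cutoffDistance - fullDetailDistance);
    return 1.0f - std::clamp(t, 0.0f, 1.0f);
}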
 
Their source models for characters are around 20-30 million polygons. Some 50-70% of it can actually be optimized away without noticeable loss of detail but it'll create very messy geometry... and you'd want to do some HOS + displacement mapping to store them anyway.

By the way I don't think tessellation would make normal maps obsolete. We regularly use both, because you'd need insane amounts of vertices to replicate the shading detail, quite a lot more than 1 per pixel (at least 4, preferably). But it's overkill... so you should go with a low-res displacement map and a high-res normal map combo, and as you move the object away from the camera, you can completely drop displacement but keep the (MIP-mapped) normal map for nice shading.

Cool. How would you deal with hair?...or is that a dumb question.
 
Just curious, I thought normal maps are supposed to replace some of the polygon detail in order to produce a highly detailed model without the use of complex geometry that would take too much performance otherwise (i.e. in Gears the in-game models are feasible on the hardware, yet the source model would have been impossible to run in real time without killing performance). Tessellation seems to do the opposite. It's as if it is adding polygons to increase detail. Doesn't this mean that we need much more powerful hardware to do that?
 
Just curious, I thought normal maps are supposed to replace some of the polygon detail in order to produce a highly detailed model without the use of complex geometry that would take too much performance otherwise (i.e. in Gears the in-game models are feasible on the hardware, yet the source model would have been impossible to run in real time without killing performance). Tessellation seems to do the opposite. It's as if it is adding polygons to increase detail. Doesn't this mean that we need much more powerful hardware to do that?

Nah, normal maps don't store poly detail, they store the lighting normals of the high-res mesh, so the low-res model appears properly lit/shadowed. What tessellation does is take a low-res cage and subdivide/smooth the geometry, effectively building a high-res cage in the process. That cage can then be displaced, and normal mapped itself, to look much closer to what today we'd expect from source art. (That was a major over-simplification, but you get the point.)
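To illustrate the "normals, not geometry" point: a normal-map texel decodes to a direction that feeds the lighting, and nothing else (ignoring the tangent-space transform for brevity; all names here are illustrative):

Code:
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// A normal-map texel stores a direction, not geometry: each channel maps [0, 255] -> [-1, 1].
static Vec3 decodeNormal(unsigned char r, unsigned char g, unsigned char b) {
    Vec3 n = { r / 255.0f * 2.0f - 1.0f,
               g / 255.0f * 2.0f - 1.0f,
               b / 255.0f * 2.0f - 1.0f };
    float len = std::sqrt(dot(n, n));
    return { n.x / len, n.y / len, n.z / len };
}

// The decoded normal only changes the shading; the silhouette stays whatever the mesh gives you.
float lambert(Vec3 decodedNormal, Vec3 dirToLight /* normalised */) {
    return std::max(dot(decodedNormal, dirToLight), 0.0f);
}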

One benefit of hardware tailored to allow for this is that it should require less bandwidth overhead for meshes, despite allowing for much greater image complexity. It's sort of a trade-off, though, because while you may need less memory bandwidth, you need to draw more geometry.
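To put rough numbers on that bandwidth trade-off, a purely illustrative back-of-the-envelope calculation (the vertex size and counts are made up):

Code:
#include <cstdio>

int main() {
    // Purely illustrative: a low-poly cage streamed to the GPU versus the same mesh
    // pre-expanded to the detail a tessellator would otherwise generate on-chip.
    const long long bytesPerVertex   = 32;        // packed position + normal + UV
    const long long cageVertices     = 10'000;
    const long long expandedVertices = 640'000;   // roughly the cage subdivided three times

    std::printf("cage:     %lld KB\n", cageVertices     * bytesPerVertex / 1024);   // ~312 KB
    std::printf("expanded: %lld KB\n", expandedVertices * bytesPerVertex / 1024);   // ~20000 KB
    return 0;
}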
 
milan616 said:
If we go on a tangent we can talk about using de-/tessellation to drop the per model poly count in the distance and increase in the foreground
That's kinda my point - we can rebalance the budget, not increase it. I.e. in the theoretical ideal you'd have adaptive tessellation on everything, and get detail only where it is actually visible.
Though the way things are going we might have hw that tessellates to subpixel levels "for free" soon enough to make the whole discussion moot.
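As a sketch of what "adaptive" usually means in practice: split edges until their projected length on screen is roughly some target number of pixels. Everything below (the target, the clamp, the pinhole-projection approximation) is illustrative:

Code:
#include <algorithm>
#include <cmath>

// Split an edge until its pieces are roughly 'targetPixelsPerEdge' long on screen.
float edgeTessellationFactor(float edgeLengthWorld,
                             float distanceToCamera,
                             float verticalFovRadians,
                             float screenHeightPixels,
                             float targetPixelsPerEdge = 8.0f,
                             float maxFactor = 64.0f) {
    // Approximate projected size: at distance d the frustum is 2*d*tan(fov/2) world units tall.
    float pixelsPerWorldUnit = screenHeightPixels /
                               (2.0f * distanceToCamera * std::tan(verticalFovRadians * 0.5f));
    float edgeLengthPixels = edgeLengthWorld * pixelsPerWorldUnit;
    return std::clamp(edgeLengthPixels / targetPixelsPerEdge, 1.0f, maxFactor);
}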

liolio said:
Culling has already happened, so most likely only visible geometry is expanded/tessellated.
If that were true, it would make most of the interesting uses of tessellation impossible (displacement, volume expansion, non-linear shadow projections, etc.).

Anyway - putting tessellation late enough in the pipeline does give scenarios (on certain hardware) where you can pump polygons out faster than the regular vertex pipeline (many of us have done this in some fashion or another back in the PS2 days), but unless hw can bypass the setup stage altogether and go straight to pixels, you are still facing the same poly budget overall.
 