Tile rendering on PS3

Sorry, I can explain it better: "a full description of a scene" means "a capture of all the geometry you're going to rasterize in a given frame" or "some other representation of the geometry you're going to render".
When you say:
If you can ensure a good-enough front-to-back rendering, I believe you can: at some point, you know that nothing will mask what you are about to send.
That point comes when you can analyze all your data.
What you can do is check bboxes or convex hulls instead of final primitives, but then you can't tell anything about single pixels anymore.
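Something along these lines, say (a rough C++ sketch, all names and layout invented; the real thing would be hand-tuned SPU code):

```cpp
#include <algorithm>

// Hypothetical low-res software depth buffer, e.g. rasterized on an SPU.
// Convention: smaller z = nearer. Each texel holds the FARTHEST depth of the
// occluders that fully cover it (untouched texels stay at "infinity").
struct CoarseZBuffer {
    static const int W = 80, H = 45;   // much coarser than the frame buffer
    float farZ[H * W];
};

// x0..y1: screen rect of the projected bbox, minZ: nearest depth of the box.
// Returns true only if the box is guaranteed hidden (conservative).
bool boxOccluded(const CoarseZBuffer& zb,
                 int x0, int y0, int x1, int y1, float minZ)
{
    x0 = std::max(x0, 0);  y0 = std::max(y0, 0);
    x1 = std::min(x1, CoarseZBuffer::W - 1);
    y1 = std::min(y1, CoarseZBuffer::H - 1);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (minZ < zb.farZ[y * CoarseZBuffer::W + x])
                return false;          // the box may poke through here
    return true;                       // every texel is already covered nearer
}
```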
 
nAo said:
What you can do is check bboxes or convex hulls instead of final primitives, but then you can't tell anything about single pixels anymore.

I get your point. And more generally, I already agreed that the per-pixel stuff was insane anyway (I didn't find any smiley with a big joint in the editor :D)
Now we only need a rough approximation to safely send some triangle packs to the GPU, which seems quite feasible.
 
MrWibble said:
Again, that's likely to be a lot quicker and easier than spending CPU resources implementing a secondary rasterisation.
Actually ideally NVidia would give us the spec and access to the H-Z, so we could copy it over to SPU for doing adaptive stuff.

Of course who am I kidding - we should consider ourselves lucky NVidia tells us such fine details as there being letters G, P and U in the thing...
But I digress.

purpledog said:
The idea is simple: before rendering at the wanted precision, a lower-detail version is first checked against our SPU depth buffer.
While it could work, I have to question whether it would be worth the while. Maybe it's because I considered variations of this back in the PS2 days that I doubt it more... but anyway.
Perhaps if you progressively refined your mesh on a visibility basis only when the other adaptive parameters (distance, screen-space coverage, etc.) already tell you it needs very fine detail - but then we're also back to the argument that more conservative occlusion queries with just bounding-box analysis could be enough.
But then I noticed the discussion already came to that conclusion on other issues as well.

Note: because adding detail to a triangle depends on its position, all the mesh deformers are processed beforehand, hence on the SPUs (no skinning on the GPU).
Oh, this goes without saying - if I am to do any kind of tessellation, the last thing I want to do is animate after it. :)

nAo said:
To be fair, I remember it was far more complex than that... there were other tests, but I don't remember what exactly I was doing anymore, I'm getting old!
Well, the big picture matters ;) (do people only say that when they get old?) At any rate, it's a lot like what I was doing, aside from having the fine bbox checks on the CPU side.

btw PS2 ROCKS!! ehm... sorry
Yea, they say you never forget your first :LOL:
 
Fafalada said:
Actually ideally NVidia would give us the spec and access to the H-Z, so we could copy it over to SPU for doing adaptive stuff.

Of course who am I kidding - we should consider ourselves lucky NVidia tells us such fine details as there being letters G, P and U in the thing...
But I digress.

But Faf, could you imagine if they did? There would be developers... literally, drowning in data to think about. The ones without Mountain Dew or Nutella would die. Forums would ignite and we'd have yet more horror stories about how hard the PlayStation is to program. It would be positively horrific. Actually, it would be more like the PS2; but abstraction is king! May the copy 'n paste programmer rise!
 
Fafalada said:
While it could work, I have to question whether it would be worth the while. Maybe it's because I considered variations of this back in the PS2 days that I doubt it more... but anyway.
Perhaps if you progressively refined your mesh on a visibility basis only when the other adaptive parameters (distance, screen-space coverage, etc.) already tell you it needs very fine detail - but then we're also back to the argument that more conservative occlusion queries with just bounding-box analysis could be enough.
But then I noticed the discussion already came to that conclusion on other issues as well.

I don't understand your point here :cry:

First, the context: a very detailed scene described with a progressive mesh. Btw, there are a lot of different progressive mesh descriptions out there, but let's stick with a general definition: a hierarchical representation where you add more and more detail as you go deeper and deeper into the tree.

Then obviously, you want to cull the tree and evaluate only a small portion of it. Again, to simplify, let's forget about any time-coherence optimisation.
Here are the different culling criteria you want to apply:
- frustum
- backface
- precision
- visibility

Ideally, if these culling steps are done well, you only render the triangles needed to cover the screen, and no more.
Obviously, you don't want to generate everything and cull it afterwards: much better to use some "branch & bound"-like method where you directly approximate an entire sub-tree and check it against the four culling criteria, roughly like the sketch below.
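Just to make that concrete, here's what I have in mind (a C++ sketch; every helper type and function below is assumed, not a real API, and boxOccluded would be a variant of the depth-buffer test sketched earlier in the thread):

```cpp
// Branch & bound traversal: each node carries conservative bounds for its
// whole sub-tree, so a single test can reject thousands of triangles at once.
struct Bounds;  struct Camera;  struct CoarseZBuffer;  struct Node;
bool  frustumIntersects (const Camera&, const Bounds&);
bool  fullyBackfacing   (const Camera&, const Bounds&);  // e.g. normal cone
bool  boxOccluded       (const CoarseZBuffer&, const Bounds&, const Camera&);
float screenError       (const Camera&, const Bounds&, float objSpaceError);
void  emitTriangles     (const Node*);

struct Node {
    const Bounds* bounds;        // bbox / convex hull of the entire sub-tree
    float         objSpaceError; // geometric error if we stop refining here
    Node*         child[2];      // nullptr at the leaves
};

void cullAndEmit(const Node* n, const Camera& cam, const CoarseZBuffer& zb)
{
    if (!frustumIntersects(cam, *n->bounds)) return;   // 1. frustum
    if (fullyBackfacing(cam, *n->bounds))    return;   // 2. backface
    if (boxOccluded(zb, *n->bounds, cam))    return;   // 4. visibility

    // 3. precision: stop refining once the error projects below ~1 pixel
    if (!n->child[0] || screenError(cam, *n->bounds, n->objSpaceError) < 1.f) {
        emitTriangles(n);                              // coarse enough: draw
        return;
    }
    cullAndEmit(n->child[0], cam, zb);                 // otherwise recurse
    cullAndEmit(n->child[1], cam, zb);
}
```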

Ok, you obviously know all that. I just wanted to be sure we were talking about the same thing.

So what's your point here...
Are you saying that visibility culling is too costly and should be ignored?
Are you talking about the nature of the bbox, and saying that AABBs are good enough in most cases, even for visibility?
Or are you saying that the order and occurrence of the culling tests are decisive and should be optimised?

Please discuss :smile:
 
purpledog said:
Please discuss
Well, to be honest, I wasn't really thinking of a scene described by a single progressive mesh - that's kind of a pathological case, but anyway.

What I was getting at is that detailed dynamic occlusion IMO has two defining properties - one, it's either costly or unreliable, and two, the effect within the same scene can change from great to completely useless (or even counter-productive) literally from frame to frame.
So I would argue that in the hierarchy from your example, the occlusion step should be skipped until we have already travelled deep enough in the tree that we know the associated detail will outweigh the cost of the occlusion queries, whether they fail or not.
 
Fafalada said:
What I was getting at is that detailed dynamic occlusion IMO has two defining properties - one, it's either costly or unreliable, and two, the effect within the same scene can change from great to completely useless (or even counter-productive) literally from frame to frame.

agreed

Fafalada said:
So I would argue that in the hierarchy from your example, the occlusion step should be skipped until we have already travelled deep enough in the tree that we know the associated detail will outweigh the cost of the occlusion queries, whether they fail or not.

Hmm... Something like this:

[attached image: hier.jpg]


The hierarchy (the stuff on the right) is more precise (deeper) where the camera is close.
- Zone A: it varies too much, so no need to do occlusion here; better to wait for zone B.
- Zone B: a good place to test stuff; the bounding boxes aren't too big (low fill-rate), and there's a pretty good chance of killing a big chunk of the hierarchy.
- Zone C: too close to the leaves; better to render directly without bothering with tests that won't save a lot of time anyway.

Am I interpreting correctly here?
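If so, the decision per node could look something like this (a C++ sketch; names and thresholds completely made up):

```cpp
// Decide per node whether an occlusion query is worth issuing at all.
enum Zone { ZONE_A, ZONE_B, ZONE_C };

Zone classify(float bboxScreenArea,   // fraction of the screen the bbox covers
              int   trianglesBelow)   // triangles in the sub-tree under the node
{
    // Zone A: the bbox covers too much screen, so the test itself is costly
    // (fill rate) and its answer is unstable from frame to frame.
    if (bboxScreenArea > 0.25f) return ZONE_A;   // threshold made up
    // Zone C: so little geometry left below this node that even a successful
    // query saves less than it costs.
    if (trianglesBelow < 64)    return ZONE_C;   // threshold made up
    // Zone B: cheap to test, big payoff when the sub-tree turns out hidden.
    return ZONE_B;
}
// ...and only pay for the occlusion query when classify(...) == ZONE_B.
```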

I guess the next question is: how do you know which zone you are in? A simple answer is: "with time coherency, based on the previous frame". But AFAIK, keeping the refined mesh around from frame to frame isn't easy, and pure streaming methods are often better.

Basically: is the scene re-generated every frame, or is it just slightly modified to match the new camera position?
 
Ok now I need some explanation help - I'm having trouble figuring out that diagram you drew :p

Anyway,
Basically: is the scene re-generated every frame, or is it just slightly modified to match the new camera position?
Actually, I think this could be geometry-type dependent. While in an ideal world we'd want to dynamically sample detail for everything, I just don't think that'll be feasible in any near future.
One way might be to just do a simple split - sample static geometry at discrete detail levels (caching the data rather than recreating it every frame) and do continuous detail for dynamic stuff (at least most of it).
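Roughly like this (a sketch, everything invented): static meshes keep a few pre-built LODs and just pick one; dynamic meshes get re-sampled continuously (e.g. on SPU) as in the earlier sketches.

```cpp
// Static geometry: discrete, cached levels - picking one is just a lookup.
int selectStaticLod(float distance)        // nearer camera -> finer level
{
    if (distance <  10.f) return 0;        // finest, built offline and cached
    if (distance <  40.f) return 1;
    if (distance < 150.f) return 2;        // thresholds completely made up
    return 3;                              // coarsest
}
// staticMesh.vertexBuffer[selectStaticLod(dist)] comes straight from the
// cache, while dynamic geometry goes through per-frame refinement instead.
```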
 
Fafalada said:
Ok now I need some explanation help - I'm having trouble figuring out that diagram you drew :p

Well, when I drew it, it was very clear, but now that I see it again, I'm having some problems myself... ;)
What you see is a tree (an acyclic graph), with the root (the coarsest object) at the top. This tree represents a view-dependent progressive mesh, so it is more detailed in some areas. That's what the camera was supposed to indicate: because the camera is on the left of the tree, the left part of the tree is more detailed. You can imagine geometry instead of the tree, it might help (image from Lindstrom's website):
[attached image: tvcg2002.jpg]
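For the "more detail where the camera is close" part, the usual trick (a sketch, not necessarily Lindstrom's exact formula) is to refine a node while its geometric error still projects to more than a pixel:

```cpp
#include <cmath>

// Pixels covered by an object-space error of objError at the given distance,
// for a viewport of viewportHeight pixels and vertical field of view fovY.
float screenSpaceError(float objError, float distance,
                       float viewportHeight, float fovY)
{
    return objError * viewportHeight / (2.f * distance * std::tan(fovY * 0.5f));
}
// Refine while screenSpaceError(...) > tolerance (e.g. 1 pixel): nodes near
// the camera project bigger errors, hence the deeper left side of the tree
// in the picture above.
```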



Fafalada said:
Actually, I think this could be geometry-type dependent. While in an ideal world we'd want to dynamically sample detail for everything, I just don't think that'll be feasible in any near future.

I also think that "pure" progressive meshes won't be widely used soon. They require quite complex per-vertex computations which don't map very well to current GPUs, or even SPUs.

But static geometry seems to be out of date now, let's face it. There must be something in the middle which is still efficient but a bit more flexible...
 
Current games

Fafalada said:
sample static geometry at discrete detail levels (caching the data rather than recreating it every frame) and do continuous detail for dynamic stuff (at least most of it).

Some current-gen games do this, no? At a guess, Burnout 4 and RSC2 seem to have this feature, but I'm not certain.
 