First Killzone screenshot/details? So says USAToday..

There have been arguments on this forum that, generally, a huge increase in physics processing would have to be made for a noticeable difference to be seen in-game. Given the sheer power of Cell, would it eventually be possible for code to become efficient enough that game code (AI, physics, sound) might be restricted to the PPE and two SPEs, allowing four SPEs to contribute very heavily towards complex polygons?
Cell has many more uses than polygon processing..
 
Out of interest, what do you think of Cell, with its 750 thousand triangles rendered @ 60fps per SPE (from the Edge tools), freeing up the RSX to do more shading?


screw polygons, use the SPEs for vector shading, they are "great" at it.
 
Yes, I realise it could be used for more than polygon processing (and culling, etc.). However, theoretically, let's take a game like Wipeout HD, which is a last-generation game in terms of AI, physics, animation, etc. For these types of games, which are all about speed and looking pretty, rather than letting the Cell's SPEs be under-utilised, the Cell could be used for polygon processing.

In addition, if you're a 3rd-party dev without the resources for doing cutting-edge animation, physics, AI and the like, using the Cell for pre-processing and polygon processing would surely give these 3rd-party devs a use for resources that might otherwise go to waste?

(After all, using say 3 or 4 SPEs to do polygon processing work would surely be more cost-efficient than doing procedural animation as seen in Uncharted.)

Please note that I am not advocating dumbing down HG games. One of the brilliant things about KZ2 is the different techniques: the cutting-edge ragdoll physics, the use of lighting, etc.

However, all I am saying is that if you haven't got the resources that some of the first-party teams have, then surely using the Cell to help with graphics may be an answer to making more graphically appealing games for short-term gains (until 3rd-party devs understand enough, or have the budget, to start developing their own cutting-edge physics, animation, etc.).

screw polygons, use the SPEs for vector shading, they are "great" at it.

I didn't realise that; I thought that SPEs were only good at geometry types of graphics (and poor at any kind of shading!).
 
In the polygon vs. shader thing, it doesn't really matter. You are looking at a two-dimensional display that is trying to convince you there is something three-dimensional in it. People will use whatever method allows them to display that lie most convincingly, and right now that is shaders. And since using shaders is more efficient than using pure geometry, I can't see that changing anytime soon.
 
(After all, using say 3 or 4 SPEs to do polygon processing work would surely be more cost-efficient than doing procedural animation as seen in Uncharted.)
More cost-efficient in terms of what?

What exactly do you intend to do when you're processing all those vertices on up to 4 hardware threads at 3.2GHz?

You're comparing two very different concepts here, where one is a function of hardware utilisation and the other is an algorithmic "idea" which can be implemented in many different ways. (Also, procedural animation could come under polygon processing, depending on your definition, since its purpose is still fundamentally a method of transforming points in space.)
 

Well, I was thinking in terms of development cost! That is, pushing forward new tech and researching new tech versus using those resources for creating graphics, development-cost-wise. Sorry if I didn't explain that well (for 3rd parties, etc.).
 

Ah!

Well, in that case your points were all pretty valid..

(pele.. :oops: )
 
The future is geometry, not faking shaders.
Not at all, for various reasons:
1) GPUs shade quads of pixels (2x2 pixels) at a time, which is the minimum amount of pixels that can be shaded; if you have a lot of small triangles you end up with a lot of partially covered quads, and a lot of ALUs will shade nothing.
2) GPUs are very good at compressing colour and depth in the presence of gradients; add a lot of small triangles and you destroy these gradients -> slow MSAA performance.
3) We are already rendering A LOT of triangles per frame (typically 2-4 million in next-gen games), enough to fully cover a 720p image with 1-4 pixel triangles (on average). Do we really need more than that at this given res? No, we don't!
What we need is to dynamically distribute triangles in a better way, in order to give polygonal detail only where it's needed.

In conclusion: we don't need to render more triangles, but smarter triangles.
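To put rough numbers on point 1, here's a back-of-the-envelope sketch in C++ (a crude model made up for illustration, not real hardware figures; it assumes roughly equilateral triangles and simply counts fully and partially covered 2x2 quads):

```cpp
#include <cmath>
#include <cstdio>

// Crude model of quad-shading waste: a triangle covering 'areaPx' pixels
// still pays for all four lanes of every partially covered 2x2 quad.
// Assumes a roughly equilateral triangle (perimeter ~ 4.5*sqrt(area))
// whose edge pixels each sit in a partially covered quad.
double quadUtilisation(double areaPx)
{
    double edgePx = 4.5 * std::sqrt(areaPx);      // rough perimeter in pixels
    double quads  = areaPx / 4.0 + edgePx / 2.0;  // interior + partial edge quads
    return areaPx / (quads * 4.0);                // useful lanes / shaded lanes
}

int main()
{
    const double areas[] = {1.0, 4.0, 16.0, 100.0, 1000.0};
    for (double a : areas)
        std::printf("%6.0f px triangle -> ~%2.0f%% of shaded quad lanes useful\n",
                    a, 100.0 * quadUtilisation(a));
}
```

Even with this generous model, a 1-4 pixel triangle keeps well under a quarter of the shaded lanes doing useful work, while a 1000-pixel triangle keeps most of them busy.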
 
I expect he means that "we" need to *model* more triangles, not necessarily *render* more of them. Personally, I'd be happy if shoulders, heads, backs, and the bits on the arms and legs that should be rounded actually are....

:shrug:
-Dave
 
Well, artists already try to distribute the poly budget to where it has the most impact (e.g. the face). The simplest example of what nAo is talking about is using more triangles for the regions closest to the camera. Imagine a terrain-rendering engine showing mountainous terrain. If you don't allocate triangles smartly, you end up using too many triangles on faraway mountains (which probably cover <1 pixel, wasting GPU resources), and you don't use enough on very close hills.

So what you want to do is dynamically tessellate in this instance, based on distance. Of course, this is a canned example, the prototypical heightmap algorithm; doing it at a global scene level is a lot harder, especially with non-procedurally generated models. Remember Shiny's Messiah?
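A minimal sketch of that distance-based scheme (the function name, the screen-space edge target and the patch parameterisation are all made up for illustration):

```cpp
#include <algorithm>
#include <cmath>

// Pick how many LOD levels to drop for a terrain patch so its triangle
// edges stay near 'targetEdgePx' pixels on screen. 'worldEdge' is the edge
// length of the patch's finest triangles; each dropped level roughly
// halves the triangle density.
int lodLevelsToDrop(float distance, float worldEdge,
                    float screenHeightPx, float fovY, float targetEdgePx)
{
    // Approximate projected size (in pixels) of one finest-level edge.
    float projectedPx = worldEdge * screenHeightPx /
                        (2.0f * distance * std::tan(fovY * 0.5f));
    // Far away: edges shrink below the target, so coarsen;
    // up close: keep full detail (drop 0 levels).
    return (int)std::floor(std::log2(std::max(targetEdgePx / projectedPx, 1.0f)));
}
```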

What most developers do is simply create many different LODs for a model; however, this often leads to "pop" as the engine switches from one model to the next. What you'd want is a kind of linear interpolation between two LODs. There are tons of progressive-mesh schemes, subdivision surfaces and higher-order-surface techniques, but I think that, because of the art pipeline, most devs have avoided them. Not to mention that there's still artifacting.
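That "linear interpolation between two LODs" is usually called geomorphing; a minimal sketch (made-up types, assuming each fine vertex knows the position it collapses to in the next-coarser LOD):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Blend a vertex between its fine-LOD and coarse-LOD positions using the
// fractional LOD (e.g. 2.37 -> 37% of the way from LOD 2 towards LOD 3),
// so by the time the coarser mesh is swapped in, the two already match
// and there is no visible pop.
Vec3 geomorph(const Vec3& fine, const Vec3& coarse, float lod)
{
    float t = lod - std::floor(lod); // fractional part = blend weight
    return { fine.x + (coarse.x - fine.x) * t,
             fine.y + (coarse.y - fine.y) * t,
             fine.z + (coarse.z - fine.z) * t };
}
```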

Then there's the problem that tessellating a low-res model doesn't really add detail like a normal map does. You actually want the opposite: to somehow start out with an uber-detailed, uber-compressed model, and have the GPU be able to decimate it down to whatever LOD is needed. Unfortunately, every method I've seen of doing this in real time makes tons of mistakes.

Another solution could be using normal maps as input to a hardware tessellator. So instead of just subdividing tris based on nearby mesh gradients/interpolated normals, you subdivide tris using normals fetched from a normal map.

Then you can use the normal GPU trilinear/aniso filtering HW to smoothly transition between LOD levels, instead of trying some fancy progressive-mesh scheme.
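A sketch of what that edge-split test might look like (hypothetical helper; in the idea above, the two normals would come from normal-map fetches at the edge's endpoints):

```cpp
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Split an edge when the normals sampled at its two endpoints disagree by
// more than a threshold angle, i.e. the surface bends across the edge.
// Assumes unit-length normals; cosThreshold = cos(max allowed angle).
bool splitEdge(const Vec3& nA, const Vec3& nB, float cosThreshold = 0.95f)
{
    return dot(nA, nB) < cosThreshold;
}
```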

Just thinking stream of consciousness.
 
I expect he means that "we" need to *model* more triangles, not necessarily *render* more of them. Personally, I'd be happy if shoulders, heads, backs, and the bits on the arms and legs that should be rounded actually are....

:shrug:
-Dave
Artists already make million-triangle models for games, so it's not an artistic issue but a tech one.
 
Read it again. It has nothing to do with the textures. It's the lighting engine; Guerrilla (spelling?) hasn't put some kind of special coating on the textures to make the light look more realistic, they have a very good lighting engine. The textures themselves are normal ****ty ones.

And that changes the previous point how?
 
So what you want to do is dynamically tessellate in this instance, based on distance.

This and some of the other stuff you mention has been done here and there, but we need to go beyond these more traditional techniques. Taking nAo's "smarter triangles" as an example, the place where you get the best benefit from more dense geometry is at an object's silhouette.

Take a soccer ball as an example, with 100k polys uniformly distributed about its sphere. When you look at that ball, a lot of the geometry near the middle of the ball is mostly wasted; you can simulate the 3D-ness of that part in the pixel shaders. But the edge of the ball needs that extra level of geometric detail so that it appears nice and round, since it represents the silhouette.

Now rotate the camera 90 degrees about the vertical axis. The edge that was previously on the silhouette, and hence needed that extra geometry, is now near the middle of the ball, so that section doesn't need the extra geometry anymore and it's kind of wasted. Meanwhile, the previous middle section, which didn't need all that detail before, needs it now, since the current camera view places it at the silhouette.

Going forward, we'll need a way to dynamically redistribute triangles (based on the current view) to where they are needed the most.
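For the soccer-ball example, a minimal sketch of a view-dependent test (made-up helpers) that flags where those redistributed triangles should go:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// An edge lies on the silhouette when its endpoint normals face opposite
// sides of the view direction, so refinement effort should go there.
bool onSilhouette(const Vec3& n0, const Vec3& n1, const Vec3& toEye)
{
    return dot(n0, toEye) * dot(n1, toEye) < 0.0f; // sign flip across the edge
}

// Softer variant: ~1 where the surface is edge-on to the camera (the
// silhouette), ~0 where it faces the camera. Assumes normalised vectors.
float silhouetteWeight(const Vec3& n, const Vec3& toEye)
{
    return 1.0f - std::fabs(dot(n, toEye));
}
```

Rotating the camera 90 degrees flips which edges pass the test, which is exactly why the refinement has to be recomputed per view.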
 
Joker, computer graphics literature is full of algorithms that do what you're looking for :)
 
The problem with most of the existing algorithms I've seen is that they don't look good in motion. Small perturbations in object position or camera position cause instability and result in lots of popping in and out of detail. One moment, the left hand of the character has fingers, but the right hand is just a box, the next moment, the left hand shifts to a box and the right hand gets fingers. :)
 
I thought another issue was that the data structures involved in maintaining the ability to scale up or down are typically not so good for GPU efficiency. E.g. you end up generating some not-so-efficient triangle strips, so the savings you get from adaptively tessellating are lost to some degree. I guess there's also the issue of GPUs traditionally being unable to run the tessellation algorithms, so you have overhead in transferring geometry to the GPU. Are these things still true?
 

With the soccer ball, couldn't you just billboard that geometry with distributed polygons and rotate the maps around it to simulate its rotation with respect to the camera?
 