New dev tools - PlayStation 3 Edge

I work with Joker and have had over 20 years of experience in the games industry (from the 8-bit home computers all the way up to the PS3/360).

His comments on the PS3 are fairly close to the truth. The fact that we have the SPU cores to help make up for the inadequacies of the PS3's vertex pipeline doesn't change the fact that, unaided, the Xenos has much more raw vertex processing power available to it.

Take into consideration that triangle setup and the post-vertex-transform cache are relatively slow (even when you optimize your data for them). Unless you offset that by utilizing more complex pixel shaders (and the case he's talking about uses fairly simple ones), you're going to be bottlenecked by the vertex pipe. I have, in fact, rewritten some of our more complex 360 shaders to move some of the burden from the vertex to the pixel pipe (which would seem to be backward thinking!).

Utilizing one of the SPUs to precull your geometry means passing less data up the pipe to the RSX, and therefore fewer vertices for it to process (which can only be a good thing).

Another thing to note is that if you design this precull code properly, and assuming you have the space to store them, you can effectively pretransform all your geometry too - reducing the instruction count in your vertex shaders even further.

One example of this would be a cloth simulation, where you have to pre-skin your data in order (amongst other things) for the collision objects to be in the correct orientation. Why at this stage shouldn't we fold the pretransform *and* the rejection into the cloth transform code? If you have a ~5000 poly cloth sim model, and you precull all the back faces (thus not sending half of the triangles to be rejected by the RSX), then precull all faces not currently visible in the view frustum, the index list sent to the RSX can go down in size anywhere from 45% to near on 100%.
 
Alpha testing + alpha coverage = no need for alpha blending on vegetation = no sorting hell = pub
Exactly what I was thinking... and it runs even faster :) Quite surprised to hear that some render trees with alpha blending.
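For those who haven't met alpha-to-coverage: the idea is that a fragment's alpha is quantised into an MSAA sample mask, so the multisample resolve softens the alpha-test edge for free, with no blending and no sorting. A toy CPU sketch of the mapping (concept only - real hardware also dithers the mask spatially, and `alphaToCoverageMask` is a made-up name, not any console API):

```cpp
#include <cmath>
#include <cstdint>

// Quantise a fragment's alpha into an MSAA coverage mask. With 4 samples,
// alpha 0.75 lights 3 of the 4 sample bits; the multisample resolve then
// averages the samples, giving a softened edge without any blending.
uint32_t alphaToCoverageMask(float alpha, int sampleCount)
{
    if (alpha <= 0.0f) return 0u;
    if (alpha >= 1.0f) return (1u << sampleCount) - 1u;
    int covered = (int)std::lround(alpha * sampleCount);
    return (1u << covered) - 1u;
}
```

Because the result is still opaque per-sample, depth writes work normally - which is exactly why the sorting hell of alpha blend goes away.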
 
One example of this would be a cloth simulation, where you have to pre-skin your data in order (amongst other things) for the collision objects to be in the correct orientation. Why at this stage shouldn't we fold the pretransform *and* the rejection into the cloth transform code? If you have a ~5000 poly cloth sim model, and you precull all the back faces (thus not sending half of the triangles to be rejected by the RSX), then precull all faces not currently visible in the view frustum, the index list sent to the RSX can go down in size anywhere from 45% to near on 100%.

Hmm, so does this mean that the PS3 was designed to have the vertex shading and game physics handled as close together on the Cell as possible, with the aim of allowing more dynamic environments, e.g. a jungle with bending grass and leaves rustling in the wind, or a wibbly-wobbly spider web, where the RSX's role is to "skin" the polygons? I would call that a "radical" difference from the traditional way things are done in PC games.
:oops: I know nothing about graphics rendering.
 
Exactly what I was thinking... and it runs even faster :) Quite surprised to hear that some render trees with alpha blending.
I've had arguments with artists who claim that the only good trees use alpha blending... That was 5 years ago on PC (and we know how good they were at alpha blending...) and I'm sure they would argue the same now...
 
I've had arguments with artists who claim that the only good trees use alpha blending... That was 5 years ago on PC (and we know how good they were at alpha blending...) and I'm sure they would argue the same now...

The problem with joining an already in-production title and porting it to another console is that you tend to be stuck with very limited art resources, and therefore have to stick with whatever assets are available. Using alpha test + coverage is indeed faster than using alpha blend, but none of the resources were set up to do this.

I agree with DeanoC though, I had the same 'discussion' regarding alpha blend with our art director about three months ago, and he said exactly that :(
 
I think the notion of alpha coverage isn't really all that well known to artists, and so they simply assume that the hard pixelated edges you get with alpha test as is are just incurable by any means other than straight alpha blending. Well, in theory you can also get a few other "feathering" type of effects with alpha blend that you can't get with alpha test+coverage, but I don't think vegetation really fits in this group most of the time. Moreover, if hardware multisampling isn't really in the cards, it's a different ball game anyway.
 
I just hope that Edge will help eliminate the horrible-looking low-res shadow maps used in games lately (e.g. Motorstorm, Heavenly Sword, etc.). I am not an owner of a 360 - is there also a problem with low-res shadow maps on that system?
 
I just hope that Edge will help eliminate the horrible-looking low-res shadow maps used in games lately (e.g. Motorstorm, Heavenly Sword, etc.). I am not an owner of a 360 - is there also a problem with low-res shadow maps on that system?

So far all the Edge tech seems to relate to the use of SPUs or to monitoring performance. These might have a knock-on effect in helping to improve things like rendering shadows, but otherwise it's not relevant. Good shadowing is just one of those issues that so far doesn't have a good general solution that works well in the kind of dynamic environment most games have. Almost every technique known at the moment has significant drawbacks and tradeoffs.

I suppose if SCE implement a nicer looking technique than current games use, they could package it up as a new Edge component and release it, though more likely a short paper on how to do it and possibly some example shader code would suffice.

Alternatively perhaps there's a cute way to make SPUs generate good looking shadows instead of the GPU, but it seems like the wrong kind of task to throw at them to me - generally speaking there's going to be a whole lot of rasterising going on, and the GPU is going to be better at that.
 
Alternatively perhaps there's a cute way to make SPUs generate good looking shadows instead of the GPU, but it seems like the wrong kind of task to throw at them to me - generally speaking there's going to be a whole lot of rasterising going on, and the GPU is going to be better at that.
How about ray-traced shadows? Per-pixel and geometry-perfect for sharp shadows, with no worry about the texture fetching that limits RT's overall suitability for rendering. You'd need some clever process to keep geometry processing to a minimum in complex scenes. Not sure how exactly, but my fuzzy brain has a fuzzy solution that leads me to think it's doable.
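A shadow ray is just an occlusion query: trace from the shaded point toward the light and see whether any triangle blocks it. Here's a minimal sketch using the standard Möller-Trumbore intersection test (plain C++, one triangle; a real version would need an acceleration structure to keep the geometry processing down, which is exactly the hard part):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Moller-Trumbore occlusion test: does the segment from surface point p
// to light position l pass through triangle (a,b,c)? If any occluder
// returns true, p is in shadow for that light.
bool shadowRayHits(Vec3 p, Vec3 l, Vec3 a, Vec3 b, Vec3 c)
{
    const float eps = 1e-6f;
    Vec3 d  = sub(l, p);                     // un-normalised ray toward the light
    Vec3 e1 = sub(b, a), e2 = sub(c, a);
    Vec3 h  = cross(d, e2);
    float det = dot(e1, h);
    if (std::fabs(det) < eps) return false;  // ray parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(p, a);
    float u = inv * dot(s, h);
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = inv * dot(d, q);
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = inv * dot(e2, q);
    return t > eps && t < 1.0f;              // hit strictly between p and l
}
```

With many occluders you'd loop over (or traverse a BVH of) the triangles and shadow p as soon as any call returns true - no texture fetches involved, which is the appeal over shadow maps.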
 
How about ray-traced shadows? Per-pixel and geometry-perfect, for sharp shadows. You'd need some clever process to keep geometry processing to a minimum in complex scenes. Not sure how exactly, but my fuzzy brain has a fuzzy solution that leads me to think it's doable.
Who wants geometry perfect hard shadows? :) me not!....
 
nAo said:
Exactly what I was thinking.. and it runs even faster
Now if only someone enlightened Japanese devs to the existence of alpha to coverage, maybe we wouldn't have to suffer PS2-esque aliasing on trees & vegetation in a certain demo/upcoming game.

Who wants geometry perfect hard shadows? me not!....
Of course you could go one step back from raytracing, and then they needn't be hard. :p
 