GPU<->CPU interconnect...what's possible?

Shifty Geezer said:
EA have used up 3 SPUs' worth of float on the graphics and have reached the limits of the fillrate (in this one game, with little experience of the hardware too). That leaves 4 SPEs for other stuff. If the GPU weren't fillrate limited at this point, and didn't hit the fillrate limit until 7 SPEs were used on graphics, there'd be nothing left.

Bear in mind that any comment EA have made on fillrates would be based on development boards, which are at best 7800s. Since no developers (including EA) have an RSX, or even know what the spec will be yet, it's a bit early to comment on potential bottlenecks.
 
Any coder here should be able to give insight into the number of extra vertex shader instructions involved in skinning. The key thing is that if you skin a tessellated model, these extra instructions beyond regular T&L are multiplied by the tessellated vertex count as well, which means a huge increase in processing time compared to tessellating an already-skinned model. It'd also affect the weighting process (setting bone weights for each vertex): artists would kill if they had to work with a very dense model, and the weights also add to the memory and bandwidth requirements of the model. You'd not be able to skin an adaptively tessellated model this way either, unless the tessellator also interpolated the bone weights. Bottom line is: you absolutely want to skin before tessellation.
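To make the per-vertex cost concrete, here's a minimal sketch of linear blend skinning for a single vertex (the function names and 3x4 row-major matrix layout are my own assumptions, not any particular engine's). Every influencing bone costs a matrix transform plus a weighted accumulate; skin after tessellation and that work scales with the tessellated vertex count instead of the coarse control mesh:

```python
def transform_point(m, p):
    """Apply a 3x4 affine matrix (3 rows of 4 floats) to a point."""
    x, y, z = p
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3] for row in m)

def skin_vertex(position, bone_indices, bone_weights, bone_matrices):
    """Linear blend skinning: blend the vertex by weighted bone transforms.

    This is the extra per-vertex work beyond plain T&L; with tessellation
    applied first, it runs once per tessellated vertex instead of once per
    control-mesh vertex.
    """
    out = [0.0, 0.0, 0.0]
    for i, w in zip(bone_indices, bone_weights):
        if w == 0.0:
            continue  # skip unused influences
        px, py, pz = transform_point(bone_matrices[i], position)
        out[0] += w * px
        out[1] += w * py
        out[2] += w * pz
    return tuple(out)
```

A vertex weighted half-and-half between an identity bone and a bone translated one unit along X ends up halfway between the two transformed positions, as you'd expect.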
Among the biggest hits you take with skinning in the shader is not so much the execution itself, but the constant changing of shader state. There are only a few constant registers, and that's simply not enough to hold every bone transform for a complete skeleton on a complex character. Which basically means having to resend more bone transforms every few packets or so.
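The usual workaround for that register limit can be sketched like this: greedily split the mesh into draw packets so that each packet references at most a fixed-size bone palette, and re-upload constants at each packet boundary. This is a hypothetical sketch (function name and greedy strategy are mine), assuming every triangle's bone set fits the palette on its own:

```python
def split_into_packets(triangles, max_palette):
    """Partition triangles into packets whose bone palettes fit the registers.

    triangles: iterable of sets of bone indices referenced by each triangle.
    Returns a list of (palette_set, triangle_list) pairs; each flush is a
    point where the bone constants would have to be resent to the shader.
    """
    packets = []
    palette, tris = set(), []
    for tri_bones in triangles:
        needed = tri_bones - palette
        if palette and len(palette) + len(needed) > max_palette:
            packets.append((palette, tris))  # flush: constants resent here
            palette, tris = set(), []
        palette |= tri_bones
        tris.append(tri_bones)
    if tris:
        packets.append((palette, tris))
    return packets
```

With a palette of 4 bones, triangles touching bones {0,1}, {1,2}, {3,4}, {5,6} split into two packets, i.e. one extra constant upload mid-mesh.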

It's actually not uncommon on the PC to just skin in software, because it means less of an explosion of shaders, less data sent across to the card per vertex, and fewer stream formats.
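A rough illustration of the "less data per vertex" point, using an assumed layout (not any particular engine's format): GPU skinning needs packed weights and indices in the stream, while software skinning only sends the already-blended result.

```python
FLOAT = 4  # bytes per 32-bit float

# GPU skinning stream: position + normal + 4 byte weights + 4 byte indices
gpu_skinned_stride = 3 * FLOAT + 3 * FLOAT + 4 + 4

# Software skinning stream: just the blended position + normal
cpu_skinned_stride = 3 * FLOAT + 3 * FLOAT
```

Under these assumptions that's 32 versus 24 bytes per vertex, a 25% cut in vertex traffic before any shader or stream-format savings.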

EA have used up 3 SPUs' worth of float on the graphics and have reached the limits of the fillrate (in this one game, with little experience of the hardware too). That leaves 4 SPEs for other stuff. If the GPU weren't fillrate limited at this point, and didn't hit the fillrate limit until 7 SPEs were used on graphics, there'd be nothing left.
It's also not entirely clear why. Is it pixel fillrate or texel fillrate? Are they doing shadow volumes, shadow maps, or cubemap shadows? Are they rendering a whole lot of reflective surfaces? Is lighting computed using a straight BRDF, or is it a blend of projected textures? Are the pixel shaders just exceedingly huge? Visibility issues? Uncompressed textures? It's entirely possible that there's a good amount of breathing room given some other changes. Or, for that matter, they probably have some hardware below spec compared to a genuine RSX, though I doubt there will be that much difference from that alone.
 
Fill-rate bound is good

!eVo!-X Ant UK said:
Fill-rate bound this early on? That's quite bad, no? Or is it the coding? In fact, what is RSX's fill-rate?

If RSX is just a 550 MHz G70, then we know RSX has a very, very high fill-rate. Therefore, if RSX becomes fill-rate bound before bandwidth or any other hardware aspect, especially with 3 full SPEs dedicated to graphics, then this is an optimal situation.

I am curious whether they are targeting 720p or 1080p, and also about the implementation of AA, HDR, and AF.
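For scale, here's the back-of-the-envelope arithmetic behind that "very high fill-rate" claim, assuming RSX really is a G70 at 550 MHz with G70-like 16 ROPs and 24 texture units (all of these are assumptions; final RSX specs weren't public when this thread was written):

```python
CLOCK_HZ = 550e6  # assumed 550 MHz core clock
ROPS = 16         # G70-like raster output units (assumption)
TMUS = 24         # G70-like texture units (assumption)

pixel_fill = CLOCK_HZ * ROPS  # pixels written per second (~8.8 G)
texel_fill = CLOCK_HZ * TMUS  # texels fetched per second (~13.2 G)
```

Even so, raw fill-rate numbers like these say nothing about which of the poster's questions (resolution, AA, HDR, AF) actually eats the budget.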
 
ihamoitc2005 said:
If RSX is just a 550 MHz G70, then we know RSX has a very, very high fill-rate. Therefore, if RSX becomes fill-rate bound before bandwidth or any other hardware aspect, especially with 3 full SPEs dedicated to graphics, then this is an optimal situation.

I am curious whether they are targeting 720p or 1080p, and also about the implementation of AA, HDR, and AF.

Good questions. I guess we will see next year.
 