DUALDISASTER said:
The question was in the thread... How are they supposed to function?
The PS2 has a very simplistic rasteriser - you get a colour at each vertex, interpolated across the polygon (non-perspective-correct at that), and one texture co-ordinate per vertex, in fixed point (which means you get wobbly textures if you don't keep your co-ordinates in a sensible range), again interpolated - though at least the texture is perspective correct.
For each pixel it can look up a texture at the interpolated co-ordinate (the most you get in filtering is trilinear, but typically you would use bilinear with nearest-level mipmapping), modulate the texel with the interpolated colour, and then blend that with whatever was in the display using a fairly basic set of alpha-blend modes (you don't even get a modulated blend, except with an alpha value... the most you can do with colour is add or subtract).
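To make that pipeline concrete, here's a rough sketch of the maths per pixel. This is illustrative C, not real GS register programming - the function names are made up, and the only GS-specific convention kept is that 0x80 acts as 1.0 in the modulate and alpha-blend stages.

```c
#include <stdint.h>

/* Modulate stage: texel * interpolated vertex colour, with 0x80 == 1.0
   (so a mid-grey vertex colour of 0x80 leaves the texel unchanged). */
static uint8_t modulate(uint8_t texel, uint8_t vert_colour)
{
    int v = (texel * vert_colour) >> 7;
    return (uint8_t)(v > 255 ? 255 : v);
}

/* Blend stage: simple source-alpha lerp against the framebuffer.
   Note there is no dst*src colour modulate here - only alpha-weighted
   mixes and add/subtract style modes, as described above. */
static uint8_t blend(uint8_t src, uint8_t dst, uint8_t alpha)
{
    return (uint8_t)(dst + (((src - dst) * alpha) >> 7));
}
```

With alpha = 0x80 (i.e. 1.0) the blend passes the source straight through; with alpha = 0 the framebuffer is untouched.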
Oh, you also get a fog value per-vertex, again non-perspective correct, and that just interpolates the colour of the pixel towards a fixed fog-colour before blending - you take a 50% performance hit for that though.
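The fog step is just a lerp toward a fixed colour before blending. A sketch, assuming an 8-bit fog factor where 0xFF means "no fog" and 0 means "fully fogged" (close to, but not claimed to be exactly, the real GS convention):

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Rgb;

/* Interpolate the lit pixel toward a fixed fog colour by the
   per-vertex fog factor f (0xFF = untouched, 0 = all fog colour). */
static Rgb apply_fog(Rgb pixel, Rgb fog_colour, uint8_t f)
{
    Rgb out;
    out.r = (uint8_t)((pixel.r * f + fog_colour.r * (255 - f)) / 255);
    out.g = (uint8_t)((pixel.g * f + fog_colour.g * (255 - f)) / 255);
    out.b = (uint8_t)((pixel.b * f + fog_colour.b * (255 - f)) / 255);
    return out;
}
```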
No fancy shaders or anything. It's exceedingly basic, but for the vintage, very fast.
Anything that looks like a fancy pixel shader on PS2 will generally be:
1) faked with palettised textures
2) faked with multipass rendering
3) faked with render-to-texture
all of which the PS2 is good at. "faked" is a relative term - a lot of pixel shaders don't do a whole lot more than a fixed blend operation anyway... you could compile a lot of them down to multiple simple passes.
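As a sketch of what "compiling down to passes" means: a shader like `out = base * light + glow` decomposes into two fixed-function passes - an opaque modulated base pass, then an additive pass over the framebuffer. Hypothetical names, illustrative maths only (again keeping the 0x80 == 1.0 convention):

```c
#include <stdint.h>

static uint8_t sat(int v) { return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v); }

/* Pass 1: opaque base pass, texel modulated by vertex lighting. */
static uint8_t pass_base(uint8_t base_texel, uint8_t light)
{
    return sat((base_texel * light) >> 7);
}

/* Pass 2: redraw the geometry with a glow texture and an additive
   blend mode - the framebuffer already holds pass 1's result. */
static uint8_t pass_glow(uint8_t framebuffer, uint8_t glow_texel)
{
    return sat(framebuffer + glow_texel);
}
```

Two triangle submissions instead of one, but each pass is something the GS does at full speed.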