transparency questions

What are the differences and advantages between alpha blending, destination alpha, and color layering (Super NES-style transparencies)?

Is the Super NES method technically possible with today's 3D rasterization model? If so, would transparent textures rendered by that method still show aliasing?
 
ROP units do the following when pixels exit the pipeline: RESULT = SRC_COLOR * SRC_FACTOR + DST_COLOR * DST_FACTOR.
Blending generally means the SRC/DST factors are RGBA colors, inverse colors, constant 1 or 0, and so on.
Alpha blending would generally refer to the SRC/DST factors being the alpha component of SRC_COLOR or DST_COLOR.
And destination alpha blending is specifically when the factors come from the alpha component of DST_COLOR.
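
As a rough illustration of how those factor choices map to an API (the thread doesn't name one, so fixed-function OpenGL is used here purely as an assumed example):

```cpp
// Sketch only: classic blend-state setup in fixed-function OpenGL
// (assumed API for illustration, not something named in the thread).
#include <GL/gl.h>

void setAlphaBlending()
{
    glEnable(GL_BLEND);
    // "Alpha blending": factors come from the source pixel's alpha.
    // RESULT = SRC_COLOR * SRC_ALPHA + DST_COLOR * (1 - SRC_ALPHA)
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}

void setDestinationAlphaBlending()
{
    glEnable(GL_BLEND);
    // "Destination alpha": factors come from the alpha already stored in
    // the framebuffer (requires an RGBA destination buffer).
    // RESULT = SRC_COLOR * (1 - DST_ALPHA) + DST_COLOR * DST_ALPHA
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
}
```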

What the Super NES did with regard to transparency is that it treated color index 0 as fully transparent. You can do that easily on ANY hardware (think Voodoo 1) using alpha test or texkill.
And yes this still shows aliasing.
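
A minimal sketch of that idea in a modern context, assuming fixed-function OpenGL alpha test (a shader would use texkill/discard instead), with the indexed texture pre-expanded to RGBA so that palette index 0 becomes alpha 0:

```cpp
// Sketch: emulating "color index 0 is fully transparent" with alpha test.
// Assumes the indexed texture was expanded to RGBA with alpha = 0 wherever
// the palette index was 0, and alpha = 1 everywhere else.
#include <GL/gl.h>

void drawColorKeyedQuad(GLuint texture)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);   // reject any pixel with alpha <= 0.5

    // ... draw the textured quad here ...

    glDisable(GL_ALPHA_TEST);
}
```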

You are asking the wrong questions.
 
What are the differences and advantages between alpha blending, destination alpha, and color layering (Super NES-style transparencies)?

Is the Super NES method technically possible with today's 3D rasterization model? If so, would transparent textures rendered by that method still show aliasing?

- Alpha blending is a combination of two colors weighted by an alpha value, so that Result = (1 - Alpha) * Value0 + Alpha * Value1. If you assign a different alpha value to each pixel (i.e. with a proper texture), you don't get any aliasing.

- I don't know what you mean by "destination alpha". AFAIK, the destination alpha is a component of alpha blending, not a technique on its own.

- I'm not sure what you mean by color layering and I don't know the SNES hardware, but back in those days transparency was done with hacks related to indexed colors. Since we now use true color, these hacks are essentially useless and can't be done directly on current hardware (actually you could kind of emulate the thing with some shaders, but that would be pointless). By the way, I don't see how this technique relates to AA at all.

I guess you are confusing alpha blending and alpha testing. With alpha testing, you only draw pixels with an alpha greater than a threshold. This is more a masking operation than a transparency one, and it is often used to draw grids (more generally, textures that are opaque except for some "holes"). Since it's a binary operation (you either draw a pixel or you don't), it leads to aliasing. And since the aliasing takes place in the interior of the triangles, MSAA doesn't take care of it, although SSAA does.
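
To make the contrast concrete, here is a small C++ sketch (names are mine, not from the thread) of what the two per-pixel operations boil down to:

```cpp
// Alpha test: a binary keep/discard decision per pixel. Coverage along a
// soft texture edge jumps straight from 0 to 1, and that hard step is the
// aliasing described above.
bool alphaTestPasses(float srcAlpha, float threshold)
{
    return srcAlpha > threshold;     // pass or fail, nothing in between
}

// Alpha blending: a continuous mix, so a pixel whose texel is 50%
// transparent ends up half source color, half background.
float blendChannel(float src, float dst, float srcAlpha)
{
    return srcAlpha * src + (1.0f - srcAlpha) * dst;
}
```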

We could always use alpha blending instead of alpha testing, since alpha testing is just a special case of alpha blending. This would avoid some of the aliasing and all would be fine.

But if alpha testing is still widely used, it's because it is much faster than alpha blending.
 
Alpha testing (clipmapping) is useful because it works much better with the hardware z-buffer than alpha blending does. When a pixel is culled (with texkill), the z-buffer write is skipped in addition to the color buffer write. You can render any amount of alpha-tested geometry on top of other geometry (front to back, like you render all opaque geometry), the result is correct, and both z-buffer culling and hierarchical z-buffer culling work (speeding up rendering performance significantly).

However, with alpha-blended (translucent) geometry, the background is also partially visible at each pixel, so both background and foreground must be rendered (no z culling or hi-z culling can be done). This basically doubles the pixel shader cost if there is one transparent geometry layer on top of the opaque geometry; for more complex transparencies the cost can be much higher. This performance cost is often reduced by using much simpler shaders on transparent surfaces (particles, for example, usually use just one texture layer).

However, the pixel shader cost is not the biggest issue with alpha-blended geometry. Current graphics hardware (except PowerVR chips) cannot do alpha sorting in hardware. To get error-free results you have to sort all your alpha-blended polygons from back to front (or front to back if you use destination alpha). Sorting a large number of polygons every frame costs a lot of performance. Many games sort alpha-blended geometry on an object-by-object basis to speed up the sorting process (and to keep the vertex/index buffers static for best performance). This causes some visual rendering errors, but with proper care those errors can be minimized.
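
A rough CPU-side sketch of the submission order the last two paragraphs describe, in plain C++ (the structures, function names, and per-object sort are illustrative assumptions, not code from the thread):

```cpp
// Illustrative only: typical per-object submission order for alpha-tested
// vs. alpha-blended geometry (sorted per object, not per polygon).
#include <algorithm>
#include <vector>

struct DrawItem
{
    float viewDepth;     // distance from the camera
    bool  alphaBlended;  // true for translucent geometry
    // ... mesh, material, etc.
};

// Placeholders for the actual draw calls.
void drawWithDepthWrite(const DrawItem&) {}
void drawBlendedNoDepthWrite(const DrawItem&) {}

void submitFrame(std::vector<DrawItem> items)
{
    // Sort everything by distance from the camera once.
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b)
              { return a.viewDepth < b.viewDepth; });

    // 1. Opaque and alpha-tested items: roughly front to back with depth
    //    writes on, so z-culling and hierarchical z reject hidden pixels.
    for (const DrawItem& item : items)
        if (!item.alphaBlended)
            drawWithDepthWrite(item);

    // 2. Alpha-blended items: back to front, depth test on but depth
    //    writes off, so farther translucent layers are blended first.
    for (auto it = items.rbegin(); it != items.rend(); ++it)
        if (it->alphaBlended)
            drawBlendedNoDepthWrite(*it);
}
```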
 
However, the pixel shader cost is not the biggest issue with alpha-blended geometry. Current graphics hardware (except PowerVR chips) cannot do alpha sorting in hardware.

That should read "(except some PowerVR chips)". As I've mentioned before, the feature wasn't being exercised on the PC so was taken out of mainstream chips. :cry:
 
That should read "(except some PowerVR chips)". As I've mentioned before, the feature wasn't being exercised on the PC so was taken out of mainstream chips. :cry:

Would it have been a problem scaling it all up to modern SGX chips? Would you have lost some Watt/mm^2 advantage over competitors by keeping it even though it went unused (by some devs)?
 
Would you have lost some Watt/mm^2 advantage over competitors by keeping it even though it went unused (by some devs)?

I doubt power is an issue. It was removed because any functionality costs area, and area (even if it is small) costs money. That functionality was not being used, so it was not cost effective. <shrug>
 