The old "Edge Anti Aliasing"

Hello there!

I am interested in old Anti Aliasing techniques, before AA became actually usable and before MSAA was used.
Of course we had and have SSAA and there is plenty of information on that;

but I am interested in the methods which were known as "Edge Anti Aliasing" on early PC 3D graphics cards and consoles, back in the 90s.


I find it really, really hard to get information on how exactly that worked - and how well it worked.



There seem to be two different methods here.
One that was used on the Rendition Vérité seems to need a specifically designed engine with back-to-front sorting of polygons, tagging the polygon edges that were to be anti-aliased, then sending the data back to the CPU and letting the CPU do the actual AA work.
So it was merely a software-based solution? What hardware support was there here, at all?
On the other hand, the Vérité seemed to have a kind of programmable rasterizing pipeline, so it may have worked differently.

And then there seems to be another approach, like in OpenGL, which involved transparent pixels on polygon edges. That approach seemed to have problems with intersecting polygons.
But I have no clue if that is how it was done.

Can anybody explain?

Did it even make sense to use this or was it just too slow?



Also, some documents claim Dreamcast's PowerVR2 supported this. But I have never seen any game use AA (aside from a handful doing SSAA).
Who knows more?


It is well known how it works on the N64, though.
 
From my memory (I could be wrong, as it's 20 years ago), it works like what you described: alpha pixels are generated on polygon edges, and you need to render from back to front with alpha blending for it to work correctly. As you said, this of course does not work when polygons intersect each other.
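In code, that blend is just the usual "over" operation with the edge coverage used as alpha. A minimal sketch in C (types and names are mine, not from any real API):

```c
/* Minimal sketch of the edge blend (hypothetical types/names, not a real API).
   'coverage' is the fraction of the pixel the polygon actually covers at its
   edge (0..1); it is used as alpha in a back-to-front "over" blend. */
typedef struct { float r, g, b; } rgb;

rgb blend_edge_pixel(rgb edge, rgb behind, float coverage)
{
    rgb out;
    out.r = edge.r * coverage + behind.r * (1.0f - coverage);
    out.g = edge.g * coverage + behind.g * (1.0f - coverage);
    out.b = edge.b * coverage + behind.b * (1.0f - coverage);
    return out;  /* 'behind' must already hold the correct background,
                    which is why the back-to-front order matters. */
}
```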

It's actually worse than that, because you can't really "sort" triangles: triangles can be in awkward positions, and no sorting can handle all possible situations. For example, if you sort using the center of a triangle, a very long triangle could be behind another triangle near its tip and still get sorted as being in front of that triangle, causing incorrect rendering results.
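To make that failure mode concrete, a toy centroid depth sort might look like this (my own sketch, assuming view-space z grows away from the camera); the comparator is exactly the heuristic a long triangle can defeat:

```c
#include <stdlib.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 v[3]; } tri;

/* View-space depth of a triangle's centroid. */
static float centroid_z(const tri *t)
{
    return (t->v[0].z + t->v[1].z + t->v[2].z) / 3.0f;
}

/* Sort back-to-front (largest depth first) for painter's-style blending.
   This is only a heuristic: a long triangle whose centroid is near the
   camera can still extend behind a neighbour, so the order comes out wrong
   and the edge blend picks up the wrong background colour. */
static int cmp_tri(const void *a, const void *b)
{
    float za = centroid_z((const tri *)a);
    float zb = centroid_z((const tri *)b);
    return (za < zb) - (za > zb);   /* descending z = far to near */
}

void sort_back_to_front(tri *tris, size_t count)
{
    qsort(tris, count, sizeof(tri), cmp_tri);
}
```

Sorting whole objects instead of individual triangles (mentioned further down) is cheaper, but has the same fundamental limitation.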

PowerVR can do FSAA relatively cheaply thanks to its tile-based deferred rendering architecture: it can keep the multisampled buffer inside the chip, saving a lot of memory bandwidth.

If you can carefully control your scenes so that polygons don't intersect and all objects are reasonably well behaved, it's possible to do edge antialiasing, and that's how some very old games did it.
 
The idea is basically to render Wu antialiased lines along the edges of polygons.
This most likely grows the polygon by an additional pixel at its edges.

Sorting polygons was needed, as always with transparency.

In the days of software rasterization, most games already sorted them anyway, so it was not such a big problem.
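For reference, a stripped-down sketch of Wu's line algorithm in C (endpoint handling omitted, and plot() is a placeholder for whatever blends a coverage value into the framebuffer):

```c
#include <math.h>

/* Placeholder: blends 'cov' (0..1) of the line colour into pixel (x, y).
   In an edge-AA scheme this is the alpha blend discussed above. */
extern void plot(int x, int y, float cov);

static float fpart(float v)  { return v - floorf(v); }
static float rfpart(float v) { return 1.0f - fpart(v); }

/* Simplified Xiaolin Wu antialiased line from (x0,y0) to (x1,y1).
   Fractional endpoint handling is omitted to keep the sketch short. */
void wu_line(float x0, float y0, float x1, float y1)
{
    int steep = fabsf(y1 - y0) > fabsf(x1 - x0);
    float t;
    if (steep)  { t = x0; x0 = y0; y0 = t; t = x1; x1 = y1; y1 = t; }
    if (x0 > x1){ t = x0; x0 = x1; x1 = t; t = y0; y0 = y1; y1 = t; }

    float dx = x1 - x0;
    float gradient = (dx == 0.0f) ? 1.0f : (y1 - y0) / dx;
    float y = y0;

    for (int x = (int)x0; x <= (int)x1; x++) {
        int yi = (int)floorf(y);
        /* Split the line's coverage between the two pixels it touches. */
        if (steep) {
            plot(yi,     x, rfpart(y));
            plot(yi + 1, x, fpart(y));
        } else {
            plot(x, yi,     rfpart(y));
            plot(x, yi + 1, fpart(y));
        }
        y += gradient;
    }
}
```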
 
Thanks for your replies!


Do you have more info/sources on how exactly that works?
Are both methods I described the same?
Was it even a feasible thing to do, or was it just too slow as the CPU had to do it?

I read that it was more of a "checklist" feature back then, just there so you could claim your card supported Anti Aliasing...



Hm yeah PowerVR's SSAA is relatively well known.
To my knowledge, the Tile Accelerator in the Dreamcast practically limited it to 1280x480, since it only buffered 600 tiles: at a 40x15 configuration with 32x32 tiles, you end up at exactly 1280x480.
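As a quick sanity check of that limit, using the 600-tile and 32x32 figures from above (these are the numbers from this thread, not datasheet quotes):

```c
#include <stdio.h>

int main(void)
{
    const int tile      = 32;    /* assumed tile size in pixels (32x32)     */
    const int max_tiles = 600;   /* TA tile-buffer limit quoted above       */

    /* 40 x 15 tiles -> 1280 x 480 pixels, exactly 600 tiles. */
    printf("40x15 tiles = %d tiles -> %dx%d\n",
           40 * 15, 40 * tile, 15 * tile);

    /* A full 1280x960 target would need 40 x 30 = 1200 tiles, twice the limit. */
    printf("1280x960 would need %d tiles (limit %d)\n",
           (1280 / tile) * (960 / tile), max_tiles);
    return 0;
}
```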

BUT in some of the hardware documentation manuals, Edge Anti Aliasing is mentioned for the Dreamcast, without explaining it further.
 
No, the "GPU" (they weren't called that way back then) generates those alpha pixels. Since the transformations were done on the CPU side, it's possible for the CPU to sort the triangles after transformation. The CPU then sends those triangles to the GPU with appropriate alpha blending mode to make those triangles look smooth.
 
On the CPU side the extra sorting costs some performance, but depending on how you do it the cost can be minimized. For example, if you can make sure objects won't intersect each other, you can sort objects instead of all triangles.
On the GPU side, on older GPUs (where this technique was more commonly used), some GPUs take more time to generate these alpha pixels. Some also become slower when alpha blending is enabled. On GPUs with early Z rejection (probably not very common in the days of edge AA), rendering from back to front can be much slower than rendering from front to back.
 
The driver collection at VOGONS includes Rendition Vérité and its SDK, so if you want to look at its instruction set and hardware, it's right there. It's been a while since I looked at it, but IIRC, the original V1000 basically just has the custom CPU do everything. It's not like the N64, where a CPU does setup (RSP) and hardware does the actual pixel pushing (RDP), but more like the SuperFX, where there are some special instructions to help with rendering (like a DDA step instruction) but otherwise it's all just software. The later V2000 adds much more dedicated rendering hardware, making it N64-like.

There's a lot of wrong information on the DC. A lot of places (like the Sega Retro wiki) say that the PVR in the DC can do things that are exclusive to the Neon 250, like have 2D acceleration or display resolutions like 1600x1200. Also, people hadn't really settled on what exactly certain terms mean. In the 90s, you might see people calling bilinear filtering a form of antialiasing, which isn't entirely inaccurate, but that's not how people would describe it now. SSAA does indeed reduce aliasing on edges, so someone in the 90s might call it edge antialiasing (even though it does more, like help with texture/shader aliasing), but now people would only use the term edge antialiasing for something that ONLY works on edges.

The Dreamcast has nothing that we would now call hardware edge antialiasing. It just has supersampling. You could do other kinds of AA with software assistance, like drawing alpha blended lines on polygon edges, and it's possible to do 3dfx T-buffer or OpenGL accumulation-buffer-like effects, but nothing built into the hardware.

It is possible for the PVR to render at resolutions greater than 1280x480, but it requires software workarounds that make it less efficient. I've done 640x480 with 4x SSAA (1280x960) by rendering the screen in two halves, top and bottom, but this requires submitting geometry twice. One minor issue with vertical downsampling is that the PVR can't do a 2 pixel box filter, it always does a 3 pixel filter, so there's some slight additional vertical blur, but it's not bad. The user can specify one weight for the center sample, and one weight for the upper and lower sample. Horizontal downsample is always a box filter.

The attached image shows 4x SSAA on the DC. For testing, I deliberately used a different color for the background on both halves to show which render each half belongs to, but it would be seamless if I used the same colors. IIRC, the vertical downsample uses a 50% weight for the center sample, and 25% for the top and bottom samples.
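For anyone who wants to picture the filter, here is a rough model of that vertical downsample in C (my own sketch; the 0.5/0.25 split is just the weighting I recall, not a datasheet value):

```c
/* Rough model of the 2:1 vertical downsample with the fixed 3-tap filter
   described above. 'src' is one column of the high-res render (src_h lines),
   'dst_y' is the output line in the downsampled result. */
float downsample_column(const float *src, int src_h, int dst_y)
{
    const float w_center = 0.50f;  /* user weight for the centre sample        */
    const float w_side   = 0.25f;  /* shared weight for the lines above/below  */

    int y  = 2 * dst_y;                      /* assumed centre line in source  */
    int ya = (y > 0)         ? y - 1 : y;    /* clamp at the top edge          */
    int yb = (y < src_h - 1) ? y + 1 : y;    /* clamp at the bottom edge       */

    return w_side * src[ya] + w_center * src[y] + w_side * src[yb];
}
```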
 

Attachments: DC 4x SSAA.png

In your linked article, it says:
"Several other architectures from this generation supported Edge AA as well, but I doubt they could come close to the speed of Vérité."
Would that be because of the CPU in front of the pipeline?

I mean, if I understand correctly, it would require a game engine with specific adjustments and polygon sorting, it would not work on all polygon edges, it would cost some fillrate (due to the transparency involved), and it would change the look of the polygons (as it made the lines thicker). Right?
So this is why it was rarely done back then?


BTW, here is another interesting read:
"Many cards claim support for anti-aliasing by implementing "edge" anti-aliasing or anti-aliasing through "oversampling." Edge anti-aliasing is accomplished by tagging which polygons are an edge and then going back and letting the CPU perform anti-aliasing on these edges after the scene is rendered. In order for a game to support this, it has to be designed with this in mind as the edges have to be tagged. [...] In other words, it's useless for games, but are implemented for OEM "checklists" and improving 3D Winbench quality scores."

This is what I meant in my original post by the other method. It would be done after the scene is rendered, and doing it that way seems pretty much useless.
 
Very interesting read, TampaM!

There's a lot of wrong information on the DC. A lot of places (like the Sega Retro wiki) say that the PVR in the DC can do things that are exclusive to the Neon 250, like have 2D acceleration or display resolutions like 1600x1200.

I can confirm there is a lot of misinformation on segaretro; they are known to be somewhat unreliable.

A lot of people take this for granted; that's why in a lot of YouTube videos and comments you read things like "the DC natively renders everything at 1600x1200 internally and then just outputs it at 640x480" and more of the same.
1600x1200 is a PC resolution and absurdly high for a gaming console from 1998. The only source segaretro gives is a web archive of an age-old article from the website "sega technical pages", which I used to read back then and which has been down for ages now. So it makes sense they would confuse it with the Neon 250, whose specs differed a lot from the PowerVR CLX2 used in the Dreamcast.

But I think a bit of logical thinking is enough to disprove the "renders at 1600x1200 internally" claim.
If that were true, why would the VGA signal of the Dreamcast output only 640x480?
The VGA box was made to connect to PC monitors, which were perfectly capable of displaying resolutions far higher than 640x480 in the DC's day. Most of them went up to 1280x1024, some even 1600x1200.
So why limit the VGA output to 640x480 and go through all the hassle of rendering at an absurdly high resolution that costs a ton of resources, downsampling it, and then only outputting it at 640x480? Makes no sense at all.
Also, with a VGA box one can easily see with one's own eyes that there is no AA at all - aside from the 4 games I know of (excluding maybe homebrew) that do horizontal 2x SSAA in the form of 1280x480.

1600x1200 is not even an integer multiple of 640x480 (as 1280x960 would be). You would get artifacts downsampling that resolution to 640x480.


BTW, segaretro gets a lot of other things wrong.
The claim that the DC does up to 5 million polygons per second in-game, for example. The only source given: an age-old article written for a game magazine that claims (!) that the developers claim (!) the game does 5 million.
I mean, what is the average amount of memory required per polygon in the display list? 32 bytes? Roughly 1 vertex per polygon, plus UV coordinates and whatnot going into the display lists...
And to my knowledge, after reading some of the manuals, you would double-store it in VRAM so that the Tile Accelerator writes one copy while the rendering core reads the other in parallel - so twice the memory usage, if I got that right.
So running at 30 fps, that alone would use more than 5 MB of the DC's 8 MB VRAM. Add the 1.17 MB for the double framebuffers and you have only 1.83 MB left for everything else. Texture data alone is way more than that (even with VQ texture compression considered), so 5 million just can't be true for this game.
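Spelling out the arithmetic (my rough numbers, using the ~32 bytes per display-list polygon assumed above):

```c
#include <stdio.h>

int main(void)
{
    /* Rough sanity check of the 5 Mpoly/s claim, using the assumptions above:
       ~32 bytes per polygon in the display list, 30 frames per second. */
    const double polys_per_second = 5e6;
    const double fps              = 30.0;
    const double bytes_per_poly   = 32.0;

    double polys_per_frame = polys_per_second / fps;            /* ~166,667 */
    double mib_per_frame   = polys_per_frame * bytes_per_poly
                             / (1024.0 * 1024.0);               /* ~5.1 MiB */

    printf("%.0f polygons/frame -> %.2f MiB of display list per frame\n",
           polys_per_frame, mib_per_frame);
    return 0;
}
```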

The Dreamcast was and is a marvellous piece of hardware, so I think that is the reason why people just take this for granted and spread it all around:
They just WANT it to be true (and DC besting the PS2). ; D


Another example of misinformation on segaretro would be the Sega Model 3 technical specs.
It says that Model 3 Step 2, for example, has 6 (!) GPUs in the form of 6 Real3D-pro 1000 GPUs (chip number 315-6060).
Model 3 does not even have a GPU. GPUs were a later thing; the Real3D-pro is an image-generating system spanning several PCBs, with a dozen specialized chips for different functions, not a GPU. And Model 3 does not use 6 (!) Real3D image generators, each consisting of several boards. Model 3 instead IS a (slightly modified) Real3D-pro.
They can't even count right. There are not 6 chips with the number 315-6060 present, as you can tell if you just look at the Model 3 Step 2 PCB! There are only 4, and those are not GPUs, but texturing units!
There is a lot more wrong info in that article; I looked through all the source material, which simply does not support those claims. It does not add up.

So a lot of misinformation here.


people hadn't really settled on what exactly certain terms mean. In the 90s, you might see people calling bilinear filtering a form of antialiasing, which isn't entirely inaccurate, but that's not how people would describe it now.

Yes, in the early 90s, texture filtering was also called "texture antialiasing" in some cases. I mean, you could call it that, but only a short time later, nobody would.

SSAA does indeed reduce aliasing on edges, so someone in the 90s might call it edge antialiasing

Hmmmmm... I never heard the term Edge Anti-Aliasing used for Supersampling!
I mean, SSAA works on the whole image, and there already were AA techniques working on edges only back then.
So especially on hardware like the Dreamcast, which also supports SSAA, it would be strange to claim it supported both SSAA/FSAA AND Edge Anti Aliasing if both were the same.
Well, more on that below!

The Dreamcast has nothing that we would now call hardware edge antialiasing. It just has supersampling.

Good that that is finally cleared up.

The only sources saying the DC supported Edge Anti Aliasing would be - again - segaretro and the following document they give as a source:

Here, on pages 3 and 21, it says it supports Edge Anti Aliasing as a polygon function (right next to "Bamp"-Mapping - lol).
I never found it mentioned for Dreamcast anywhere else in the document or in any other source.

It is possible for the PVR to render at resolutions greater than 1280x480, but it requires software workarounds that make it less efficient. I've done 640x480 with 4x SSAA (1280x960) by rendering the screen in two halves, top and bottom, but this requires submitting geometry twice.

I saw you mentioning this in another thread.
Resubmitting geometry meaning out of VRAM again? That would cost bandwidth...

It seems like 1280x480 would be relatively (!) cheap on the DC compared to other GPU architectures, since it was done using the tile buffers, if I understand correctly.
So not much of a bandwidth hit here, and you said it only increases VRAM usage slightly.
But what about the fillrate cost?
Perhaps that was the reason why only a handful of commercially released games used it - and the ones that did were perhaps heavily CPU-limited anyway.

That, and also because it would be a suboptimal increase in image quality...
Textures from that time were not shimmering as much as later texture content anyway, and the DC had good texture filtering in place to handle it.
For edges, the SSAA was effectively ordered-grid supersampling, which is not optimal for edge smoothing (a rotated grid with two subpixels would be much more effective, but that only appeared in later hardware). Also, the horizontal direction mattered less on the CRT displays of the time - vertical supersampling would have been much more important, but was not done in any game due to the TA buffer limit.

One minor issue with vertical downsampling is that the PVR can't do a 2 pixel box filter, it always does a 3 pixel filter, so there's some slight additional vertical blur, but it's not bad. The user can specify one weight for the center sample, and one weight for the upper and lower sample. Horizontal downsample is always a box filter.

The attached image shows 4x SSAA on the DC. For testing, I deliberately used a different color for the background on both halves to show which render each half belongs to, but it would be seamless if I used the same colors. IIRC, the vertical downsample uses a 50% weight for the center sample, and 25% for the top and bottom samples.

That sounds exactly like the way the Dreamcast handles deflickering on interlaced output!

Did you use it for your 4x SSAA? :)
Was it even feasible to use for a resource-demanding game, or were you merely testing whether it worked at all?
 