The old "Edge Anti Aliasing"

Omarz

Newcomer
Hello there!

I am interested in old Anti Aliasing techniques, before AA became actually usable and before MSAA was used.
Of course we had and have SSAA and there is plenty of information on that;

but I am interested in the methods which were known as "Edge Anti Aliasing" on early PC 3D graphic cards and consoles, back in the 90s.


I find it really, really hard to get information on how exactly that worked - and how well it worked.



There seem to be two different methods here.
One that was used on the Rendition Vérité seems to need a specifically designed engine with back-to-front sorting of polygons, tagging the polygon edges that were to be anti-aliased, then sending the data back to the CPU and letting the CPU do the actual AA work.
So was it merely a software-based solution? What hardware support was there at all?
On the other hand, the Vérité seemed to have a kind of programmable rasterizing pipeline, so it may have worked differently.

And then there seems to be another approach, like in OpenGL, which involved transparent pixels on polygon edges. That seemed to have problems with intersecting polygons.
But I have no clue if that is how it was done.
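
For reference, this is roughly what that OpenGL approach looks like in code. A minimal sketch only: draw_scene_back_to_front() is a made-up placeholder for the engine's sorted geometry submission, and the rest is the standard polygon-smoothing setup.

```c
/* Sketch of OpenGL-style edge AA: GL_POLYGON_SMOOTH makes the rasterizer
 * output pixel coverage as alpha along polygon edges, which is then
 * alpha-blended over what is already in the framebuffer. */
#include <GL/gl.h>

extern void draw_scene_back_to_front(void);   /* hypothetical scene traversal */

void draw_edge_antialiased(void)
{
    glDisable(GL_DEPTH_TEST);                  /* blend order replaces the Z test  */
    glEnable(GL_POLYGON_SMOOTH);               /* edge coverage written into alpha */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    draw_scene_back_to_front();                /* geometry must be pre-sorted      */

    glDisable(GL_BLEND);
    glDisable(GL_POLYGON_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    /* The Red Book variant instead sorts front to back and blends with
     * glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE) to reduce seams on shared
     * edges; either way, intersecting polygons remain a problem. */
}
```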

Can anybody explain?

Did it even make sense to use this or was it just too slow?



Also, some documents claim Dreamcast's PowerVR2 supported this. But I have never seen any game use AA (aside from a handful doing SSAA).
Who knows more?


It is well known how it works on N64, tho.
 
From memory (I could be wrong, it's been 20 years), it works like what you described: the hardware generates alpha pixels on polygon edges, and you need to render from back to front with alpha blending for it to work correctly. As you said, this of course does not work when polygons intersect each other.

It's actually worse than that, because you can't really "sort" triangles: triangles can be at weird positions and no sorting order can handle all possible situations. For example, if you sort by the center of each triangle, a very long triangle could be behind another triangle near its tip yet get sorted as being in front of it, causing incorrect rendering results.
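
As an illustration, here is a minimal sketch of the kind of per-triangle centroid sort being described (the Triangle struct is made up for illustration); it is exactly this sort that breaks down in the long-triangle case above.

```c
#include <stdlib.h>

typedef struct { float x[3], y[3], z[3]; } Triangle;  /* hypothetical scene triangle */

/* Painter's-algorithm style sort key: average (centroid) view-space depth. */
static float centroid_depth(const Triangle *t)
{
    return (t->z[0] + t->z[1] + t->z[2]) / 3.0f;
}

/* Sort far-to-near so triangles can be alpha-blended back to front.
 * As noted above, this fails when a long triangle's centroid is nearer
 * than another triangle's even though part of it lies behind it. */
static int cmp_far_to_near(const void *a, const void *b)
{
    float da = centroid_depth((const Triangle *)a);
    float db = centroid_depth((const Triangle *)b);
    return (da < db) - (da > db);   /* larger depth (farther) comes first */
}

void sort_scene(Triangle *tris, size_t count)
{
    qsort(tris, count, sizeof(Triangle), cmp_far_to_near);
}
```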

PowerVR can do FSAA relatively cheaply due to its tile-based deferred rendering architecture: it can keep the supersampling buffer inside the chip, saving a lot of memory bandwidth.

If you can carefully control your scenes so that polygons don't intersect and all objects are relatively well defined, it's possible to do edge antialiasing, and that's how some very old games did it.
 
The idea is basically rendering Wu antialiased lines at the edges of polygons (see the sketch at the end of this post).
This most likely increases the polygon size by an additional pixel at the edges.

Sorting polygons was needed, as always with transparency.

During times of software rasterization most games already sorted them, so it was not such a big problem.
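
For what it's worth, here is a stripped-down sketch of the Wu-style coverage idea for a single x-major edge. blend_pixel() is a hypothetical helper that alpha-blends a coverage value into the framebuffer; a full implementation would also handle the endpoints and the other octants.

```c
extern void blend_pixel(int x, int y, float coverage);  /* hypothetical blend helper */

/* Wu-style antialiased edge, x-major case only (|dx| >= |dy|, x0 < x1).
 * Each column splits 100% coverage between the two pixels the ideal edge
 * passes between, which is why the edge ends up one pixel "fatter" than
 * a plain Bresenham line. */
void wu_edge_xmajor(int x0, int y0, int x1, int y1)
{
    float gradient = (float)(y1 - y0) / (float)(x1 - x0);
    float y = (float)y0;

    for (int x = x0; x <= x1; x++) {
        int   yi   = (int)y;        /* pixel the edge mostly covers       */
        float frac = y - (float)yi; /* how far into the next pixel it is  */

        blend_pixel(x, yi,     1.0f - frac);
        blend_pixel(x, yi + 1, frac);
        y += gradient;
    }
}
```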
 
Thanks for your replies!


Do you have more info/sources on how exactly that works?
Are both methods I described the same?
Was it even a feasible thing to do, or was it just too slow as the CPU had to do it?

I read that it was more of a "checklist"-feature back then, just to claim your card supported Anti Aliasing...



Hm yeah PowerVR's SSAA is relatively well known.
To my knowledge, the Tile Accelerator in the Dreamcast practically limited it to 1280x480, since it only buffered 600 tiles in a 40x15 configuration, and as each tile was 32x32, you ended up at 1280x480.
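
Just to spell out that arithmetic (figures as quoted in this thread, not checked against official documentation):

```c
#include <stdio.h>

int main(void)
{
    const int tile_size = 32;               /* PVR2 tiles are 32x32 pixels */
    const int tiles_x = 40, tiles_y = 15;   /* 600-tile configuration      */

    printf("tiles total: %d\n", tiles_x * tiles_y);          /* 600        */
    printf("max area   : %d x %d\n", tiles_x * tile_size,    /* 1280 x 480 */
                                     tiles_y * tile_size);
    return 0;
}
```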

BUT in some of the hardware documentation manuals, Edge Anti Aliasing is mentioned for the Dreamcast, without explaining it further.
 
No, the "GPU" (they weren't called that way back then) generates those alpha pixels. Since the transformations were done on the CPU side, it's possible for the CPU to sort the triangles after transformation. The CPU then sends those triangles to the GPU with appropriate alpha blending mode to make those triangles look smooth.
 
On the CPU side the extra sorting costs some performance, but depending on how you do it, the cost can be minimized. For example, if you can make sure objects won't intersect each other, you can sort whole objects instead of all triangles.
On the GPU side, on older GPUs (where this technique was more commonly used), some GPUs take more time to generate these alpha pixels. Some also become slower when alpha blending is enabled. On GPUs with early Z rejection (probably not very common in the days of edge AA), rendering from back to front can be much slower than rendering from front to back.
 
The driver collection at VOGONS includes Rendition Vérité and its SDK, so if you want to look at its instruction set and hardware, it's right there. It's been a while since I looked at it, but IIRC, the original V1000 basically just has the custom CPU do everything. It's not like the N64, where a CPU does setup (RSP) and hardware does the actual pixel pushing (RDP), but more like the SuperFX, where there are some special instructions to help with rendering (like a DDA step instruction) but otherwise it's all just software. The later V2000 adds much more dedicated rendering hardware, making it N64-like.

There's a lot of wrong information on the DC. A lot of places (like the Sega Retro wiki) say that the PVR in the DC can do things that are exclusive to the Neon 250, like have 2D acceleration or display resolutions like 1600x1200. Also, people hadn't really settled on what exactly certain terms mean. In the 90s, you might see people calling bilinear filtering a form of antialiasing, which isn't entirely inaccurate, but that's not how people would describe it now. SSAA does indeed reduce aliasing on edges, so someone in the 90s might call it edge antialiasing (even though it does more, like help with texture/shader aliasing), but now people would only use the term edge antialiasing for something that ONLY works on edges.

The Dreamcast has nothing that we would now call hardware edge antialiasing. It just has supersampling. You could do other kinds of AA with software assistance, like drawing alpha blended lines on polygon edges, and it's possible to do a 3dfx t-buffer or OpenGL accumulation buffer like effects, but nothing built into the hardware.

It is possible for the PVR to render at resolutions greater than 1280x480, but it requires software workarounds that make it less efficient. I've done 640x480 with 4x SSAA (1280x960) by rendering the screen in two halves, top and bottom, but this requires submitting geometry twice. One minor issue with vertical downsampling is that the PVR can't do a 2 pixel box filter, it always does a 3 pixel filter, so there's some slight additional vertical blur, but it's not bad. The user can specify one weight for the center sample, and one weight for the upper and lower sample. Horizontal downsample is always a box filter.
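
To make the control flow of that two-pass trick concrete, here is a rough sketch. Every function and type name is a hypothetical stand-in for illustration, not an actual SDK call; the point is only that the geometry is submitted once per half and the hardware filter produces the final 640x480.

```c
typedef struct Scene Scene;                        /* opaque, hypothetical          */

extern void set_projection_for_half(int half);     /* hypothetical helper functions */
extern void set_render_target_half(int half);
extern void submit_scene_geometry(const Scene *s);
extern void wait_for_render_complete(void);
extern void downsample_to_framebuffer(void);

/* Hedged sketch of 4x SSAA via split-screen rendering on the PVR: the scene
 * is submitted twice, once per half, into a 1280x960 supersampled buffer
 * that the hardware then filters down to 640x480. */
void render_frame_4x_ssaa(const Scene *scene)
{
    for (int half = 0; half < 2; half++) {
        set_projection_for_half(half);    /* cover only the top or bottom half   */
        set_render_target_half(half);     /* render this half at 1280x480        */
        submit_scene_geometry(scene);     /* geometry has to be sent twice       */
        wait_for_render_complete();
    }

    /* Hardware downsample: horizontal box filter 1280 -> 640, plus the
     * 3-tap vertical filter (e.g. 25/50/25 weights) for 960 -> 480. */
    downsample_to_framebuffer();
}
```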

The attached image shows 4x SSAA on the DC. For testing, I deliberately used a different color for the background on both halves to show which render each half belongs to, but it would be seamless if I used the same colors. IIRC, the vertical downsample uses a 50% weight for the center sample, and 25% for the top and bottom samples.
 

Attachments

  • DC 4x SSAA.png (116.5 KB)

In your linked article, it says:
"Several other architectures from this generation supported Edge AA as well, but I doubt they could come close to the speed of Vérité."
Would that be because of the CPU in front of the pipeline?

I mean if I understand correctly, it would require a game engine with specific adjustments, polygon sorting, it would not work on all polygon edges, would cost some fillrate (due to transparencies involved) and change the look of the polygons (as it made the lines thicker). Right?
So this is why it was rarely done back then?


BTW, here is another interesting read:
"Many cards claim support for anti-aliasing by implementing "edge" anti-aliasing or anti-aliasing through "oversampling." Edge anti-aliasing is accomplished by tagging which polygons are an edge and then going back and letting the CPU perform anti-aliasing on these edges after the scene is rendered. In order for a game to support this, it has to be designed with this in mind as the edges have to be tagged. [...] In other words, it's useless for games, but are implemented for OEM "checklists" and improving 3D Winbench quality scores."

This is what I meant in my original posting with the other method. This would be done after the scene is rendered and doing it this way seems pretty much useless.
 
Very interesting read, TampaM!

There's a lot of wrong information on the DC. A lot of places (like the Sega Retro wiki) say that the PVR in the DC can do things that are exclusive to the Neon 250, like have 2D acceleration or display resolutions like 1600x1200.

I can confirm there is a lot of misinformation on segaretro; they are known to be somewhat unreliable.

A lot of people take this for granted; that's why in a lot of YouTube videos and comments you read stuff like "DC natively renders everything at 1600x1200 internally and then just outputs it at 640x480" and more like this.
1600x1200 is a PC resolution and absurdly high for a gaming console from 1998. The only source segaretro gives is a web archive containing an age-old article from the website "sega technical pages", which I used to read back then and which has been down for ages now. So it makes sense they would confuse it with the Neon 250, whose specs were altered a lot from the PowerVR CLX2 used in the Dreamcast.

But I think using a bit of logical thinking is enough to disprove the "renders at 1600x1200 internally".
If that were true, why would the VGA signal of the Dreamcast output only 640x480?
The VGA-Box was made to connect to PC-monitors, which were perfectly capable of displaying resolutions a lot higher than 640x480 in the DC's days. Most of them going up to 1280x1024, some even 1600x1200.
So why limit the VGA output to 640x480 and go through all the hassle of using an absurdly high resolution that costs a ton of resources, then going through the process of downsampling it, only to output it at 640x480? Makes no sense at all.
Also, one can easily see with one's own eyes when using a VGA box that there is no AA at all - aside from 4 games I know of (excluding maybe homebrew) that do horizontal 2x SSAA in the form of 1280x480.

1600x1200 is not even an integer multiple of 640x480 (as 1280x960 would be). You would get artifacts downsampling that resolution to 640x480.


BTW, segaretro has a lot of other things wrong.
The claim that DC does up to 5 Million polygons per second ingame, for example. Only source given: An age-old article written for a game-magazine that claims (!) that the developers claim (!) the game does 5 Million.
I mean what is the average size of memory required for a polygon in the display list? 32 bytes? ~1 vertex per polygon, UV coordinates and what not going into the display lists...
And to my knowledge after reading some of the manuals, you would double store it in VRAM in a way that the Tile Accelerator writes, while the rendering core reads in parallel, so you got twice the memory usage if I got that right.
So running at 30fps, that alone would use more than 5 MB of the DC's 8 MB VRAM. Plus the 1.17 MB double framebuffers, you have only 1.83 MB left for everything else. Texture data alone is way more than that (VQ texture compression already considered), so 5 Million just can't be true for this game.
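
Making the back-of-the-envelope numbers explicit (the 32 bytes per polygon and the doubled storage are the assumptions from the paragraph above, not official figures):

```c
#include <stdio.h>

int main(void)
{
    const double polys_per_sec   = 5e6;    /* the claimed 5 Mpoly/s             */
    const double fps             = 30.0;
    const double bytes_per_poly  = 32.0;   /* rough display-list estimate       */
    const double framebuffers_mb = 1.17;   /* double-buffered 640x480 16-bit    */
    const double vram_mb         = 8.0;

    double per_frame_mb = polys_per_sec / fps * bytes_per_poly / (1024 * 1024);

    printf("display list per frame : %.2f MB\n", per_frame_mb);        /* ~5.1 MB  */
    printf("with double storage    : %.2f MB\n", per_frame_mb * 2.0);  /* ~10.2 MB */
    printf("left after framebuffers: %.2f MB\n",
           vram_mb - per_frame_mb - framebuffers_mb);                  /* ~1.7 MB  */
    return 0;
}
```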

Dreamcast was and is a marvellous piece of hardware, so I think that is the reason why people just take that for granted and spread it all around:
They just WANT it to be true (and DC besting the PS2). ; D


Another example for misinformation on segaretro would be the Sega Model 3 technical specs.
It says that Model 3 Step 2, for example, has 6 (!) GPUs in the form of 6 Real3D-pro 1000 GPUs (chip number 315-6060).
Model 3 does not even have a GPU. GPUs were a later thing; the Real3D-pro is an image generation system spanning several PCBs with a dozen specialized chips for different functions, not a GPU. And Model 3 does not use 6 (!) Real3D image generators, each consisting of several boards. Model 3 instead IS a (slightly modified) Real3D-pro.
They cannot even count right. There are not 6 chips with the number 315-6060 present, as you can tell if you just look at the Model 3 Step 2 PCB! There are only 4, and those are not GPUs, but texturing units!
There is a lot of more wrong info in that article, I looked through all the source material, which simply does not support those claims. It does not add up.

So a lot of misinformation here.


people hadn't really settled on what exactly certain terms mean. In the 90s, you might see people calling bilinear filtering a form of antialiasing, which isn't entirely inaccurate, but that's not how people would describe it now.

Yes, in the early 90s, texture filtering was also called "texture antialiasing" in some cases. I mean you could call it that, but only a short time later, nobody would.

SSAA does indeed reduce aliasing on edges, so someone in the 90s might call it edge antialiasing

Hmmmmm... I never heard the term Edge Anti-Aliasing used for supersampling!
I mean SSAA works on the whole image, and there already were AA techniques working on edges only back then.
So especially on hardware like the Dreamcast, which also supports SSAA, it would be strange to claim it supported both SSAA/FSAA AND Edge Anti Aliasing, if both were the same.
Well, more on that below!

The Dreamcast has nothing that we would now call hardware edge antialiasing. It just has supersampling.

Good that that is finally cleared up.

The only sources saying DC supported Edge Anti Aliasing would be - again - segaretro and the following document they give as a source:

Here on pages 3 and 21, it says it supports Edge Anti Aliasing as polygon function (right next to "Bamp"-Mapping - lol).
I never found it mentioned for Dreamcast anywhere else in the document or in any other source.

It is possible for the PVR to render at resolutions greater than 1280x480, but it requires software workarounds that make it less efficient. I've done 640x480 with 4x SSAA (1280x960) by rendering the screen in two halves, top and bottom, but this requires submitting geometry twice.

I saw you mentioning this in another thread.
Resubmitting geometry, meaning out of VRAM again? That would cost bandwidth...

It seems like 1280x480 would be relatively (!) cheap on DC compared to other GPU architectures since it was done using the tile buffers, if I understand correctly.
So not much of a bandwidth hit here, and you said it only increases VRAM usage slightly.
But what about fillrate cost?
Perhaps that was the reason why only a handful of commercially released games used it - and the ones that did were perhaps heavily CPU-limited anyway.

That, and also because it would be a suboptimal increase in image quality...
Textures from the time were not shimmering as much as later texture content anyway, and DC had good texture filters in place to handle this.
For edges, the SSAA was effectively ordered grid supersampling, which is not optimal for edge smoothing (a rotated grid with two subpixels would be much more effective, but was only done in later hardware). Also, the horizontal direction would be less important on CRT displays from back then - vertical supersampling would be much more important, but was not done in any game due to the TA buffer limit.

One minor issue with vertical downsampling is that the PVR can't do a 2 pixel box filter, it always does a 3 pixel filter, so there's some slight additional vertical blur, but it's not bad. The user can specify one weight for the center sample, and one weight for the upper and lower sample. Horizontal downsample is always a box filter.

The attached image shows 4x SSAA on the DC. For testing, I deliberately used a different color for the background on both halves to show which render each half belongs to, but it would be seamless if I used the same colors. IIRC, the vertical downsample uses a 50% weight for the center sample, and 25% for the top and bottom samples.

That sounds exactly like the way the Dreamcast handles deflickering on interlaced output!

Did you use it for your 4x SSAA? :)
Was it even feasible to use for a resource-demanding game, or were you merely testing whether it worked at all?
 
From memory (I could be wrong, it's been 20 years), it works like what you described: the hardware generates alpha pixels on polygon edges, and you need to render from back to front with alpha blending for it to work correctly. As you said, this of course does not work when polygons intersect each other.

It's actually worse than that, because you can't really "sort" triangles: triangles can be at weird positions and no sorting order can handle all possible situations. For example, if you sort by the center of each triangle, a very long triangle could be behind another triangle near its tip yet get sorted as being in front of it, causing incorrect rendering results.

PowerVR can do FSAA relatively cheaply due to its tile-based deferred rendering architecture: it can keep the supersampling buffer inside the chip, saving a lot of memory bandwidth.

If you can carefully control your scenes so that polygons don't intersect and all objects are relatively well defined, it's possible to do edge antialiasing, and that's how some very old games did it.
Could we use 1080i instead of 1080p to get an interlaced picture to perform some kind of edge AA?

The picture on the left shows how a game looks on a CRT, right is the typical image from modern displays.



I got the images from the post below, because the "natural" edge AA created by CRT monitors worked very well, a bit like playing games at higher framerates, which gives you a certain degree of natural AA when in motion.

 
I guess that depends on what you want from AA. Some don't like jagged edges (thus "edge AA") and some don't like flickering ("temporal" aliasing). Using blank lines between pixels generally helps the first but not the second problem.
It's a bit like blank frame insertion. Basically it asks the brain to fill in the blanks, and it can work very well, but with lower brightness. Also I don't know if it's going to be more demanding on the brain, as I haven't seen any research on this topic, but in theory it's possible.
 
If that were true, why would the VGA signal of the Dreamcast output only 640x480?
The video output chip of the Dreamcast runs at a mostly fixed pixel clock. It outputs pixels at a fixed rate for either NTSC/PAL 480i or VGA 640x480 60 Hz. The only other standard resolution that the DC's pixel clock matches is 640x400 at 70 Hz. To run at a higher resolution, the hardware has to be able to output pixels faster, but the DAC in the DC only operates at around 27 MHz for VGA, or half that for NTSC/PAL. For 640x480x60 noninterlaced, after you add overscan you need to output pixels at a rate of around 27 Mpixel/sec (the video signal for a 640x480 display is actually something around (IIRC) 853x525, which gives the magnets in the CRT time to move the beam from right to left when starting a new line, or from bottom to top when starting a new frame). 1600x1200 at 60 Hz requires a 162 MHz pixel clock. You could try to set the DC to output at 1600x1200, but it would run at 10 Hz.
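
Spelling out those pixel clock numbers (the blanking totals are approximate, VESA-style figures):

```c
#include <stdio.h>

int main(void)
{
    const double total_640x480   = 853.0 * 525.0;    /* visible 640x480 + blanking   */
    const double total_1600x1200 = 2160.0 * 1250.0;  /* approx. VESA 1600x1200 total */
    const double dc_pixel_clock  = 27e6;             /* ~27 MHz VGA DAC              */

    printf("640x480@60 needs   : %.1f MHz\n", total_640x480 * 60.0 / 1e6);    /* ~26.9 */
    printf("1600x1200@60 needs : %.1f MHz\n", total_1600x1200 * 60.0 / 1e6);  /* 162.0 */
    printf("1600x1200 at 27 MHz: %.1f Hz\n",  dc_pixel_clock / total_1600x1200); /* 10 */
    return 0;
}
```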

I mean what is the average size of memory required for a polygon in the display list? 32 bytes? ~1 vertex per polygon, UV coordinates and what not going into the display lists...
And to my knowledge after reading some of the manuals, you would double store it in VRAM in a way that the Tile Accelerator writes, while the rendering core reads in parallel, so you got twice the memory usage if I got that right.
So running at 30fps, that alone would use more than 5 MB of the DC's 8 MB VRAM. Plus the 1.17 MB double framebuffers, you have only 1.83 MB left for everything else. Texture data alone is way more than that (VQ texture compression already considered), so 5 Million just can't be true for this game.
Le Mans definitely does not run at 5 Mpoly/s. I would guess the devs probably meant that their T&L could run fast enough to output 5 Mpoly/s, which is very doable, even if the PVR couldn't handle that much.

The size of a vertex varies depending on format. The TA accepts arbitrary length strips, but they are broken up into substrips of up to 6 triangles (8 verts) long. Each substrip has a 12 byte header (20 bytes if the polygon is affected by full modifier volumes; if you just do shadowing, it's still 12 bytes) containing stuff like depth compare, vertex format, and texture information, and then the vertices. Each vertex contains position (12 bytes), color (4 bytes), UV coords (8 bytes for 32-bit coords, 4 bytes for 16-bit coords, 0 bytes if untextured), optional specular (4 bytes), and if full modifier volumes are used, a second set of UV and vertex colors (up to 16 more bytes). Long strips will save memory because you store fewer headers and fewer redundant copies of vertices.

So if you draw triangles with 32-bit UVs and no specular, with an average substrip length of 3 triangles (5 vertices), the average cost would be (12 + (12 + 4 + 8) * 5) / 3, or 44 bytes per triangle. If you ran 1.5 Mpoly/s at 60 FPS, with the double buffering you would need a bit over 2 MB of video RAM to store the polygon data.
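
The same calculation as a small program, using the figures quoted above:

```c
#include <stdio.h>

int main(void)
{
    const int header_bytes      = 12;
    const int vertex_bytes      = 12 + 4 + 8;            /* position + color + 32-bit UV */
    const int tris_per_substrip = 3;
    const int verts_per_substrip = tris_per_substrip + 2; /* a strip of n tris has n+2 verts */

    double bytes_per_tri =
        (header_bytes + vertex_bytes * verts_per_substrip) / (double)tris_per_substrip;

    double polys_per_frame = 1.5e6 / 60.0;                /* 1.5 Mpoly/s at 60 FPS */
    double buffer_mb = polys_per_frame * bytes_per_tri * 2.0 / (1024 * 1024);

    printf("bytes per triangle   : %.0f\n", bytes_per_tri);   /* 44             */
    printf("double-buffered need : %.2f MB\n", buffer_mb);    /* a bit over 2   */
    return 0;
}
```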

Something that's possible is to render the frame buffer in strips, and only double buffer the strips. So instead of having two large vertex buffers, each capable of storing 50K polys, one for each for an entire frame, you could have two smaller vertex buffers that store maybe 20K polys, each one for a 160x480 region of the screen, then alternate between the buffers as you render the frame buffer in fourths. This is less efficient, since you have to resubmit geometry multiple times if it crosses a region of the screen, but it saves memory. It's also dependent on the polygons being distributed relatively evenly across the screen, since the vertex buffers have to handle the worst case load for one area of the screen. It would work poorly for a fighting game, since the polygons would be concentrated in the characters, but would work better for an overhead shooter, or an outdoor game (as long as you don't turn the camera on its side).
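
A rough sketch of that strip/region scheme; every function and type here is a hypothetical placeholder rather than a real API, the point is only the ping-ponging of two small vertex buffers across four screen columns.

```c
typedef struct VertexBuffer VertexBuffer;   /* opaque, hypothetical */
typedef struct Scene Scene;                 /* opaque, hypothetical */

extern void submit_geometry_for_region(const Scene *s, VertexBuffer *vb,
                                        int x, int y, int w, int h);   /* hypothetical */
extern void start_render_region(VertexBuffer *vb,
                                int x, int y, int w, int h);           /* hypothetical */
extern void wait_for_all_regions(void);                                /* hypothetical */

/* Render a 640x480 frame as four 160x480 columns, alternating between two
 * small vertex buffers so one can be filled while the other is rendering. */
void render_frame_in_columns(const Scene *scene, VertexBuffer *buf[2])
{
    for (int col = 0; col < 4; col++) {
        VertexBuffer *vb = buf[col & 1];    /* ping-pong between the two buffers */

        /* Resubmit only the geometry overlapping this 160-pixel-wide column;
         * polygons crossing a boundary get submitted more than once. */
        submit_geometry_for_region(scene, vb, col * 160, 0, 160, 480);
        start_render_region(vb, col * 160, 0, 160, 480);
    }
    wait_for_all_regions();
}
```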
I saw you mentioning this in another thread.
Resubmitting geometry, meaning out of VRAM again? That would cost bandwidth...
The CPU has to resend everything to the hardware for the split screen rendering. It's the same as rendering two entire frames, but one frame is shrunk and written to the top half of the frame buffer, and the second frame goes to the bottom half. It probably would have been easy to add a hardware feature for the PVR to support the split screen trick without CPU assistance, and allow the PVR to reuse the vertex data for both halves, but unfortunately there isn't one.

It seems like 1280x480 would be relatively (!) cheap on DC compared to other GPU architectures since it was done using the tile buffers, if I understand correctly.
So not much of a bandwidth hit here, and you said it only increases VRAM usage slightly.
But what about fillrate cost?
Perhaps that was the reason why only a handful of commercially released games used it - and the ones that did were perhaps heavily CPU-limited anyway.
Rendering without mipmaps is typically more expensive than rendering 2x SSAA with mipmaps. Shenmue would have had better performance with SSAA with mipmaps than the version that was actually released. Of course, mipmaps take up more video RAM, so...

I listed some benchmarking of PVR fillrate with different texture formats and SSAA here.

That sounds exactly like the way the Dreamcast handles deflickering on interlaced output!
It's the same thing. The deflickering is done by turning on the downsampling, but using the smallest amount of downsampling possible.

Did you use it for your 4x SSAA? :)
Was it even feasible to use for a resource-demanding game, or were you merely testing whether it worked at all?
If you're fine with 30 FPS and a reduced polygon count, 2x2 SSAA is usable.
 
Anti-aliasing is the solution and the curse for many games.

Someone decided to try out older games on Windows XP without Anti-Aliasing, and the experience was surprisingly positive.

A reddit user of the r/pcmasterrace forum wanted to try out some games from the beginning of the 21st century - such as Halo: Combat Evolved or Warcraft III: Reign of Chaos among others - on an OS suitable for them (Windows XP) and was pleasantly surprised to remember how "clean" they looked compared to how blurred the most advanced titles look today, even when playing at resolutions higher than those games could support.

The surprise for the player who tried those games on Windows XP is that in some cases he was more attracted by the scenes and graphics seen in those games than the "ultra-realistic" ones we have today. There may be a part of nostalgia, but he might have a point.

 
Very well explained, TampaM.

The TA accepts arbitrary length strips, but they are broken up into substrips of up to 6 triangles (8 verts) long.
Each substrip has a 12 byte header (20 bytes if the polygon is affected by full modifier volumes; if you just do shadowing, it's still 12 bytes) containing stuff like depth compare, vertex format, and texture information, and then the vertices. Each vertex contains position (12 bytes), color (4 bytes), UV coords (8 bytes for 32-bit coords, 4 bytes for 16-bit coords, 0 bytes if untextured), optional specular (4 bytes), and if full modifier volumes are used, a second set of UV and vertex colors (up to 16 more bytes). Long strips will save memory because you store fewer headers and fewer redundant copies of vertices.

So if you draw triangles with 32-bit UVs and no specular, with an average substrip length of 3 triangles (5 vertices), the average cost would be (12 + (12 + 4 + 8) * 5) / 3, or 44 bytes per triangle. If you ran 1.5 Mpoly/s at 60 FPS, with the double buffering you would need a bit over 2 MB of video RAM to store the polygon data.

What would be the reasons why you would not use all 6 polygons for a strip? Or is that done by the TA automatically?
I mean I understand models back then were not nearly as complex as today... but it still seems to me that such small strips would rather be the exception.
So why did you take 3 as an average?

And what would be the disadvantage of using only 16 bits precision for UV coordinates? Misalignment/shimmering artifacts?

Rendering without mipmaps is typically more expensive than rendering 2x SSAA with mipmaps. Shenmue would have had better performance with SSAA with mipmaps than the version that was actually released. Of course, mipmaps take up more video RAM, so...

Yeah, makes sense as all those high res textures in the distance would eat fillrate for breakfast.
That is AFAIK one of the main causes for slowdown in Shenmue.
The reason they chose not to use mipmapping indeed seems to be memory; it seems like every byte mattered.

MSR on Dreamcast seems to have the same problem, judging from the looks. There is so much texture shimmering, I think they also sacrificed Mip Mapping for more memory.



Also, it seems a lot of PS2-games have an awful lot of texture shimmering. I wonder why that would be, as the hardware supports mip-mapping.
The "standard" answer you would get would be they try to save space in the 4 MB eDRAM to make texture "caching" easier; but I guess that should not be aproblem if you schedule the GIF-bus transfers of textures accordingly...
I think I read here somewhere that the functions to select the correct texture LOD were very simplistic on the GS and would need manual touch-up in the game's engine.
Don't know if that's true, but there must be a reason why many PS2-games have this problem.

Compare Le Mans on Dreamcast and on PS2, for example.
PS2: Textures in the distance very sharp, but shimmering.
DC: Textures in the distance too blurry (no Anisotropic Filtering used and perhaps conservative texture LOD), but no shimmering.

I listed some benchmarking of PVR fillrate with different texture formats and SSAA here.

Very interesting indeed!
 
Anti-aliasing is the solution and the curse for many games.

Someone decided to try out older games on Windows XP without Anti-Aliasing, and the experience was surprisingly positive.

A reddit user of the r/pcmasterrace forum wanted to try out some games from the beginning of the 21st century - such as Halo: Combat Evolved or Warcraft III: Reign of Chaos among others - on an OS suitable for them (Windows XP) and was pleasantly surprised to remember how "clean" they looked compared to how blurred the most advanced titles look today, even when playing at resolutions higher than those games could support.

Well I understand that things like TAA or FXAA and the like cause some blurriness;

but games from that age should be perfect candidates for MSAA and - even better - SGSSAA (if your game engine/hardware supports it and if you get it to run).

Using the latter, you had a sharp but natural looking image without any jaggies or pixel crawling and it also reduces texture shimmering and/or improves texture sharpness (if you select a more aggressive texture LOD).


If you play a game from the beginning of the millennium and use halfway recent hardware, you could also do crazy things like 2x2 DSR + 8xSGSSAA for 32x SSAA displayed at high resolutions like 4K... image quality would be perfect. : D
 
Well I understand that things like TAA or FXAA and the like cause some blurriness;

but games from that age should be perfect candidates for MSAA and - even better - SGSSAA (if your game engine/hardware supports it and if you get it to run).

Using the latter, you had a sharp but natural looking image without any jaggies or pixel crawling and it also reduces texture shimmering and/or improves texture sharpness (if you select a more aggressive texture LOD).


If you play a game from the beginning of the millennium and use halfway recent hardware, you could also do crazy things like 2x2 DSR + 8xSGSSAA for 32x SSAA displayed at high resolutions like 4K... image quality would be perfect. : D
In that sense, Clive Barker's Undying, a game I played recently, is the perfect example of what you say.

Many old games didn't have subpixel detail at all. All the textures were like blocks. A gate was a block of textures; bars in the windows were the same.

I played it on GoG a few weeks ago and the IQ was literally PERFECT. Not a single jaggie anywhere, not even on weapons. The environments looked as if they had 16xAA or something like that, applied all over, and of course without any aliasing issues.

Guess this also applies to old games like Vampire the Masquerade Redemption, one of my favourite games to date, which had the same kind of textures. The hands and fingers of the characters were an entire block, glued together.



But in terms of AA all those things work like a charm.
 