Why did N64 have super blurry antialiasing?

GuardHei

Newcomer
In many gameplay videos, as well as in my personal memory, N64 games with antialiasing enabled always looked messy. I originally thought this was due to some quick antialiasing hack from the early days that traded sharpness for performance (maybe a full-screen blur or similar).

However, after reading through related material, that doesn't seem to be the case.
As far as I understand (my source is the Ultra64 programming manual for the N64), and to my surprise, the N64's hardware antialiasing works very similarly to 8x MSAA with a checkerboard subpixel offset pattern. And the blender unit properly resolves the polygon colors against each other and the background, given the coverage values.

If that's the case, then only polygon edges would be blurred, and we shouldn't see a messy image. However, in actual games like GoldenEye with antialiasing on, it seems like the ground texture is also weirdly blurred (I believe those pixels are inside a triangle). Here's a comparison video.

So, am I misunderstanding something here? Or did most developers choose to implement their own software antialiasing solution due to other concerns (performance, maybe? I know the N64 is bandwidth-hungry)? Or is this "blur" not related to antialiasing at all (maybe the video output signal is incorrectly processed, or it's compression artifacts)?

Also, a side question: where does the N64 store its framebuffer and Z-buffer? I know the N64 used a unified memory architecture. But does the RSP include any sort of cache to quickly access the framebuffer and Z-buffer, or do they both live in main RDRAM, with very high access latency?
 
N64 has two separate anti-aliasing methods that most games used simultaneously.

One is poly edge AA. It may have results similar to MSAA, but it's analytical: it takes a single sample but estimates coverage based on the sample's distance to the closest triangle edge. The result (as far as I understand) is simply alpha-blended on top of the rest of the framebuffer, so it also only works when drawing back to front (ruling out the optimization opened up by the Z-buffer of rendering front to back and skipping occluded pixels early).

The second AA is, as you guessed, a full-screen blur (horizontal only, I think). Some people seem really annoyed by it, but I don't feel too bothered. I find that some subtle imperfections in how the frame is displayed help conceal some of its synthetic look, even in games as old as the N64's. Yet I know that's an unpopular opinion.
 
The second AA is interesting to hear about. It doesn't seem to be included in the programming guide. Also, I don't think the RSP is programmable, so how is the second "horizontal blur" implemented? Hardware? The CPU directly manipulating the framebuffer? Or does the video output device take the job? But this filter does look like the root of the blurriness.

Regarding the first AA, I'm slightly confused. First, according to the documentation, you don't have to draw polygons from back to front as long as they're Z-buffered. Of course, it utilizes the blender unit, so transparent objects need that treatment. (I also doubt whether the N64 actually supports things like early Z kill, as its Z-buffer is handled by the blender unit at the very end of the screen-space pipeline.)

In addition, could you elaborate a bit more on how the N64 estimates the coverage info given only one sample?

It would make sense to me, because I don't think the N64 is powerful enough to do coverage rasterization 8 times over like MSAA. Yet according to this subpixel mask in the documentation, it does look like there are 8 samples; then again, maybe I'm misunderstanding.

[attached image: the 8-subsample coverage mask diagram from the manual]
Link to the above image: http://n64devkit.square7.ch/tutorial/graphics/6/6_2.htm
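For intuition, here's a toy sketch of what 8-subsample coverage with a checkerboard pattern (like the mask in that image) would look like. The subsample offsets below are illustrative, not the RDP's actual positions:

```python
# Toy model of 8-subsample coverage with a checkerboard pattern, in the
# spirit of the mask pictured in the manual. The offsets are illustrative;
# they are NOT the hardware's actual subsample positions.
SUBSAMPLES = [(x / 4 + 0.125, y / 4 + 0.125)
              for y in range(4) for x in range(4) if (x + y) % 2 == 0]

def edge(ax, ay, bx, by, px, py):
    """Signed-area test: >= 0 means p lies on the inner side of edge a->b
    for a counter-clockwise triangle."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def coverage(tri, px, py):
    """Fraction of the pixel at (px, py) covered by triangle tri,
    estimated by testing each of the 8 subsamples against all 3 edges."""
    (ax, ay), (bx, by), (cx, cy) = tri
    inside = 0
    for ox, oy in SUBSAMPLES:
        sx, sy = px + ox, py + oy
        if (edge(ax, ay, bx, by, sx, sy) >= 0 and
                edge(bx, by, cx, cy, sx, sy) >= 0 and
                edge(cx, cy, ax, ay, sx, sy) >= 0):
            inside += 1
    return inside / 8.0
```

A pixel well inside a triangle gets coverage 1.0, a pixel outside gets 0.0, and edge pixels land on one of the eight intermediate steps, which is all a blend-based resolve needs.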
 

Let me preface this by saying I'm not sure of anything I'm saying here; I'm going off memories of things I learned second-hand over time.

As for the second AA, as I understand it, it's applied at the very end of the pipeline by the video output unit, whatever it's called. It's hard-coded but toggleable (the Quake 1 port lets the user turn it on/off in the settings).

For the edge AA, I always thought it used analytical AA: instead of taking multiple discrete samples, use trigonometry to compute how much of each pixel's area is covered by the polygon. There are multiple mathematical ways to implement that, but I used to think the N64 implemented an approximation of it by choosing the triangle edge closest to each pixel center, calculating the shortest distance from the center to that edge, and estimating coverage that way. (Since that uses one edge per pixel, it will be wildly incorrect at vertices or for micro-polygons.)
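A minimal sketch of that distance-to-closest-edge idea, assuming a unit-square pixel and a simple linear distance-to-coverage mapping (both assumptions mine, not documented hardware behavior):

```python
import math

def signed_distance_to_edge(ax, ay, bx, by, px, py):
    """Signed perpendicular distance from (px, py) to the infinite line
    through a->b; positive on the interior side for a CCW edge."""
    ex, ey = bx - ax, by - ay
    return (ex * (py - ay) - ey * (px - ax)) / math.hypot(ex, ey)

def estimated_coverage(edges, px, py):
    """Estimate coverage from the single closest edge: map its signed
    distance (in pixels) straight to an area fraction. As noted above,
    this goes wildly wrong at vertices and for sub-pixel polygons,
    because it only ever considers one edge per pixel."""
    d = min(signed_distance_to_edge(*e, px, py) for e in edges)
    # An edge passing exactly through the pixel center gives d = 0,
    # i.e. roughly half the pixel covered; clamp to [0, 1].
    return min(1.0, max(0.0, d + 0.5))
```

Note how a pixel center sitting exactly on an edge comes out at 0.5 coverage, and anything more than half a pixel inside or outside saturates to 1 or 0.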

Now, that image you've shown completely throws a wrench into what I thought was the case. Perhaps I misremembered all this, or perhaps there's a false description of the AA around the web that I ended up believing. The 8-subsample approach seems surprisingly brute-force for the time!

As for early Z, I believe a pixel would be shaded regardless of whether it ends up occluded or not, but it only gets written to the framebuffer (color and Z) if it passes the Z test, so there is that bandwidth saving when failing the Z test.

I noticed, for example, that many games have non-antialiased edges on every line that's directly in front of the sky. That leads me to believe those games draw the sky last. If that doesn't save any performance, then they're losing AA for no good reason. Banjo-Kazooie is an example that comes to mind, but again, my memory can fail me.
 
I just found a source for my own understanding of the N64's AA here on B3D itself.

N64 used edge antialiasing using the Wu line antialiasing algorithm or something similar.

The good thing is that it is basically perfect in terms of AA gradient and involves no more sampling than normal rendering.
The bad thing is that it needs polygons sorted back to front (or order-independent rendering), and it cannot be properly applied to polygons smaller than one pixel.
Also there are some small errors at line ends.

Almost all GPUs support this method, including Voodoo, Verite, anything from Nvidia & ATI, and the GS.

On PS2 the spinning cubes at startup and the music player use this algorithm as well.
I always wondered why this method wasn't used on some scenes & edges on PS2.

Some games on PS2, like Baldur's Gate: DA, used plain brute-force SSAA.

But later it gets corrected to what you had said instead:

This is how the N64 AA worked:
http://www.patentstorm.us/patents/6999100.html

The abstract talks about the 12-bit coverage mask, but it doesn't mention that the N64 used 9-bit-per-byte RAM, so that every 3-byte RGB pixel could stash a 3-bit coverage mask for the AA. 4-pixel quad * 3 bits/pixel = 12-bit mask.
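To make that packing concrete, here's a toy sketch of the trick the quote describes, under the assumption of one spare bit per 9-bit byte; the exact bit layout is made up for illustration, only the 3-bits-per-pixel budget comes from the quote:

```python
# Toy sketch: with 9-bit RAM bytes, a 3-byte RGB pixel has 3 "spare"
# bits (one per byte) in which a 3-bit coverage value can hide.
# The bit layout here is illustrative, not the actual hardware format.

def pack_pixel(r, g, b, cov):
    """Pack 8-bit R, G, B plus a 3-bit coverage value into three 9-bit bytes."""
    assert 0 <= cov < 8
    return [r << 1 | (cov >> 2) & 1,
            g << 1 | (cov >> 1) & 1,
            b << 1 | cov & 1]

def unpack_pixel(bytes9):
    """Recover (r, g, b, coverage) from three 9-bit bytes."""
    r9, g9, b9 = bytes9
    cov = ((r9 & 1) << 2) | ((g9 & 1) << 1) | (b9 & 1)
    return r9 >> 1, g9 >> 1, b9 >> 1, cov

def quad_mask(covs):
    """Concatenate the four 3-bit coverage values of a 2x2 quad into the
    12-bit mask the patent abstract talks about."""
    m = 0
    for c in covs:
        m = (m << 3) | c
    return m
```

The point being that coverage rides along for free: no extra framebuffer bytes, just the odd bit width of the RDRAM.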

So what I first described might indeed have been a common misconception.
 
Oh man, the modern age of graphics made me completely forget about the cost of writing into the framebuffer, but yeah, you're right about the bandwidth saved when failing the Z test.

Weirdly enough, I don't understand why Banjo exhibits the behavior you describe. If we're only talking about opaque objects with the Z test on, the draw order doesn't affect the AA. So even if you draw the background after the foreground objects, you should still see antialiased edges. Transparent objects, since they can't be Z-tested if you want multiple layers, have to be drawn back to front to make the AA work properly. All of this is addressed in the manual here: ultra64.ca/files/documentation/online-manuals/man/pro-man/pro15/index15.7.html

Now, I've tried to read a few more sources, and I think the horizontal blur may be more complex than we thought. But this is purely my guess, and I don't think I quite understand some of the readings:

In this source, people guess that improper interpolation from the lower rendering resolution to the output causes the blur, which both makes sense and raises questions: if this is an output issue, why can people toggle the blur on/off? What does make sense is that Nintendo doesn't document any form of horizontal-blur antialiasing in the manual, so I'm 80% sure the "full screen" horizontal blur is probably not a thing. (If it were something that could be toggled, Nintendo should have it in the documentation, I'd guess?)

After reading through the antialiasing part, I think I might know which part could be the problem. But first I'll have to build up a rather long context to make sure we're on the same page.

So, we all know that n-times MSAA requires both the color and depth buffers to be n times bigger so we can keep track of each subsample. But the N64 is pretty early hardware. Not only does this require more bandwidth to read/write, but the cost also makes it infeasible with such a small amount of RAM.

What Nintendo does (again, from my understanding) is assume that only two edge polygons will be present inside one pixel. Therefore, the framebuffer only has to store an extra coverage mask, and instead of storing a color for each subsample, it always refers to the one color stored in the framebuffer as the color of the other polygon. So when you antialias in the blender, the hardware is essentially doing a linear blend between the two polygon colors based on the coverage.
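A toy sketch of that blend, assuming a coverage value in the range 0..8 and a plain lerp (the real blender has more modes and fixed-point details):

```python
def blend_edge_pixel(fb_color, new_color, new_cov, max_cov=8):
    """Toy model of the two-polygons-per-pixel assumption: the only
    "other" color available is whatever is already in the framebuffer,
    so an edge pixel resolves to a linear blend between the stored color
    and the incoming one, weighted by the incoming coverage."""
    t = new_cov / max_cov
    return tuple(round(f * (1 - t) + n * t)
                 for f, n in zip(fb_color, new_color))
```

Full coverage simply overwrites the stored color, zero coverage keeps it, and half coverage lands halfway between the two; no per-subsample colors ever need to exist.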

Now you can see there are two problems (and they're also listed in the manual). The first is that if you have multiple edge polygons inside a single pixel, blending this way may be wrong, as different subsamples may need to blend differently. Apparently there's no way to solve this, but I guess it's not a huge problem, and not very related to the blurriness issue.

The second problem is that this doesn't properly blend between the background color and a pixel's polygon colors when the pixel is not fully covered. Nintendo refers to this as the antialiasing of silhouettes.

So Nintendo adds another pass inside the video filter unit to solve this, as the very last step right before output to NTSC/PAL: it blends the background color and the foreground according to the coverage info.

Now I think this is where the classic blur nightmare comes in. I never dug into this step originally because I thought it was very straightforward. But come to think about it again: since we only have one single framebuffer with no extra color info stored, how can the N64 tell what the background color is?

Well, here is the solution they use, and you should see the problem:

“The ForeGround color is always the color stored in the frame buffer for that pixel. The BackGround color is found by examining fully covered pixels in a 5x3 pixel area around the current pixel. Note that Z is not used in determining the BackGround color and so it is safe for Z to be single-buffered.”

Link is here: http://ultra64.ca/files/documentation/online-manuals/man/pro-man/pro15/index15.6.html
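Here's a rough sketch of how I read that quoted step. Only the 5x3 window and the coverage-weighted blend come from the manual; picking fully covered neighbours and averaging them is my guess at the unspecified details:

```python
def vi_aa_filter(colors, covs, x, y, max_cov=8):
    """Rough model of the quoted video-filter step. `colors` is a 2D grid
    of RGB tuples, `covs` a matching grid of coverage values (0..max_cov).
    The neighbour-averaging is a guess; the manual only says the
    background comes from fully covered pixels in a 5x3 area."""
    fg = colors[y][x]
    cov = covs[y][x]
    if cov == max_cov:
        return fg  # fully covered interior pixel: passed through untouched
    # Background estimate: fully covered pixels in the 5-wide, 3-tall window.
    neighbours = []
    for dy in (-1, 0, 1):
        for dx in (-2, -1, 0, 1, 2):
            ny, nx = y + dy, x + dx
            if (dx, dy) != (0, 0) and 0 <= ny < len(covs) and 0 <= nx < len(covs[0]):
                if covs[ny][nx] == max_cov:
                    neighbours.append(colors[ny][nx])
    if not neighbours:
        return fg
    bg = tuple(sum(c[i] for c in neighbours) // len(neighbours) for i in range(3))
    t = cov / max_cov
    return tuple(round(b * (1 - t) + f * t) for f, b in zip(fg, bg))
```

Because only partially covered (i.e. silhouette) pixels ever get touched, and the window is 5 wide but only 3 tall, the result reads as a mostly-horizontal smear along silhouettes rather than a true full-screen blur.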


So I guess that's enough to settle the question: the background color used is effectively an average of the neighboring colors.

So is it a full screen horizontal blur?

Well, it might look like it, but in fact it is neither full-screen, since it only happens on pixels that are not fully covered, nor a pure horizontal blur, since it uses a 5x3 kernel, which is more or less a stretched rectangle.

I hope other people will step in if there's any wrong info or misconceptions here.
 
Can’t believe I forgot to attach one of the sources.

For the output problem, this is discussed here: https://shmups.system11.org/viewtopic.php?f=6&t=56988
See the 14th post on the first page
 

Man, I think you may have cracked it. It fits with other parts of the documentation you linked yesterday (I perused some of it).

That means Nintendo had post-process AA back in 1996. Kind of a spiritual precursor to the morphological algorithms that became popular mid-PS360 gen.

That explains how non-silhouette edges never exhibit artifacts, which I assume could very likely show up if all you did was naively alpha-blend each poly's edge without information about neighboring connected tris...
 
I miss the days when Nintendo was so ambitious with tech.
 
Because the dev tools given by Nintendo gave horrid performance: Z-buffering eating up a lot of fill rate, struggling to fit textures into 8~64MB carts, no use of the 650k mode, and more.
 
Besides all that, another elephant in the room is RDRAM: a lot of the time the hardware was idling and waiting for data, all for the sake of simplicity.

The only viable solution to that would have considerably increased production cost, though the failure rate of a single huge die containing the CPU and GPU was already very costly.
Good for Sega for not taking the design from SGI, yet the Nintendo 64 still outsold it, more so once the CPU and GPU were made into separate chips rather than one monolith.
The PlayStation 1 had 132MB/s of VRAM bandwidth IIRC and 75MB/s for the system, and in comparison on Saturn it varies wildly from processor to processor.
Two chips of 2MB SDRAM in 1996 could possibly have matched the PlayStation 1's total bandwidth. Also, Donkey Kong 64 might not have crashed.
 