Using the SPUs to do AA

[Attached image: AA.png ("Anti-alias") — the mock-up discussed below: an aliased version of the B3D logo above, a blurred version below]

To me the upper is already low-resolution anti-aliased (this is why there are so many shades on the edges). The lower picture looks like they took the upper picture (which is already anti-aliased), then increased the resolution a lot so the pixels are much smaller, then blurred it. If it were actual anti-aliasing you would not see saturation loss in the main areas. Am I crazy? You tell me. :)
 
That's correct, indeed

To me the upper is already low-resolution anti-aliased (this is why there are so many shades on the edges). The lower picture looks like they took the upper picture (which is already anti-aliased), then increased the resolution a lot so the pixels are much smaller, then blurred it. If it were actual anti-aliasing you would not see saturation loss in the main areas.
Indeed, I noticed that myself when putting this quick mock-up together; that's why I selected and covered the outside edges to create a real aliasing effect instead of the simple point-filtered upscale it would otherwise have been. Still, I should have chosen a very low-saturation source, but since it was just an example to convey a joke (that the definition didn't exclude complete screen blur as a valid anti-aliasing method), I didn't bother to look for the outline-only version of the B3D logo in my art folder.
 
This may be doable, but it assumes that the polygons themselves are correct. If they themselves are an approximation of another surface, then who is to say that it has been sampled correctly? <shrug>

Exactly... all this talk about what exactly anti-aliasing is in terms of signal theory (not that I'm knocking it), but this discussion is mostly ignoring the fact that we're talking about worlds approximated by flat triangles in the first place :)

It was interesting to hear the point that blur is used in offline rendering.
I suppose you could argue that by blurring an image you are creating a true anti-aliased image, just at a lower resolution? I know that doesn't get the benefit of jittered samples, though.
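A minimal sketch of that argument, assuming a grayscale NumPy image: box-blurring and then decimating by the same factor is exactly the resolve step of ordered-grid supersampling, so the blurred result really is a properly anti-aliased image at the lower resolution.

```python
import numpy as np

def downsample_box(img, factor):
    """Average each factor x factor block of samples into one pixel.

    A box blur followed by decimation at the same factor is exactly
    ordered-grid supersampling: the high-res image supplies the extra
    samples, and the block average is the reconstruction filter.
    """
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# e.g. a 4x4-supersampled render resolved down to display resolution:
hi_res = np.random.rand(512, 512)   # stand-in for the oversized framebuffer
lo_res = downsample_box(hi_res, 4)  # 128x128 anti-aliased result
```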
 
[...] we're talking about worlds approximated by flat triangles in the first place :)

Yeah, and that's been driving me nuts since the beginning of this thread... Meshes can be prefiltered (LOD), but in the end it's always a bunch of polygons with nasty straight edges containing infinitely high frequencies...

So in the end, using triangles is like shooting yourself in the foot... I'm not 100% sure, but it seems that point-based rendering or volume rendering are better behaved here.

Because in the end over-sampling is a waste of energy, and if the scene data could be nicely pre-filtered it should not even be necessary. But as long as we use hard edges we are fucked...
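For what it's worth, that "infinitely high frequency" point has a standard signal-theory statement: the spectrum of a hard polygon edge decays too slowly to ever be band-limited. A sketch:

```latex
% A 1-D slice across a polygon of width w is a rect; its spectrum is a
% sinc whose envelope decays only as 1/f:
\[
  \operatorname{rect}\!\left(\frac{x}{w}\right)
  \;\xrightarrow{\ \mathcal{F}\ }\;
  w\,\operatorname{sinc}(w f),
  \qquad
  \left| w\,\operatorname{sinc}(w f) \right| \le \frac{1}{\pi \left| f \right|}.
\]
% No finite sample rate satisfies Nyquist for a signal with unbounded
% spectral support, so hard edges always alias; pre-filtering the
% representation (as mipmaps do for textures) is the only clean way out.
```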

Pointillism is the future of CG, you can quote me on that ;)
 
For me the main point of AA has always been to compensate for a lack of resolution. So it's a problem that should solve itself in a few years, when resolution simply becomes so high that it just doesn't matter anymore (on smaller 1920x1080 TVs it's already starting to become a bit irrelevant, especially when you're not sitting with your nose against the screen).

So while it is still a problem, the main goal of AA should be to increase the apparent resolution of, say, a line, by creating a visual illusion that makes it seem as if you are looking at a higher-resolution line than you really are. You can do this by strategically placing pixels of varying intensity and color that fool the eye/brain into thinking you are looking at something a lot smoother than the line really is.

And that's the whole point of AA. You can think of all sorts of tricks in terms of drawing a curve and so on, but as long as you're forced to do so at a resolution that leaves individual pixels easily visible, especially when the pixels have highly contrasting levels of brightness or color, it won't matter. You still need to simulate the resolution you don't have.

And sometimes you can do that much more effectively when you know more about the image than you can deduce from the final output buffer. To give an extreme example, take Farid's examples. If you don't know that the first one is meant to show aliasing, then AA would ruin it. ;) And if you have a vector line that is supposed to run in a staircase-like manner, how are you supposed to know that you don't want it to look smooth?

So AA at the level of drawing the pixels would be the most efficient (see the sketch below). But this may also mean that you sometimes need a higher resolution than a mesh gives you. Curve hints could really help make curves look much better.

Disclaimer: I know extremely little about how modern GPUs work, so don't laugh at me.
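A classic concrete instance of "AA at the level of drawing the pixels" is Xiaolin Wu's line algorithm, which splits each pixel's intensity between the two rows the ideal line passes through. A simplified sketch, with endpoint handling omitted and `plot` an assumed callback that blends coverage into the framebuffer:

```python
import math

def draw_line_aa(plot, x0, y0, x1, y1):
    """Simplified Xiaolin Wu-style anti-aliased line (integer endpoints).

    At each step along the major axis the ideal line sits between two
    pixel rows; intensity is split between them in proportion to the
    fractional position, simulating resolution the display lacks.
    `plot(x, y, alpha)` is assumed to blend intensity alpha into (x, y).
    """
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:                                  # walk the longer axis
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:
        x0, x1, y0, y1 = x1, x0, y1, y0
    dx, dy = x1 - x0, y1 - y0
    gradient = dy / dx if dx else 1.0
    y = float(y0)
    for x in range(x0, x1 + 1):
        row = math.floor(y)
        frac = y - row                         # how far into the lower row
        if steep:
            plot(row, x, 1.0 - frac)
            plot(row + 1, x, frac)
        else:
            plot(x, row, 1.0 - frac)
            plot(x, row + 1, frac)
        y += gradient
```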
 
For me the main point of AA has always been to compensate for a lack of resolution. So it's a problem that should solve itself in a few years, when resolution simply becomes so high that it just doesn't matter anymore (on smaller 1920x1080 TVs it's already starting to become a bit irrelevant, especially when you're not sitting with your nose against the screen).

You should patent that "human-eye powered FSAA": move your chair 4 meters back and enjoy :D
 
For me the main point of AA has always been to compensate for a lack of resolution. So it's a problem that should solve itself in a few years, when resolution simply becomes so high that it just doesn't matter anymore (on smaller 1920x1080 TVs it's already starting to become a bit irrelevant, especially when you're not sitting with your nose against the screen).

I'm not sure I agree with that. I can't see raster displays increasing in resolution in the near term to the point where a 50" TV, for example, would have pixels small enough to be indistinguishable by the human eye (which would be necessary to approximate the "infinite resolution effect" required to completely negate the presence of aliasing).
Not to mention the fact that you'd still have to find a microprocessor powerful enough to render to all those pixels, which would be expensive and a monumental waste of resources...

I do kinda agree with your definition of AA, though, in that it's really only a means of providing a more "perceived" continuous representation of a continuous world mapped onto a discrete image space (raster display).

Personally I don't see how geometric complexity, in terms of having finite tri-based approximations/representations of virtual objects, factors into the problem at all...

At the end of the day, reducing/removing aliasing during the "colour/point mapping phase" from world-space to screen-space isn't dependent on how accurate the input data (the virtual object model) is in the first place, and thus, if you can successfully represent a more continuous-looking line/curve/edge on the raster display, then surely your AA has, for all intents and purposes, fulfilled its intended purpose to a satisfactory level...

If the actual input data is inaccurate in its ability to fully describe/virtualise the concept/entity it's attempting to model, then that's a different matter altogether and only represents the limitations of using a discrete/quantised virtualisation model to represent something that, in real life, is arguably infinitely unquantifiable (i.e. how many atomic [in the literal sense of the word] "points", if any, make up a physical object...?)

[Hope that made sense.. :p ]
 
For me the main point of AA has always been to compensate for a lack of resolution. So it's a problem that should solve itself in a few years, when resolution simply becomes so high that it just doesn't matter anymore (on smaller 1920x1080 TVs it's already starting to become a bit irrelevant, especially when you're not sitting with your nose against the screen).
Except it doesn't work like that, because resolution so far is limited by the manufacturing capacity for pixels. You can get 1080p displays, but they're always large, such that the size of a pixel in relation to the area it covers on your retina stays roughly the same. A pixel on a 17" 1280x1024 display at 2' viewing distance is the same size as one on a 15" 1024x768 display at that viewing distance, or a 40" 1080p display at 6' or whatever it is; they all hang out together at ~75 dpi. Monitor resolutions and GPU output resolutions have been going up and up, but we still have jaggies and a need for anti-aliasing. No one has yet produced the 15" 1600x1200 display that wouldn't need AA ;)
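Rough arithmetic behind that claim (my numbers, worked from the quoted display sizes):

```latex
% Pixel density from panel resolution and diagonal:
\[
  \mathrm{dpi}
  \;=\;
  \frac{\sqrt{W_{\mathrm{px}}^{2} + H_{\mathrm{px}}^{2}}}
       {\text{diagonal (inches)}}
\]
% 17" at 1280x1024:  1639/17 ~ 96 dpi
% 15" at 1024x768:   1280/15 ~ 85 dpi
% 40" at 1920x1080:  2203/40 ~ 55 dpi
% The densities straddle the ~75 dpi figure, and once each display is
% scaled by its typical viewing distance the angular size of a pixel
% at the eye comes out roughly the same, which is the point being made.
```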

This is in fact something I'm disappointed about. Back in the day, I was looking forward to super-smooth graphics thanks to super-high-resolution displays. Text really would benefit from it. Instead we have pretty rubbish AA solutions for text; ClearType gives me eye-strain. 150 dpi displays haven't happened, and aren't set to happen. Jaggies will remain noticeable without AA.
 
Back to signalling theory: I was always under the impression sample jitter DID give you considerable visual bang per buck (in the places where it's really straight edges, and not a character's face or clothing or a brick wall with a rough corner), so I do take the point that there's a difference between AA and increasing resolution.
Your eye is always drawn to the false moving artefacts, right? But when the sampling is random, for example, there's nothing so out of the ordinary for your perception to latch onto.
 
Your eye is always drawn to the false moving artefacts, right? But when the sampling is random, for example, there's nothing so out of the ordinary for your perception to latch onto.
Except for the noise. Jittered sampling just means you've traded stairstepping for noise. And it takes a lot of samples to get rid of that noise; if the jittering is not the same every frame, you get different noise every frame. You could theoretically reduce it to the point where it can be virtually eliminated with a post-process, but then you'll be oversampling AND blurring.

Granted, I do think it's better than regular grid sampling, but making it passable is the harder thing.
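A toy sketch of that trade-off, using nothing beyond Python's standard library: estimate one pixel's coverage of an edge with an n x n jittered pattern and watch the frame-to-frame noise shrink as the sample count grows.

```python
import random
import statistics

def jittered_coverage(inside, n):
    """Estimate a pixel's coverage with an n x n stratified (jittered)
    pattern: one random sample per sub-cell of the unit pixel.
    `inside(x, y)` tests whether a point is covered by the primitive."""
    hits = 0
    for i in range(n):
        for j in range(n):
            x = (i + random.random()) / n
            y = (j + random.random()) / n
            hits += inside(x, y)
    return hits / (n * n)

# A pixel cut by a diagonal edge; true coverage is exactly 0.5. Each
# call is one "frame" with fresh jitter: the stairstep is gone, but the
# answer wobbles, and quieting the wobble costs many more samples.
edge = lambda x, y: x + y < 1.0
for n in (2, 4, 8):
    frames = [jittered_coverage(edge, n) for _ in range(100)]
    print(f"{n * n:3d} samples: stddev across frames = "
          f"{statistics.stdev(frames):.4f}")
```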
 
I think that for a glimpse of the future, you probably shouldn't look at big 50" screens... the larger the screen, the bigger the compromises have to be with regard to the latest technologies.

Instead, look at this laptop as a small example: http://reviews.pricegrabber.com/laptop/m/7774964/

It has a 15.4" 1920x1200 screen. I have my 22" 1680x1050 right next to my 15.4" 1440x900 laptop right now, and while my 22" still looks pixellated from this distance, the laptop screen requires me to pay close attention to notice the pixels. Trade that in for 1920x1200 and we're getting there.

So in a few years time, obviously not all screens are going to be free of aliasing, but it won't be long before there are a couple of them out there that really, really don't need any AA.

I'm guessing that about 5 years from now, we'll have some kick-ass 3840x2400 screens on our desks that don't need a whole lot of AA. Heck, there have been screens around with more than that for a long time, just unaffordable. ;)

In the meantime though, there will be screens (and machines) around for a while yet that need some form of AA. So I think we might get both solutions. In the case of AA, I think it will come as part of rendering scenes that are more tuned to how eyes see things and, probably more importantly, how cameras see things and make things look. I could imagine a rendering engine that, based on the 3D mesh, draws, say, a cube's wireframe with simulated sharp lines in the focal area, using the kind of color/brightness suggestive-of-sharpness AA I suggested (which people already use all over the place for drawing tiny but good-looking 16x16 pixel icons, sprites and the like ;) ), and makes a smooth transition into blur as the pixels move away from the focal area. It's either that, or, who knows, the breakthrough of real multi-pass raytracing or something similar.

Any(which)way, we'll get there.

Incidentally, Shifty, the subtitles on Blu-ray movies look rather splendid. Even on our lowly 768p screen.
 
I'm not sure I agree with that. I can't see raster displays increasing in resolution in the near term to the point where a 50" TV, for example, would have pixels small enough to be indistinguishable by the human eye (which would be necessary to approximate the "infinite resolution effect" required to completely negate the presence of aliasing)

Very large resolutions do not solve the problem of aliasing. You'd still see artifacts like moiré. What needs to be done is higher quality filtering of each individual pixel.
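In formula form (my notation, not the poster's): "higher quality filtering of each individual pixel" usually means resolving the samples with a weighted reconstruction filter rather than a plain box average.

```latex
\[
  C(p)
  \;=\;
  \frac{\sum_{i} w(x_i - p)\, s_i}
       {\sum_{i} w(x_i - p)}
\]
% s_i are the sample colours at positions x_i near pixel centre p, and
% w is the filter kernel (tent, Gaussian, Mitchell, ...). A wider,
% smoother kernel trades a little sharpness for much better
% suppression of moire-style aliasing than the box average gives.
```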

And current 40"+ screens *are* at the diminishing-returns limit for resolution, at normal viewing distances, with regard to the angular resolution of the fovea of the human eye (which is one arc-minute).

Cheers
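A quick sanity check on that one-arc-minute figure (my arithmetic, not the poster's):

```latex
% One arc-minute in radians:
\[
  1' \;=\; \frac{\pi}{180 \cdot 60} \;\approx\; 2.9 \times 10^{-4}\ \mathrm{rad}
\]
% Smallest resolvable pixel pitch at viewing distance d:
\[
  p \;\approx\; d \cdot 2.9 \times 10^{-4}
  \;\approx\; 0.87\ \mathrm{mm}
  \quad \text{at} \quad d = 3\ \mathrm{m}.
\]
% A 40" 1080p panel has a pitch of roughly 1/55 in (about 0.46 mm),
% already finer than the fovea resolves from a 3 m couch, which is
% the diminishing-returns claim in numbers.
```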
 
Except for the noise. Jittered sampling just means you've traded stairstepping for noise.
"..for higher frequency noise". It's true that using a jittered (or better still, poisson disk) sampling pattern shifts the aliasing error from low frequencies to high frequencies but the human eye/brain finds this far less disturbing. After all, the human eye uses poisson sampling patterns to hide the aliasing the photoreceptor cells themselves (due to their sampling) would cause.
 
I'm not 100% sure, but it seems that point-based rendering or volume rendering are better behaved here.

Yeah, fuzzing up the edges where there is an inherent error in the representation... in a manner more deliberate than screen blur?

Commonly used feathered, alpha-billboarded sprite trees have a lot in common with the type of rendering you're mentioning here, and hence they are very good at appearing AA'd...

I did try volume-slice rendering with fuzzy contours for ground surfaces. Didn't really get any content that made sense for it, though. (OK, it looked downright weird.)

Tangent: Hey! How about doing a fuzzy point renderer on SPUs? That way we get AA on SPUs... :)
 
"..for higher frequency noise". It's true that using a jittered (or better still, poisson disk) sampling pattern shifts the aliasing error from low frequencies to high frequencies but the human eye/brain finds this far less disturbing.
Well, you can't necessarily say that frequency is the only concern. The magnitude of the noise is also an issue, as it can be perceived as error. The big problems don't really occur at minor transitions, but in cases where an edge separates regions of very high contrast (e.g. a dark object against a bright skybox) -- the results in these cases are less like "snow" and more like "salt-and-pepper". It typically requires a lot of samples to get out of a hole here.
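That contrast point can be made precise with a back-of-the-envelope formula (my notation): with n independent samples of a pixel whose true coverage is α, the colour error scales linearly with the contrast ΔC across the edge.

```latex
\[
  \sigma_{\text{colour}}
  \;=\;
  \Delta C \,\sqrt{\frac{\alpha\,(1 - \alpha)}{n}}
\]
% The square root is the binomial noise of the coverage estimate
% itself; the contrast \Delta C multiplies it, so a dark object
% against a bright skybox needs far more samples than a low-contrast
% interior edge to reach the same visual error.
```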
 