Alternative AA methods and their comparison with traditional MSAA

I think considering how good MLAA is already, perhaps a hybrid form will come into being at some point which provides MLAA routines with additional "hints".
 
I think considering how good MLAA is already, perhaps a hybrid form will come into being at some point which provides MLAA routines with additional "hints".
I still think that the optimum IQ results from a combo of SSAA+MLAA!
Of course, as Shifty pointed out - you need an appropriate "non-weak" system :mrgreen:
 
I still think that the optimum IQ results from a combo of SSAA+MLAA!
Of course, as Shifty pointed out - you need an appropriate "non-weak" system :mrgreen:

I think the optimal system probably draws pixels right the first time. :D But exactly how, I'm not sure either. Perhaps the original 3D data should have information about edge contrast and background color, lighting and global illumination so that pixels are drawn right the first time. ;)

I'm still wondering if it is not easier now just to start fresh, and design a system where you place a dot in space that has properties like weight, luminance, density, volume, magnetism, surface behavior, global illumination, self illumination and so on, where all parameters can be variable, calculated in real-time, pre-calculated or just set to a fixed value depending on the performance available.

These points in turn are used as a baseline to calculate how light travels from it to the camera surface corners and how it is influenced in the meantime. It would be a system that behaves much more like vector graphics, and how it is eventually rendered in pixels can be drawn like an intelligently AA'd vector. It could potentially solve a lot of problems related to curved surfaces, physics behavior and animation, transparency, shadows and lighting, not to mention requiring way less data storage.

It'd definitely be a lot of work to set it up, and getting content creation tools could be very hard as well, but the pay-off could be immense, especially if hardware was designed with this approach in mind from the start.
 
IMO, Nvidia's mixture of MSAA and FAA (CSAA) is the right trade off between cost and correctness.

Cheers
 
I'm still wondering if it is not easier now just to start fresh, and design a system where you place a dot in space that has properties like weight, luminance, density...

These points in turn are used as a baseline to calculate how light travels from it to the camera surface corners and how it is influenced in the meantime. It would be a system that behaves much more like vector graphics...
Great idea. And every point can be called...a vertex! How do you think we got where we are? An evolution of vector graphics. The only difference at the moment is we aren't drawing triangles with bounding lines that get AA'd. MLAA is, somewhat backhandedly, that sort of solution applied at the wrong end of the drawing process. However, the issues of aliasing aren't a result of data representation but image construction. Three points defining a triangle can be rendered on a GPU with per-pixel rasterisation to produce jaggies, or with a Wu AA'd line-drawing method. Could GPUs integrate something similar during the rasterising step? It's an idea. Still, it comes down to image construction, which in turn comes down to performance. The data types being used are all sorts of compromises, such as not modelling internal densities because we haven't got a terabyte of RAM per system to play with.
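For anyone curious what a Wu-style AA'd line looks like in practice, here is a minimal Python sketch of Xiaolin Wu's anti-aliased line algorithm. The `plot(x, y, intensity)` callback is a hypothetical stand-in for whatever blending the framebuffer would do; this is an illustration of the idea, not any GPU's actual rasteriser:

```python
def draw_line_wu(plot, x0, y0, x1, y1):
    """Xiaolin Wu's anti-aliased line: each step along the major axis
    touches two pixels, splitting intensity by the line's fractional
    distance between them."""
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:                          # iterate over the longer axis
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:                        # always draw left to right
        x0, x1, y0, y1 = x1, x0, y1, y0
    dx, dy = x1 - x0, y1 - y0
    gradient = dy / dx if dx else 1.0
    y = y0
    for x in range(int(round(x0)), int(round(x1)) + 1):
        frac = y - int(y)              # coverage of the upper of the two pixels
        if steep:
            plot(int(y),     x, 1 - frac)
            plot(int(y) + 1, x, frac)
        else:
            plot(x, int(y),     1 - frac)
            plot(x, int(y) + 1, frac)
        y += gradient
```

The key property is that the two intensities at each step always sum to full coverage, so the line keeps constant brightness while its edge fades smoothly instead of stepping.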
 
The edge would just have to move with near integer pixel speed for the effect to be visible.
It'll be visible, but no different to non-AA'd edges. For something like scrolling text, pixel crawl can be offputting. But in the middle of a game, I don't think players will notice a wall edge moving one pixel instead of a fraction of a pixel.
 
It'll be visible, but no different to non-AA'd edges. For something like scrolling text, pixel crawl can be offputting. But in the middle of a game, I don't think players will notice a wall edge moving one pixel instead of a fraction of a pixel.

I agree, I don't think it is that big of a problem in reality.

For example, having a row of narrow vertical structures moving horizontally (i.e. a row of lamp posts or some such), you would see the image blurred/softened, but since the alternative aliased image would be a flickering mess, I'm not sure it is such a setback.

Cheers
 
I still think that the optimum IQ results from a combo of SSAA+MLAA!
Of course, as Shifty pointed out - you need an appropriate "non-weak" system :mrgreen:

2xMSAA + 1.5x1.5/2x2 SSAA is a beauty, I tell you, and realisable unless your set resolution is beyond 1440x900/1680x1050 for last-gen GPUs (48xx series/GTX 260). It is actually realisable in quite a few games, and I had no problems with a 4890 doing that for Crysis Wars and ArmA 2, which allow SSAA by engine cvar/menu (1920x1200 2xMSAA v.h -> 1440x900, and 2880x1800 -> 1440x900). Though seeing ME/ME2 with SSAA leaves a bitter taste in my mouth when I can only use MSAA. The ME games, with their shader aliasing/texture mapping aliasing, run at high framerates (40-50 fps at 1680x1050) with everything enabled, and that is with brute-force 4xMSAA + 4xTSAA.
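For reference, the cost behind those resolution pairs is easy to work out: ordered-grid SSAA shades the square of the per-axis factor. A back-of-the-envelope sketch (`ssaa_cost` is just an illustrative name):

```python
def ssaa_cost(target_w, target_h, factor):
    """Render resolution and shading-cost multiplier for ordered-grid SSAA
    at a given per-axis factor. Cost grows with the square of the factor."""
    render_w = int(target_w * factor)
    render_h = int(target_h * factor)
    return render_w, render_h, (render_w * render_h) / (target_w * target_h)
```

For a 1440x900 target, a 2x2 factor means rendering 2880x1800 and shading four times the pixels, matching the figures above; 1.5x1.5 costs 2.25x.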

Sadly, only Nvidia users and 5xxx owners have had the luxury of enabling it as it pleases them; otherwise a third-party application is needed (Tommti tool), but that is limited to DX10-DX11. Thing is, most multiplatform games are DX9... :mad:

But then it depends on the game. Some, like ME, greatly benefit from SSAA, because with 4x or 24xAA the shader aliasing/texture detail-mapping aliasing still destroys a lot of the IQ, while edges already look razor sharp up close with 4xMSAA/TSAA.

EDIT: What Gubbi said about CSAA.
 
I think considering how good MLAA is already, perhaps a hybrid form will come into being at some point which provides MLAA routines with additional "hints".

I'd still like to have real AA as well, performing supersampling where necessary, using an adaptive approach. MLAA could then help smooth out the remaining aliasing, but there's a lot of cases where you'd need something else to fix image quality problems.
And it's not possible to always fit your artwork to work well with MSAA, there are many possible settings for a game where it's just not enough.
 
Great idea. And every point can be called...a vertex! How do you think we got where we are? An evolution of vector graphics.

I know that. But when I started learning about graphics pipelines, I thought it would be a fun learning exercise to try to 'start from scratch' taking into account all modern requirements. If anything, it would help me understand why things are not done that way. ;)

The only difference at the moment is we aren't drawing triangles with bounding lines that get AA'd. MLAA is, somewhat backhandedly, that sort of solution applied at the wrong end of the drawing process. However, the issues of aliasing aren't a result of data representation but image construction. Three points defining a triangle can be rendered on a GPU with per-pixel rasterisation to produce jaggies, or with a Wu AA'd line-drawing method.

Sure, but that's why I think maybe we could do it differently.

Could GPU's integrate something similar during the rasterising step? It's an idea. Still, it comes down to image construction, which in turn comes down to performance. The data types being used are all sorts of compromises, such as not modelling internal densities because we haven't got a terabyte of RAM per system to play with.

But for how long won't we have a terabyte of RAM to play with? ;) Certainly, by the time the graphics pipeline is reinvented with artists tools to go along with them, we just may have a terabyte of RAM per system ... !

Seriously though, I'm not sure if you're either not getting me, or I'm not getting it, so I'm going to try to describe some examples of what I'm thinking about, by taking different aspects and how the system would deal with them. Perhaps I'm reinventing the wheel, perhaps the idea is just plain silly, or perhaps it's crazy enough that it might just spark an idea into someone who does know what he's doing. ;) But at the origin is that I wish more people came up with LocoRoco style innovations.

Let's say we are looking at transparency. I have a light and a camera, and in between I have a few points in space, all of which at least partly block the light between the camera and the light source. Each of the points is defined as having a certain mass and density, from which I derive, with a simple formula, the basic size of the perfect 'ball' it occupies in space. This single pixel therefore represents an object in 3D that can occlude the light source from the camera partly or completely. Also, one of the objects can be so large that it actually encompasses another, smaller object that is close enough to it (given that I have not defined any physical interaction properties like magnetism, surface attraction, tension, strength and so on).

Next, I add a transparency value to the points in space, which defines the transparency of the object each represents. The transparency could be constant, or fall off at a certain rate from the center of the object, or be defined by a formula ('code fragment') depending on how much computing power we have to support it, but for now let's take a constant value.

Next, I do the same for the light and the camera, defining the equivalents to mass and density for the light-source pixel. Perhaps lights and objects could just have a positive or a negative light value, negative being equivalent to transparency, and positive being equivalent to light?

Now, in order to determine what is actually visible on the camera, I first map the outline of the light source to the camera as a circle. But not a bunch of dots looking like a circle; rather, a circle formula including the description of its light intensity transposed from 3D to 2D. Next I start working on finding the intersections with the other objects (which would in our current definition also be circles that intersect with the light circle), and again transpose the 3D formula into a 2D version that represents how it interacts with the light circle.

This continues until finally I have a 2D formula that describes how the light-circle should be drawn on the camera lens. Only at this point, actual rasterisation happens.
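The project-then-intersect idea could be prototyped crudely like this. It is only an illustrative sketch under heavy assumptions: the occluders are taken as already projected to 2D discs with constant transparency, and the analytic intersection formulas imagined above are replaced by Monte Carlo sampling of the light disc; all names are hypothetical:

```python
import math
import random

def visible_light_fraction(light, occluders, samples=10000, seed=1):
    """Estimate how much of a light disc survives a set of partially
    transparent occluding discs (all already projected to 2D).
    `light` is (cx, cy, r); each occluder is (cx, cy, r, transparency),
    where transparency=1.0 means fully clear and 0.0 fully opaque."""
    lx, ly, lr = light
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        # uniform sample on the light disc
        t = 2 * math.pi * rng.random()
        d = lr * math.sqrt(rng.random())
        px, py = lx + d * math.cos(t), ly + d * math.sin(t)
        through = 1.0
        for ox, oy, orad, transp in occluders:
            if (px - ox) ** 2 + (py - oy) ** 2 <= orad ** 2:
                through *= transp      # attenuate through each blocker
        total += through
    return total / samples
```

With no occluders this returns 1.0; a half-transparent disc covering the whole light returns 0.5, and only at the very end would you rasterise the result.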

Does this make any sense at all at this point?
 
Seriously though, I'm not sure if you're either not getting me, or I'm not getting it, so I'm going to try to describe some examples of what I'm thinking about, by taking different aspects and how the system would deal with them. Perhaps I'm reinventing the wheel, perhaps the idea is just plain silly, or perhaps it's crazy enough that it might just spark an idea into someone who does know what he's doing. ;) But at the origin is that I wish more people came up with LocoRoco style innovations.
I'm all in favour of ideas no matter how crazy, because you never know where the next best thing comes from.
...does this make any sense at all at this point?
Sure, though what I'm reading here sounds very much like voxels resolved with ray-tracing if you want 3D shape and transparency. If you're just going with spheres, you could get away with a straight 2D drawing method, scaling and drawing circles. The problem with volumes at the moment is representing detailed models. There have been ball-based games made from point clouds, but it's going to be hard to make a convincing person with lots of spheres or cubes.

I guess at this point, that's the driving factor for any engine. How can we model our objects in data? This imposes limits on the data structures available, which in turn limits what are effective rendering strategies. If someone can invent a way to represent a character that isn't vertex based, and isn't memory-munching point-cloud based, that'd open the door for looking at alternative ways to draw it.

Which is an interesting topic, but getting a bit OT.
 
I'm all in favour of ideas no matter how crazy, because you never know where the next best thing comes from.
Sure, though what I'm reading here sounds very much like voxels resolved with ray-tracing if you want 3D shape and transparency. If you're just going with spheres, you could get away with a straight 2D drawing method, scaling and drawing circles. The problem with volumes at the moment is representing detailed models. There have been ball-based games made from point clouds, but it's going to be hard to make a convincing person with lots of spheres or cubes.

I guess at this point, that's the driving factor for any engine. How can we model our objects in data? This imposes limits on the data structures available, which in turn limits what are effective rendering strategies. If someone can invent a way to represent a character that isn't vertex based, and isn't memory-munching point-cloud based, that'd open the door for looking at alternative ways to draw it.

Ok, I was just giving one example. The key is how I think the points and their mass should interact. This is why I mentioned surface tension and strength and such - two point-based objects close to each other with the right physical properties will merge or attract, so to describe a shape, you don't need a complex point cloud.

The theory is that in this data-model, a point can get almost any number of properties (that an engine will handle) or none at all, depending on what is needed. The primary goal, above everything else, is to try to define everything with as little data as possible. But the important difference is that these can also define the shape. A point based shape could even have a formula that continuously changes the shape, fed by, say, music wave forms.

Which is an interesting topic, but getting a bit OT.

Yes, I'll make a new topic.
 
I've only read the Intel paper on MLAA. And it's not really clear to me how they find the edges.

Does the algorithm require the vertex data and/or the zbuffer data to find the edges? Or does it only need the final pixels?

If the algorithm only needs the final pixels for the 3 steps (finding edges, identifying patterns, blending colors), then wouldn't it be feasible to have some kind of aftermarket dedicated hardware that sits on the output port (HDMI or whatever) to capture the frame data and then spit out an MLAA'ed version?

I can see how games with lots of post processing effects and lots of alpha blending for particles etc would not play well with such a device. But still... for a certain subset of games, especially older stuff, there might be some benefits? Or is there something I am misunderstanding?
 
I've only read the Intel paper on MLAA. And it's not really clear to me how they find the edges.
It's actually unimportant to the AA method. The clever bit is smoothing the steps. Finding edges can be done any number of different ways, on any suitable source data.
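To make the "any number of different ways" point concrete, here is one colour-only approach in the spirit of that first step (not Intel's exact implementation): flag discontinuities in luminance between neighbouring pixels, needing nothing but the final image:

```python
def find_edges(pixels, w, h, threshold=0.1):
    """Edge detection from final pixels only: flag a boundary wherever the
    luminance difference between a pixel and its right/bottom neighbour
    exceeds a threshold. `pixels` is a flat row-major list of (r, g, b)
    tuples with channels in [0, 1]."""
    def luma(p):                       # Rec. 601 luminance weights
        r, g, b = p
        return 0.299 * r + 0.587 * g + 0.114 * b
    vertical, horizontal = set(), set()
    for y in range(h):
        for x in range(w):
            l = luma(pixels[y * w + x])
            if x + 1 < w and abs(l - luma(pixels[y * w + x + 1])) > threshold:
                vertical.add((x, y))       # edge between (x,y) and (x+1,y)
            if y + 1 < h and abs(l - luma(pixels[(y + 1) * w + x])) > threshold:
                horizontal.add((x, y))     # edge between (x,y) and (x,y+1)
    return vertical, horizontal
```

The later steps then walk these edge runs to classify L/Z/U shapes and blend accordingly; with depth or vertex data available you could select edges more robustly, but as above, the pixels alone are enough.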

Does the algorithm require the vertex data and/or the zbuffer data to find the edges? Or does it only need the final pixels?
You don't need vertex data, but it could prove handy in selecting edges.

I can see how games with lots of post processing effects and lots of alpha blending for particles etc would not play well with such a device. But still... for a certain subset of games, especially older stuff, there might be some benefits? Or is there something I am misunderstanding?
Interesting notion, although I don't really see a market. The cost is probably prohibitive, as the processing isn't cheap, and I don't see an ASIC being worth developing just for this. If early titles had SPU cycles to spare, I wonder if the PS3 firmware could be updated to provide an MLAA post mode? I wouldn't have thought the FB would be going through the system, though, with that left to the developer (hence per-title upscaling), so I think this extremely unlikely.
 
Interesting notion, although I don't really see a market. The cost is probably prohibitive, as the processing isn't cheap, and I don't see an ASIC being worth developing just for this. If early titles had SPU cycles to spare, I wonder if the PS3 firmware could be updated to provide an MLAA post mode? I wouldn't have thought the FB would be going through the system, though, with that left to the developer (hence per-title upscaling), so I think this extremely unlikely.


Last year I read Toshiba was making TVs with a Cell processor in them. A high end TV that could itself actually generate MLAAed frames as an optional image enhancement mode could be an interesting use of an embedded Cell processor. Even if it was a low end Cell with 4SPUs, it might be feasible. It would have the side effect of generating additional TV lag though of course.

I wonder what MLAA would look like applied to an SD res frame like the Wii generates.
 
There's already post-processing lag in the TVs. Adding AA with an internal processor shouldn't slow it down any more. It would certainly be nice to have MLAA on Borderlands! Although you'd need to apply it to native signals; once upscaled, the processing shouldn't work. SD signals would work, captured to an internal FB and MLAA'd before upscaling to the display, but something like RDR, rendered sub-HD and then output as 720p, wouldn't work quite as it should. Although I'm not sure how the MLAA method would work on upscaled data.
 
There's already post-processing lag in the TVs. Adding AA with an internal processor shouldn't slow it down any more. It would certainly be nice to have MLAA on Borderlands! Although you'd need to apply it to native signals; once upscaled, the processing shouldn't work. SD signals would work, captured to an internal FB and MLAA'd before upscaling to the display, but something like RDR, rendered sub-HD and then output as 720p, wouldn't work quite as it should. Although I'm not sure how the MLAA method would work on upscaled data.
In that case, wouldn't the UI be blurred as well, like Saboteur?
 
How much memory does MLAA take up compared to, say, 4xMSAA? Maybe Sony will work with Epic and update UE3 so all the UE3 games can finally have AA on PS3.
 
In that case, wouldn't the UI be blurred as well, like Saboteur?
Yes, if you use it on the whole image, it will try to find edges in everything.
How much memory does MLAA take up compared to, say, 4xMSAA? Maybe Sony will work with Epic and update UE3 so all the UE3 games can finally have AA on PS3.
4xMSAA takes 4 times the memory of the framebuffer & Z-buffer.
MLAA takes a buffer from which you find the edges and one to which you write the result; this can be the same buffer, and it must be in main memory.
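As a rough worked example of that comparison (the 4-bytes-per-colour-sample and 4-bytes-per-depth-sample figures are assumptions for illustration, not any particular console's formats):

```python
def aa_memory_mb(width, height, bpp_color=4, bpp_depth=4):
    """Rough framebuffer memory for 4xMSAA vs MLAA at a given resolution.
    4xMSAA stores 4 colour + 4 depth samples per pixel; MLAA reads and
    writes a single resolved colour buffer (assumed done in place here)."""
    pixels = width * height
    msaa4 = pixels * 4 * (bpp_color + bpp_depth)   # 4 samples per pixel
    mlaa = pixels * bpp_color                      # one resolved buffer
    return msaa4 / 2**20, mlaa / 2**20
```

At 1280x720 this gives about 28 MB of sample storage for 4xMSAA against roughly 3.5 MB for the single resolved buffer MLAA works on, which is why MLAA is attractive on a memory-starved console.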
 