Digital Foundry Article Technical Discussion Archive [2010]

I was hoping they'd put up comparisons between the colour and depth inputs as well as comparisons to 4xMSAA (since they mention that in their table of render times).
 
Nice article. The question now is whether the Xbox 360 will also benefit from this technique.

these two shots are quite a good comparison
http://www.iryokufx.com/mlaa/gallery/images/unigine_01_msaax8.jpg
http://www.iryokufx.com/mlaa/gallery/images/unigine_01_mlaa.jpg

you can see 8xMSAA handles the small fiddly edge stuff (like the weather vane on the top left) better
but MLAA handles the larger edge things better (like the railing top right)

But why did they compare it to 8xMSAA, when the Xbox360 is capable of 4xMSAA max?
 
eDRAM tiles are resolved to main memory, so the render target memory overhead for enabling MSAA should be nothing.

So, why do people even bother with MLAA on the Xbox 360?
Is tiling really this complicated to implement?
Do you have to completely rebuild your code for this?
 
So in short: is the ATI MLAA the same as GOW3 MLAA?
Is the MLAA available for PS3 devs the same as the GOW3 MLAA?

I am a bit surprised to see that MLAA blurs the image a bit compared to no AA. GOW3 looks super crisp to me, and I wonder if their MLAA had the same slight blur as presented in the ATI shots?

I am only aware of this comparison here showing "Sony MLAA":
http://www.neogaf.com/forum/showpost.php?p=22661479&postcount=29

but the pics are rather small to judge blur - what do you guys think?

PS: I am astounded how much detail 1080p mode packs in!!
Higher resolution for the win?!

Probably not the same. Since...

Developer-applied MLAA can be significantly better, as the developer can choose where and when to apply the MLAA; it's a shader process after all. Driver-implemented MLAA, on the other hand, is "all or nothing". ATI's solution applies it to the framebuffer after the scene has been rendered by the game: the driver then applies MLAA and forwards the final image to the monitor.

Therefore, everything gets MLAA applied regardless of whether it would benefit or not, and in the case of things such as text it can actually make things worse. That's where developer-applied MLAA has its most obvious advantage. Text, for example, can be overlaid on the image after they apply MLAA in the simplest case. I would imagine you'd be able to do similar things with, say, detail textures. As said, developers can choose where and when to apply it.
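To make the ordering concrete, here is a minimal sketch (all function names are hypothetical, this isn't any particular engine's code) of why a developer-controlled MLAA pass can leave the HUD untouched while a driver-level filter can't:

```cpp
#include <cstdio>

// Hypothetical frame loop: the developer schedules MLAA before the HUD composite.
struct Framebuffer { /* colour data would live here */ };

void RenderScene(Framebuffer&)   { std::puts("draw the 3D world"); }
void ApplyMLAA(Framebuffer&)     { std::puts("post-process AA on the scene only"); }
void CompositeHUD(Framebuffer&)  { std::puts("overlay text/HUD after AA, so it stays sharp"); }
void Present(const Framebuffer&) { std::puts("send the final image to the display"); }

int main()
{
    Framebuffer frame;
    RenderScene(frame);
    ApplyMLAA(frame);     // the developer decides this runs before the HUD...
    CompositeHUD(frame);  // ...a driver-level filter only sees the frame after this step
    Present(frame);
}
```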

And that doesn't even get into possible differences in what kind of MLAA is being used.

So ATI's MLAA is sometimes nice, sometimes not so nice. And as pointed out by some others, it's not always comparable to or better than standard Box MSAA + Transparency AA. But it's another option we can use, and as such, it's nice when it does work well.

Heh, higher resolution is nice, but in your case, you're actually commenting on higher pixel density. For example, take a 46" TV. A 720p image has fewer pixels per inch than a 1080p image, so it'll look more granular and coarse. Same details in both, but you have more pixels per inch in the 1080p image, thus details can be sharper.

Now take my home situation, where a 1920x1200 image on a 24" monitor looks visually the same as a 2560x1600 image on a 30" monitor, other than the obvious size difference. Both are essentially as sharp as each other; the 30" is very slightly sharper as it has a slight advantage in PPI (pixels per inch), but basically details are equally sharp on both.
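For anyone who wants to check those numbers, pixel density is just the diagonal resolution in pixels divided by the diagonal size in inches; a quick sketch:

```cpp
#include <cmath>
#include <cstdio>

// PPI = diagonal resolution in pixels / diagonal size in inches
double ppi(int w, int h, double diagonalInches)
{
    return std::sqrt(double(w) * w + double(h) * h) / diagonalInches;
}

int main()
{
    std::printf("1920x1200 on 24\": %.1f PPI\n", ppi(1920, 1200, 24.0)); // ~94 PPI
    std::printf("2560x1600 on 30\": %.1f PPI\n", ppi(2560, 1600, 30.0)); // ~101 PPI
}
```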

Regards,
SB
 
Since aliasing in general becomes less offensive as resolution increases, I'd think the GOWAA on a 1080p framebuffer would be sufficient for most of us. I have a single dead pixel in the bottom right corner of my 1080p TV, and it's very hard to spot even when I'm looking for it. This is a 46" TV from 8' (2.5 meters) viewing distance, which is probably what the average user has. When pixels are that size, aliasing is much less noticeable. So I think Sony might just add a few more SPUs and go with MLAA at 1080p next gen.

Another common misconception of the issue. Resolution on its own has no impact on perceived aliasing.

What you're actually referring to is that as pixel density (PPI or Pixels Per Inch) increases, some forms of aliasing become less noticeable. So, for example, a 480p image on a 10" screen (high PPI) would appear to have less aliasing than a 1080p image on an 80" screen (very low PPI). Now put that onto the same size screen and the situation is reversed: 480p on a 40" screen (low PPI) versus 1080p on a 40" screen (high PPI), and the 1080p image now appears less aliased.
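Putting rough numbers on that example (assuming "480p" here means an 854x480 widescreen image):

```cpp
#include <cmath>
#include <cstdio>

// Same PPI formula as above: diagonal pixel count / diagonal size in inches.
double ppi(int w, int h, double diagonalInches)
{
    return std::sqrt(double(w) * w + double(h) * h) / diagonalInches;
}

int main()
{
    std::printf("480p  on 10\": %.0f PPI\n", ppi(854, 480, 10.0));   // ~98 PPI (dense)
    std::printf("1080p on 80\": %.0f PPI\n", ppi(1920, 1080, 80.0)); // ~28 PPI (coarse)
    std::printf("480p  on 40\": %.0f PPI\n", ppi(854, 480, 40.0));   // ~24 PPI (coarse)
    std::printf("1080p on 40\": %.0f PPI\n", ppi(1920, 1080, 40.0)); // ~55 PPI (denser)
}
```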

Certain aliasing artifacts will remain regardless of resolution however. Shimmer/crawling being one of them, although the effect will be reduced, and for some that might be enough to make it unnoticeable. It's one reason that aliasing is usually far more noticeable in motion than it is in still shots.

Regards,
SB
 
So, why do people even bother with MLAA on the Xbox 360?
Is tiling really this complicated to implement?
Do you have to completely rebuild your code for this?

MLAA, since it's basically shader-based AA (perhaps better to call it software-based AA), can be applied in more situations than standard Box MSAA. Box MSAA relies on finding the edges of polygons and then applying AA to them. As well, I don't believe the X360 can apply Box MSAA to MRTs, which are used heavily when doing deferred rendering or lighting.

That becomes a problem in UE3 games for example, where the lighting is applied after geometry. So basically MSAA is applied, then lighting is applied, and now we have aliasing applied on top of an anti-aliased edge.

In that case, MLAA could be applied after all rendering is done, or after a certain step of rendering is done, or only to certain parts of an image. After all, all anti-aliasing is basically a blur filter. Box MSAA only applies the blur filter to edges of polygons, thus your overall image isn't blurred, only the edges. MLAA potentially allows you to apply AA (blur) to everything on screen, but when used smartly by a dev, they can apply it only to what they want and only when they want.

But if you try to get too sophisticated with edge detection, etc., for automatic application of MLAA, the costs can start to skyrocket. One reason that ATI's MLAA is so cheap, for instance, is that it applies to the whole scene and the edge detection doesn't appear to be very sophisticated. Thus you have blurring of edges you otherwise wouldn't want anti-aliased (like text, for example). On the other hand ATI has a more advanced Edge Detect shader AA, but it must be combined with Box MSAA. It has phenomenal quality but also a very high computational cost.
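For illustration, here is roughly what the cheap end of edge detection looks like: a plain luminance-difference test, written as a CPU sketch rather than anyone's actual shader. Note that it happily flags text and fine texture detail as "edges", which is exactly the downside described above:

```cpp
#include <cmath>
#include <vector>

// Generic, cheap edge detection: flag a pixel if its luminance differs enough from
// its right or bottom neighbour. Flagged pixels would then receive the MLAA blend.
struct Image
{
    int width = 0, height = 0;
    std::vector<float> luma; // one luminance value per pixel, row-major
    float at(int x, int y) const { return luma[y * width + x]; }
};

std::vector<bool> detectEdges(const Image& img, float threshold = 0.1f)
{
    std::vector<bool> edge(img.luma.size(), false);
    for (int y = 0; y < img.height - 1; ++y)
        for (int x = 0; x < img.width - 1; ++x)
        {
            float c = img.at(x, y);
            bool isEdge = std::fabs(c - img.at(x + 1, y)) > threshold ||
                          std::fabs(c - img.at(x, y + 1)) > threshold;
            edge[y * img.width + x] = isEdge; // text and texture detail trip this too
        }
    return edge;
}
```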

That's also why Quincunx is so cheap. It doesn't even attempt to find an edge. It just applies the blur to every pixel on the screen.

Regards,
SB
 
Box MSAA relies on finding the edges of polygons and then applying AA to them.

MSAA is about evaluating subsample coverage around a pixel.
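A toy illustration of what that means: with 4x MSAA each pixel stores four sub-samples and the resolve simply averages them. Pixels fully covered by one triangle keep their colour unchanged, while pixels on a silhouette edge end up with a blend. (This is a generic box resolve for illustration, not any specific hardware's.)

```cpp
#include <array>
#include <cstdio>

struct Colour { float r, g, b; };

// 4x box resolve: average the four stored sub-samples of a pixel.
Colour resolve(const std::array<Colour, 4>& samples)
{
    Colour out{0.0f, 0.0f, 0.0f};
    for (const Colour& s : samples)
    {
        out.r += s.r * 0.25f;
        out.g += s.g * 0.25f;
        out.b += s.b * 0.25f;
    }
    return out;
}

int main()
{
    Colour red{1.0f, 0.0f, 0.0f}, blue{0.0f, 0.0f, 1.0f};
    std::array<Colour, 4> interiorSamples = {red, red, red, red};   // fully inside one triangle
    std::array<Colour, 4> edgeSamples     = {red, red, blue, blue}; // split coverage on an edge
    Colour interior = resolve(interiorSamples); // stays pure red
    Colour edge     = resolve(edgeSamples);     // 50/50 blend along the edge
    std::printf("interior: %.2f %.2f %.2f\n", interior.r, interior.g, interior.b);
    std::printf("edge:     %.2f %.2f %.2f\n", edge.r, edge.g, edge.b);
}
```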

That becomes a problem in UE3 games for example, where the lighting is applied after geometry. So basically MSAA is applied, then lighting is applied, and now we have aliasing applied on top of an anti-aliased edge.
UE3 is a multipass renderer. It only defers shadows. They do use a lot of post-processing though, which is after the resolve.

That's also why Quincunx is so cheap. It doesn't even attempt to find an edge. It just applies the blur to every pixel on the screen.
Quincunx on NV3x-G7x uses 2x multisampling with the quincunx filter.
 
Quincunx on NV3x-G7x uses 2x multisampling with the quincunx filter.

Ah, thanks, I'd forgotten that it was used with 2x MSAA. But the Quincunx filter is still a very basic blur filter applied to the entire image. As such it's pretty cheap (basic blur + no edge detection).
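For reference, here is the Quincunx resolve as I understand it from NVIDIA's description: each pixel's centre sample is weighted 1/2 and the four nearest corner samples 1/8 each, with corner samples shared between neighbouring pixels. The weights below are the commonly cited ones, not taken from the article:

```cpp
#include <cstdio>

// Quincunx-style 5-tap resolve: 1/2 for the centre sample, 1/8 for each corner sample.
float quincunxResolve(float centre, float c0, float c1, float c2, float c3)
{
    return 0.5f * centre + 0.125f * (c0 + c1 + c2 + c3);
}

int main()
{
    // A bright pixel surrounded by darker corner samples gets pulled towards them:
    std::printf("%.3f\n", quincunxResolve(1.0f, 0.2f, 0.2f, 0.2f, 0.2f)); // prints 0.600
}
```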

Regards,
SB
 
But why did they compare it to 8xMSAA, when the Xbox360 is capable of 4xMSAA max?
Yes, strange.
A likely reason is they wanted to show that MLAA can often exceed or equal 8xMSAA quality (i.e. a lot better than 4xAA).

Though ideally they should have shown 1xAA, 4xAA, 16xAA and MLAA.

I've said this before, but once more for the record:
Quincunx is like a more expensive version of 2xMSAA. Developers don't choose it because it's quicker than 2xMSAA (in fact it's not), they choose it because they think it looks better!

Here's a PDF you might like to read WRT Quincunx:
http://es.nvidia.com/object/gdc_ogl_multisample.html

Of course resolution impacts aliasing; aliasing is more than just polygon edges (like Amstrong says, MSAA also doesn't just work on a polygon's edges).
In fact the insides of polygons are probably more important for image quality (especially when moving) than the edges; the higher the res, the less likely each pixel's colour is to change from frame to frame, though mipmaps are more important than MSAA for this.
 
(like Amstrong says, MSAA also doesn't just work on a polygon's edges)
In fact the insides of polygons are probably more important for image quality (especially when moving) than the edges; the higher the res, the less likely each pixel's colour is to change from frame to frame, though mipmaps are more important than MSAA for this.

Eh, what he posted doesn't contradict what I said. Standard Box MSAA as used on consoles and PC is only done along polygon edges. He was only clarifying that my reference to MSAA being a blur along the edge concerns the subpixels used to determine the final color of the pixels making up the polygon edge, not the surrounding pixels themselves. As well, resolution on its own has no effect on this other than increasing the number of pixels making up each edge. And it especially has no effect on perceived or actual aliasing if the size of your pixel remains constant.

It only comes into play in perceived aliasing (but not actual aliasing) if the size of your pixel changes. That can be accomplished either by keeping the same resolution but reducing the apparent size of the display (for example by increasing the distance between you and the display), and/or keeping the same display size but increasing the resolution at that size.

And in both of those cases, it only changes the perceived aliasing, not the actual aliasing. The aliasing artifacts remain exactly as before, but our perception of them changes. For some people it may reach a point where it becomes unnoticeable or invisible to them. For others with better eyesight it may still be visible or noticeable, and for those who notice motion or anomalies more, it may still be visible in motion even when it's not noticeable in a still image.

So just saying increased resolution will help with aliasing is relatively meaningless without context. Increasing resolution while keeping display size the same will help with perceived aliasing. Increasing resolution while display size also increases (say 1920x1200 on a 24" monitor -> 2560x1600 on a 30" monitor, for example) will have no effect on perceived aliasing. And in all cases the actual aliasing still exists; only the perception of it changes.

Regards,
SB
 
Need for Speed: Hot Pursuit demo showdown:
http://www.eurogamer.net/articles/digitalfoundry-nfs-hot-pursuit-demo-showdown

I must say I am surprised by the comments on the aliasing. That's the element of the current version of Criterion's engine that I find the most lacking. Apparently the game is using some special technique to reduce aliasing. The article mentions "reducing artifacting on faraway elements of environment". That's where I find it most noticeable, especially in the second part of the daylight track (lampposts, barriers and bridges). I find it disappointing that Criterion decided to sacrifice the performance of the engine for the sake of some additional graphical flair that can't really shine through due to the image quality. The "jaggies" ruin it all for me.
 
You're reading it wrong. The article explicitly mentions the objects that are well AA'd.
pay particular attention to the power lines and road markings... objects that usually break down into a sub-pixel mess
 
I see. Power lines are definitely clean looking. Too bad that technique apparently could not be applied to all elements of the scenery across the board. The "sub-pixel mess" problem rears its ugly head later on that track to the point where I find it really distracting (and I am not usually the kind of person to be distracted by details like that :) ).
 
You're reading it wrong. The article explicitly mentions the objects that are well AA'd.

So what exactly are they doing to achieve such perfect AA on distant objects? Those power lines look like they've got 16xMSAA applied! Seems a technique like this could work fantastically in combination with MLAA, since sub-pixel aliasing is its major weakness.
 
eDRAM tiles are resolved to main memory, so the render target memory overhead for enabling MSAA should be nothing.

Well, that's only true if you're doing forward rendering (or don't otherwise need access to sub-pixel data).
 
Hopefully there will be more info shared about the anti-aliasing process(es) used in NFS:HP. As mentioned, the image quality on those objects is exceptional.
 
I'm just curious, what are your thoughts on MLAA's implementation in GoW3??

Dunno, I haven't seen any good quality pics of the final game with their MLAA implementation on and off. With the ATI pics linked earlier it's easy to compare and see the blur, but in GoW3, without any comparison pics, there's no way to know how much blur their method adds, and it being a single-platform game makes comparisons even harder. Maybe one day someone will make a cross-platform game that uses GoW's MLAA on the PS3 version and regular MSAA on the other versions; then we can compare to see if there is detail loss. Personally I'm not a fan of any post-process AA method that mangles texture detail, and every post-process AA method I've seen so far does (when comparisons can be made). I'd like to have seen GoW's method on a game like Fallout New Vegas, which has lots of fine lines and details on stuff like the signs in the Strip, etc.
 