For what it's worth, Pandemic's implementation of MLAA on the PS3 seems much sharper than the Intel-based screenshots. Not nearly as much impact on the textures and overall clarity. This is most evident on comparison shots 2 and 5.
Would a feasible solution to some of these artifacts be to turn off the MLAA process altogether whenever the camera is moving or rotating (perhaps at or above a specified speed), thereby avoiding such artifacting while there is motion on screen? After all, one might argue that aliasing is a much bigger problem when the screen is sitting still than when the camera is in motion, in which case the perceived severity of jagged edges is not much of a problem to the naked eye (at least beyond a certain speed of movement).
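That on/off idea can be sketched in a few lines (the function names and the speed threshold below are hypothetical illustrations, not anything Pandemic has described):

```python
# Hypothetical sketch: gate the MLAA pass on camera motion.
# CAMERA_SPEED_THRESHOLD and all names here are assumptions for
# illustration, not from The Saboteur's actual code.

CAMERA_SPEED_THRESHOLD = 90.0  # degrees of camera rotation per second (assumed)

def postprocess(frame, camera_rotation_speed, apply_mlaa):
    """Run the MLAA pass only when the camera is (nearly) still."""
    if camera_rotation_speed < CAMERA_SPEED_THRESHOLD:
        return apply_mlaa(frame)
    return frame  # fast motion: jaggies are masked anyway, skip the pass

# Tiny usage example with a stand-in "MLAA" filter:
halve = lambda frame: [p * 0.5 for p in frame]
print(postprocess([1.0, 1.0], 10.0, halve))   # still camera: filtered
print(postprocess([1.0, 1.0], 180.0, halve))  # fast pan: untouched
```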
So it is not MLAA after all!?
http://www.eurogamer.net/articles/digitalfoundry-saboteur-aa-blog-entry
Most likely they also use the depth values (or perhaps normals) to check whether something is truly an edge. Comparison shot 2 suggests that Pandemic uses edge detection on PS3 with a much higher threshold, so that only clearly contrasted lines are dealt with. The Intel method considers almost anything an edge and blurs the whole picture. Look at the huge difference between the boxes.
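A minimal sketch of that kind of depth-assisted, high-threshold edge test (the Rec. 601 luma weights are standard, but the threshold values and the AND-combination are my assumptions, not Pandemic's actual method):

```python
def luma(rgb):
    """Rec. 601 luma from an 8-bit RGB triple."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def is_edge(rgb_a, rgb_b, depth_a, depth_b,
            luma_threshold=32.0, depth_threshold=0.01):
    """Flag a pixel pair as an edge only if the colour contrast is strong
    AND the depth buffer confirms a geometric break, so high-contrast
    texture detail on a flat surface is left untouched."""
    colour_contrast = abs(luma(rgb_a) - luma(rgb_b)) > luma_threshold
    depth_break = abs(depth_a - depth_b) > depth_threshold
    return colour_contrast and depth_break

# Sharp texture pattern on a flat wall: big colour step, same depth -> no edge.
print(is_edge((255, 255, 255), (0, 0, 0), 0.5, 0.5))  # False
# Silhouette: big colour step AND a depth discontinuity -> edge.
print(is_edge((255, 255, 255), (0, 0, 0), 0.5, 0.9))  # True
```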
I saw Christer Ericson's nick appearing twice on B3D online members list for the past few days. Checked his Twitter:
http://twitter.com/ChristerEricson
No further word from Mr. Ericson beyond his quick comments? The Saboteur's implementation looks different from the Intel one (just edge detect + blur?)
Firstly, the definition of MLAA isn't in keeping with the basic outline of the technique as it has been described by some Pandemic staff, and it is almost certainly an extension of the edge-filter plus blur technique seen in several cross-platform titles already. As Christer says, "the qualitative difference comes down to how you edge-detect and how you 'blur'."
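For reference, the "edge filter plus blur" family the article refers to can be reduced to a toy 1-D sketch (the threshold value and the averaging kernel are illustrative assumptions; real implementations work on 2-D images with smarter kernels):

```python
def edge_blur_aa(row, threshold=32.0):
    """Toy 1-D 'edge detect + blur': wherever two neighbouring luma
    values contrast strongly, average the pair to soften the step."""
    out = list(row)
    for i in range(len(row) - 1):
        if abs(row[i] - row[i + 1]) > threshold:  # crude edge detect
            mid = (row[i] + row[i + 1]) / 2.0     # crude "blur"
            out[i] = mid
            out[i + 1] = mid
    return out

print(edge_blur_aa([0, 255, 255]))  # the hard 0/255 step gets softened
print(edge_blur_aa([10, 20, 30]))   # low contrast: nothing is touched
```

As Christer's quote implies, everything interesting lives in how you choose the edge test and the blur; this sketch shows only the bare skeleton both steps hang off.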
In terms of the effectiveness of luminance for determining edges, Christer also pointed out that the pixels we've highlighted as causing problems for edge-smoothing in The Saboteur aren't a product of red meeting black, but actually brown and red - similar in terms of luminance values and thus more likely to cause that particular artefact.
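Christer's point checks out numerically with the standard Rec. 601 luma weights (the RGB triples below are illustrative picks, not values sampled from the game):

```python
def luma(rgb):
    # Rec. 601: Y = 0.299 R + 0.587 G + 0.114 B
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

red = (255, 0, 0)      # pure red
brown = (139, 69, 19)  # a typical brown ("saddle brown")
black = (0, 0, 0)

print(round(luma(red), 1), round(luma(brown), 1))  # 76.2 84.2
# Red vs brown: only ~8 luma levels apart, easily under a typical
# edge-detect threshold, so the pair reads as "no edge".
# Red vs black: ~76 levels apart, an obvious edge.
```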
makattack said:
http://twitter.com/kamidphish
@ChristerEricson We did that too. Can we be part of the cool crew? about 19 hours ago from TweetDeck in reply to ChristerEricson
Which was a response to:
What is this magical Saboteur "AA" method? 1:22 PM Dec 12th from TweetDeck
@Brimstone: SPUs could do something like that, yes. The problem with that (IIRC) was that it's exclusively based on Z-buffer differences, so edges at the same depth (i.e. two connected polygons) would not get any AA at all.
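That failure case is easy to illustrate with toy numbers (a hypothetical sketch of a Z-only edge test, not the actual SPU code):

```python
def z_edge(depth_a, depth_b, threshold=0.01):
    """Edge test driven only by Z-buffer differences, as described above.
    The threshold is an illustrative assumption."""
    return abs(depth_a - depth_b) > threshold

# Silhouette against a distant background: large depth step, caught.
print(z_edge(0.30, 0.90))  # True
# Crease between two connected polygons at the same depth: a clearly
# visible aliased edge on screen, yet invisible to this test.
print(z_edge(0.50, 0.50))  # False
```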
With thanks to Barbarian for sharing his compilation of the Intel code, I'm happy to present some "before and after" shots. To ensure an accurate result, all shots processed were full 24-bit RGB dumps/lossless BMPs.
http://twitter.com/Digital_Foundry/status/1418101164
3.6GB lossless version of the CE3 engine trailer came in overnight... "Maximum Media Assets" from Crytek!
If Fragment-AA maps to the SPUs maybe Sony should go to Matrox, which is still an independent company, and license the patents.

Matrox's FAA isn't a post process step. Edge fragments append to a buffer as they are rendered. Without this information the technique doesn't work. Plus, it had multiple flaws so I doubt anyone would want to just license it.
Why couldn't developers just store the buffers for Matrox FAA in XDR RAM?

It seems like a lot of effort for questionable gain. The GPU would need to know if the pixel shader is working on an edge and export coverage, color, and Z info. Even if it could work it would likely be very slow. It definitely gets away from the idea of using the SPU, as this puts more work on the GPU just to have the SPU do a bit of post-processing.
I'm a little confused about which direction the thread has taken. Is it FSAA or some other type of technique used to reduce aliasing? The devs from Pandemic described it as AA using the SPU; was that misleading?
I doubt that it is the same, but it could be something similar.
http://www.playstationuniversity.com/improvements-to-gowiii-since-demo-explained-2774/
“The E3 content is over a year old,” Feldman writes on the God of War forums. “The team has learned a lot since then, so the final game should reflect that effort. Off the top of my head features added since E3 include: anti-aliasing on the CPU – looks much better then the 2x at E3"
Could it be the same trick?