Driver-applied MLAA on D3D9 games like GTA 4?

Indeed. As I very specifically mentioned in my prior post, single-pixel coverage issues will never work right with MLAA, so going looking for them as an example is like me pointing out the lack of red in my front lawn.

However, I also agree that the most interesting thing would be "mixed" modes -- such as 4xMSAA + 4xMLAA. I think you could get some pretty interesting results with a bit of luck...
 
That would be interesting. You'd still have to figure something out for transparencies though.
 
Obviously you don't want to do MLAA after MSAA, because it makes edges harder to detect. A simple way to create a "simulation" is to use 4X SSAA with MLAA, that is, doing MLAA at 4X resolution and then resampling to the real resolution. In this case the one-pixel problem is handled by SSAA while the rest of the AA is done (or "enhanced") by MLAA.
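
To make the ordering concrete, here's a minimal Python/NumPy sketch of that "simulation" (not AMD's implementation; post_aa_filter and ssaa_plus_mlaa are hypothetical names, and the filter is just a placeholder for a real MLAA pass): filter at the super-sampled resolution first, then box-downsample.

```python
# Minimal sketch (not AMD's implementation): approximate "MLAA on top of SSAA" by
# filtering at 2x2 the target resolution and then box-downsampling (the ordered-grid
# SSAA resolve). post_aa_filter is a hypothetical stand-in for a real MLAA pass.
import numpy as np

def post_aa_filter(img):
    """Placeholder for an MLAA-style post-process AA pass (identity here)."""
    return img

def ssaa_plus_mlaa(hi_res_img, factor=2):
    """hi_res_img: (H*factor, W*factor, 3) float array rendered at the super resolution."""
    filtered = post_aa_filter(hi_res_img)   # MLAA runs at the super-sampled resolution
    h, w, c = filtered.shape
    # factor x factor box filter down to the display resolution
    return filtered.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Example: a 1280x720 target rendered at 2560x1440
frame = np.random.rand(1440, 2560, 3).astype(np.float32)
print(ssaa_plus_mlaa(frame, factor=2).shape)   # (720, 1280, 3)
```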

To do this with MSAA is a bit more difficult because you don't have an ordered grid in normal MSAA (nor do you want one). So a better way would be to run MLAA separately on each of the four subsample planes and then merge them together. A simple optimization is to do MLAA only on edge pixels (e.g. only on pixels whose subsamples belong to different triangles).
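
Here's a rough sketch of that per-subsample idea under the same assumptions (post_aa_filter again stands in for a real MLAA pass, and per-subsample colors and triangle IDs are assumed to be available as arrays); the primitive-ID comparison builds the "edge pixel" mask described above.

```python
# Rough sketch only (not ATI's code): run a post-AA filter once per MSAA subsample
# plane, resolve, and use the result only at "edge" pixels. post_aa_filter and
# msaa_plus_mlaa are hypothetical names for illustration.
import numpy as np

def post_aa_filter(plane):
    """Placeholder for an MLAA-style pass on one subsample plane (identity here)."""
    return plane

def msaa_plus_mlaa(subsample_colors, subsample_prim_ids):
    """
    subsample_colors:   (S, H, W, 3) color of each of the S subsamples per pixel
    subsample_prim_ids: (S, H, W)    triangle/primitive ID covering each subsample
    """
    # Edge pixels = pixels whose subsamples come from more than one triangle
    edge_mask = (subsample_prim_ids != subsample_prim_ids[0]).any(axis=0)

    # Filter each subsample plane, then do an ordinary MSAA-style average
    filtered_resolve = np.stack([post_aa_filter(p) for p in subsample_colors]).mean(axis=0)
    plain_resolve = subsample_colors.mean(axis=0)

    # Keep the (more expensive) filtered result only where an edge was detected;
    # a real implementation would skip the filter entirely on non-edge pixels.
    return np.where(edge_mask[..., None], filtered_resolve, plain_resolve)
```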

Of course, this would be much more expensive, and it can be argued whether it's actually cheaper than simply using a higher-order MSAA (e.g. if 4X MSAA + MLAA produces results similar to 16X MSAA but runs slower, then it's basically pointless).
 
Anyone with an Eyefinity setup want to try taking some MLAA pictures? I'm wondering how it looks at such high resolutions.
 
Oh, damn

I hope it's because the HD4800 series can't do it, rather than AMD just being cheap and not bothering...
I hope this isn't a sign of AMD ditching that series when it comes to other features as well (like the multithreaded drivers Firaxis was talking about, for example).

Tbh, low-end cards (specifically those in laptops) need MLAA more than the high-performing ones do
 
There are quite vast technology differences between the HD 4000 series and the DX11 products that make this unfeasible to support. With regard to multithreading, the drivers for all products are already multithreaded; Firaxis are talking about something specific to DX11.
 
Ah ok

I just remember reading, prior to the release of DX11, that DX10/10.1 would get some of the benefits of DX11's better multithreading and other smaller things, so that's where my confusion came from :LOL:
 
I hope it's because the HD4800 series can't do it, rather than AMD just being cheap and not bothering...
Their MLAA is implemented as a compute shader. The HD5000 series was the first generation of ATI cards supporting compute shaders (shader atomics, barriers, and other synchronization primitives).
 
"They" just had to make it real-time, not playable. That's an important aspect of MLAA, as per Compute Shader you can re-use a ton of data instead of having it to fetch from memory again and again. I think, that's the main reason limiting it to DX11 capable cards.
 
The GeForce 8000 series was the first line of CUDA (GPU compute) capable graphics cards from NVIDIA, so NVIDIA had broader GPU compute support earlier than ATI. Naturally you can do MLAA without compute shaders, but compute shaders make it more efficient. If ATI's MLAA method depends heavily on compute shader optimizations, they would need to make a completely separate MLAA filter for the earlier cards (likely with a heavier performance impact or reduced filter quality).
 
It's strange; it should be the default with 3D Vision because of the 3D performance hit. I also think it's hardware-related. Either way, for some reason it's absent.

I need 120 fps and won't mind running 720p; in fact, 720p on a DLP projector is in a league of its own. I'm not even considering 1080p.
 
So after over a decade of allegiance to Nvidia, I splurged on an HD 6970 yesterday and I'm pretty much enjoying it other than a few games refusing to start until I uninstall/reinstall them. This thing is way quieter and cooler running than my previous card so I'm already happy about that.

That MLAA option is a freaking godsend in games that won't allow any other AA option to work. I also think I prefer it in a lot of UE3 games because it removes that shimmering effect you sometimes get around the corners of objects when you try to force MSAA.

I agree that because of the blurry text issue, and some games not looking great with it on, it isn't the holy grail or anything. It especially looks bad in games that have a lot of thin geometry, like Half-Life, but those games usually have excellent MSAA implementations anyway.

But because of so many games coming out that either implement traditional AA methods poorly or not at all (Dead Space, GTA IV, Divinity 2, etc.) it's a welcome addition.
 