Why is NVIDIA still using an impractical 8x AA setting w/ the 6800U?

Do any boards still do FSAA? I was just wondering; I realize it's ridiculously slow, but considering how powerful these boards are, it would look nice in older games.
 
See it this way: if it's 4x SS + 2x MS, as it currently is, reviewers are gonna say "Okay, 8x is unplayable, but 1600x1200 4x RGMSAA looks great and runs amazingly well!"
If it's 4x MS + 2x SS, they'd say "It's kind of a mixed bag when it comes to AA performance - the GeForce 6800 is awfully bad at 8x AA, and barely manages to reach the quality of ATI's 6x AA. On the other hand, 4x MSAA looks good, but if you want superior AA quality, go ATI."

And this way, NVIDIA can boast about 40-50% performance gains in the next driver revision hehe ;)


Uttar
 
I was under the impression that you don't have to "force" SS, but can instead compute the necessary number of samples manually and blend them in the pixel shader. This should, for example, conserve framebuffer memory and increase precision as well, AFAIK.
 
Laa-Yosh said:
I was under the impression that you don't have to "force" SS, but can instead compute the necessary number of samples manually and blend them in the pixel shader. This should, for example, conserve framebuffer memory and increase precision as well, AFAIK.
Well, you can. I'm just not sure it would be that good of a thing to do. In particular, there's a lot of texture filtering logic that you would need to duplicate in such a shader, which would reduce performance.
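Just to make that concrete, here's a rough CPU-side sketch of the in-shader approach (the shade() function and the rotated-grid offsets are invented for illustration, not from any real driver or game):

```cpp
#include <cstdio>

// Stand-in for a real pixel shader; the high-frequency checker
// pattern aliases badly when point-sampled once per pixel.
static float shade(float u, float v) {
    int cu = (int)(u * 64.0f);
    int cv = (int)(v * 64.0f);
    return ((cu + cv) % 2 == 0) ? 0.0f : 1.0f;
}

// "Manual" 4x supersampling inside the shader: evaluate four offset
// samples and blend them at full float precision before one value is
// written out, so no extra multisample buffer storage is needed.
static float shade_ss4(float u, float v, float du, float dv) {
    static const float off[4][2] = {   // rotated-grid subsample offsets
        { 0.125f,  0.375f}, { 0.375f, -0.125f},
        {-0.125f, -0.375f}, {-0.375f,  0.125f},
    };
    float sum = 0.0f;
    // Every extra shade() call repeats whatever texture filtering the
    // shader performs -- the duplicated logic mentioned above.
    for (int i = 0; i < 4; ++i)
        sum += shade(u + off[i][0] * du, v + off[i][1] * dv);
    return sum * 0.25f;
}

int main() {
    const int w = 8;   // tiny stand-in render target
    const float du = 1.0f / w, dv = 1.0f / w;
    for (int y = 0; y < w; ++y) {
        for (int x = 0; x < w; ++x)
            printf("%.2f ", shade_ss4((x + 0.5f) * du, (y + 0.5f) * dv, du, dv));
        printf("\n");
    }
    return 0;
}
```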
 
It would be great if the texture sampler returned a bitmask (say 4 bits for 4x AA) for alphablended textures indicating to the multisampler which (opaque) samples should store their z values and which (transparent) samples should not. This would provide for anti-aliasing of alpha blended textures with multi-sampled performance.
 
SA said:
It would be great if the texture sampler returned a bitmask (say 4 bits for 4x AA) for alphablended textures indicating to the multisampler which (opaque) samples should store their z values and which (transparent) samples should not. This would provide for anti-aliasing of alpha blended textures with multi-sampled performance.
A note on terminology: alpha-blended textures are anti-aliased. It's the alpha test that causes problems.

Anyway, the problem with a bitmask is simply that you'd need to take additional texture samples to perform it properly.
 
Alpha testing is the major culprit of course. However, alpha-blending can have the same aliasing problems as alpha-testing, depending on whether the alpha values properly blend the edges, which is why I used the broader term.

And, yes, it requires proper extra sampling, as does all anti-aliasing. However, these samples are readily available when anisotropic filtering is enabled.
 
SA said:
Alpha testing is the major culprit of course. However, alpha-blending can have the same aliasing problems as alpha-testing,
No, it doesn't. Alpha blending has the problem of blurring in the near-range, and alpha blends are not order independent (you must sort alpha-blended surfaces on the CPU).

Anyway, allowing the GPU to hardware-accelerate antialiasing of alpha-tested surfaces would provide better performance than other forms of fixing alpha tests via AA.

And as for the extra texture samples being there if anisotropic is enabled, that's not true. You'd have to take an extra texture sample, whether it be anisotropic, bilinear, or trilinear, at each sample position for proper behavior.

Still, neighboring pixels do tend to share color data, so computing the write mask for your alpha-test idea would be very gentle on the texture cache, but you'd still have to take the additional samples.
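To make that trade-off concrete, here's a rough CPU-side sketch of the mask idea (the disc "texture", the four tap positions, and the 0.5 threshold are all invented for illustration):

```cpp
#include <cstdint>
#include <cstdio>

// Toy alpha texture: an opaque disc on a transparent background, the
// kind of content normally drawn with an alpha test.
static float alpha_at(float u, float v) {
    float du = u - 0.5f, dv = v - 0.5f;
    return (du * du + dv * dv < 0.16f) ? 1.0f : 0.0f;
}

// Roughly SA's proposal: alpha-test the texture at each of the four
// subsample positions and return a 4-bit mask telling the
// multisampler which samples are opaque (store Z) and which are not.
static uint8_t coverage_mask4(float u, float v, float du, float dv) {
    static const float off[4][2] = {
        {-0.25f, -0.25f}, { 0.25f, -0.25f},
        {-0.25f,  0.25f}, { 0.25f,  0.25f},
    };
    uint8_t mask = 0;
    for (int i = 0; i < 4; ++i) {
        // The objection above lives in this loop: every mask bit
        // costs an additional texture sample, cache-friendly or not.
        if (alpha_at(u + off[i][0] * du, v + off[i][1] * dv) > 0.5f)
            mask |= (uint8_t)(1u << i);
    }
    return mask;
}

int main() {
    const int w = 8;
    const float du = 1.0f / w, dv = 1.0f / w;
    for (int y = 0; y < w; ++y) {
        for (int x = 0; x < w; ++x)
            printf("%x ", coverage_mask4((x + 0.5f) * du, (y + 0.5f) * dv, du, dv));
        printf("\n");
    }
    return 0;
}
```

Pixels printing 0 or f are fully outside or inside the disc; the partial masks along the rim are exactly where the alpha-test edge would get antialiased.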
 
nobie said:
They should enable the 6x mode in the drivers. It seems to get decent performance.
But they'd also need to have an appropriate sample pattern for that to help any.
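The pattern point is easy to quantify: on a near-horizontal or near-vertical edge, you get one intensity step per distinct sample position along the relevant axis. A quick sketch (the patterns are illustrative, not any vendor's actual ones):

```cpp
#include <cstdio>

struct Sample { float x, y; };

// Count the distinct positions the samples occupy along one axis.
// A near-vertical edge produces one gradient step per distinct x,
// a near-horizontal edge one per distinct y.
static int distinct(const Sample* s, int n, bool use_x) {
    int count = 0;
    for (int i = 0; i < n; ++i) {
        bool seen = false;
        for (int j = 0; j < i; ++j)
            if ((use_x ? s[i].x : s[i].y) == (use_x ? s[j].x : s[j].y))
                seen = true;
        if (!seen) ++count;
    }
    return count;
}

int main() {
    // 4x ordered grid: only two distinct positions per axis.
    const Sample og4[] = {{0.25f,0.25f},{0.75f,0.25f},{0.25f,0.75f},{0.75f,0.75f}};
    // 4x rotated grid: four distinct positions per axis, so near-axis
    // edges get twice the gradient steps from the same four samples.
    const Sample rg4[] = {{0.375f,0.125f},{0.875f,0.375f},{0.125f,0.625f},{0.625f,0.875f}};
    printf("ordered 4x: %d x-steps, %d y-steps\n",
           distinct(og4, 4, true), distinct(og4, 4, false));
    printf("rotated 4x: %d x-steps, %d y-steps\n",
           distinct(rg4, 4, true), distinct(rg4, 4, false));
    return 0;
}
```

A 6x mode only pays off with a sparse pattern that keeps six distinct positions per axis; an ordered 2x3 grid would give just two or three.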
 
Chalnoth said:
Ailuros said:
How about leaving anti-aliasing entirely to ISVs and letting the developer decide whether MSAA, SSAA, or a combination of the two in the same scene would make more sense?
I think it'd be even better if the developer was given the option to switch between MSAA and SSAA (with the same number of samples, of course) on a per-triangle basis. This way SSAA could be used for those shaders that really need antialiasing, without dramatically reducing performance.

Even better would be to specify a minimum desired amount of SSAA, so that applications could use mixed-mode FSAA if available. Now, this would require completely programmable FSAA sample patterns to implement properly.

That's more or less what I had in mind; I just oversimplified it. Using SSAA selectively on parts of the scene where it's actually needed would properly antialias all "problematic" cases, and a lot of fill rate would be saved since the majority of the scenery would actually get MSAA.

----------------------------------------------------------------

SA,

Welcome back :)
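On the per-triangle switching idea quoted above: purely as a thought experiment, a hypothetical interface might look like the sketch below. None of these types or calls exist in D3D or OpenGL; every name is invented.

```cpp
#include <cstdio>

// Hypothetical AA modes a driver could expose per draw call.
enum class AAMode {
    Multisample,   // edges only: cheap, fine for most geometry
    Supersample,   // every sample fully shaded: for aliasing-prone shaders
    MixedMinSS2,   // mixed mode: at least 2 fully shaded samples
};

// Hypothetical per-material description supplied by the application.
struct Material {
    const char* name;
    bool shaderAliases;  // e.g. alpha test or high-frequency specular
};

// Pick the cheapest mode that still antialiases the material properly,
// honoring a minimum desired amount of SS when the hardware has it.
static AAMode choose_aa(const Material& m, bool mixedModeSupported) {
    if (!m.shaderAliases) return AAMode::Multisample;
    return mixedModeSupported ? AAMode::MixedMinSS2 : AAMode::Supersample;
}

int main() {
    const Material mats[] = {
        {"wall",  false},
        {"fence", true},   // alpha-tested
        {"water", true},   // shiny, high-frequency specular
    };
    for (const Material& m : mats)
        printf("%-5s -> AA mode %d\n", m.name, (int)choose_aa(m, true));
    return 0;
}
```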
 
Sxotty said:
Do any boards still do FSAA? I was just wondering; I realize it's ridiculously slow, but considering how powerful these boards are, it would look nice in older games.

Apparently NV40 is capable of 16x SSAA. And it's faster than 8xS :oops: :?
 
Ailuros said:
That's more or less what I had in mind; I just oversimplified it. Using SSAA selectively on parts of the scene where it's actually needed would properly antialias all "problematic" cases, and a lot of fill rate would be saved since the majority of the scenery would actually get MSAA.

I think what JohnH is saying is that with new rendering techniques, e.g. parallax bump mapping, the majority of the scenery will need supersampling.
 
Laa-Yosh said:
JohnH said:
Kombatant said:
What I find odd is the HUGE drop in performance. I know it does SS, but still...70-90% drop? I was like "wow" when I read Dave's review.
It's probably bandwidth. MSAA typically allows you to losslessly compress the FB by up to 4:1; this can't (easily) be done when applying SS.

You're right and wrong - it is true that with SS your samples are usually different and cannot be compressed efficiently.
However, the main performance problem is fill rate - with SS you actually have to render 2, 4, or n times as many pixels, complete with texture reads and shader calculations.

Yes, yes, I know what the cost of supersampling is; I was trying to make a more subtle point about IMRs and FSAA.

I'd also say that brute force SSAA is not the answer to shader aliasing. It's better to implement some sort of AA in the pixel shader, as it can then be adaptive, which SS isn't.
That's not always as easy as you'd think, at least in terms of minimising performance impact, but yes, it's one possible solution.

John.
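For anyone wondering what "some sort of AA in the pixel shader" can mean in practice, here's a minimal sketch of one adaptive technique: analytic box filtering driven by the pixel footprint. The stripes() example is invented, and a real shader would get the footprint from ddx/ddy rather than a constant.

```cpp
#include <cmath>
#include <cstdio>

// Point-sampled square wave: aliases once the stripes get narrower
// than a pixel.
static float stripes(float x) {
    return (x - std::floor(x) < 0.5f) ? 0.0f : 1.0f;
}

// Integral of stripes() from 0 to t, used for analytic box filtering.
static float stripes_integral(float t) {
    float f = t - std::floor(t);
    return 0.5f * std::floor(t) + std::fmax(0.0f, f - 0.5f);
}

// Box-filtered version: integrate over the pixel footprint w (what
// ddx would supply in a real shader). Cost per pixel is constant, but
// the amount of smoothing adapts to the local frequency -- the
// adaptive property brute-force SSAA lacks.
static float stripes_aa(float x, float w) {
    return (stripes_integral(x + 0.5f * w) - stripes_integral(x - 0.5f * w)) / w;
}

int main() {
    const float w = 0.8f;  // stripes narrower than one pixel
    for (int px = 0; px < 10; ++px) {
        float x = px * w;
        printf("pixel %d: point=%.1f filtered=%.2f\n",
               px, stripes(x), stripes_aa(x, w));
    }
    return 0;
}
```

Point sampling returns essentially random 0s and 1s once the stripes are narrower than a pixel, while the filtered version settles near the correct 0.5 average at constant cost.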
 
StealthHawk said:
Sxotty said:
Do any boards still do FSAA? I was just wondering; I realize it's ridiculously slow, but considering how powerful these boards are, it would look nice in older games.

Apparently NV40 is capable of 16x SSAA. And it's faster than 8xS :oops: :?

Does 16x SSAA look better than 8xS?
 
Chalnoth said:
No, it doesn't. Alpha blending has the problem of blurring in the near-range, and alpha blends are not order independent (you must sort alpha-blended surfaces on the CPU).

Unless you have a card capable of sorting translucency in hardware, of course. You'd need to have all the scene data available before you start rendering, but that's not a problem for everyone ;)
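For reference, the CPU-side sorting mentioned above amounts to a back-to-front sort over the translucent draws before submission. A minimal sketch, with invented names:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Illustrative translucent draw record.
struct AlphaDraw {
    const char* name;
    float viewDepth;   // distance from the camera
};

int main() {
    std::vector<AlphaDraw> draws = {
        {"smoke", 3.0f}, {"window", 9.0f}, {"flame", 5.5f},
    };
    // Alpha blending is order dependent, so without hardware that
    // sorts translucency for you, draws must go back-to-front.
    std::sort(draws.begin(), draws.end(),
              [](const AlphaDraw& a, const AlphaDraw& b) {
                  return a.viewDepth > b.viewDepth;
              });
    for (const AlphaDraw& d : draws)
        printf("draw %s (depth %.1f)\n", d.name, d.viewDepth);
    return 0;
}
```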
 
mikechai said:
Does 16x SSAA look better than 8xS?
Certainly. But the "reg0C" mode isn't 16x OGSS. The last time I saw 16x OGSS was in a Det3-series driver.
 