Why AA doesn't get the respect it probably deserves

When I'm buying a $650 video card I have zero intention of being forced to make choices between IQ options. I want it all, bay-bee, and I paid to get it. :D
 
When I'm buying a $650 video card I have zero intention of being forced to make choices between IQ options. I want it all, bay-bee, and I paid to get it. :D

Well, there's that too :D. My 8800 GTS is currently suiting my needs, as my 7900 GTX did last year. If I missed out on anything last year because my GPU wouldn't do HDR+AA, it certainly didn't feel like it. I haven't encountered anything I'm missing yet this year either, except perhaps my wallet, but such is life!
 
That's nice and all, Ailuros, but it misses the point that we don't even have the option to enable or disable a "quality" mode of AA that would involve some form of SSAA.

Nothing against that; in fact I'm still protesting that NV has stripped the hybrid modes from the G80. I don't see how it could hurt to leave them in the drivers, even if you'd need 3rd-party applications to enable them.

If you noticed, I mentioned previously that it would be a feature most likely used for past games, and for current and future games that don't go "balls to the walls" on 3D bells and whistles.

For games such as those there would always be the fallback option of enabling a "performance" mode of AA in the control panel that would involve some form of MSAA and Transparency AA.

Speaking for myself, I would gladly lower resolution and texture quality in order to experience the wonderful goodness that was 3dfx's RGSS AA. And if that was unplayable even at lower resolutions/texture details/etc., then I'd still have the option of trying MSAA.

The option could be there, and that for both IHVs. If sparse transparency supersampling is possible, then I cannot imagine a single reason why it wouldn't be possible to also have it full screen.

It would still be for rare corner cases, and even there, if the application allows high resolutions, the gain can be questionable. As another example I had tried Serious Sam: The Second Encounter on the G70 at 1280*960 with 16xS + 16xAF (and unless you really want to split hairs, that's not too far away from 4xRGSS). Considering the entry scene had a lot of alphas, the framerate without any monsters hopping around was in the 30s, while au contraire it was in the upper 70s at 2048*1536 with 4xAA + TSAA/16xAF.

What I'm pondering is that any form of AA isn't a simple equation of "this many samples, that kind of EER, and boom, it's better or worse". In my book the type and size of the monitor and the possible range of resolutions absolutely need to be part of that equation too. That said, in the first case I got roughly 90 dpi across the screen, and in the latter 132 dpi on one axis and a form of oversampling for free on the other. When the gap between resolutions is that large, there's nothing Supersampling can do to compensate.

If the difference between usable SS and usable MS is merely one resolution (which is normally unlikely), then of course, yes, by all means I'd go for the former.

And if RGSS AA proved too slow and MSAA didn't work with said game engine, I could then put it away and decide not to play it until faster hardware came out that could effectively use SSAA on said game.

Personally, although I'm an IQ and AA freak too, I doubt I could wait 3-4 years or more to play a game I'm anxiously waiting for.

I think I and most proponents of SSAA would argue that it's most likely to be used with older games and current and future games that aren't bleeding edge. And we're fine with that.

My point was merely that those games would have to be very old. I'm still protesting for NV to re-enable the hybrid modes in some way, but I have the feeling that it's falling on deaf ears here. It could be better than that, but those modes were a lot better than nothing. Do they actually need to put in any effort to support the hybrid modes nowadays?
 
When I'm buying a $650 video card I have zero intention of being forced to make choices between IQ options. I want it all, bay-bee, and I paid to get it. :D
If only IHVs designed their latest-and-greatest video chips knowing exactly what kind of performance to expect on all machines (i.e. even those with a slowish CPU) with not only current games but games expected to be released at the same time they released that chip of theirs...
 
Nothing against that; in fact I'm still protesting that NV has stripped the hybrid modes from the G80.

As was I, but ChrisRay has found that they've now come back. They are available via the registry or by using nHancer with the 100.xx leaked drivers; sorry, I don't recall the specific number and can't find mention of it in that thread, but I think there are a few sets with it included now.
 
You'd have to go back way further than, say, UT2k4: something extremely CPU bound and with a resolution threshold not higher than 1280*something. Unfortunately I don't have any SS available anymore to test with on the G80, but the only game that was fully playable on the former 7800 GTX with 16xS and 16xAF at its maximum available resolution (1280*960) was Unreal 1.
Super-sampling on the card is different, as AFAIK it's a uniform grid, consumes more memory, etc. Doing it programmatically with blending means that you need no more memory, can use whatever sampling pattern you want (rotated grid, quincunx, whatever), and the results scale exactly as you'd expect: 1/2 fps for 2x, 1/4 for 4x, etc. So if your game is running at 300fps, why not throw some 4x super-sampling in there and have it run at 75fps instead, since you're not even seeing the 300fps anyway (even on a CRT)...
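Roughly like this, just to make the idea concrete (renderScene() and blendIntoAccum() are made-up hooks standing in for whatever your engine/API actually does, not any real interface); only the jitter pattern and the 1/n cost scaling are the point:

[code]
#include <cstddef>

struct Jitter { float x, y; };   // sub-pixel offset, in pixel units

// One possible 4x rotated-grid pattern; quincunx etc. would slot in the same way.
static const Jitter kRGSS4[] = {
    { 0.125f,  0.375f},
    { 0.375f, -0.125f},
    {-0.125f, -0.375f},
    {-0.375f,  0.125f},
};

// Hypothetical engine hooks (stand-ins only, not a real API).
void renderScene(float jx, float jy) { /* offset the projection by (jx, jy) pixels and draw the scene */ }
void blendIntoAccum(float weight)    { /* add weight * back buffer into an accumulation buffer */ }

void renderSupersampledFrame()
{
    const std::size_t n = sizeof(kRGSS4) / sizeof(kRGSS4[0]);
    for (std::size_t i = 0; i < n; ++i) {
        renderScene(kRGSS4[i].x, kRGSS4[i].y);  // one full scene pass per sample...
        blendIntoAccum(1.0f / n);               // ...hence the 1/n frame-rate scaling
    }
}
[/code]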

Incidentally in D3D10 there's another option that I haven't pursued... theoretically one could duplicate and jitter the geometry in the GS and send it off to different render targets. This would cost a bit more memory, but avoid the CPU work necessary to re-render different frames, and AFAIK be almost as efficient as any hardware super-sampling method.

It's not the quality of Supersampling I'm questioning here, rather its usability due to its inevitable fillrate penalty.
True it's quite expensive, but as I mentioned it may be unavoidable to get good AA in the future. Furthermore as polygons get near pixel-sized it'll become almost free on the current architecture.

And when I say "temporal anti-aliasing", I don't mean ATI's alternating sampling mode... in fact I hate the fact that the term is now ruined... I mean taking samples from the game time domain as well so as to get some nice motion blur.

For the time being however, I agree with nAo: deferred shading + MSAA is how I'd design any future engine. I'd even go fully deferred lighting as it avoids the shader permutation combinatorial explosion and gets the rasterizer to cleverly solve the light contribution problem per-pixel :)
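To make the deferred shading vs. fully deferred lighting distinction a bit more concrete (the buffer layouts below are purely illustrative assumptions on my part, not a spec):

[code]
// Sketch of the two per-pixel buffer layouts, as I'd assume them.
struct DeferredShadingGBuffer {   // "fat" G-buffer: everything the material needs per pixel
    float albedo[3];
    float normal[3];
    float specularPower;
    float depth;
};

struct DeferredLightingBuffer {   // "thin" buffer: just enough to evaluate lights per pixel
    float normal[3];
    float depth;
};

// With the thin buffer, light volumes are rasterized over it and their
// contributions accumulated per-pixel; a second geometry pass then applies
// materials, so you never compile a shader per light-count/material combination.
[/code]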
 
I desperately need a new computer + a D3D10-compliant GPU + some spare time to test all these nice new ideas... too bad it's not gonna happen anytime soon. :)
 
Super-sampling on the card is different, as AFAIK it's a uniform grid, consumes more memory, etc. Doing it programmatically with blending means that you need no more memory, can use whatever sampling pattern you want (rotated grid, quincunx, whatever), and the results scale exactly as you'd expect: 1/2 fps for 2x, 1/4 for 4x, etc. So if your game is running at 300fps, why not throw some 4x super-sampling in there and have it run at 75fps instead, since you're not even seeing the 300fps anyway (even on a CRT)...

How many games are there really out there, released past let's say 2004, that can sustain an average of 300fps?

Despite that, as I said, the game would have to limit me to a specific resolution; otherwise I'd still go for the highest resolution possible with MSAA + AF and call it a day. If on a recent high-end system I had a monitor attached with a native resolution of 1024*768, then by all means I'd be so abysmally CPU limited that nothing other than Supersampling would make sense.

Why not, on the other hand, go for 2048*1536 with 8xMSAA + 8xTSAA and 8xAF instead, if it's possible? Multisampling was billed as high-resolution AA for a reason.

True it's quite expensive, but as I mentioned it may be unavoidable to get good AA in the future. Furthermore as polygons get near pixel-sized it'll become almost free on the current architecture.

How free exactly? I would imagine that Supersampling would render Multisampling redundant, if polygon interior data gets smaller than polygon edge data. If I assume they're on exactly the same level (and please correct any possible layman's brainfart here), then Supersampling is still filtering 2x more data than Multisampling.

I'm not saying that you're wrong on the course of things as you describe them, I just think that you're addressing the less foreseeable future.

And when I say "temporal anti-aliasing", I don't mean ATI's alternating sampling mode... in fact I hate the fact that the term is now ruined... I mean taking samples from the game time domain as well so as to get some nice motion blur.

I'm very well aware of what real temporal AA means, both in terms of output quality and in terms of performance.

For the time being however, I agree with nAo: deferred shading + MSAA is how I'd design any future engine. I'd even go fully deferred lighting as it avoids the shader permutation combinatorial explosion and gets the rasterizer to cleverly solve the light contribution problem per-pixel :)

Not that it's relevant to the debate (and some may excuse the intended pun), but every time I read the term "deferred" my heart starts pumping louder ;)
 
As was I, but ChrisRay has found that they've now come back. They are available via the registry or by using nHancer with the 100.xx leaked drivers; sorry, I don't recall the specific number and can't find mention of it in that thread, but I think there are a few sets with it included now.

I figure they're there in 100.87 XP, but I'll have to give them a shot.
 
How many games are there really out there, released past let's say 2004, that can sustain an average of 300fps?
I still play Myth II a fair bit, so I'm not the right person to ask ;)

How free exactly? I would imagine that Supersampling would render Multisampling redundant, if polygon interior data gets smaller than polygon edge data.
Hard to say, and actually it's less that it's "free" (unless we're talking pixel-sized polygons and 4xSSAA) than that it's a much more efficient use of the rendering pipeline. The problem with pixel-sized polygons is that we effectively run FOUR PROGRAMS (3 vp + 1 fp) on them. REYES is much more efficient at this kind of workload, but unless/until that gets implemented in GPU hardware, there will be a minimum polygon size needed to stay efficient that is definitely more than 4 pixels, and probably more like 16 or more.
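A quick back-of-the-envelope of that "FOUR PROGRAMS" point (my own crude counting model, nothing measured; it ignores quad/helper-pixel overhead entirely):

[code]
#include <cstdio>

// Rough model: each triangle runs 3 vertex programs, plus one fragment
// program per covered pixel; divide by covered pixels to see how badly
// tiny triangles amortize their vertex work.
double programsPerCoveredPixel(double pixelsPerTriangle)
{
    const double vertexRuns   = 3.0;
    const double fragmentRuns = pixelsPerTriangle;
    return (vertexRuns + fragmentRuns) / pixelsPerTriangle;
}

int main()
{
    std::printf("  1-pixel tri: %.2f programs per covered pixel\n", programsPerCoveredPixel(1.0));   // 4.00
    std::printf(" 16-pixel tri: %.2f programs per covered pixel\n", programsPerCoveredPixel(16.0));  // ~1.19
    std::printf("256-pixel tri: %.2f programs per covered pixel\n", programsPerCoveredPixel(256.0)); // ~1.01
    return 0;
}
[/code]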

I'm not saying that you're wrong on the course of things as you describe them, I just think that you're addressing the less foreseeable future.
Perhaps, but that's my job :)

I'm very well aware of what real temporal AA means, both in terms of output quality and in terms of performance.
That comment wasn't meant for you - it was to the person who said that they didn't like temporal AA, and related it to monitor refresh rates, etc.
 
Incidentally in D3D10 there's another option that I haven't pursued... theoretically one could duplicate and jitter the geometry in the GS and send it off to different render targets. This would cost a bit more memory, but avoid the CPU work necessary to re-render different frames, and AFAIK be almost as efficient as any hardware super-sampling method.
Can each render target have its own Z-buffer in D3D10? Or do you basically just use a single giant rendertarget and divide it up manually?
 
I'm perfectly happy with 30 fps for anything other than competition-grade FPS play. And even 20-25 fps is perfectly fine if it isn't an FPS.

So, that said, if a game runs anywhere from 80-120 fps depending on the type of game, I'd be happy to enable 4xRGSS and take the 1/4 hit to fps.

Regards,
SB
 
Can each render target have its own Z-buffer in D3D10? Or do you basically just use a single giant rendertarget and divide it up manually?
That's a good point, I believe it's only one. Thus you would have to divide it up and do the clipping yourself... actually wait, I believe there are N viewports :)
 
I'm perfectly happy with 30 fps for anything other than competition-grade FPS play. And even 20-25 fps is perfectly fine if it isn't an FPS.

So, that said, if a game runs anywhere from 80-120 fps depending on the type of game, I'd be happy to enable 4xRGSS and take the 1/4 hit to fps.

Regards,
SB

Trouble being that in most flight or racing sims I've tried in the past years (which don't need excessive framerates anyway), I'm having a hard time sustaining anything over a 30-40fps average at high resolution with all the bells and whistles enabled.
 
Trouble being that in most flight or racing sims I've tried in the past years (which don't need excessive framerates anyway), I'm having a hard time sustaining anything over a 30-40fps average at high resolution with all the bells and whistles enabled.

Really? What card are you running?

On the X1900XT512 I have here I'm kicking around 70-90 in GTR2, and a little more in rFactor. I do get the idea that I'm CPU limited there as well. I'm using 1280x1024 though, but a friend tells me that he's getting pretty similar results at 19x12 with the same card (though he does have a C2D @ 3.2 compared to my A64 @ 2.8).

I would have a hard time justifying 2x2SS if it took me down to 20 FPS, but 1x2 would be nice.

The next card I buy, all being reasonably equal from nV and AMD, will be the one with the supersampling options. Love it for racing games. I used 8xS on all my racing games when I had a 7800GT and very much missed it when I went to ATI.
 
I don't play many flight sims or racing games anymore, so it's hard for me to comment. Though I have been keeping an eye on them from time to time. And the thing I notice with them is...

Very, very few flight sims are GPU limited. In most cases I've seen, they are extremely CPU bound, especially if the flight sim even tries to attain some form of accurate flight dynamics. One of the more recent ones, for example, Flight Simulator X from MS, is massively CPU bound.

Cases like those are prime examples of when some form of SSAA would be greatly welcome.

Regards,
SB
 
And when I say "temporal anti-aliasing", I don't mean ATI's alternating sampling mode... in fact I hate the fact that the term is now ruined... I mean taking samples from the game time domain as well so as to get some nice motion blur.
A game isn't usually a movie where the scene is controlled and there are people who know how most of us will be watching and just how shutter time will affect our perception.

Temporal aliasing is a function of movement ... and movement is relative. Motion blur is an effect more than anti-aliasing.

PS. is the G80 still forced to do pixel shading in quads or could it work on micro-polygons in theory?
 
Temporal aliasing is a function of movement ... and movement is relative. Motion blur is an effect more than anti-aliasing.
That's only true if you're blending over a domain larger than your frame time. If you're sampling only from the time between "this frame and the next", it's a reasonably good approximation of what we see, and I can't think of a better definition of "temporal anti-aliasing". The "aliasing" in this case is the discrete frames that are being shown on the monitor rather than a continuous sequence of intermediate positions (as in real life).
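In sketch form, with renderSceneAt() and blendIntoAccum() again being hypothetical hooks rather than any real API:

[code]
// Hypothetical hooks, as in the earlier sketch (stand-ins, not a real API).
void renderSceneAt(double worldTime) { /* advance animation/camera to worldTime and draw */ }
void blendIntoAccum(float weight)    { /* add weight * back buffer into an accumulation buffer */ }

// Blend several samples taken strictly between this frame's time and the next;
// sampling over a wider window than one frame would smear rather than anti-alias.
void renderTemporallyAntialiasedFrame(double frameStart, double frameDuration, int samples)
{
    for (int i = 0; i < samples; ++i) {
        const double t = frameStart + frameDuration * (i + 0.5) / samples;
        renderSceneAt(t);
        blendIntoAccum(1.0f / samples);
    }
}
[/code]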
 
Things can move on the screen, but not on my retina.

PS. here is something I googled which explains it a little more eloquently.
 