Quincunx & FSAA: how has it changed between GeForces

Matt Burris said:
That depends on what you define as acceptable. For me, acceptable is not playing games at 800x600 or 1024x768x16, I like to stick with the modern day and not have to play games under 1024x768 or in 16-bit if I can help it. 800x600 on a 19" monitor just looks horrid, IMO.


Have you forgotten that was 2 years ago? At the time, Q3 at full IQ (16x12x32) was only close to 40 fps on a GF2, which is not playable in most people's eyes. Not that 800x600x32 at 4xFSAA was any more playable, but you get the idea. Neither method offered that magical 60 fps number. We did not have super-fast cards back then that allowed for max IQ and max speed at the same time. And which were the popular games back then? Most of them ran on old engines, and only a few supported advanced features.

Don't get me wrong though, I think it would be nice if we get FSAA quality on par with 3dfx's V5 RGSS implementation. I've wondered why neither ATI nor NVIDIA has done it yet after all this time, but I'm pretty certain they have a good enough reason to want to stick to OGSS or RGMS and whatever the next-gen cards will use.

There's no reason to pick OGSS over RGSS unless you're limited by hardware space (RG is usually more expensive to implement, costing more transistors).
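To see why RG is worth those extra transistors, here is an illustrative sketch (standard textbook sample patterns, not any vendor's exact offsets) of why a rotated grid beats an ordered grid at the same sample count on near-vertical and near-horizontal edges:

```python
# Illustrative sketch (standard textbook sample patterns, not any
# vendor's exact offsets): why a rotated grid (RG) beats an ordered
# grid (OG) at the same sample count for near-vertical/horizontal edges.

# 4x ordered grid: samples share x and y coordinates in pairs.
OG_4X = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
# 4x rotated grid: staggered so no two samples share an x or a y.
RG_4X = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]

def distinct_columns(pattern):
    """Distinct x positions = number of coverage steps a near-vertical
    edge can produce as it sweeps across the pixel."""
    return len({x for (x, _) in pattern})

print(distinct_columns(OG_4X))  # 2 -> only 2 intermediate edge shades
print(distinct_columns(RG_4X))  # 4 -> 4 shades from the same 4 samples
```

The rotated grid extracts twice as many gradient steps on the worst-case edge orientations from the same four samples, which is why it looks better at equal cost in fill rate.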
 
Ailuros said:
Mike,

Albeit I can detect a pinch of sarcasm there, I'm still not sure how to interpret the above quote. Do I have access to those special settings or not? If not then it's a moot point.

Yes it was a sarcastic remark :) I normally avoid such posts, but felt it was called for under the circumstances.

I certainly second your opinion for higher quality forms of antialiasing. But it may take a few more generations of graphics chipsets before we are satisfied since AA at the consumer level is still in its infancy.
 
Ailuros, I don't think you correctly identified who or what I was referring to and I endeavoured to keep my comment deliberately low-key. Nevertheless the peace seems to have been preserved.
I am curious about your own preference. I understood there was little benefit in going above anisotropic level 4 with FSAA. Perhaps I am casually misinterpreting this: http://www.nvnews.net/reviews/Leadtek_ti4400/page_4.shtml
I wonder if many people really care about FSAA. Many used to underestimate it, and it was a little underwhelming given the drastic performance hit on the GF2 generation. The "fanboys" chasing framerates would have been unlikely to acquire a V5500. I don't know how much people have moved on since.
 
Doomtrooper said:
I thought Shark stated screen shots are not picking up what is actually being seen, like 3DFX's post filter. As I don't own a Geforce 4 but have had experience with a Geforce 3, I do know the blurring was VERY noticeable...

Yes, he mentioned it in an earlier post and has seen it occur using HypersnapDX, CTRL-PRNTSCN, FRAPS, and even the game's own internal screenshot function. I am doing some investigation and hope to have some examples from Tribes posted this evening.
 
Above said:
I wonder if many people really care about FSAA.

We may never be able to answer that question, but there are a couple of lengthy forum threads at nV News that discuss the subject. There are even replies from GeForce3 and GeForce4 users who couldn't care less about AA.

How Important Is Antialiasing To You?

http://www.nvnews.net/forum/showthread.php?s=&threadid=12926


What Type Of Antialiasing Are You Using?

http://www.nvnews.net/forum/showthread.php?s=&threadid=13156


What Do You Like With A GeForce4 - Quincunx Or 2X AA?

http://www.nvnews.net/forum/showthread.php?s=&threadid=14392


Ansistropy vs Antialiasing Debate

http://www.nvnews.net/forum/showthread.php?s=&threadid=3525


Geforce3 Antialiasing Disappointing

http://www.nvnews.net/forum/showthread.php?s=&threadid=2457


The number of people who really care about AA quality is probably extremely low. While AA is gaining in popularity, think about how many PC owners have no idea of what AA is or how to enable it.
 
Above said:
Ailuros, I don't think you correctly identified who or what I was referring to and I endeavoured to keep my comment deliberately low-key. Nevertheless the peace seems to have been preserved.
I am curious about your own preference. I understood there was little benefit in going above anisotropic level 4 with FSAA. Perhaps I am casually misinterpreting this: http://www.nvnews.net/reviews/Leadtek_ti4400/page_4.shtml
I wonder if many people really care about FSAA. Many used to underestimate it, and it was a little underwhelming given the drastic performance hit on the GF2 generation. The "fanboys" chasing framerates would have been unlikely to acquire a V5500. I don't know how much people have moved on since.

I don't think you underestimated or misunderstood the nvnews article. In the latest sets of drivers the differences in IQ between 32tap and 64tap anisotropic (especially in D3D) are either minimal or non-existent, while performance still drops with the latter.

What you've missed in my former reply is that I claimed a combination of 2xRGMS/16tap (or level 2, if you prefer)/bilinear is, in quality (at least for me), somewhere in between 2xRGSS and 2xOGSS (both with bilinear).

Since there are too many differences in implementation, it's hard to make fair comparisons. Overall, though, I doubt any supersampling algorithm can filter textures more or better than plain 16tap anisotropic filtering does; at least the output doesn't show anything of the kind.

Try a simple experiment (if you still have a card that does SSAA): choose trilinear/no AA and capture a screenshot, then compare it with a 2x SSAA/bilinear screenshot from the same spot. Normally the latter comes out superior.

Now, when you don't use FSAA at all and the card supports anisotropy, comparing plain trilinear with 16tap anisotropic filtering, the latter gives slightly better output. Upping the ante to 32tap makes the output markedly better than trilinear and the difference immediately identifiable.

Implementations differ so much between cards that comparisons are not only hard to make but can end up unfair to either method.

Concerning past-generation cards and the performance hit with supersampling, there was always the KYRO2: although a budget card using just simple OGSS, its performance drop (being a tiler) was nowhere near that of its straight competitors. Hence my comment that if it has to be supersampling these days, then at least on a high-end tiler. Although I still don't see a reason to fall back to SSAA rather than use an advanced adaptive MSAA algorithm on a tiler, get FSAA for free, and take a hit only for anisotropic filtering.

Plus, with recent cards that employ multisampling, there is always the option of using low-level aniso with 2xRGMS and choosing a much higher resolution, depending on the game of course. Aliasing is not eliminated at high resolutions, but you get less of it and the jaggies are smaller. With the above enabled at high resolutions, I can't find much to object to.

I'm still expecting you to answer why 3dfx planned to implement RGMS into its products past the VSA-100.

http://rashly3dfx.tripod.com/products/rampage.html
 
The exaggerated reputation of the Kyro chips' FSAA was probably fostered by Anandtech testing Serious Sam with poor methodology. A more reliable appraisal is on gamebasement.com: at the very lowest resolutions the FSAA was applied efficiently, but by XGA (1024x768) the advantage had tailed off.
 
Above said:
The exaggerated reputation of the Kyro chips' FSAA was probably fostered by Anandtech testing Serious Sam with poor methodology. A more reliable appraisal is on gamebasement.com: at the very lowest resolutions the FSAA was applied efficiently, but by XGA (1024x768) the advantage had tailed off.


A deferred renderer like the Kyro will always have an advantage over cards like the GF2 when performing supersampling: it takes less of a percentage hit because the downsampling is done before the frame buffer write, as opposed to in the frame buffer. For example, using 4x SS at 800x600, a GF2 deals with a 1600x1200 framebuffer which is then scaled down to 800x600, while the Kyro only ever uses an 800x600 framebuffer in the first place, thus saving bandwidth. It doesn't mean the Kyro II will always be faster when performing FSAA, but it does mean it takes less of a hit.
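A back-of-the-envelope sketch of that bandwidth argument (my own illustrative numbers, not from the thread):

```python
# Back-of-the-envelope sketch (my own illustrative numbers): external
# framebuffer footprint for 4x supersampling on an immediate-mode
# renderer (GF2-style) versus a tile-based deferred renderer (Kyro-style).

BYTES_PER_PIXEL = 4  # 32-bit color

def imr_framebuffer_bytes(width, height, ss_factor):
    # An IMR renders into an enlarged buffer (1600x1200 for 4x SS at
    # 800x600) and downsamples from it afterwards.
    return width * height * ss_factor * BYTES_PER_PIXEL

def tiler_framebuffer_bytes(width, height, _ss_factor):
    # A tiler resolves the extra samples on-chip, per tile, and writes
    # only the final-resolution image to external memory.
    return width * height * BYTES_PER_PIXEL

imr = imr_framebuffer_bytes(800, 600, 4)      # 7,680,000 bytes
tiler = tiler_framebuffer_bytes(800, 600, 4)  # 1,920,000 bytes
print(imr // tiler)  # 4x the external footprint on the IMR
```

This only counts framebuffer footprint, not texture traffic, but it shows why the percentage hit scales so differently between the two architectures.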
 
I'm still expecting you to answer why 3dfx planned to implement RGMS into its products past the VSA-100.

I'll answer that!!

Could it be because they wanted to mimic what you see in higher-end workstations.. being the combination of multisampled edges and supersampled textures? :)

This would be, in effect, the same thing many SGIs perform. All textures are supersampled (2x2, 4x4 or even 8x8) at the mipmap/perspective-corrected level and then applied. A simple multisampling algorithm then handles the rest. What you get is superior performance and the same or better image quality. Throw in anisotropic filtering to boot and you lose any last traces of texture aliasing.

Given the rumored architecture and pipelines of said chips, this wouldn't be too much of a stretch given the focus of these rumored specs.

It's unfair to speculate that they were just planning to do the NVIDIA maneuver and simply slap in an RGMS mode and call it a day, especially after the V5.
 
Above said:
The exaggerated reputation of the Kyro chips' FSAA was probably fostered by Anandtech testing Serious Sam with poor methodology. A more reliable appraisal is on gamebasement.com: at the very lowest resolutions the FSAA was applied efficiently, but by XGA (1024x768) the advantage had tailed off.

Not that it's of any relevance here, since I used the K2 as a case example of how deferred renderers handle FSAA overall, but you may want to reread Ben Skywalker's conclusions, especially concerning FSAA.

Is that all that hit your eye from my former reply?
 
Shark,

Old discussion revisited, hm? Yes, they would have used a multisampling accumulation buffer, yet the presentations I saw when they introduced their M-buffer back at GDC quite matched that description:

'M-Buffer' (i.e. Multisample Buffer, which was used on R4 and also had T-Buffer support): the M-Buffer allowed 4 samples per clock to be generated at NO pixel-rate loss, unlike the 4x hit taken by the VSA-100 and T-Buffer. The downside is that MS takes the same texel coordinates and jitters them, so there's no texture clean-up. 3dfx implemented advanced anisotropic filtering to remedy this once and for all.

A slight difference: the T-buffer would have still been present; no idea, though, whether the drivers would have allowed access to it.
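The texture-quality gap described in that quote can be sketched in a few lines (illustrative only, not 3dfx's actual pipeline): supersampling shades every sub-sample at its own coordinates, while multisampling shades the pixel once and reuses that single color for every covered sample, so it only cleans up edges.

```python
# Illustrative sketch, not 3dfx's actual pipeline: supersampling shades
# each sub-sample at its own texture coordinates; multisampling shades
# the pixel center once and lets coverage decide how much of that one
# color reaches the final pixel (edge AA only, no texture clean-up).

def supersample_pixel(shade, offsets):
    # One shade (texture lookup) per sample -> in-pixel texture detail
    # gets averaged into the result.
    return sum(shade(dx, dy) for dx, dy in offsets) / len(offsets)

def multisample_pixel(shade, offsets, covered):
    # One shade for the whole pixel, regardless of sample count.
    color = shade(0.0, 0.0)  # shaded once, at the pixel center
    return color * sum(covered) / len(covered)

OFFSETS = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]
bumpy = lambda dx, dy: dx * dx + dy * dy  # a "texture" varying in-pixel

print(supersample_pixel(bumpy, OFFSETS))         # 0.125: detail averaged in
print(multisample_pixel(bumpy, OFFSETS, [1]*4))  # 0.0: center sample only
```

With full coverage the two modes agree only when the texture is flat across the pixel; any in-pixel variation is filtered by SS but invisible to MS, which is exactly why anisotropic filtering was needed alongside RGMS.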
 
Sharkfood said:
One of the biggest changes I've seen in the Quincunx method is one that appears to be related to using a post-filter.

What this does is make it impossible to capture Quincunx in screenshots: what you get is not what the final result looks like. That has led to a rash of websites publishing Quincunx screenshots with "See? It's not blurry!" when the screenshots only have the multisampling effect applied and not the post-filter blur.

Shark,

I've spent the last two hours comparing Quincunx image quality in Tribes and Quake 3. Although I didn't post any screenshots, I've been able to confirm the findings you mention above. I've made a mistake assuming that some of the screenshots I took in my GeForce4 Ti 4200 preview illustrated what Quincunx actually looks like on the GeForce4 and will update the article.

This is unfortunate, but I would like to run another test this weekend. I will ask my younger son to enable various forms of antialiasing, along with no antialiasing. Without knowing what he selected, I will play a few games and try to determine the setting he enabled. What I hope to accomplish is to see if I can consistently determine when Quincunx is being used on the GeForce4.

On the other hand, I have noticed that when run under Direct3D, the IL-2 Sturmovik screenshots are representative of Quincunx image quality.

No AA - http://www.nvnews.net/previews/geforce4_ti_4200/images/il2_d3d_noaa.shtml

Quincunx AA - http://www.nvnews.net/previews/geforce4_ti_4200/images/il2_d3d_qcaa.shtml

If you have time, I would appreciate it if you could run a similar test (No AA vs Quincunx) using a Direct3D based game and report your findings. Thanks for setting me straight.

Edit: I would also like to apologize to Nagorak for my sarcastic remark in an earlier post in this topic. Had I known then what I do now, I would have used a different approach. But I still think he went overboard with his comments on Quincunx (as it pertains to the GeForce4) :)
 
I have noticed that when run under Direct3D, the IL-2 Sturmovik screenshots are representative of Quincunx image quality

I wish I had this game to test! It would be an exception from most of my testing.

If you have time, I would appreciate it if you could run a similar test (No AA vs Quincunx) using a Direct3D based game and report your findings

I notice this in EQ a lot, and it's a DX8.0 game. Truly, I have yet to find a screenshot I can produce that accurately depicts Quincunx in almost any game tried, albeit I haven't taken hundreds of screenshots, just dozens.

EQ illustrates this quite well, again, as it has a nice text box that displays quite a bit of blurring under Quincunx but none at all in screenshots.

No AA:
eq-noaa.txt

eq-noaa-zoom.txt


Quincunx:
eq-quin.txt

eq-quin-zoom.txt


Visually, the difference is night and day, not only with the text but also with texture blending. Screenshots do not depict the blending at all; the shots above show identical results.

One thing I have noticed with many of the website screenshots: they depict some amount of blurring in their Quincunx shots, but from my testing this is a strange blurring that occurs with anisotropic filtering when combined with *any* form of AA. Example:

2xRGMS + AF:
eq-aniso.txt

eq-aniso-zoom.txt


As I have scoured the web, all the Quincunx depictions show anisotropic filtering "noise" as a depiction of the "new" Quincunx, yet the actual on-screen blurring that occurs with Quincunx I have yet to see in a single screenshot. It's not quite as subtle as the anisotropic blurring and is readily visible, at least on my monitor.

I think if NVIDIA implemented a post-filter, this would be the wisest choice for them; it would also explain not only the performance improvements but also the increase in quality. Post-filters are truly "hardware" methods versus algorithmic/GPU blends of the same, and I'm sure much better neighbor blending can be implemented with fewer resources this way.
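For reference, a minimal sketch of the blend such a filter would apply, assuming the commonly cited Quincunx tap weights (1/2 for the pixel's own sample, 1/8 for each of its four diagonal neighbors; NVIDIA's exact resolve isn't public):

```python
# Minimal sketch of a Quincunx-style blend, assuming the commonly cited
# tap weights: 1/2 for the pixel's own sample and 1/8 for each of the
# four diagonal neighbors. NVIDIA's exact resolve is not public.

def quincunx_blend(img):
    """img: 2D list of grayscale values in [0, 1]; returns the 5-tap blend."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            corners = 0.0
            for dy in (-1, 1):
                for dx in (-1, 1):
                    # clamp to the image border
                    cy = min(max(y + dy, 0), h - 1)
                    cx = min(max(x + dx, 0), w - 1)
                    corners += img[cy][cx]
            out[y][x] = 0.5 * img[y][x] + 0.125 * corners
    return out

# A lone white pixel gets smeared into its diagonal neighbors -- the
# kind of blur that shows on screen but not in a pre-filter screenshot.
dot = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(quincunx_blend(dot)[1][1])  # 0.5
```

If the filter really runs at scan-out, every screenshot path that reads the framebuffer would miss this step entirely, which fits the observations above.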

I just found it rather odd that everywhere I looked for depictions of Quincunx, not a single one matched my experience with the mode. I actually use Quincunx on my GF3 for games like Quake2 and Half-Life, as those games have such pixelated/grainy textures that it actually looks very nice (especially Quake2!). But with the GF4 I found no way to illustrate its effect, and later discovered this was the case with everything else I've tried to date. Maybe I can get IL2 and see whether what is shown in those shots is anisotropic filtering blur or truly Quincunx blurring in the framebuffer; but from how it looks to me, if other Direct3D games can be relied on for similar IQ, it doesn't look like the neighbor-blend I'm used to seeing with Quincunx, to *my* eyes.

Cheers,
-Shark
 
John Reynolds said:
Neverwinter Nights settings allow for switching AA modes in-game and is therefore a nice way to easily discern differences. Unfortunately, since it's an OpenGL title there's no 4xs, just 4x with the 9-tap filter. Anyways, Quincunx is again very blurry and this is definitely quite apparent.

I just installed Neverwinter and I definitely agree with you.
Quincunx is so blurry that I would never consider using it in that game, which rocks btw.

On the other hand, it didn't look that bad in Quake 3, so maybe it's usable depending on the game you play (I bought my GF4 Ti4400 a couple of days ago, so I haven't had time to try that many games yet. Btw, I switched from a TNT 1, so it was an emergency :)).

Not that I think I'll ever use it, though some people might prefer the softer, blurrier look.
 
Ailuros said:
Not to be picky here, but I´d like to see at least rotated grid SSAA on the next batch of ATI cards this time around.

Well, the 8500 was supposed to have a pseudo-random AA :-?

The closest it's got is 2xQ in D3D in the 7206 drivers, which, from what I can tell from screenshots, is RGSS but able to be rotated 90 degrees every pixel or so.
 
About quincunx FSAA, I've noticed something rather odd when using 2 monitors.
I'm using a GF4 Ti4200 128MB, and leaving the 2nd monitor showing Windows while running games only very marginally affects performance. When using Quincunx, though, that changes drastically: with the 2nd monitor off I get, for example, 65 fps; with it on, it's like 35. Without FSAA, or with 4x FSAA, the difference between the 2nd monitor being on or off is just a couple of frames.
Is the 2nd ramdac used as a postfilter for quincunx?? Just a wild guess...
 
2x AA is superior to Quincunx. I don't understand the Quincunx debate: 2x is better and faster than Quincunx. Quincunx is nothing but a marketing hoax, created to support the legend that the GF3 is 7 times faster than the GTS. (They benched Quincunx vs. 2x2 supersampling.)
 
It's a shame we have to scrabble around trying to work out how exactly NVIDIA is implementing things, given the apparent lack of information. Are we really so in the dark, or just not looking in the right places?
Interesting about Quincunx and dual monitors.
 