New D3D FSAA Viewer

Say, how was this found out exactly? Was this a discovery resulting from poking around in the registry, or did someone at or near ATI take pity on the horde jonesing for its rumor fix and slip us a bit of the pixelsmack?

Coincidence? Since when has anything been a coincidence in this business? ;)
 
3dilettante said:
Say, how was this found out exactly? Was this a discovery resulting from poking around in the registry, or did someone at or near ATI take pity on the horde jonesing for its rumor fix and slip us a bit of the pixelsmack?

Coincidence? Since when has anything been a coincidence in this business? ;)
I was wondering the same thing... somebody probably disassembled the drivers (or a developer with developer drivers got the reg key and leaked it).
 
Does it solely alternate patterns (going from one 6x pattern to another 6x pattern), or does it also alternate the number of samples?
 
Geeforcer said:
Does it solely alternate patterns (going from one 6x pattern to another 6x pattern), or does it also alternate the number of samples?
I'm sure it solely alternates patterns. Alternating the number of samples wouldn't bring much benefit.

Oh, and on performance, there should be no noticeable performance hit. The changing of the algorithm only needs to be done once per frame. Uploading the new sample information once per frame cannot be very expensive (i.e. it should be on the order of the expense of a state change: something that will not impact performance once per frame).
 
Long ago on B3D (2000?) one of the members wrote a demo to show this kind of "temporal" AA, just rendering with the CPU and high framerates a simple scene, and using the CPU to jitter subpixel geometry positions. Can anyone find it? It was a CPU-only simulation of 3dfx temporal antialiasing.
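The old demo's approach can be sketched in a few lines. This is a hypothetical CPU-only reconstruction (the function names and the 1D edge setup are my own illustration, not the original demo's code): jitter the sub-pixel sample position each frame and let averaging over frames approximate the coverage a supersampled frame would compute in one go.

```python
# Hypothetical CPU-only sketch of "temporal" AA on a 1D edge:
# one jittered sample per frame, integrated over several frames.

def coverage(edge, x):
    """1.0 if the sample point x falls inside the edge, else 0.0."""
    return 1.0 if x < edge else 0.0

def static_frame(edge, px):
    # One frame, one centered sample per pixel: hard-aliased result.
    return coverage(edge, px + 0.5)

def temporal_frames(edge, px, jitters):
    # One jittered sample per frame; the eye (or an accumulation
    # buffer) integrates the frames into a smoother result.
    return sum(coverage(edge, px + j) for j in jitters) / len(jitters)

edge = 3.3                              # edge crosses pixel 3 at 30% coverage
jitters = [0.125, 0.375, 0.625, 0.875]  # 4 frames, 4 sub-pixel offsets

aliased  = static_frame(edge, 3)        # 0.0 or 1.0, never in between
temporal = temporal_frames(edge, 3, jitters)  # 0.25, close to the true 0.3
```

This also makes the thread's caveat concrete: the result only looks right if the frames arrive fast and steadily enough for the viewer to integrate them.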

I have two problems with this method:

1) it doesn't yield any improvement unless you have really high *stable* refresh rates. I actually think it's worse IQ than simply wasting fillrate on supersampling and halving the framerate. And 2 temporal samples is woefully inadequate.

2) it's not real temporal antialiasing in the sense that term has been used for a long time: approximating exposure. It requires the eye to do the integration, and how's that gonna work on my LCD? Real temporal AA would smooth out unstable framerates and provide better realism if your ability to simulate the world and render it were far higher than your display device's refresh rate.


Where it makes sense is this: inter-frame jittering adds randomness/noise to the image, which breaks up the monotony of the sample pattern, thus making aliasing regularities harder for us to detect.

Thus, I don't see this as temporal AA. I see it as adding noise.
 
Chalnoth said:
Geeforcer said:
Does it solely alternate patterns (going from one 6x pattern to another 6x pattern), or does it also alternate the number of samples?
I'm sure it solely alternates patterns. Alternating the number of samples wouldn't bring much benefit.

Oh, and on performance, there should be no noticeable performance hit...

But what would then be the advantage of 6xT (as in, a number of alternating 6x patterns) over regular, single-pattern 6x? Since I can't test it out myself, I presume the quality of 6xT is better than regular 6x while the performance is about the same.

BTW, this is one of the things that NV40 could not do due to the lack of programmable sample pattern.
 
Well, it could approximate it using two techniques. NV40 has programmable sample positions in a rudimentary sense: you can select between ordered grid and rotated grid. But more generally, you could offset geometry between frames, as was done in the old B3D demo I mentioned (but can't find anymore).

It's not as flexible or as "good" as having fully programmable sample positions.
 
Actually Nvidia could get pretty close to this by simply changing the pixel center between frames, say by 1/8 of a pixel diagonally. The sample pattern would effectively be more regular, and there is a potential issue with text rendering, but the overall effect would be pretty close.
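The shifted-pixel-center idea can be sketched numerically. A minimal sketch, assuming a fixed rotated-grid 4x pattern (the coordinates below are illustrative, not NV40's actual sample positions): shifting the whole pattern 1/8 px diagonally on alternate frames doubles the number of distinct effective sample positions the eye integrates.

```python
# Illustrative rotated-grid 4x pattern (sub-pixel coordinates in [0,1)).
BASE_4X = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

def frame_pattern(frame_index, shift=0.125):
    # Even frames use the base pattern; odd frames shift the whole
    # pattern 1/8 px diagonally (wrapping within the pixel).
    dx = dy = shift if frame_index % 2 else 0.0
    return [((x + dx) % 1.0, (y + dy) % 1.0) for x, y in BASE_4X]

even = frame_pattern(0)
odd  = frame_pattern(1)
# Over two frames the viewer integrates 8 distinct positions instead of 4.
effective = set(even) | set(odd)
```

The same wrapping trick works for any fixed-function pattern, since only the pixel center moves, not the per-sample offsets.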
 
Hmm... ran some benchmarks but it looks like it overrides the control panel and forces v-sync when enabled. It does look as if there's a slight performance hit because the synched framerate plummets more often and pulls down the reported average when using 12xT compared with 6x.

Haven't tried OGL yet - or the *even higher* temporal modes yet (3 sample positions!).

MuFu.
 
ERP said:
Actually Nvidia could get pretty close to this by simply changing the pixel center between frames, say by 1/8 of a pixel diagonally. The sample pattern would effectively be more regular, and there is a potential issue with text rendering, but the overall effect would be pretty close.

But ATi could radically change sampling patterns between frames (as MuFu's shots show).

What I'd like to know (where did MuFu go) is how much better does "4xT" look than 4x?
 
MuFu said:
Hmm... ran some benchmarks but it looks like it overrides the control panel and forces v-sync when enabled. It does look as if there's a slight performance hit because the synched framerate plummets more often and pulls down the reported average when using 12xT compared with 6x.

Haven't tried OGL yet - or the *even higher* temporal modes yet (3 sample positions!).

MuFu.

Why 12xT? You are not really taking more than 6 samples in any frame, are you?
 
Geeforcer said:
ERP said:
Actually Nvidia could get pretty close to this by simply changing the pixel center between frames, say by 1/8 of a pixel diagonally. The sample pattern would effectively be more regular, and there is a potential issue with text rendering, but the overall effect would be pretty close.

But ATi could radically change sampling patterns between frames (as MuFu's shots show).

What I'd like to know (where did MuFu go) is how much better does "4xT" look than 4x?

Actually the predominant issue with this technique is that it will have zero effect on moving portions of the image, and any shift of much more than 1 pixel will dwarf any change in the sampling pattern.

I keep thinking that this doesn't really matter, since moving pixels are less of an issue, but the counterpoint is that when playing a game, how many pixels are really still?
 
ERP said:
Geeforcer said:
ERP said:
Actually Nvidia could get pretty close to this by simply changing the pixel center between frames, say by 1/8 of a pixel diagonally. The sample pattern would effectively be more regular, and there is a potential issue with text rendering, but the overall effect would be pretty close.

But ATi could radically change sampling patterns between frames (as MuFu's shots show).

What I'd like to know (where did MuFu go) is how much better does "4xT" look than 4x?

Actually the predominant issue with this technique is that it will have zero effect on moving portions of the image, and any shift of much more than 1 pixel will dwarf any change in the sampling pattern.

I keep thinking that this doesn't really matter, since moving pixels are less of an issue, but the counterpoint is that when playing a game, how many pixels are really still?

I have already noticed this in UT2004. Using 4xT you can clearly see a degree of aliasing along edges when moving which is not present when stationary.
 
I am impressed. This is the kind of thing I like to see from IHVs, just like Nvidia enabling 6xS, 8xS, etc. for GeForce 4 users with registry hacks.
 
ChrisRay said:
I am impressed. This is the kind of thing I like to see from IHVs, just like Nvidia enabling 6xS, 8xS, etc. for GeForce 4 users with registry hacks.

Can't say anything about ATI, but I was certainly NOT impressed with the fact that for months you were forced to rely on registry hacks to select your AF, AA, and texture compression settings on Nvidia cards.
 
So OpenGL Guy proposed it internally in April 2003. Then, when someone else proposed it here two months later, he pooh-poohed it with an "I'm not so sure"?
 
Geeforcer said:
ChrisRay said:
I am impressed. This is the kind of thing I like to see from IHVs, just like Nvidia enabling 6xS, 8xS, etc. for GeForce 4 users with registry hacks.

Can't say anything about ATI, but I was certainly NOT impressed with the fact that for months you were forced to rely on registry hacks to select your AF, AA, and texture compression settings on Nvidia cards.

When was this, and with what drivers? I've never heard of it. And for which cards? I never had trouble controlling features on the GeForce 4 I had (may it RIP).

6xS, 8xS, 8x, 12x, etc. have never been supported on GeForce 4 cards, so you had to use something like atuner or manually edit a registry mode in Riva Tuner.

But I don't remember anything about texture compression.
 
Note on quality:
Temporally-jittered AA could allow better image quality than static AA.

But I think the main problem with the patterns shown is that they are not sparsely sampled. Given the small number of samples, it would be much better to switch between multiple sparse patterns than between less optimal ones (there are many sparse 6x patterns to choose from, for instance). The use of less optimal patterns is likely the reason MuFu noticed more aliasing when moving.

Note that you would need high framerates for decent quality, and you'd probably want every alternate sample pattern chosen to produce a similar image (it's not necessary that every alternate pattern produce the same image: they just need similar angle dependencies).
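The "sparse" criterion above can be checked mechanically. A minimal sketch, assuming the usual definition (each sample occupies a unique row and a unique column of an N×N sub-pixel grid, an n-rooks arrangement, so edges at any angle get N distinct coverage steps); both 6x patterns below are hypothetical, not ATI's actual positions:

```python
# A pattern on an n x n sub-pixel grid is "sparse" when no two samples
# share a row or a column (n-rooks arrangement).

def is_sparse(pattern, n):
    cols = [p[0] for p in pattern]
    rows = [p[1] for p in pattern]
    return len(set(cols)) == n and len(set(rows)) == n

# Hypothetical sparse 6x pattern: all 6 columns and all 6 rows distinct.
sparse_6x = [(0, 2), (1, 5), (2, 0), (3, 3), (4, 1), (5, 4)]
# Hypothetical grid-like 6x pattern: columns and rows repeat, so a
# near-horizontal or near-vertical edge sees far fewer coverage steps.
grid_6x = [(0, 0), (2, 0), (4, 0), (0, 3), (2, 3), (4, 3)]
```

Switching only between patterns that pass this test would keep the per-frame quality high in every frame, rather than relying on the temporal average to hide a weak pattern.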

On performance:
Any drop in performance from this is disturbing, and would seem to indicate a massive inefficiency stemming from the hardware only being meant to change AA sample patterns outside of 3D rendering. Consider, for example, that the driver resets the display every time a D3D setting is changed: there may be much that needs to be reset every time the sample pattern changes that normally wouldn't be reset during a game.
 