BenSkywalker said:
That sounds like you are simply applying a weighted filter. That's what I was saying - the subpixel position of each sample can determine the weight applied to that sample.
I'm not talking about a fixed weighting such as 50%/12.5% à la Quincunx, I'm talking about utilizing a weighted sampling based on the relative intensity of focus in relation to the camera angle.
For each super sample, a set of texels will be chosen, each of which will be weighted according to the texture's orientation relative to the camera. The 'fixed weighting' is only applied to the collection of all supersamples, and so individual texels will have dynamic weights.
Anyway, there's no need to go mucking about with finding individual vectors - the overall projection of the scene at a higher rendering resolution will take care of it all.
I'm not asking if you think it is good enough, I'm asking if you are saying it wouldn't be any better.
From what I understand of what you are asking, I don't think it is going to be any (or significantly) different. By the sound of it, you are only re-ordering the maths.
I'm possibly being a bit thick but what exactly are you asking?
Do you think using the current isotropic filtering implementation as it relates to AA would be equal to or greater than a sampling implementation that was weighted based on angle in relation to the camera?
I still think there is some confusion here. When a screen pixel is projected into texture space, its footprint becomes distorted into an ellipse-like shape (if you consider the pixel to be circular) or an arbitrary quad (if originally square). When people refer to isotropic filtering of textures, they mean that the algorithm approximates that footprint by a weighted sum of four (bilinear) or eight (trilinear) axis-aligned square sets of texels.
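To make the 'weighted sum of four texels' case concrete, here's a minimal bilinear lookup in numpy; edge clamping is just one wrap mode picked for the sketch, not the only option:

```python
import numpy as np

def bilinear_sample(tex, u, v):
    """Weighted sum of the four texels surrounding (u, v).

    tex: 2D array of texels; (u, v) in texel coordinates.
    This is the axis-aligned square footprint described above;
    it ignores how the pixel's true footprint is shaped in texture space.
    """
    h, w = tex.shape
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    fx, fy = u - x0, v - y0
    # Clamp neighbours to the texture edge (one common wrap mode).
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    x0, y0 = min(max(x0, 0), w - 1), min(max(y0, 0), h - 1)
    return ((1 - fx) * (1 - fy) * tex[y0, x0] +
            fx * (1 - fy) * tex[y0, x1] +
            (1 - fx) * fy * tex[y1, x0] +
            fx * fy * tex[y1, x1])
```

Trilinear is then just two of these on adjacent MIP levels, blended by the level fraction.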
Anisotropic texture filtering (whose quality is implementation dependent) just tries to model that footprint a bit better. One typical approach is to dice it into a smaller set of squares and use more bilinear/trilinear samples. FWIW, in the original texturing paper, by Catmull, the system exactly sampled the set of texels. Unfortunately this is too slow, hence all the approximations (e.g. Williams' MIP mapping/trilinear and others) that have followed.
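A rough sketch of the 'dice it into more bilinear samples' idea. The footprint axis (du, dv) and the tap count are assumptions here; real hardware derives the axis from screen-to-texture derivatives and also selects a MIP level per tap:

```python
import numpy as np

def _bilinear(tex, u, v):
    # Compact bilinear lookup with edge clamping (helper for the sketch).
    h, w = tex.shape
    x0 = min(max(int(np.floor(u)), 0), w - 1)
    y0 = min(max(int(np.floor(v)), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - np.floor(u), v - np.floor(v)
    top = (1 - fx) * tex[y0, x0] + fx * tex[y0, x1]
    bot = (1 - fx) * tex[y1, x0] + fx * tex[y1, x1]
    return (1 - fy) * top + fy * bot

def anisotropic_sample(tex, u, v, du, dv, taps=4):
    """Approximate an elongated footprint by spreading `taps` bilinear
    samples along its major axis (du, dv) in texel space and averaging."""
    acc = 0.0
    for i in range(taps):
        t = (i + 0.5) / taps - 0.5  # tap offsets centred on the footprint
        acc += _bilinear(tex, u + t * du, v + t * dv)
    return acc / taps
```

On a texture that varies linearly along the axis, the symmetric taps average back to the centre value, which is the behaviour you'd want.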
With supersampling, OTOH, the original screen pixel may be generated in an isotropic space, but each super sample will be individually projected into the texture and so the combination of the smaller samples will automatically approximate the arbitrary footprint of the entire pixel. In addition, it will take into account obscuring due to intersections of objects or occlusion.
There is nothing to stop you combining SS with AF-Texture-filtering.
In summary, I think what you are asking will already be occurring with (correct) SS.
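A toy illustration of why projecting each sub-sample individually captures the pixel's footprint automatically. Here `project` is a hypothetical stand-in for the full screen-to-texture mapping, and nearest-texel lookups keep the sketch short:

```python
import numpy as np

def supersampled_pixel(tex, project, grid=4):
    """Average a grid x grid set of sub-samples inside one screen pixel.

    project(sx, sy) maps a sub-sample position (in [0, 1) within the
    pixel) to texel coordinates. Because every sub-sample is projected
    on its own, the average follows the pixel's true, possibly
    anisotropic, footprint in texture space without any special casing.
    """
    acc = 0.0
    for j in range(grid):
        for i in range(grid):
            sx = (i + 0.5) / grid
            sy = (j + 0.5) / grid
            u, v = project(sx, sy)
            x = min(max(int(u), 0), tex.shape[1] - 1)  # nearest texel
            y = min(max(int(v), 0), tex.shape[0] - 1)
            acc += tex[y, x]
    return acc / (grid * grid)
```

The same loop also handles occlusion for free in a real renderer, since each sub-sample carries its own depth test, which is the point being made above.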
Tahir said:
But Gamma Corrected AA only works on the edges rather than the whole screen, so would it matter that much from monitor to monitor? I can't say I have noticed the differences on several different monitors (old and new).
It's important away from the edges as well: if you have a fast-changing texture or lighting, it's better to downsample in the linear domain.
As for differences in monitors, it's not that important. Even a rough approximation to the nonlinear gamma is a lot better than none at all!
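A quick sketch of what 'downsample in the linear domain' means; gamma 2.2 here is just the usual rough CRT approximation, not any particular monitor's response:

```python
import numpy as np

def downsample_gamma_correct(pixels, gamma=2.2):
    """Average gamma-encoded values in the linear domain.

    Decode to linear light, average, then re-encode. A naive average
    of the encoded values darkens high-contrast edges, because the
    encoding is nonlinear.
    """
    linear = np.power(pixels, gamma)             # decode to linear light
    return np.power(linear.mean(), 1.0 / gamma)  # re-encode the average
```

For a black/white pair [0.0, 1.0] this gives roughly 0.73 rather than the naive 0.5, which is why AA coverage looks wrong without the correction.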
But I would definitely think it was the sample patterns used.
Assuming that you don't pick a stupid pattern (!), I'd say that gamma is more important.
But there are still big differences in AA quality even with different monitors.
Could also be that many people have their brightness cranked up far too high.