FSAA and high polygon loads mutually exclusive?

Heck, I've always been most impressed with Id's networking code...probably as much as the actual 3D code...There are so few titles out that stress systems out like Id-powered games...and the truth of the matter is that most of them totally suck in this regard...

OT, but..
Quake network code sucked, up until QuakeWorld.
Quake2 network code sucked, up until they implemented the QuakeWorld code (and that led to a lot of unhappy campers... literally)
Quake3.. I'm not going to go there. :smile:

Besides, I really think that FSAA is an acquired taste...One thing that 3dfx preached that was totally on the $$...Once you use it, you won't go back. Running at high resolution simply does NOT address the problem at hand, despite what some people claim...There is an abundance of definitive proof that supports this claim...

Yeah, and those dirty bastards at 3dfx ducked out before putting anything out that has the abundance of bandwidth and fillrate to deliver this for every game. :smile:

I agree totally with your R250 speculations, although I disagree with the R200 commentary on SV and performance. I also disagree with that NVidia engineer's stipulation, given just how lowly and pathetic SS is on the GTS, Ultra, and the fall-back hacked modes on the GF3.

I'd imagine some ATI engineers are probably scoffing in the same way about NVidia's anisotropy. Both sides can equally scoff at different approaches when the approaches obviously take short-cuts at the expense of actual effect applied.

Supersampling is expensive, and there are no two ways about it. It's simple to ride a multisampling high road and scoff at something substantially more expensive, requiring more power to pull off. The 8500 does admirably well in framerate for performing SS; it's just that it doesn't have the gobs and gobs of bandwidth overhead required to pull this off in games that are already swallowing up this precious resource by the bucket. Luckily, a substantial majority of the games out there are well within the power afforded to the card to maintain playable framerates with AA enabled.
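To put rough numbers on that expense (an illustrative cost model, not measured figures; all names and multipliers here are assumptions): 4x supersampling shades and writes four samples per pixel, so fill and bandwidth demand scale accordingly.

```python
# Rough cost model for supersampling AA (illustrative numbers only).
def frame_cost(width, height, bytes_per_pixel, overdraw, ss_factor=1):
    """Approximate pixels filled and bytes of framebuffer traffic per frame."""
    samples = width * height * ss_factor       # 4x SS = a 2x2 grid per pixel
    pixels_filled = samples * overdraw         # every sample eats overdraw too
    # each fill touches color + z (read-modify-write); crude 2x multiplier
    bytes_moved = pixels_filled * bytes_per_pixel * 2
    return pixels_filled, bytes_moved

flat = frame_cost(1024, 768, 4, overdraw=3)
ss4x = frame_cost(1024, 768, 4, overdraw=3, ss_factor=4)
print(ss4x[0] // flat[0])  # 4: four times the fill and bandwidth demand
```

The point of the sketch is just that the cost multiplies against resources the "flat" frame is already consuming, which is why the hit varies so much from game to game.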

The whole basic premise of this discussion is really very fundamental. Video hardware has a fixed and finite amount of resources available. Everything chips away at those resources and some are consumed more readily than others (bandwidth vs fillrate vs gpu states vs geometry handling vs etc.etc.).

Obviously, if a game running "flat" with no AA and no anisotropy applied consumes all available resources, leaving little to none remaining for these added novelties, performance loss is going to be dramatic when they are applied. From this basic principle, mutual exclusions can be arrived at, but those exclusions will be *very* game, platform, driver, hardware and implementation specific.

If this weren't the case, then we gamers and 3d enthusiasts wouldn't spend so much time trying to get new "leaked" drivers and spend countless hours adjusting game settings, driver tweaks and the like. We're simply trying to shuffle bottlenecks to find that "sweet spot" balance given the finite amount of resources our hardware has and what we can and can't achieve from a particular hardware's choice of implementing features.
 
DoomTrooper:

Like everyone else has been saying, it's really all about tradeoffs. Are you willing to give up maybe anisotropic filtering, or perhaps higher-res textures, for the benefit of antialiasing? Some people will, others won't. Depending on what the bottleneck is, AA may decrease your framerate only slightly, or possibly a great deal. It also depends on what hardware you're running. As both rendering engines/games and video hardware get more and more complex, you're going to see less and less of a hit doing something like 4X multisampling AA. Sure, there will probably be better AA options by that point that will take extra bandwidth, but something like a 4X implementation will likely be taken for granted the same way that bilinear filtering is today.

I mean, how many people are arguing that bilinear filtering is a worthless feature because it's not "compatible" with high polygon games, and slows online games down too much for it to be worth anything? A lot of people see your arguments in the same vein. Once upon a time, bilinear filtering was a costly thing to implement, but now it's just assumed, if not superseded by trilinear and anisotropic filtering. Antialiasing will eventually go the same way.

Nite_Hawk
 
Why can't just one friggin' company actually bring out a card with an (edge-)AA method that is not some watered-down version of supersampling?
 
I will tell you this...As far as anisotropic filtering is concerned, you will get a nice entertaining discussion about it from the nV point of view...

From where they stand, I believe they might tell you that ATI's implementation is something they would *never* do...ever. It simply is not correct.

If there is one thing that I can say about nVidia...and there may be an exception here or there...But generally speaking, when it comes to specifications, nVidia does NOT cut corners. This has been the primary reason why there hasn't been any kind of "LOD" adjustment in the OGL drivers...There is a *very* distinct specification as to how the LOD is to be set, and you should not deviate from it.
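The spec'd LOD calculation being alluded to can be sketched in a few lines. This is a simplified form of the OpenGL mipmap level-of-detail formula (derivatives are passed in explicitly here rather than computed per-fragment, so treat it as a sketch, not driver code):

```python
import math

def mip_lod(dudx, dvdx, dudy, dvdy):
    """Mipmap level-of-detail per the OpenGL spec: lambda = log2(rho),
    where rho is the larger of the texel footprints along screen x and y."""
    px = math.hypot(dudx, dvdx)   # texels crossed per pixel step in x
    py = math.hypot(dudy, dvdy)   # texels crossed per pixel step in y
    rho = max(px, py)
    return math.log2(rho)

# A surface minified 4:1 in both directions should land on mip level 2
print(mip_lod(4.0, 0.0, 0.0, 4.0))  # 2.0
```

Biasing that lambda down is exactly the "LOD adjustment" knob the spec-minded driver writer refuses to expose.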

Likewise, they would go on-and-on about how their implementation is just "correct" and ATI's is simply not. You may disagree with them, claiming that the tradeoff is worth it...but as far as what the implementation is supposed to do, it's *very* tough to argue with this fact when you come to realize that the 8500 does not apply anisotropic filtering across all objects (see xbit-labs).

I can value the tradeoff...But the one thing that I have *never* been able to "dig" is that stupid limitation whereby you cannot enable aniso + trilinear simultaneously...The side effects are just plain bad...I can only hope that it *might* be fixed/addressed with R250 (I doubt it), but certainly w/ R300.
 
There is no such thing as "correct anisotropic". There is no specification for how anisotropic filtering is supposed to be done.
 
There might not be a correct way, but there sure are hella wrong ways :)

A footprint for a perceptually good anisotropic filter will probably fall somewhere between the projection of a box-shaped pixel and a circular Gaussian representing the pixel, projected into texture space... my guess is that ATI, due to their weird LOD calculation, will very often fall far from either of those marks, to a far greater extent than NVIDIA.
 
On 2002-02-13 12:29, MfA wrote:
There might not be a correct way, but there sure are hella wrong ways :smile:
In that case I propose always sampling the top left texel - damned fast :smile:
 
On 2002-02-13 11:28, Typedef Enum wrote:
I will tell you this...As far as anisotropic filtering is concerned, you will get a nice entertaining discussion about it from the nV point of view...

From where they stand, I believe they might tell you that ATI's implementation is something they would *never* do...ever. It simply is not correct.


Well, it seems like SGI believes it's a valid method:


http://www.sgi.com/software/performer/brew/anisotropic.html
 
What does SGI know?

They also use that abhorrent SLI technology and parallel graphics subsystems. Any good NVidia PR rep. will tell you how dumb, impractical and useless such a concept truly is. :smile:
 
On 2002-02-13 12:18, Humus wrote:
There is no such thing as "correct anisotropic". There is no specification for how anisotropic filtering is supposed to be done.

Yes there is, it's gotta be done by NVidia to get the golden seal of approval :smile:
Seriously, with max anisotropic on an 8500 you can't even see the mipmap borders, yet with a GF3, if you try max anisotropic you get a 50% performance hit, so you're forced to drop the anisotropic level, which gives an inferior filtering image in the distance.
There are flaws in both approaches, but at least I get decent filtering in the distance with little performance hit.
 
Anisotropic filtering is supposed to approximate direct texture convolution (i.e., project the filter kernel onto the textured surface, and integrate everything underneath it).

There isn't a single "correct" approach, but it is certainly possible for one implementation to be more correct than others. Only sampling the projection of the line of anisotropy onto the principal texture axes is certainly less correct than only sampling along the line of anisotropy.
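For reference, the scheme exposed through EXT_texture_filter_anisotropic boils down to taking N probes along the major axis of the pixel's footprint, each probe at the sharper LOD implied by the minor axis. A minimal sketch (derivatives passed in explicitly):

```python
import math

def aniso_params(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Per EXT_texture_filter_anisotropic: N probes along the major axis
    of the footprint, each at an LOD set by the footprint's minor axis."""
    px = math.hypot(dudx, dvdx)                  # footprint extent along screen x
    py = math.hypot(dudy, dvdy)                  # footprint extent along screen y
    pmax, pmin = max(px, py), min(px, py)
    n = min(math.ceil(pmax / pmin), max_aniso)   # samples along line of anisotropy
    lod = math.log2(pmax / n)                    # each probe is this much sharper
    return n, lod

# A floor seen at a glancing angle: stretched 8:1 in screen y
print(aniso_params(1.0, 0.0, 0.0, 8.0))  # (8, 0.0)
```

The "less correct" shortcut described above amounts to mis-measuring that line of anisotropy when it doesn't line up with the u or v axis, which is where the angle-dependent artifacts come from.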
 
Wasn't Rampage supposed to have programmable filter kernels? Imagine using your own sharpening or edge-detection filter, and loading Photoshop filters onto the card. :smile:

I think it would be cool if the iterators and samplers of 3d hardware were separate units that were programmable in a different phase: specify the filter kernel and a basis matrix for the iterator (e.g. spline, Hermite, etc. interpolation). It might break texture cache/prefetch algorithms badly, though.
 
The current programmable hardware can do a lot of this already (and will get much better in the future). Sharpen, contrast adjustment, edge detect, luminance, bicubic (I think), and other filters can be applied if you're willing to burn a few texture stages on the same texture, jitter your tex coords appropriately, and work out the necessary blending math.

Although, it would be kind of nice to do this inside the texel unit, rather than forcing the shader to perform the convolution itself.
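A minimal software sketch of that multi-stage trick (kernel weights applied to jittered samples of the same texture; the kernel, texture and function names here are all hypothetical, and a real shader would do the clamping in the sampler):

```python
# Emulate "burn a few texture stages": sample the same texture several
# times at offset coordinates and blend with kernel weights.
# Here the kernel is a 3x3 Laplacian edge-detect.
LAPLACIAN = [(-1, -1, -1),
             (-1,  8, -1),
             (-1, -1, -1)]

def sample(tex, u, v):
    """Nearest sampling with clamp-to-edge addressing."""
    h, w = len(tex), len(tex[0])
    return tex[max(0, min(h - 1, v))][max(0, min(w - 1, u))]

def edge_detect(tex, u, v):
    # One weighted sample per "stage", jittered by one texel each way
    return sum(LAPLACIAN[dy + 1][dx + 1] * sample(tex, u + dx, v + dy)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

texture = [[0, 0, 0, 0],
           [0, 9, 9, 0],
           [0, 9, 9, 0],
           [0, 0, 0, 0]]
flat = [[5] * 4 for _ in range(4)]
print(edge_detect(texture, 1, 1))  # 45: strong response at the block's corner
print(edge_detect(flat, 1, 1))     # 0: no response on a flat texture
```

Nine stages for a 3x3 kernel is exactly why doing the convolution inside the texel unit would be the nicer home for it.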
 