Chalnoth said:
Well, duh, because they focused on off-angle surfaces. It really upsets me that nobody focused on off-angle surfaces back when the Radeon 9700 was released and the GeForce4 Ti cards were still beating the pants off of it in anisotropic filtering quality.
Chalnoth, your incessant rants and refusal to accept even the most obvious facts about NV's few glaring shortcomings in the past piss everybody off here.
I've shown you many times that the GF4 took an enormous performance hit with anisotropic filtering, and it had very little to do with extra samples for off-angle surfaces. NV30 also had angle-independent AF (or very nearly so), yet its performance hit was very similar to ATI's, all else being equal. Over 90% of the rendered pixels in a Quake3 demo lie on vertical or horizontal surfaces. Yet we see this from the GF4:
http://graphics.tomshardware.com/graphic/20020206/geforce4-17.html#anisotropic_performance
117 fps at 1024x768 w/ 8xAF; 132 fps at 1600x1200 w/o AF
Fillrate drops to almost 1/3! My 9700P shows at most a 10-20% hit in Q3 with 16x Quality AF. This is an extreme case, but the GF4 usually saw around 3x the performance drop with AF.
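The "almost 1/3" figure falls straight out of the quoted numbers. A quick back-of-the-envelope check (using the fps and resolutions from the linked benchmark; this is implied pixel throughput, not a measured fillrate):

```python
# Implied pixel throughput (pixels/sec) from the quoted Quake3 results.
with_af = 1024 * 768 * 117     # 1024x768 at 117 fps with 8xAF
no_af   = 1600 * 1200 * 132    # 1600x1200 at 132 fps without AF

print(with_af)            # 92012544  (~92.0 Mpixels/sec)
print(no_af)              # 253440000 (~253.4 Mpixels/sec)
print(with_af / no_af)    # ~0.363, i.e. throughput drops to roughly 1/3
```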
No one cares if the GF4's quality is a bit better when its performance hit is far higher. It's always been performance first, quality second (up to a certain point, obviously).
Today, graphics cards from ATI and NVidia are often within 20% of each other, a difference that's hard to notice without looking at a graph. Games also use more off-angle surfaces now than back then, since gamers demand more varied environments; hence the focus on AF quality. You being pissed about this says more about your bias than the media's. In fact, even today only HardOCP has stated that it makes a noticeable difference, and some other sites are even writing off ATI's higher-quality AF as insignificant.
Chalnoth said:
Except these two things do not follow. Firstly, I really don't see how you can quantify ATI as doing more "forward thinking." It was, after all, nVidia that was the first to implement a large number of the technologies we now take for granted in 3D graphics, including anisotropic filtering, FSAA, MSAA, programmable shaders, and hardware geometry processing.
MSAA is a speed optimization, and the GF4 barely outpaced theoretical SSAA (given the same RAMDAC downsampling): 4xAA reduced fillrate by 70% instead of 75%. Colour compression was the real innovation. The shader hardware in the original Radeon was unbelievably close to DX8 PS1.0: both had 8 math ops and fixed-mode dependent texturing, but the Radeon had a 2x2 matrix multiplication first instead of 3x3, and three-texture multitexturing instead of four. I worked at ATI, and I am (or rather was) very familiar with the R100/R200 architecture. The vertex shaders were barely changed either; R100 just didn't quite meet the spec, which rumour says was changed too late for ATI, so they couldn't call it a programmable vertex shader according to Microsoft. Saying who invented what in any field is often a wash, and realtime graphics is no different.
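The MSAA-vs-SSAA comparison is simple arithmetic (a sketch using the 70% figure above; "4x SSAA" here means brute-force rendering of four samples per pixel with no optimization):

```python
# 4x supersampling shades and writes 4 samples per pixel, so fillrate
# falls to 1/4 of the no-AA figure: a 75% drop.
ssaa_4x_drop = 1 - 1 / 4        # 0.75

# GF4's 4x MSAA, per the figure quoted above, dropped fillrate by ~70%.
msaa_4x_drop = 0.70

# The gap between MSAA and brute-force SSAA is only about 5 points,
# which is the point: the speedup over theoretical SSAA was small.
print(ssaa_4x_drop - msaa_4x_drop)  # ~0.05
```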
I'm not agreeing with the claim that ATI is far more innovative or forward-looking than NVidia, but rather saying that these innovations are very evolutionary. The two companies drive each other in similar ways, especially when you consider how early design decisions are made. If that's what you're saying too, then maybe you shouldn't come off like you're saying ATI is just a follower.
Either way, lay the AF thing to rest. GF4's AF speed was pathetic.