It's worse than that. Toss an Nvidia PhysX GPU into that competing vendor's rig and PhysX will run fine, but AA is still disabled unless you hack the .ini files to fool the game into thinking it's an Nvidia card. Then, miraculously, AA works.
The "Nvidia workaround" was to force AA through CCC, which gave far slower AA than the in-game MSAA. Nice trick to win the benchies.
Considering the test system was a Core i7, I can't see how a processor with that much floating-point performance could get bogged down so badly by PhysX. I've heard PhysX is proprietary, closed-source code; I wonder how that affects the ease of a CPU implementation. It's like the game turns the CPU into a single-core 2.4 GHz Pentium 4 Northwood once it's activated. Surely the out-of-the-box implementation should at least use two cores as standard!
How can developers take it seriously as a physics implementation if it's only good for technology demos on the latest Nvidia GPUs, while older cards (G92 and below) can't run it without a massive performance hit, and ATI cards can't run it at all because there's no decent CPU implementation?