Let's assume the results are genuine - i.e. it really was a GTX 480 being used, and the odd readings in GPU-Z are just because the program's database hasn't been updated yet. The VS results make sense: the simple test measures brute vertex throughput, which the 480 would be better at, whereas the complex test is about shader strength, favouring the 5870. The PS test is all about shader power (5870 clear favourite then), whereas the shader particles test is a combination of shader, texturing and vertex throughput (hence it's almost even).

The fill rate figures would actually be a reasonable fit for a GTX 480 running at 650MHz: the multitexturing peak fill rate works out at 39,000 Mtexels/s for 60 TUs @ 650MHz. 48 ROPs at the same clock would give an output of 31,200 Mtexels/s in the single texturing fill rate test, which doesn't match the indicated 14,493 - it does, though, if the 480 only has one blender per pair of ROPs, halving the effective rate.
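For anyone who wants to check the arithmetic, here's a minimal Python sketch. The 650MHz clock, 60 TUs, 48 ROPs and the one-blender-per-ROP-pair layout are all assumptions from the discussion above, not confirmed specs:

```python
CLOCK_MHZ = 650  # assumed core clock

def peak_fill_rate(units, clock_mhz):
    """Peak throughput in Mtexels/s: one operation per unit per cycle."""
    return units * clock_mhz

multitexturing = peak_fill_rate(60, CLOCK_MHZ)  # 60 TUs -> 39,000 Mtexels/s
single_raw     = peak_fill_rate(48, CLOCK_MHZ)  # 48 ROPs -> 31,200 Mtexels/s
single_blended = single_raw // 2                # one blender per ROP pair -> 15,600

print(multitexturing, single_raw, single_blended)
# 39000 31200 15600 - and 15,600 is in the right ballpark of the reported 14,493
```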
All of the results seem reasonable apart from the single texturing figure - I don't understand why NVIDIA would go backwards with their ROP design, but given that little else of the Fermi design makes much sense to me anyway, I wouldn't be surprised if this really was a genuine set of results...
When performing alpha blending, pixels that have already been rendered to the back buffer need to be read back and then blended with the next primitive that's overlaid. This is all done by the ROPs, but some chips, such as the G80, only have one blending unit for each pair of ROPs - so although the chip can read/write 24 pixels per cycle, it can only blend and output 12 pixels per cycle. The fill rate test in 3DMark06 thus ends up giving results almost half of what the chip is theoretically capable of.
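To make the read-modify-write cost concrete, here's a minimal sketch of standard "over" alpha blending as a ROP would perform it - sample the existing back buffer pixel, combine it with the incoming fragment, write the result back. The values are made up for illustration:

```python
def blend_over(src_rgb, src_alpha, dst_rgb):
    """dst = src * alpha + dst * (1 - alpha), per colour channel."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

back_buffer = [(0.2, 0.2, 0.2)]              # existing pixel (the read)
fragment    = ((1.0, 0.0, 0.0), 0.5)         # incoming fragment: colour + alpha

# The blend plus write-back is the extra per-pixel work the blender has to do;
# with one blender per ROP pair, only half the ROPs can complete this each cycle.
back_buffer[0] = blend_over(*fragment, back_buffer[0])
print(back_buffer[0])                        # (0.6, 0.1, 0.1)
```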