3DMark06 CPU Test

The data I've seen in another forum suggests that the results isolate what I'd call "compute core performance", for lack of a better term. Memory latency and bandwidth do not influence the results, and neither does cache size as far as I've seen (i.e. 512 KiB already seems to be more than enough). The XBitlabs results support this, too.

The dual-core scaling is obscenely high. Going from an Athlon 64 4000+ to an Athlon 64 X2 4800+ nets a 90% performance gain :oops:
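Just to put that 90% in perspective, here's a quick back-of-the-envelope calculation. Both chips clock at 2.4 GHz (so the comparison isolates the second core), and plugging the gain into Amdahl's law gives a rough estimate of the workload's parallel fraction. The numbers beyond the 90% gain are my own assumptions:

```python
# Dual-core scaling from the 4000+ to the X2 4800+ (same 2.4 GHz clock,
# so the score difference comes from the second core)
gain = 0.90                  # +90% score gain quoted above
speedup = 1.0 + gain         # 1.9x overall on two cores

# Parallel efficiency: how much of the second core actually shows up
efficiency = speedup / 2
print(f"{efficiency:.0%} per-core efficiency")   # 95% per-core efficiency

# Amdahl's law: speedup = 1 / ((1 - p) + p/2), solved for the
# parallel fraction p of the workload
p = 2 * (1 - 1 / speedup)
print(f"parallel fraction ~{p:.0%}")             # parallel fraction ~95%
```

In other words, the benchmark behaves as if ~95% of its work parallelizes perfectly, which is why I call the scaling obscene.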

I'm not so sure about the practical value of that benchmark, though. Pathfinding is a real problem for real games and makes excellent benchmarking material, but no one would implement it that way in a real game. The 3DMark06 approach just doesn't seem to scale well, or IOW its absolute performance is totally unacceptable for the number of actors involved.

If you spend upwards of four billion CPU clock cycles simulating a single frame of a scene with that (relatively low) level of complexity, something just has to be wrong with your software design and your algorithms IMO. And it's not cache misses that hurt here; there's no use blaming slow memory when you hardly access anything outside the caches.
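To illustrate why four billion cycles per frame is so far out of line, here's the arithmetic (the 2.4 GHz clock is my assumption, matching the CPUs above; the cycle count is the figure quoted in this post):

```python
# Back-of-the-envelope frame budget (assumed 2.4 GHz clock, not measured data)
clock_hz = 2.4e9          # hypothetical 2.4 GHz CPU
cycles_per_frame = 4e9    # the ~four billion cycles quoted above

fps = clock_hz / cycles_per_frame
print(f"{fps:.2f} FPS")   # 0.60 FPS -- over 1.6 seconds per frame

# For comparison: the cycle budget a real game has at a 30 FPS target,
# and that budget has to cover rendering, physics, AI, everything
budget = clock_hz / 30
print(f"{budget / 1e6:.0f} M cycles/frame")   # 80 M cycles/frame
```

So the benchmark burns roughly fifty times the entire per-frame budget of a 30 FPS game on this one subsystem. That's the scale of the mismatch I'm talking about.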