I've said it before, and I'll say it again: 3DMark should not be treated like a 3D benchmark gift from heaven; it's not. It's actually a pretty horrible benchmark, even as a system-wide measure of performance.
Anyhow, on with my point. I ran the UT2K3 benchmark (included with the leaked demo) on both a Radeon 8500 128MB and a Visiontek GeForce3 (vanilla, not a Ti 500). From all the 3DMark 2001 SE benchmarks I've seen, you'd think the Radeon would be at least somewhat faster than a GeForce3, if not a GeForce3 Ti 500 with a "measly" 64MB. Here are my results:
GF4 Ti 4200 @ 250/444 w/no FSAA 116.2 fps
GF4 Ti 4200 @ 250/444 w/QC FSAA 88.5 fps
GF4 Ti 4200 @ 300/560 w/no FSAA 126.2 fps
GF4 Ti 4200 @ 300/560 w/QC FSAA 105.1 fps
GF3 @ 200/460 w/no FSAA 104.7 fps
GF3 @ 200/460 w/QC FSAA 55.2 fps
GF3 @ 215/515 w/no FSAA 113.0 fps
GF3 @ 215/515 w/QC FSAA 60.8 fps
R200 128MB @ 275/275 w/no FSAA 91.8 fps
R200 128MB @ 275/275 w/3x FSAA (perf.) 45.9 fps
Pretty startling: a "vanilla" GeForce3 beats a 128MB Radeon 8500 - the "superior" DX8.1 GPU - by over 10 fps. I also thought that with its 128MB the Radeon might pull ahead, but that's not the case. The benchmark was run with all texture details set to "UltraHigh" (in the preferences "console") and at 1024x768x32. NVIDIA drivers used: 29.42. ATI drivers used: Catalyst.
System:
ASUS A7N266-C
AMD Athlon XP 2000+ @ 2115MHz
512MB Samsung PC2100
80GB WD Caviar "SE"
Philips Seismic Edge 5.1
3Com 100Mb NIC
Windows XP Pro
Note that this is NOT a flame directed at ATI or ATI users. On the contrary, I think ATI's been kicking *ss. Nonetheless, I think this is an excellent example of real-world benchmarks vs. synthetic ones, and of how the results of a synthetic benchmark - meaning as little as they do - can mislead so many.