Hi guys,
So I've been working on a research project lately that has required me to go through about 50 reviews from various review sites and record several thousand benchmark scores. After all this, I've come to a few conclusions:
1) Almost every reviewer in this industry, save a select few, seems incapable of stating whether they benchmarked with trilinear, bilinear, or brilinear filtering.
2) Reviewers fairly often neglect to mention what map or demo they used with a specific benchmark. UT2003 and Quake3 benchmarks suffer from this in particular. Many UT2003 reviews, for example, will say "botmatch" or "flyby" but not specify the map. Others will give the map but not specify whether it was benchmarked with botmatch, flyby, or, in some cases, HardOCP's benchmarking utility.
3) Some reviewers don't mention what kind of memory their system is running. Of the ones that actually report the memory type (e.g. PC2700), most neglect to state the speed that memory is actually running at (e.g. whether PC2700 is really delivering its rated 2.7GB/s, or only 2.1GB/s because it's clocked down to PC2100 speeds; see the quick arithmetic sketch after this list).
4) Some reviewers reuse benchmark scores from previous reviews without saying so. On the surface this seems reasonable, but when you're trying to collect distinct data points, the same score recorded more than once can skew comparisons.
5) It's pretty tough to get a good impression of how older cards (the GF4 series) fare with newer drivers/CPUs compared to newer cards. R300-based cards are in better shape simply because they have been so resilient over the last 18 months.
6) The same argument applies to benchmarks. UT2003 (and to a certain extent SS:SE) seem to be the only ones that have really stood the test of time.
7) Entering the attributes (CPU, CPU speed, RAM speed, GPU, GPU speed, VRAM speed, driver, display settings, benchmark, scores, etc.) for about 4000 scores is really, really tiring. (A sketch of the record layout I've been filling in is below.)
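
To put some numbers on (3): a DDR module's PC-rating is just its peak bandwidth in MB/s, so it's easy to work out what a given clock costs you. Here's a minimal sketch of the arithmetic in Python (the function name is mine and the figures are standard DDR math, not taken from any particular review):

def ddr_bandwidth_gbs(transfer_rate_mts: float) -> float:
    # 64-bit (8-byte) module: bandwidth = transfers/sec * 8 bytes
    return transfer_rate_mts * 8 / 1000

print(ddr_bandwidth_gbs(333))  # DDR333 / PC2700: ~2.7 GB/s
print(ddr_bandwidth_gbs(266))  # DDR266 / PC2100: ~2.1 GB/s

So PC2700 memory running at PC2100 speeds gives up roughly 20% of its bandwidth, which is exactly why it matters that reviewers report the actual clock.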
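
And as for (7), this is roughly the record I've been filling in per score. It's a hypothetical sketch of my own layout, not any review site's format; every field name here is my own invention:

from dataclasses import dataclass

@dataclass
class BenchmarkScore:
    # All field names are hypothetical; this just shows the attributes.
    cpu: str             # e.g. "Athlon XP 3200+"
    cpu_mhz: int
    ram_type: str        # e.g. "PC2700"
    ram_mts: int         # actual effective transfer rate, e.g. 333
    gpu: str             # e.g. "Radeon 9700 Pro"
    core_mhz: int
    vram_mhz: int
    driver: str          # e.g. "Catalyst 3.7"
    benchmark: str       # e.g. "UT2003 botmatch"
    map_or_demo: str     # the detail reviews so often omit
    filtering: str       # "trilinear", "bilinear", or "brilinear"
    display_settings: str  # e.g. "1024x768, 4xAA/8xAF"
    score: float         # fps or points
    source_review: str   # URL, so reused scores can be spotted

Multiply those fifteen-odd fields by 4000 scores and you can see why it gets tiring.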
Nite_Hawk