FiringSquad Fires Back

Razor1 said:
Depends on the engine, but if the margin is within 5 FPS it possibly could. But as Baron has said, it's benchmarkable now. FS's and [H]'s methods strive to show different aspects of hardware; both benchmarking methods are relevant and have their own merits. I don't support the [H] article, because one particular benchmark or set of tests isn't better than another; each type of test suite shows different aspects.

Exactly. I find both to be wrong, [H] a bit more so due to the name-calling.

But I've been busy outlining my own benchmarking process, and I've quickly found out there is no perfect way to benchmark; therefore you must use a variety of methods.
 
The way [H] tests video cards makes a lot of sense, IMO, because that's how you run them. If your card is slow, you turn down settings. Moreover, this will hold throughout the life of a video card. If video card A is 25% faster than B, then maybe today you have to bump AA down a notch at 1600x1200 on B. A year from now, on a more intensive game engine, you'll probably have to do the same at 1024x768.

You can't do that with CPUs. You can't turn down settings, so when games become more CPU intensive, you feel the full drop in framerate.


For example, look at UT2003 and UT2004.

In early 2004, you see a 2GHz A64 beating a 2.8GHz P4 by 37% in UT2003. But the P4 still got 76 fps, so does that mean there's no advantage to choosing an A64?

Later that year, you see a similar advantage in UT2004. However, now the P4 only gets 48 fps. Suddenly that decision makes a huge difference in playability.


CPU evaluation is a lot more black and white than GPU evaluation. There are no settings to choose or image quality to assess. Just test at 640x480, and if the framerate of the slower CPU is good enough for gameplay, go ahead and state it. The reader gets a lot more information for a buying decision than from tests at 1600x1200.
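The reasoning above can be sketched as a toy model (my own illustration, all numbers invented): the frame rate you actually observe is roughly the smaller of what the CPU and the GPU can each sustain, which is why a low-resolution test isolates the CPU.

```python
# Toy model: observed fps is roughly capped by the slower of the two limits.
# All numbers below are made up for illustration.

def observed_fps(cpu_limit_fps: float, gpu_limit_fps: float) -> float:
    """Effective frame rate when one component bottlenecks the other."""
    return min(cpu_limit_fps, gpu_limit_fps)

# At 640x480 the GPU limit is very high, so the CPU difference shows:
print(observed_fps(cpu_limit_fps=76, gpu_limit_fps=400))   # 76 -> CPU-limited
print(observed_fps(cpu_limit_fps=104, gpu_limit_fps=400))  # 104 -> CPU-limited

# At 1600x1200 the GPU limit dominates and both CPUs look identical:
print(observed_fps(cpu_limit_fps=76, gpu_limit_fps=60))    # 60 -> GPU-limited
print(observed_fps(cpu_limit_fps=104, gpu_limit_fps=60))   # 60 -> GPU-limited
```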
 
Mintmaster said:
CPU evaluation is a lot more black and white than GPU evaluation. There are no settings to choose or image quality to assess. Just test at 640x480, and if the framerate of the slower CPU is good enough for gameplay, go ahead and state it. The reader gets a lot more information for a buying decision than from tests at 1600x1200.
Bingo, we have a winner. A CPU test should be just that: a test to see if the CPU is up to the job. After that, it's down to the rest of the system. It's basically the same reason we run fill-rate tests: to see if the GPU is up to the job.
 
Mintmaster said:
CPU evaluation is a lot more black and white than GPU evaluation. There are no settings to choose or image quality to assess. Just test at 640x480, and if the framerate of the slower CPU is good enough for gameplay, go ahead and state it. The reader gets a lot more information for a buying decision than from tests at 1600x1200.

I don't see it that way, though. I don't play games at 640x480, so I have zero interest in which CPU is faster at that resolution. I play at 1280x1024 with 4X/6X AA and 16X AF. I want to see what difference, if any, exists between these new CPUs at the settings I play at. I want to know if there's any point in upgrading, considering I play at relatively high settings where the GPU may be the bottleneck.
 
SsP45 said:
I don't see it that way, though. I don't play games at 640x480, so I have zero interest in which CPU is faster at that resolution. I play at 1280x1024 with 4X/6X AA and 16X AF. I want to see what difference, if any, exists between these new CPUs at the settings I play at. I want to know if there's any point in upgrading, considering I play at relatively high settings where the GPU may be the bottleneck.

OK, so say you play at 1280x1024 in your favorite game, and it just happens to be GPU-limited. One CPU could be 100x faster than the other, yet show no difference in that benchmark. However, next year's game may be unplayable on the slower CPU while it kicks ass on the faster one. It is important to make the benchmarks CPU-limited so that you'll know which CPU will play your games acceptably tomorrow.
If you set the bar you're aiming for at 60 fps, you'll still find out at 640x480 whether the CPUs can reach it, and that will hold true at 9000x8000 just as well. What you won't find out is whether your graphics card is the limiting factor at the settings you play at, in which case you should be looking at a GPU benchmark, not a CPU benchmark.

Though I'd say the emergence of 64-bit could negate the Core 2 Duo's advantages. It's looking like once 64-bit is enabled, the Core 2 Duo falls to the level of the Athlon 64 or slightly behind, though it can still achieve a higher absolute clock speed. In fact, I'd say that once 64-bit is enabled, the Core 2 Duo performs more like the Core Duo; the limited benchmarks I've seen show a decrease in performance for the Core 2 Duo and an increase for the Athlon 64.
 
Mintmaster said:
The way [H] tests video cards makes a lot of sense, IMO, because that's how you run them. If your card is slow, you turn down settings. Moreover, this will hold throughout the life of a video card. If video card A is 25% faster than B, then maybe today you have to bump AA down a notch at 1600x1200 on B. A year from now, on a more intensive game engine, you'll probably have to do the same at 1024x768.

You can't do that with CPUs. You can't turn down settings, so when games become more CPU intensive, you feel the full drop in framerate.


For example, look at UT2003 and UT2004.

In early 2004, you see a 2GHz A64 beating a 2.8GHz P4 by 37% in UT2003. But the P4 still got 76 fps, so does that mean there's no advantage to choosing an A64?

Later that year, you see a similar advantage in UT2004. However, now the P4 only gets 48 fps. Suddenly that decision makes a huge difference in playability.


CPU evaluation is a lot more black and white than GPU evaluation. There are no settings to choose or image quality to assess. Just test at 640x480, and if the framerate of the slower CPU is good enough for gameplay, go ahead and state it. The reader gets a lot more information for a buying decision than from tests at 1600x1200.
Yeah, but what were the minimum frame rates? ;)
Nearly 50 fps average isn't bad at all either, unless it hits 20 fps in some spots.
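A quick sketch of why the minimum matters (a hypothetical frame-time log, not real data): the average over a whole run can sit near 50 fps while individual frames dip to 20.

```python
# Hypothetical frame times in milliseconds: mostly smooth ~18 ms frames
# (~55 fps) with a handful of 50 ms stutters (20 fps).
frame_times_ms = [18] * 95 + [50] * 5

# True average over the run: frames rendered divided by elapsed time.
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
min_fps = min(1000.0 / t for t in frame_times_ms)

print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
# -> average: 51.0 fps, minimum: 20.0 fps
```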
 
The only thing more amusing than HardOCP's reviews is the logic they use to justify them. I've always found their reviews to be "unique."

Why one would test CPU PERFORMANCE in GPU-bottlenecked scenarios is beyond me. It's fine to get back to your "real world testing" once the "CPU performance evaluation" is out of the way. Then you can state something along the lines of: "while Conroe is a clear winner in terms of raw CPU power, its contribution to overall system performance is minimal in GPU-bottlenecked scenarios at higher resolutions, as is the case with most newer games... etc., etc."

Of course, logic and common sense are generally absent from [H]'s reviews, as the text space is reserved for calling other reviews "BS" and whatnot...
 
In another discussion forum, I've discussed this HardOCP article with other gamers (less technically savvy than here...), and I believe the whole problem revolves around the fact that the target "goal" of HardOCP's Conroe tests wasn't clear... nor does the article attempt to clarify that goal, which creates the possibility of an agenda. This assumption is strengthened by the stark contrast with their prior AMD AM2 benchmarking methods.

In HardOCP's rebuttal article ("Firing Squid Wastes Their Ink"), they clarify the goal of the article:
I wanted to know what Core 2 meant to many of our readers that already own high end Athlon 64 gaming machines. As our Core 2 Gaming Performance spells out, there is not much of a reason for many current gamers to upgrade.
I think just about anyone with a high-end AMD X2 (or even Pentium 4 EE) would agree that an upgrade to the Core 2 Duo would be smallish at best for current games compared to their current systems using current games.

Unfortunately, hardware reviews aren't typically aimed only at those with cutting-edge, high-end systems... but also at those in the market for a NEW PC, or those with severely dated PCs who are about to upgrade right now (regardless of who's on top) and want the best value. These "real world gaming tests" at HardOCP, unfortunately, skew the performance picture for new PC buyers... much as synthetic benchmarks may skew the upgrade value for existing high-end owners deciding whether to upgrade or not.

At the end of the day, I do believe both approaches should be represented, but the target goal must be clearly identified and must accompany the article. To be totally objective with respect to the reader's expertise (or lack thereof), it should also clearly stipulate that these "CPU performance tests" deliberately filter out or negate a tremendous amount of the CPU's power in order to be relevant only to current high-end hardware owners considering whether to upgrade or not. If we don't make the assumption that HardOCP has now spelled out (i.e. high-end owners weighing upgrade value for gaming), it's easy to cry foul.
 
And running CPU-limited tests wouldn't show high-end system owners what they want? If they see an Athlon 64 at 130 fps and a Core 2 Duo at 160 fps, do they really think they'll get anything out of that? Not only is it common knowledge that most games will be graphics-card-limited at higher resolutions, but everyone knows (especially with vsync) that there will be no noticeable difference between those two framerates; it's just that the Core 2 Duo will continue to hold that lead in newer games as well.

If they're going to run a test as skewed and flawed as theirs, they'd be much better off just showing the percentage of time each processor dropped below 60 fps and basing their recommendation on that.
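That metric is easy to compute from a frame-time log; here's a minimal sketch with invented logs (frame times in ms), weighting by frame duration since slow frames occupy more wall-clock time:

```python
# Percentage of *time* spent below a target frame rate, from per-frame times (ms).

TARGET_FPS = 60
TARGET_MS = 1000.0 / TARGET_FPS  # frames longer than this miss the target

def pct_time_below(frame_times_ms, target_ms=TARGET_MS):
    """Share of total run time spent on frames slower than the target."""
    slow_time = sum(t for t in frame_times_ms if t > target_ms)
    return 100.0 * slow_time / sum(frame_times_ms)

# Invented logs: the faster CPU rarely misses 60 fps, the slower one misses often.
fast_cpu = [12] * 90 + [20] * 10
slow_cpu = [15] * 60 + [25] * 40

print(f"fast CPU: {pct_time_below(fast_cpu):.1f}% of time below 60 fps")
print(f"slow CPU: {pct_time_below(slow_cpu):.1f}% of time below 60 fps")
```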
 