Kyle/Brent:
One of the problems I see with your approach is that you, as test subjects, will be somewhat tainted. This is a problem we ran into a lot when I worked in a cogsci visualization lab. Most of the people in the lab had worked on so many vision-related projects (3D graphics, optical illusions, etc.) that they had been trained to notice or ignore certain things in a scene. Things the vision researchers could pick out might go unnoticed by the average Joe, but in other cases, things that would look unnatural to the average person no longer looked unnatural to the researcher.
You, for example, may be able to tell the difference between bilinear and trilinear filtering, but how much does it *actually* matter to the end user, and in what situations? Likewise, does it matter if one card averages about 60fps and another 70? What if the one getting 70fps actually looks worse than the one getting 60fps because of refresh/tearing issues? Will you test with vsync on? Will the reviewer know the actual framerates from the games? How will you control for bias between what the reviewer "knows" the framerate should be and what they "feel" when running the game?
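To make that concrete, here's a toy calculation (the numbers are completely made up for illustration) showing how a card with the higher average fps can still have much worse frame-time spikes, which is exactly the kind of thing an average observer might "feel" without being able to name it:

# Toy illustration with made-up frame times: a higher average fps does not
# guarantee a smoother-looking result once frame-time variance is considered.

def avg_fps(frame_times_ms):
    """Average fps over a run, from per-frame render times in milliseconds."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile_fps(frame_times_ms, pct=99):
    """fps implied by the pct-th percentile (i.e. near-worst) frame time."""
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return 1000.0 / ordered[idx]

# Card A: a steady ~60fps (16.7 ms every frame).
card_a = [16.7] * 100

# Card B: ~70fps on average, but every tenth frame stalls badly.
card_b = ([11.0] * 9 + [45.0]) * 10

for name, times in (("A", card_a), ("B", card_b)):
    print(name, round(avg_fps(times), 1), "avg fps,",
          round(percentile_fps(times), 1), "fps at the 99th percentile")

Card B "wins" on the average (~69 vs ~60), but its worst frames run at an effective ~22fps, which is where the stutter lives.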
Don't get me wrong: what you are doing definitely has merit. The problem you may run into is that your data may be accidentally invalidated by problems with the observer and the methodology. You'll need to be really careful about controlling variables, and if you really want your results to be good, you should perform the tests double-blind and run them on a number of "average" observers rather than a single reviewer like Brent. I'm sure you could get university students to be subjects for $5-$10 an hour if you really want something publishable.
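For the double-blind part, something like this rough sketch is what I have in mind (the card names, subject count, and trial count are just placeholders): a third party generates the assignments and seals the key, so neither the subject nor whoever administers the session knows which card is actually running.

import random

CARDS = ["card_A", "card_B"]   # the two configurations under test (placeholder names)
SUBJECTS = 12                  # e.g. recruited students
TRIALS_PER_SUBJECT = 4         # each subject sees each card twice, in random order

sealed_key = {}   # session_id -> actual card; held by a third party until scoring is done
run_sheet = []    # what the test administrator sees: subject and session ids only

session = 0
for subject in range(1, SUBJECTS + 1):
    # Balanced but unpredictable: each subject gets both cards equally often.
    order = CARDS * (TRIALS_PER_SUBJECT // len(CARDS))
    random.shuffle(order)
    for card in order:
        session += 1
        sealed_key[session] = card          # card identity withheld from everyone else
        run_sheet.append((subject, session))

print(run_sheet[:4])  # administrator's view: opaque session ids, no card names

Only after all the subjective ratings are collected does anyone open sealed_key and match sessions back to cards.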
Anyway, I'll be interested to see where this goes.
Nite_Hawk