Different Ways to Cheat?

Dio said:
Unfortunately, vsync is off during performance runs, so a hash value of a 'whole frame' is some number of composite frames....
So, triple-buffer and vsync it. With a sufficiently high refresh rate (100 Hz or above) it should not be a problem.
Any benchmark that does 100+fps is kinda uninteresting anyways.

The real question is: which device would be capable of calculating CRC or hash values of 1600x1200x32-bit frames (7.5 MB) 100 times a second, on a DVI port?
 
I would imagine a TMDS decoder circuit could be created to do this fairly easily, but it won't be cheap. Might even require custom silicon.
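As a rough sanity check on those numbers, here is a back-of-the-envelope sketch (assuming 4 bytes per pixel and 100 frames per second, as in the question above):

```python
# Back-of-the-envelope: data rate a DVI-side hasher would have to sustain.
# Assumes 1600x1200 at 32 bpp (4 bytes/pixel) and 100 frames per second.
width, height, bytes_per_pixel = 1600, 1200, 4
fps = 100

frame_bytes = width * height * bytes_per_pixel  # bytes in one frame
throughput = frame_bytes * fps                  # bytes/second to digest

print(f"{frame_bytes / 2**20:.2f} MiB per frame")  # ~7.32 MiB
print(f"{throughput / 10**6:.0f} MB/s sustained")  # 768 MB/s
```

So the hashing logic would have to digest roughly three quarters of a gigabyte per second, which is why off-the-shelf parts may not cut it.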
 
DVI is the only option that can truly be used. Once again, you are still limited to the maximum rate at which the TMDS link can encode the data, but with things like GT4 it shouldn't be a problem.

Reason 1: Write-backs from AGP to system memory are EXTREMELY LIMITED by poor drivers, a few hundred megabytes/second MAX.

Reason 2: Drivers can detect screen captures and render at full IQ.

Now, before I go any further: I don't know the answer to this question, and it makes a big difference.

Does DVI use the full data rate, or does it use lossy compression (i.e. MPEG-2, like HDTV does)? If the answer is yes, then the only way to find out about the IQ is monitoring the data between the GPU and the TMDS encoder (if it's not integrated), or between the GPU and memory, in which case you might miss post-processing (i.e. AA).
 
no_way said:
The real question is: which device would be capable of calculating CRC or hash values of 1600x1200x32-bit frames (7.5 MB) 100 times a second, on a DVI port?

Hmm .. now when I think about it .. such a device would be a very useful "hardware FRAPS" in itself... the only cooperation it would require from the application is that the app has to be deterministic, so that the rendered frames can be re-rendered for quality inspection later on.
Of course, calculation imprecision within the 3D pipeline could be so severe that it could be impossible to always render the frame with exactly the same contents or CRC, but this can be somewhat ameliorated by splitting the CRC values up across the screen, for instance by color channel and by tiling or striping the screen, so inconsistencies can be localized.
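The tiling idea could look something like this; a minimal sketch, where the tile counts, the 32-bpp packed layout, and the function name are all assumptions for illustration, not anything from a real capture device:

```python
import zlib

def tiled_crcs(frame, width, height, bpp=4, tiles_x=4, tiles_y=4):
    """CRC32 per screen tile, so a mismatch points at a region rather
    than invalidating the whole frame.  `frame` is raw packed pixels."""
    tw, th = width // tiles_x, height // tiles_y
    crcs = {}
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            crc = 0
            # feed each tile row into the running CRC for this tile
            for row in range(ty * th, (ty + 1) * th):
                start = (row * width + tx * tw) * bpp
                crc = zlib.crc32(frame[start:start + tw * bpp], crc)
            crcs[(tx, ty)] = crc
    return crcs
```

Comparing the dicts from two runs then reports exactly which tiles diverged, which is the localization benefit described above.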

I'm no hardware guru, but I get the feeling that some existing generic DSP should be capable of doing this kind of stuff?
 
demalion said:
:oops: :-? :p :p :arrow: Dave H.

;) 8)

I was thinking of putting a ;) in there somewhere, but couldn't quite figure out how to direct it at you. In any case, I figured you wouldn't be too offended, imitation being the sincerest form of flattery and all...

Of course, you could have continued the tradition with <post mode="Kristof"> tags in your reply...

:D
 
no_way said:
Dio said:
Unfortunately, vsync is off during performance runs, so a hash value of a 'whole frame' is some number of composite frames....
So, triple-buffer and vsync it. With a sufficiently high refresh rate (100 Hz or above) it should not be a problem.
Any benchmark that does 100+fps is kinda uninteresting anyways.

And if the game runs at 95 FPS without vsync, it will be synced down to 50 FPS with vsync on at a 100 Hz refresh rate.

Benchmarking with vsync on is useless no matter what refresh rate you are running.
 
Tim said:
no_way said:
Dio said:
Unfortunately, vsync is off during performance runs, so a hash value of a 'whole frame' is some number of composite frames....
So, triple-buffer and vsync it. With a sufficiently high refresh rate (100 Hz or above) it should not be a problem.
Any benchmark that does 100+fps is kinda uninteresting anyways.

And if the game runs at 95 FPS without vsync, it will be synced down to 50 FPS with vsync on at a 100 Hz refresh rate.

Benchmarking with vsync on is useless no matter what refresh rate you are running.

Not necessarily, with triple buffering.
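The quantization being argued about can be captured in a tiny, idealized model (the function and its simplifying assumptions are mine, not anything from a driver):

```python
import math

def vsync_fps(render_fps, refresh_hz, triple_buffered):
    """Idealized displayed frame rate under vsync."""
    if triple_buffered:
        # the renderer never stalls waiting for a swap: the display just
        # shows the newest completed frame, so only the refresh rate caps it
        return min(render_fps, refresh_hz)
    # double buffering: the swap waits for the next vsync, so every frame
    # occupies a whole number of refresh intervals
    frame_ms = 1000.0 / render_fps
    vsync_ms = 1000.0 / refresh_hz
    return refresh_hz / math.ceil(frame_ms / vsync_ms)

print(vsync_fps(95, 100, False))  # 50.0  (the quoted case)
print(vsync_fps(95, 100, True))   # 95
```

With double buffering, a frame that takes even slightly longer than one refresh interval eats two of them, hence 95 FPS collapsing to 50; triple buffering sidesteps that stall.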
 