Is any investigation going into this at B3D?

Regarding future reviews... quite important, you know.

Screen Grab Specific rendering


[Slide image: Image3.jpg]
 
http://www.tech-report.com/etc/2003q3/valve/index.x?pg=1

As you can tell from looking at the list in the slide above, Newell was concerned particularly with some of the techniques NVIDIA has used in recent driver releases, although he didn't exempt other graphics hardware makers from his complaints. He said they had seen cases where fog was completely removed from a level in one of Valve's games, by the graphics driver software, in order to improve performance. I asked him to clarify which game, and he said it was Half-Life 2. Apparently, this activity has gone on while the game is still in development. He also mentioned that he's seen drivers detect screen capture attempts and output higher quality data than what's actually shown in-game.
 
Well, it will take a website some time, and there is none better than this one. I see no reason for Valve to lie... forget the OEM-deal BS; a developer would not cut into its own customer base.

I always suspected this; it was actually talked about here about a year ago.
 
I spent some time the other day staring at the screen and at the image I captured, and there were no differences. I believe NVIDIA are doing something, but only in order to ensure the captured AA output is the same as what's displayed on screen, since some samples are combined in the RAMDAC and hence the fully averaged samples are never stored in the frame buffer.
 
It is dead simple to prove/disprove this claim.

Set up a camera on a tripod in front of a monitor.

Photograph the game screen.

Make a screengrab, display it at the correct resolution using an image viewer such as IrfanView, and photograph that as well.

You now have two directly comparable images to work with. Any distortion introduced by photographing the screen won't matter so long as the camera and monitor were not moved between shots.
 
radar1200gs said:
It is dead simple to prove/disprove this claim.

Set up a camera on a tripod in front of a monitor.

Photograph the game screen.

Make a screengrab, display it at the correct resolution using an image viewer such as IrfanView, and photograph that as well.

You now have two directly comparable images to work with. Any distortion introduced by photographing the screen won't matter so long as the camera and monitor were not moved between shots.
That's not a bad idea, since you could even write a simple program to change a few pixels in a test image to see whether the camera could detect a very small change (a rough sketch of such a program is below).
I'm wondering, though, if drivers could raise/lower IQ based on how fast you're moving. Say you're moving like a banshee through UT2003: how much of the IQ could be lowered without the drop being visually detectable, and when you stop the IQ could be raised again. Just a thought.

later,
epic
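
For what it's worth, here is a minimal sketch of the kind of test program epic describes, assuming the test image is a plain 8-bit binary PPM (P6); the filenames, the number of pixels touched, and their positions are arbitrary choices for illustration.

Code:
// Tweak a handful of pixels in a PPM test image by a barely visible amount,
// so a later photo/screengrab comparison has a known difference to look for.
// Hypothetical usage: tweak_pixels original.ppm tweaked.ppm
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s in.ppm out.ppm\n", argv[0]);
        return 1;
    }
    std::FILE* in = std::fopen(argv[1], "rb");
    if (!in) { std::perror("fopen"); return 1; }

    int w = 0, h = 0, maxval = 0;
    if (std::fscanf(in, "P6 %d %d %d", &w, &h, &maxval) != 3 || maxval != 255) {
        std::fprintf(stderr, "expected a simple 8-bit binary PPM\n");
        return 1;
    }
    std::fgetc(in);  // consume the single whitespace byte after the header

    std::vector<unsigned char> pix(static_cast<size_t>(w) * h * 3);
    if (std::fread(pix.data(), 1, pix.size(), in) != pix.size()) return 1;
    std::fclose(in);

    // Nudge the red channel of five scattered pixels by one step (the count
    // and positions are arbitrary -- small enough to be hard to eyeball).
    const size_t pixel_count = pix.size() / 3;
    for (size_t i = 1; i <= 5; ++i)
        pix[(pixel_count / 6) * i * 3] ^= 0x01;

    std::FILE* out = std::fopen(argv[2], "wb");
    if (!out) { std::perror("fopen"); return 1; }
    std::fprintf(out, "P6\n%d %d\n%d\n", w, h, maxval);
    std::fwrite(pix.data(), 1, pix.size(), out);
    std::fclose(out);
    return 0;
}

You would display the original and the tweaked copy in turn, photograph both, and check whether the known differences survive in the photos.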
 
This probably won't work. You can't photograph a single frame with a camera, as the image isn't displayed one frame at a time but one pixel at a time. A photograph will show the most recently drawn pixels as quite bright and the rest of the image much dimmer. As you can't make the camera trigger at a particular point in the refresh, you can't get two shots that you could compare.

If you take a longer exposure (say 1/2 s) then you would get multiple frames blended together, reducing the intensity changes, but the drivers might be altering the image on a frame-by-frame basis, e.g. increasing quality when no change in scene data occurs.


You would really need some kind of external framegrabber, or an app which can access the framebuffer without the drivers detecting it. The way I suspect most capture apps work is to do a Lock on the DDraw surface; since the request for a lock goes through the drivers, it is very easy for them to detect it. Instead you would need to find the linear address of the surface some other way and get yourself mapped into the DDraw driver's address space. Not sure how you do this, but it is possible.

CC
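
To make the point above concrete, this is a rough sketch of the "ordinary" grab path CC is describing: lock the DirectDraw primary surface and copy its contents out. The Lock request is serviced by the display driver, which is exactly why a driver could notice a capture at that moment. This assumes the DirectX 7 headers, skips most error handling, and is meant to illustrate the detectable path rather than serve as a capture tool.

Code:
// Grab the primary surface the "ordinary" way, via IDirectDrawSurface7::Lock.
// Because Lock is serviced by the display driver, the driver can tell a
// screen capture is happening at this exact point.
#define INITGUID
#include <windows.h>
#include <ddraw.h>
#include <cstdio>
#include <cstring>
#include <vector>
#pragma comment(lib, "ddraw.lib")

int main() {
    IDirectDraw7* dd = nullptr;
    if (FAILED(DirectDrawCreateEx(nullptr, reinterpret_cast<void**>(&dd),
                                  IID_IDirectDraw7, nullptr)))
        return 1;
    dd->SetCooperativeLevel(nullptr, DDSCL_NORMAL);

    // Ask for the primary surface (the visible front buffer).
    DDSURFACEDESC2 desc = {};
    desc.dwSize = sizeof(desc);
    desc.dwFlags = DDSD_CAPS;
    desc.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE;

    IDirectDrawSurface7* primary = nullptr;
    if (FAILED(dd->CreateSurface(&desc, &primary, nullptr))) {
        dd->Release();
        return 1;
    }

    DDSURFACEDESC2 locked = {};
    locked.dwSize = sizeof(locked);
    // The Lock call below is the step a driver could special-case.
    if (SUCCEEDED(primary->Lock(nullptr, &locked,
                                DDLOCK_READONLY | DDLOCK_WAIT, nullptr))) {
        std::vector<unsigned char> copy(
            static_cast<size_t>(locked.lPitch) * locked.dwHeight);
        std::memcpy(copy.data(), locked.lpSurface, copy.size());
        primary->Unlock(nullptr);
        std::printf("grabbed %ux%u surface, pitch %ld bytes\n",
                    static_cast<unsigned>(locked.dwWidth),
                    static_cast<unsigned>(locked.dwHeight),
                    static_cast<long>(locked.lPitch));
        // copy[] now holds raw framebuffer bytes; writing them to disk is
        // left out of this sketch.
    }
    primary->Release();
    dd->Release();
    return 0;
}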
 
Captain Chickenpants said:
This probably won't work. You can't photograph a single frame with a camera, as the image isn't displayed one frame at a time but one pixel at a time. A photograph will show the most recently drawn pixels as quite bright and the rest of the image much dimmer. As you can't make the camera trigger at a particular point in the refresh, you can't get two shots that you could compare.
Not quite. If you set your exposure to closely match the refresh rate (say a 1/90 s exposure against an 85 Hz refresh) then nearly every pixel on the screen will be redrawn during the shot: 1/90 s is about 11.1 ms against an 11.8 ms refresh period, so roughly 94% of the frame is scanned out while the shutter is open. It comes out pretty good, actually.

Pixel fade won't affect the camera because it will capture the pixel at its brightest.
 
It works and works very well.

Ideally you need a camera with a good macro mode and good manual exposure controls. A remote shutter release also helps.
 
It won't work well enough to allow a binary diff of the images though, which I thought was the point? You will be unable to state for certain whether differences are caused by someone changing the quality or by mismatches in when you took the photo.

CC
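
For reference, the comparison being ruled out here would be a straight pixel-for-pixel diff of two grabs, along these lines (again assuming plain 8-bit binary PPM files of identical size; purely illustrative):

Code:
// Count how many pixels differ between two same-sized binary PPM grabs.
#include <cstdio>
#include <vector>

static std::vector<unsigned char> load_ppm(const char* path, int& w, int& h) {
    std::FILE* f = std::fopen(path, "rb");
    int maxval = 0;
    std::vector<unsigned char> pix;
    if (f && std::fscanf(f, "P6 %d %d %d", &w, &h, &maxval) == 3) {
        std::fgetc(f);  // single whitespace byte after the header
        pix.resize(static_cast<size_t>(w) * h * 3);
        std::fread(pix.data(), 1, pix.size(), f);
    }
    if (f) std::fclose(f);
    return pix;
}

int main(int argc, char** argv) {
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s a.ppm b.ppm\n", argv[0]);
        return 1;
    }
    int wa = 0, ha = 0, wb = 0, hb = 0;
    std::vector<unsigned char> a = load_ppm(argv[1], wa, ha);
    std::vector<unsigned char> b = load_ppm(argv[2], wb, hb);
    if (a.empty() || b.empty() || wa != wb || ha != hb) {
        std::fprintf(stderr, "images missing or sizes differ\n");
        return 1;
    }
    size_t differing = 0;
    for (size_t p = 0; p < a.size(); p += 3)
        if (a[p] != b[p] || a[p + 1] != b[p + 1] || a[p + 2] != b[p + 2])
            ++differing;
    std::printf("%zu of %zu pixels differ\n", differing, a.size() / 3);
    return 0;
}

On exact grabs from something like a framegrabber this comparison is meaningful; on two photographs it isn't, which is CC's point.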
 
Couldn't you just use the video out and take a snapshot of the output with a video grabber? It might limit resolution to 800x600 or 1024x768 or so, but it seems more accurate than a photo camera.
 
Florin said:
Couldn't you just use the video out and take a snapshot of the output with a video grabber? It might limit resolution to 800x600 or 1024x768 or so, but it seems more accurate than a photo camera.
The output that you can grab will be NTSC resolution; most video grabbers are set up to grab NTSC also.

320x244 or such. Not so good.
 
Frame grabber cards are discussed in the second and third pages of this topic: http://www.beyond3d.com/forum/viewtopic.php?t=7903

One card like this is the Unigraf UFG-01 Frame Grabber described here: http://www.unigraf.fi/PAGES/Framegrabber.htm

Another card is the iView iVD-RGB here: http://www.iviewdata.com/html/ivd-rgb.html

These cards are able to capture from DVI, so the image would be pixel perfect. The above cards grab the entire frame all at once, so there is no motion blur or other artifacts.

These cards also appear to be able to do motion capture. However, they appear to lack a notch in the front of the edge connector, limiting them to 5V operation. This means the card can only run at 33MHz 32-bit operation with a maximum transfer rate of 133MB/s. Uncompressed video at 1024x768 at 60Hz with 24-bit color would take 135 MB/s, plus maybe 20 MB/s of overhead. A standard 33MHz PCI bus would not be able to transfer data at a rate of 155 MB/s without compression, and compression would be unacceptable when comparing image quality. So full-rate uncompressed motion capture is out of the question, but any sort of tool that bypasses the driver is still a good thing.
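
As a quick sanity check of those figures, the arithmetic for uncompressed 1024x768, 24-bit color at 60 Hz works out as follows (nothing here is specific to either card):

Code:
// Back-of-the-envelope bandwidth for uncompressed 1024x768x24bpp at 60 Hz.
#include <cstdio>

int main() {
    const double bytes_per_frame = 1024.0 * 768.0 * 3.0;   // 24-bit color = 3 bytes/pixel
    const double bytes_per_sec   = bytes_per_frame * 60.0; // 60 frames per second
    const double mib             = 1024.0 * 1024.0;
    std::printf("%.2f MiB per frame, %.1f MiB/s sustained\n",
                bytes_per_frame / mib, bytes_per_sec / mib);
    // Prints roughly 2.25 MiB per frame and ~135 MiB/s, which already exceeds
    // what a plain 33 MHz / 32-bit PCI slot (133 MB/s peak) can sustain once
    // any overhead is added.
    return 0;
}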

Granted you would have to have a second computer on the desk to do anything like this.
 
I think the best "affordable" comparison solution is to put two identical TFT monitors side by side and have one show a screenshot and the other one show the 3D rendered scene. It would be even better if you had a monitor that could switch quickly from one input to another.

Unfortunately, my monitor takes too long (about 1 second) to switch from DVI to analog input.

epicstruggle said:
I'm wondering, though, if drivers could raise/lower IQ based on how fast you're moving. Say you're moving like a banshee through UT2003: how much of the IQ could be lowered without the drop being visually detectable, and when you stop the IQ could be raised again. Just a thought.

later,
epic
That would be very hard to do, as the graphics card/driver has no concept of "moving inside a scene", in fact not even of a "scene".
 