Hey all, haven't been here in a while, but Tahiti has rekindled my passion for 3D graphics and hardware. I'm planning on doing a report on Tahiti and modern games with the same kind of insight as my
bandwidth analysis from a few years ago.
For now though, I thought I'd make a quick remark on the TechReport review:
This would be bringing us back to the microstutter argument all over again. It's there. Some people see it, some don't. The fact is on the graph I linked, the 7970 is obviously more "messy" than either of the other two single GPU cards that it's compared against.
You have to understand that what TechReport records is not necessarily what you see. They use FRAPS to log individual frame times, and this works via a D3D software hook that monitors when IDirect3DDevice9::Present() is called. It doesn't monitor changes on the screen like Digital Foundry does in their Eurogamer face-offs. Games usually use the past few frame times to predict the next step, so stuttered Present() calls would still have objects move in steps that match unstuttered calls.
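To make the extrapolation point concrete, here's a toy sketch (my own made-up numbers and a simple moving-average predictor, not any specific engine's logic) of how animation step sizes derived from recent frame history end up much more even than the raw Present() deltas FRAPS would log:

```python
# Sketch: a game that sizes each animation step from the average of its
# last few frame deltas. Even when Present() timestamps jitter, the
# predicted steps stay nearly uniform, so on-screen motion looks smooth
# despite a "messy" FRAPS trace.

def predicted_steps(present_times, history=3):
    """Per-frame animation step sizes, predicted from past frame deltas."""
    deltas = [b - a for a, b in zip(present_times, present_times[1:])]
    steps = []
    for i in range(len(deltas)):
        # Use up to `history` previous deltas; fall back to the first one.
        window = deltas[max(0, i - history):i] or deltas[:1]
        steps.append(sum(window) / len(window))  # simple moving average
    return steps
```

With a jittery timestamp list like `[0, 16, 32, 48, 80, 86, 102]` (ms), the raw deltas swing from 6 to 32 ms, but the predicted steps stay within a few ms of each other.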
If a driver is to minimize input latency, it wants to get information from the game for the next frame as fast as possible (within reason, as it only queues a frame or two). This could result in stuttered calls from FRAPS' point of view, yet the driver can still put the frames on the screen smoothly, with delays at its discretion. If, OTOH, the driver wanted to ace TechReport's test, it would use a heuristic to insert variable artificial delays somewhere so that calls to Present() were more even, and then more delays at the end of the pipeline to make sure the frames went to the display smoothly.
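Here's a minimal sketch of the first scenario, assuming a hypothetical driver that accepts jittery Present() calls but paces frames onto the display at an even cadence by holding each one back as needed (the function name and numbers are mine, purely illustrative):

```python
# Sketch: frame pacing by a driver. Frames arrive at uneven Present()
# times, but each is held until at least `interval` after the previous
# displayed frame. The slack of up to one interval of extra latency is
# the price paid for even on-screen delivery.

def smooth_display_times(present_times, interval):
    """Return paced display times for jittery Present() timestamps."""
    display = []
    t = 0
    for p in present_times:
        # Never show a frame before it was presented, and never sooner
        # than one interval after the previous displayed frame.
        t = max(t + interval, p)
        display.append(t)
    return display
```

Feeding it presents at `[0, 10, 35, 48, 66, 80]` ms with a 16 ms target interval yields display times spaced exactly 16 ms apart, even though FRAPS would record deltas ranging from 10 to 25 ms.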
You should read their Battlefield 3 Performance article. They get differing results in the same game. In their Fear no Evil test, AMD shows lots of microstutter in FRAPS, but "the Radeons just don't feel much choppier than the GeForces overall." In the Rock and a hard place test, it was the other way around, and "with a GeForce, the latency spikes were very palpable, causing animations seemingly to speed up and slow down wantonly".
Even if the frame times reported by the driver matched those of the display, there's no easy way to summarize it all. Consider the Skyrim 99th percentile numbers you referenced, where the 7970 was "worse". Now look at the actual plot Kaotik posted above. The 99th percentile is virtually the same for all DX11 cards because the slowest 1% of frames all fall within the first 600 frames, where the test looks CPU/PCI-E bound. After that, however, it is the GTX 580 that shows vastly more stuttering according to FRAPS, despite your claim of the opposite based on the 99th percentile time.
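A quick numerical illustration of how this happens (traces and counts invented for the example, not taken from the review): if the slowest frames for both cards sit in a shared CPU-bound section that makes up more than 1% of the run, the 99th percentile lands on those shared frames and comes out identical, no matter how jittery the rest of one trace is.

```python
# Sketch: two frame-time traces share the same slow CPU-bound opening
# section (>1% of all frames). Their 99th-percentile frame times are
# identical even though one trace is far more jittery afterwards.

def percentile_99(frame_times):
    """Naive 99th percentile (nearest-rank style) of a frame-time list."""
    s = sorted(frame_times)
    return s[int(0.99 * (len(s) - 1))]
```

For example, take a shared slow section of 120 frames at 40 ms (2% of a 6000-frame run), then one card running a steady 16 ms and the other alternating 12/20 ms: both report a 99th percentile of 40 ms, yet only one of them stutters for the remaining 98% of the run.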
Looking closely, it appears that roughly the first 2500 frames are CPU limited. If you cut those out of the test, the 7970 is 20% faster than the 580 instead of 10%.
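The arithmetic behind that kind of adjustment is easy to sketch (the traces below are made-up round numbers chosen only to reproduce the 10%-vs-20% effect, not the actual benchmark data):

```python
# Sketch: how excluding a shared CPU-bound section changes the measured
# speedup. When both cards crawl at the same rate for the first chunk of
# the run, that chunk dilutes the GPU-limited difference between them.

def speedup_excluding(frames_a, frames_b, skip=0):
    """Ratio of average frame rates of trace A over trace B (frame times
    in ms), ignoring the first `skip` frames. >1 means A is faster."""
    a, b = frames_a[skip:], frames_b[skip:]
    return (sum(b) / len(b)) / (sum(a) / len(a))
```

With both traces at 30 ms for a 2500-frame CPU-bound section, then 10 ms vs 12 ms for the remaining 2500 frames, the whole-run speedup is a mild 5%, but excluding the CPU-bound section it jumps to the full 20% GPU-limited difference.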