01-Jan-2013, 22:30, #7
3dilettante

Quote:
Originally Posted by Andrew Lauritzen
7) This is precisely why it is important to measure frame latencies instead of frame throughput. We need to keep GPU vendors, driver writers and game developers honest and focused on the task of delivering smooth, consistent frames. Allowing things like a driver to cause a spike one frame so that it can optimize the shaders for the next few and get a higher FPS number for the benchmarks is not okay!
I'm not sure the frame graph example is strong support for this claim.
If this is an example of the driver running shader compilation in the middle of a frame that is trying to use those shaders, it is either not in the service of improving successive frames, or it's doing a very poor job of it.
The next few "fast" frames, as you've noted, are probably already processed and queued, just waiting on the final submission of the stalled frame. The rest of the graph shows no change in the performance of the shaders the driver allegedly optimized at the cost of jacking up one frame's latency.
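
To make the latency-versus-FPS point concrete, here's a quick sketch (my own synthetic numbers, not read off the graph; the 16.7 ms baseline, 60 ms spike, and 1000-frame run are assumptions) showing how a single stalled frame barely moves the average FPS while dominating the worst-case frame time:

Code:
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // ~60 Hz baseline with one mid-run stall (both values are assumptions).
    std::vector<double> frame_ms(1000, 16.7);
    frame_ms[500] = 60.0;

    double total_ms = 0.0;
    for (double ms : frame_ms) total_ms += ms;
    double worst_ms = *std::max_element(frame_ms.begin(), frame_ms.end());

    // The average (i.e. FPS) barely notices the stall; the worst frame does.
    std::printf("avg FPS:     %.2f\n", 1000.0 * frame_ms.size() / total_ms);
    std::printf("worst frame: %.1f ms (baseline 16.7 ms)\n", worst_ms);
    return 0;
}
That comes out to roughly 59.7 FPS versus 59.9 FPS without the stall, which is why the average tells you nothing about the hitch.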

The otherwise steady-state nature of the graph, and the lack of any real improvement beyond the frames whose submission was held up by the engine, make me think this isn't a case of optimizing the next few frames for benchmark numbers, but some kind of glass jaw in whatever the system is doing to maintain its average performance numbers.
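
If the raw frame-time capture is available, a trivial before/after comparison would settle whether that long frame bought anything. Something along these lines (placeholder data and an assumed spike index, just to show the shape of the check):

Code:
#include <cstdio>
#include <numeric>
#include <vector>

// Mean frame time over [begin, end) of a captured series.
static double mean_ms(const std::vector<double>& v, size_t begin, size_t end) {
    return std::accumulate(v.begin() + begin, v.begin() + end, 0.0) /
           static_cast<double>(end - begin);
}

int main() {
    // Placeholder capture: substitute the real frame-time series here.
    std::vector<double> frame_ms(1000, 16.7);
    const size_t spike = 500;  // assumed index of the long frame
    frame_ms[spike] = 60.0;

    // If the long frame really paid for faster shaders, the "after" mean
    // should drop; a flat result points at a glass jaw instead.
    std::printf("mean before spike: %.2f ms\n", mean_ms(frame_ms, 0, spike));
    std::printf("mean after spike:  %.2f ms\n",
                mean_ms(frame_ms, spike + 1, frame_ms.size()));
    return 0;
}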

Since we're talking about tens of milliseconds, I was wondering whether there is some kind of driver problem not directly attributable to the GPU or CPU silicon, for which timescales like these are glacial (which doesn't rule out a low-level software or system issue as the cause).
Is something being juggled over the PCIe bus, or is there some kind of memory buffer issue?