sireric said: There's a rather elaborate system in place, but the issue is that access patterns vary greatly even within one application -- one scene might be dominated by a shader, another by a single texture, and another by geometry (imagine spinning around in a room). You could optimize on a per-scene basis, but that's more than we plan at this point (a lot of work). But we do plan on improving the "average" for each application. The basis for this, btw, is us measuring the internal performance of apps (in real time), and then adjusting things based on this. Multiple levels of feedback and thinking involved.
Oh, but optimizing on a per-scene basis would be what makes the project fun!
Still, I'm glad to hear that you are actually measuring apps in real time and adjusting things based on that! Here's what I was thinking, though:
You could start out by writing an internal program to randomly vary your input parameters, take configurations you've written (or better yet, try to generate them based on some heuristic), and then get a quantitative score (memory throughput, fps, etc.). You export the data, use something like Weka (a nice data-mining tool) to build model trees (basically decision trees with linear regression models at the leaf nodes, which makes them able to handle co-dependent attributes), and do a 10-fold cross-validation test against your input data.
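The data-collection step could look something like this rough sketch. Everything here is hypothetical: the parameter names, their ranges, and the benchmark function are made-up stand-ins for whatever the driver actually exposes and measures; the point is just the shape of the loop (randomly vary inputs, score each configuration, export for a data-mining tool like Weka).

```python
import csv
import random

# Hypothetical tunable parameters -- names and ranges are illustrative,
# not actual driver settings.
PARAM_SPACE = {
    "cache_split": [0.25, 0.5, 0.75],
    "prefetch_depth": [1, 2, 4, 8],
    "tile_size": [16, 32, 64],
}

def random_config(rng):
    """Draw one configuration uniformly at random from the parameter space."""
    return {name: rng.choice(values) for name, values in PARAM_SPACE.items()}

def benchmark(config):
    """Stand-in for a real measurement (fps, memory throughput, ...).
    Faked here so the sketch runs end to end."""
    return (config["cache_split"] * 100
            + config["prefetch_depth"] * 3
            - abs(config["tile_size"] - 32) * 0.5)

def collect(n_samples, path, seed=0):
    """Randomly vary the inputs, score each configuration, and export
    the results as CSV for offline model building."""
    rng = random.Random(seed)
    fields = list(PARAM_SPACE) + ["score"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for _ in range(n_samples):
            cfg = random_config(rng)
            cfg["score"] = benchmark(cfg)
            writer.writerow(cfg)

collect(200, "configs.csv")
```

The exported table (one row per sampled configuration, score as the target attribute) is exactly what a tool like Weka wants as input for building and cross-validating a model tree.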
Once you have the model trees built, you're golden. You incorporate them into your drivers, run your internal real-time measurements, feed the observed attributes into the model tree, and pick the configuration it predicts will perform best. It would be even more interesting to dig into the configurations rather than treat them as black boxes -- find out why certain configurations behave better than others and modify them dynamically...
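The runtime side could be sketched roughly like this. The "model tree" here is a toy with one hand-written split and trivial leaf models (a real learned tree would be deeper and its coefficients would come from the training data above); the attribute and parameter names are again illustrative. What it shows is the selection step: given measured attributes, enumerate candidate configurations and take the one with the best predicted score.

```python
import itertools

# Illustrative parameter space (hypothetical names).
PARAM_SPACE = {
    "prefetch_depth": [1, 2, 4, 8],
    "tile_size": [16, 32, 64],
}

def predicted_score(attrs, config):
    """Toy model tree: one decision split on a measured attribute,
    with a simple linear scoring model at each leaf."""
    if attrs["texture_bound"]:   # scene dominated by texture fetches
        return config["prefetch_depth"] * 5 - config["tile_size"] * 0.1
    else:                        # geometry/shader dominated
        return config["tile_size"] * 0.2 + config["prefetch_depth"]

def best_config(attrs):
    """Enumerate candidate configurations and pick the one the model
    predicts will perform best under the measured attributes."""
    candidates = [dict(zip(PARAM_SPACE, values))
                  for values in itertools.product(*PARAM_SPACE.values())]
    return max(candidates, key=lambda cfg: predicted_score(attrs, cfg))

# e.g. a texture-bound scene favors deep prefetch and small tiles here
print(best_config({"texture_bound": True}))
```

Since the candidate set is small and the tree lookup is cheap, this selection could plausibly run inside the driver's existing feedback loop each time the measured workload shifts.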
Oh, this sounds fun. You are lucky.
Nite_Hawk