What things break current implementations of SLI?

Hello All,

Dave just said something in a recent thread that got me thinking. What kinds of things in use today break NVidia's SLI, and will they continue to do so in SLI's current form? Are these things important, and are they likely to be used for years to come? Are there good alternatives so that games can be more SLI friendly?


I think Dave mentioned something about rendering to a texture (no idea what that is). He seemed to imply that it was useful and would likely be used more often rather than less often, if I understood him correctly. What is an example of rendering to a texture? What types of things would we lose if developers stopped using render to texture? Are there any good alternatives so that apps can be more SLI friendly?

It would be nice to get a list of SLI-incompatible technologies (can't think of a better word). We would then be better able to tell which games will likely never support SLI. Also, once we have a list, maybe we could think of some areas where SLI could be improved to support some of these features.

Just a few thoughts,
Dr. Ffreeze
 
Render to texture, which is THE main problem case for SLI, is increasingly used to post-process full frames, e.g. for HDR rendering (where you run a final pass to reduce the dynamic range to something displayable once you are done with everything else), motion trail-type effects (where you blend the current frame with the previous frame to fake motion blur), blur/DOF effects, etc. For some of these full-frame effects it isn't completely impossible to do SLI, but I imagine it must be a rather arduous task for e.g. Nvidia's driver guys to determine which cases of RTT can be SLIed and which cannot, and to add the proper application/behavior detection code (e.g. HDR and motion trails can easily be SLIed, but blur cannot). I suspect this is the main reason why Nvidia is so slow to produce SLI profiles for games.
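To make the pattern concrete, here is a minimal sketch of the render-to-texture flow described above, using Direct3D 9 as the API of the era. Names like g_pDevice, RenderScene() and DrawFullScreenQuad() are hypothetical placeholders, not anything from a real engine; the point is the data flow: the scene is drawn into an off-screen texture, and a second full-screen pass then reads that texture to produce the final image (tone mapping, blur, or a blend with last frame's result for motion trails).

#include <d3d9.h>

extern IDirect3DDevice9* g_pDevice;   // assumed to be created elsewhere
extern void RenderScene();            // hypothetical: draws the 3D scene
extern void DrawFullScreenQuad();     // hypothetical: screen-sized quad with the post-process shader

void RenderFrameWithPostProcess(IDirect3DTexture9* pSceneTex,
                                IDirect3DSurface9* pBackBuffer)
{
    g_pDevice->BeginScene();

    // 1. Redirect rendering into the off-screen texture instead of the back buffer.
    IDirect3DSurface9* pSceneSurf = NULL;
    pSceneTex->GetSurfaceLevel(0, &pSceneSurf);
    g_pDevice->SetRenderTarget(0, pSceneSurf);
    g_pDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);
    RenderScene();

    // 2. Switch back to the back buffer and run the post-process pass,
    //    sampling the scene texture we just rendered. For a motion-trail
    //    effect this pass would also sample the previous frame's texture,
    //    which is exactly the kind of frame-to-frame dependency that
    //    alternate-frame SLI has trouble with.
    g_pDevice->SetRenderTarget(0, pBackBuffer);
    g_pDevice->SetTexture(0, pSceneTex);
    DrawFullScreenQuad();

    g_pDevice->EndScene();
    pSceneSurf->Release();
}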

In the case where RTT is used for purposes other than full-frame effects, e.g. to do dynamic reflection effects in a cubemap, SLI won't help performance at all, as both/all cards potentially need every rendering result.
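For the cubemap case, here is a hedged sketch of what the per-frame update typically looks like (again Direct3D 9; SetCameraForCubeFace() and RenderSceneFromReflectionPoint() are made-up helpers). The scene is rendered six times into the faces of a cube texture, and that whole texture is then needed as input by whichever card draws the reflective object, so the work can't simply be split between GPUs.

#include <d3d9.h>

extern IDirect3DDevice9* g_pDevice;
extern void SetCameraForCubeFace(D3DCUBEMAP_FACES face);   // hypothetical helper
extern void RenderSceneFromReflectionPoint();              // hypothetical helper

// Renders the scene into all six faces of a dynamic reflection cubemap.
// Assumes it is called between BeginScene()/EndScene() and that a suitable
// depth-stencil surface is already set; error handling is omitted.
void UpdateReflectionCubemap(IDirect3DCubeTexture9* pCubeTex)
{
    for (int face = D3DCUBEMAP_FACE_POSITIVE_X;
         face <= D3DCUBEMAP_FACE_NEGATIVE_Z; ++face)
    {
        IDirect3DSurface9* pFaceSurf = NULL;
        pCubeTex->GetCubeMapSurface((D3DCUBEMAP_FACES)face, 0, &pFaceSurf);

        g_pDevice->SetRenderTarget(0, pFaceSurf);
        g_pDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0, 1.0f, 0);

        SetCameraForCubeFace((D3DCUBEMAP_FACES)face);
        RenderSceneFromReflectionPoint();

        pFaceSurf->Release();
    }

    // Later in the frame the reflective object is drawn with the cubemap bound:
    //   g_pDevice->SetTexture(0, pCubeTex);
    // Every GPU that might draw that object needs the complete, up-to-date
    // cubemap, which is why splitting this work across cards buys nothing.
}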
 
I think another example would be areas in Doom3 that have monitors showing your player moving around (a security room has one). I believe that is done via RTT.

The background in the Pokemon Stadium level in Super Smash Bros Melee on Gamecube is another example.
 
Just like with dual/multi-CPU systems, there are inherent complexities. But just like in that situation, it's the application programmers who should be aware of the architecture. Graphics drivers could be considered part of the application in this case, but they can only do so much.

With CPUs, we first had the 486, where everything was executed sequentially and optimization meant reaching your goals with the fewest instructions. With the Pentium, two sequential instructions could be executed in one clock cycle, and scheduling became the optimization keyword. Starting from the Pentium Pro, execution became out-of-order and instructions acquired higher latencies; to optimize for this architecture it's very important to break dependencies. Hyper-Threading and dual-core require the programmer to write multi-threaded software.

My point is that even though raw processing power increased regardless, the application programmer has to be aware of the architecture to make a real difference. The new Programming Guide summarizes the optimization approaches for GPUs in an SLI configuration very well, in my opinion.

Anyway, things can only get better in the coming months. Game programmers are very aware of new technologies, and the next generation of NVIDIA drivers will surely resolve some more issues.

What interests me more is: what will the next generation of SLI look like? How will render-to-target problems be handled, or is this the end of the story? What will ATI offer, can it really catch up, and in what time frame?
 