Am I wrong here in my understanding, but isn't Killzone's MP still rendering half the frame each frame and then reprojecting (whatever) the rest? It's still rendering at 50-60 fps, but half the image for each frame is created out of information from previous frames. Correct me if I'm wrong.
So first, no, the MP does not use the same temporal technique as the paper (nor the Force Unleashed prototype, nor what sebbi had mentioned). That approach might introduce a frame of latency, or, as sebbi mentioned, unpredictable cost and scaling artifacts. The way I see it, the number of missing pixels that needs to be computed between frames fluctuates depending on the scene, and while you get the camera vector to compensate for the motion of the whole scene, predicting moving objects in the scene is costly (motion analysis?), so the compute time is hard to manage.
The KZ MP probably uses something far simpler than that. The principle is to render each frame at 960x1080, treat the current frame as the 960 odd vertical lines (columns), and combine it with the previous frame, which fills in the 960 even lines. It's similar to how a TV combines a 1080i signal into a full progressive frame (but not exactly).
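To make that column-interleave idea concrete, here's a rough C++ sketch of the merge step. The buffer layout, the names, and the assumption that the current frame owns the odd columns are all my own for illustration; this is not Guerrilla's actual code, just the general principle.

```cpp
#include <cstdint>
#include <vector>

// One half-resolution frame: 960 columns x 1080 rows (hypothetical layout).
struct HalfFrame {
    static constexpr int kWidth  = 960;
    static constexpr int kHeight = 1080;
    std::vector<uint32_t> pixels; // kWidth * kHeight, row-major
};

// Combine the current half-frame (covering, say, the odd output columns) with
// the previous half-frame (covering the even columns) into a 1920x1080 image.
std::vector<uint32_t> Interleave(const HalfFrame& current,
                                 const HalfFrame& previous,
                                 bool currentIsOdd)
{
    constexpr int fullWidth = HalfFrame::kWidth * 2; // 1920
    std::vector<uint32_t> out(fullWidth * HalfFrame::kHeight);

    for (int y = 0; y < HalfFrame::kHeight; ++y) {
        for (int x = 0; x < fullWidth; ++x) {
            const bool oddColumn = (x & 1) != 0;
            // Pick whichever half-frame owns this output column.
            const HalfFrame& src =
                (oddColumn == currentIsOdd) ? current : previous;
            out[y * fullWidth + x] =
                src.pixels[y * HalfFrame::kWidth + (x / 2)];
        }
    }
    return out;
}
```

Each new frame you'd flip `currentIsOdd`, so every output column is refreshed every other frame, which is exactly why it behaves like an interlace/deinterlace scheme rather than true 1080p.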
There might also be some motion compensation or reprojection applied to the previous frame, or they could just judder the camera ever so slightly to achieve the same effect, getting more detail into the combined 1920x1080 output. There is probably some resampling involved in the merge as well. It produces more detail than you can get out of straight upscaling; however, saying that it reproduces the same level of detail as native 1080p is just ludicrous.
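If they do reproject the previous half-frame before merging, it could conceptually look something like the sketch below: fetch each previous-frame pixel from where that surface was last frame, using a screen-space motion vector. The per-pixel motion buffer and the nearest-neighbour clamp fetch are purely illustrative assumptions on my part; the real thing (if it exists at all) would live in a shader and handle disocclusions far more carefully.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Clamp-and-fetch a pixel from a w x h buffer (edge clamp on out-of-range).
static uint32_t Fetch(const std::vector<uint32_t>& buf, int w, int h, int x, int y)
{
    x = std::clamp(x, 0, w - 1);
    y = std::clamp(y, 0, h - 1);
    return buf[y * w + x];
}

// Reproject the previous half-frame: for every pixel, look up where that
// surface was last frame (motion vector points from current position back to
// the previous one, in half-res pixels) and copy the colour from there.
std::vector<uint32_t> ReprojectPrevious(const std::vector<uint32_t>& prev,
                                        const std::vector<Vec2>& motion,
                                        int w, int h)
{
    std::vector<uint32_t> out(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const Vec2 mv = motion[y * w + x];
            out[y * w + x] = Fetch(prev, w, h,
                                   static_cast<int>(x + mv.x + 0.5f),
                                   static_cast<int>(y + mv.y + 0.5f));
        }
    }
    return out;
}
```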
Given that the number of pixels being operated on is the same every frame (it's essentially just a blend, like a motion blur), the compute cost is rather consistent, so it's far more predictable and manageable.