The motion interpolation technology was not used in the final Force Unleashed 2 product. It was only a prototype that had some issues (the game's Wikipedia page has links describing them). We also had a motion interpolation tech prototype running in Trials HD (2009), but we removed it before launch as well, because of image quality issues in some specific corner cases.
It's not as simple as that. If you want to use vsync (to prevent tearing), you need to wait for the vertical refresh to swap the frame. If you start waiting too late, you have lost a refresh (the wait lasts until the start of the next frame). Under normal circumstances (when not using low level hacks or the multiple HW ring buffers of recent hardware), you cannot submit any draw calls while you are waiting for vsync (vsync blocks the CPU core, and only one core can submit the calls). As the GPU runs asynchronously, the CPU doesn't know how much GPU time the draw calls will take, and thus doesn't know exactly when to stop submitting draw calls and start waiting for the vsync. If you submit too many draw calls before the vsync, you will drop a frame (16.6 ms lost); if you submit too few, you will lose some GPU time (the GPU just idles). That's the reality when you are using a standard graphics API. On PC you need to accept that, but on consoles you could do some low level hacks to improve the situation. I don't know enough PS3 specifics to guess how it could be done there, but I definitely have some ideas how you could hack around this limitation on x360 (using XPS and predicates). Of course you could improve the situation by using adaptive vsync instead, but that just trades the issue for tearing: the more you guess the draw call timings wrong on the CPU side, the more you will tear. It is impossible to exactly predict how much time the GPU will spend on a certain draw call (as exact prediction of any complex program sequence takes as much time as executing the sequence).
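To make that dilemma concrete, here is a rough C++-style sketch of the CPU-side frame loop. Everything in it is made up for illustration (estimateGpuMs, submitDrawCall, waitForVsyncAndSwap are stand-ins, not any real engine or API); the only point is that the CPU has to guess the GPU cost before it commits to blocking on vsync:

```cpp
#include <vector>

struct DrawCall { int id; }; // stands in for mesh/material/state

double estimateGpuMs(const DrawCall&)  { return 0.1; } // CPU-side guess only
void   submitDrawCall(const DrawCall&) {}              // queue work for the GPU
void   waitForVsyncAndSwap()           {}              // blocks this core until refresh

void renderFrame(const std::vector<DrawCall>& calls, double gpuBudgetMs = 16.6)
{
    double guessedMs = 0.0;
    for (const DrawCall& dc : calls)
    {
        guessedMs += estimateGpuMs(dc);
        if (guessedMs > gpuBudgetMs)
            break; // stop submitting based on a guess, not on real GPU timing
        submitDrawCall(dc);
    }
    // If the guess was too low, the GPU misses the refresh and a whole frame
    // (16.6 ms) is dropped; if it was too high, the GPU sits idle until vsync.
    // With vsync on, this call also blocks the submitting core, so no further
    // draw calls can be issued while we wait.
    waitForVsyncAndSwap();
}

int main()
{
    std::vector<DrawCall> calls(500, DrawCall{0});
    renderFrame(calls);
    return 0;
}
```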
Of course you don't need perfect prediction to use the majority of the interpolated (cheap) frame's GPU time to perform draw calls; you will only lose some of it. But that's not the only downside of motion vector interpolation. In order to have good quality, you need to interpolate between two frames (extrapolation will give you bad artifacts when objects bounce/collide). This kind of interpolation adds half a 30 fps frame's length of extra latency, as you can only show the interpolated frame after you have finished both frames. The real second frame will be shown half a 30 fps frame (one 60 fps frame) later. So in the end, you will see slightly higher input lag than you would in a normal 30 fps game.
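Putting rough numbers on that (just the arithmetic, assumed 33.3 ms / 16.7 ms frame times, nothing engine specific): the interpolated A/B frame can only be shown once B is finished, so the real frame B slips one 60 fps frame later than it would in a plain 30 fps game.

```cpp
#include <cstdio>

int main()
{
    const double frame30 = 1000.0 / 30.0; // 33.3 ms
    const double frame60 = 1000.0 / 60.0; // 16.7 ms

    // Plain 30 fps: frame B is shown as soon as it is finished.
    double plainShowB  = frame30;
    // Interpolation between A and B: the blended frame takes B's original
    // slot, and the real frame B is shown one 60 fps frame later.
    double interpBlend = frame30;
    double interpShowB = frame30 + frame60;

    std::printf("plain 30 fps: B shown at %.1f ms\n", plainShowB);
    std::printf("interpolated: A/B blend at %.1f ms, B at %.1f ms (+%.1f ms lag)\n",
                interpBlend, interpShowB, interpShowB - plainShowB);
    return 0;
}
```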
In Trials HD we used an alternative way to add an "extra" frame without running the complex shaders and lighting. We did render the geometry again on the "extra" frame, but we used a dirt cheap texture re-projection shader instead of the heavy material/lighting shaders. This way the "extra" frame had the most recent object position/rotation data from the physics engine (which was running at 60 fps). The frame latency was identical to a real 60 fps game. The downside compared to motion vector interpolation is of course the extra geometry pass cost (around 2 ms in our case), but that's still a huge gain compared to the 16.6 ms full frame cost. But that wasn't the reason why the technique was ultimately dropped (performance was almost doubled). We dropped it because of the graphics glitches caused by the re-projection. With those older DX9 era GPUs the performance wasn't good enough to refresh only the parts of the scene that were over a predetermined error metric (or a dynamic one calculated by estimating the scene cost). A modern DX11 compute shader driven rendering pipeline can do that much more efficiently. I expect to see similar techniques becoming more popular in the future (if 1080p + 60 fps becomes the preferred choice).
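To illustrate the idea (this is a generic sketch of texture re-projection written as plain C++ instead of shader code, not the actual Trials HD shader; all names and types below are made up): the geometry is rasterized again with the latest 60 fps transforms, and the per-pixel work is just a lookup into the previous fully shaded frame.

```cpp
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Standard matrix * column-vector helper.
Vec4 mul(const Mat4& m, const Vec4& v)
{
    return Vec4{
        m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z + m.m[0][3]*v.w,
        m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z + m.m[1][3]*v.w,
        m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z + m.m[2][3]*v.w,
        m.m[3][0]*v.x + m.m[3][1]*v.y + m.m[3][2]*v.z + m.m[3][3]*v.w };
}

// Nearest-neighbor fetch from the previous frame's shaded color buffer.
Vec4 samplePrevColor(const Vec4* img, int w, int h, Vec2 uv)
{
    int px = (int)(uv.x * (w - 1) + 0.5f);
    int py = (int)(uv.y * (h - 1) + 0.5f);
    return img[py * w + px];
}

// Runs on the "extra" frame instead of the heavy material/lighting shaders:
// project this surface point into the previous full frame's camera and reuse
// the color it had there.
Vec4 reprojectPixel(Vec3 worldPos,            // from the re-rendered geometry
                    const Mat4& prevViewProj, // camera of the previous full frame
                    const Vec4* prevColor, int width, int height,
                    Vec4 fallbackColor)       // used when there is no history
{
    Vec4 clip = mul(prevViewProj, Vec4{worldPos.x, worldPos.y, worldPos.z, 1.0f});
    if (clip.w <= 0.0f)
        return fallbackColor;                 // behind the previous camera

    Vec2 uv { 0.5f + 0.5f * clip.x / clip.w,  // NDC -> [0,1] texture coords
              0.5f - 0.5f * clip.y / clip.w };
    if (uv.x < 0.0f || uv.x > 1.0f || uv.y < 0.0f || uv.y > 1.0f)
        return fallbackColor;                 // newly revealed area: no history

    // Disocclusions and newly revealed surfaces are exactly the corner cases
    // that produce the re-projection glitches mentioned above.
    return samplePrevColor(prevColor, width, height, uv);
}
```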