I was talking with a friend, and he mentioned that all their talk about "perception science" mumbo jumbo may mean they're planning to use the Z-buffer to adjust coding per block, so that stuff further away gets lossier coding.
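Just to make that concrete, here's a rough C sketch of what depth-driven quantization could look like. This is purely my own illustration, nothing OnLive has published; the thresholds and the QP-offset convention are invented:

    /* Hypothetical per-macroblock quantizer adjustment based on depth.
     * depth[]: normalized Z-buffer values in [0,1], 0 = near, 1 = far;
     * stride is the width of the depth image in pixels.
     * Returns an offset added to the encoder's base quantizer:
     * a larger QP means coarser quantization, i.e. lossier coding. */
    static int qp_offset_for_block(const float *depth, int stride,
                                   int bx, int by, int block_size)
    {
        float sum = 0.0f;
        for (int y = 0; y < block_size; y++)
            for (int x = 0; x < block_size; x++)
                sum += depth[(by * block_size + y) * stride
                             + (bx * block_size + x)];

        float avg = sum / (float)(block_size * block_size);

        /* Map average depth to a QP offset; thresholds are made up. */
        if (avg > 0.9f) return 6;   /* far background: much lossier   */
        if (avg > 0.6f) return 3;   /* mid-distance: somewhat lossier */
        return 0;                   /* near geometry: full quality    */
    }

One wrinkle: a perspective Z-buffer is nonlinear (most of its precision sits near the camera), so you'd want to linearize the values before averaging them like this.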
That sounded like a semi-workable idea (you don't always want distant stuff to be lossy). I'm surprised this 'revolutionary' compression they've supposedly created is just a more mundane parallelized codec.
The problem is that "distant" is not a good predictor of "unimportant": if you're playing the sniper in an FPS, your enemies are far more important than the rock you're hiding behind.
Also, it requires significant cooperation from the game to access the depth buffer, yet OnLive claim games will be easily "ported" to their platform. Someone like NVIDIA can access the depth buffer to do stereo 3D because they control the driver, and even that doesn't work perfectly for every game. Depth buffers aren't what they were in the 3dfx days: custom formats, many passes, deferred rendering, etc. ;-)
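To show what that cooperation would actually involve: the game itself could read its depth back with a standard OpenGL call and hand it to the encoder. A minimal sketch (my illustration; OnLive haven't described any such interface):

    #include <GL/gl.h>

    /* Read the currently bound depth buffer back to the CPU so a video
     * encoder could use it. glReadPixels with GL_DEPTH_COMPONENT is
     * standard OpenGL, but the readback stalls the GPU pipeline, and
     * with deferred rendering the buffer you grab may not even hold
     * the final scene depth, which is exactly the problem above. */
    void grab_depth(int width, int height, float *out_depth)
    {
        glReadPixels(0, 0, width, height,
                     GL_DEPTH_COMPONENT, GL_FLOAT, out_depth);
    }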
And do you trust the OnLive guys to make a custom driver that actually works?