Having recently purchased a modern Samsung TV, I was struck by how effective its image processing is. Watching movies that would ordinarily judder along at 24fps, the picture updates smoothly most of the time despite the low original framerate.
So I've been thinking... If the TV can smooth the framerate quickly, cheaply and without using much power (the full set draws less than 70W, for a panel over a meter across diagonally), why spend billions of transistors and a hundred watts or more drawing 60 unique, high-resolution frames that the eye can't really tell apart anyway?
Why not settle for half the framerate, and then make up the rest with image processing?
I'm not sure what technology is used to accomplish this, some kind of realtime motion-compensated morphing I suppose, but it's clearly far less resource-intensive than full rendering, so surely there could be a big benefit here.
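To make my guess concrete, here's a toy sketch (Python/NumPy) of what I imagine the basic idea is: block matching between two frames, then placing a blended block halfway along each motion vector to synthesize the in-between frame. This is just my guess at the principle, not what Samsung actually does, and real implementations are obviously far more sophisticated.

```python
import numpy as np

def interpolate_midframe(prev, nxt, block=16, search=8):
    """Synthesize a frame halfway between two grayscale frames (H x W arrays)."""
    h, w = prev.shape
    mid = prev.copy()  # fall back to the previous frame where no block is written
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by+block, bx:bx+block].astype(np.int32)
            best_sad, best_dy, best_dx = None, 0, 0
            # Exhaustive block matching: find where this block moved to in the next frame.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = nxt[y:y+block, x:x+block].astype(np.int32)
                    sad = int(np.abs(ref - cand).sum())  # sum of absolute differences
                    if best_sad is None or sad < best_sad:
                        best_sad, best_dy, best_dx = sad, dy, dx
            # Blend the two matched blocks and place the result halfway along the motion vector.
            matched = nxt[by+best_dy:by+best_dy+block,
                          bx+best_dx:bx+best_dx+block].astype(np.int32)
            my = min(max(by + best_dy // 2, 0), h - block)
            mx = min(max(bx + best_dx // 2, 0), w - block)
            mid[my:my+block, mx:mx+block] = ((ref + matched) // 2).astype(prev.dtype)
    return mid
```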
Interpolating pixels to raise the resolution often causes blurriness, but I can't see any artefacts at all from interpolating/morphing entire frames. It's not as sexy as drawing unique frames, and there might be some issues with latency - a TV set probably buffers 2-3 frames in order to perform its magic. However, if this tech were integrated at a more fundamental level, with the whole rendering chain designed around it, then surely a lot of that latency could be eliminated.
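To illustrate where the latency comes from: an in-between frame can only be synthesized once the following real frame exists, so doing it after the fact costs at least one frame of delay plus whatever the TV buffers. A rough sketch of how a renderer could interleave real and generated frames if it were built in (render_frame and present are just placeholder callbacks, and interpolate_midframe is the toy function above, not any real API):

```python
import time

def run_at_double_rate(render_frame, present, target_fps=60):
    """Render at target_fps/2 but present at target_fps by inserting interpolated frames."""
    frame_time = 1.0 / target_fps
    prev = render_frame()          # first real frame
    present(prev)
    while True:
        nxt = render_frame()       # next real frame, rendered at half rate
        # The synthesized frame can only be shown once 'nxt' exists:
        # that dependency is the unavoidable part of the latency.
        present(interpolate_midframe(prev, nxt))
        time.sleep(frame_time)     # naive pacing, ignores render time
        present(nxt)
        time.sleep(frame_time)
        prev = nxt
```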
What do you people think? Maybe I should go take out a patent right now for a method and system for improving frame rates in a 3D hardware rendering device?