A thought on optimising scrolling in handheld UIs?

Nevod

Newcomer
This forum seems like the most appropriate place for such a question, as there are enough people here who really know what's under the hood in the handheld sphere.

As phone and tablet screen resolutions rise to Full HD and beyond, the CPU and GPU load associated with updating the screen obviously rises too.
Much of the time these devices are used for browsing the internet or reading books, activities which involve a lot of scrolling. During a scroll the information on screen is largely retained; nevertheless, the whole picture has to be recomposed and sent to the framebuffer.

Modern browsers use a backing store with ahead-of-time rendering: the page is drawn into a buffer, and the visible part is then cut out of that buffer and composited onto the screen. Hence there is no constant full re-rendering, which means much lower CPU use and a higher frame rate.
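
For what it's worth, here is a rough C sketch of that backing-store idea (my own illustration, not taken from any particular browser): the page lives in a buffer taller than the screen, and the per-frame work during scrolling is just copying out the visible window at the current scroll offset. The buffer sizes and the memcpy-based composite are assumptions made purely for illustration.

```c
#include <stdint.h>
#include <string.h>

#define SCREEN_W  1080
#define SCREEN_H  1920
#define BACKING_H (SCREEN_H * 3)  /* render ahead: three screens' worth of page */

static uint32_t backing[BACKING_H][SCREEN_W]; /* page rendered here once per chunk */
static uint32_t frame[SCREEN_H][SCREEN_W];    /* what actually gets displayed      */

/* Per-frame work while scrolling: a plain row copy out of the backing store,
 * with no re-rendering of the page itself. This copy-and-composite step is
 * exactly what the rest of the post tries to get rid of. */
void compose_scrolled_frame(uint32_t scroll_y)
{
    if (scroll_y > BACKING_H - SCREEN_H)
        scroll_y = BACKING_H - SCREEN_H;
    memcpy(frame, &backing[scroll_y][0],
           (size_t)SCREEN_H * SCREEN_W * sizeof(uint32_t));
}
```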

Still, the cut-out-and-compose operation has to be performed for every frame, which taxes the GPU and the memory interface. Having recently read about various kinds of frame buffering and the associated lag/tearing/etc. (prompted by a discussion with a friend about why Android is still not fluid enough), I noticed that a buffer swap can apparently happen almost instantly, at practically any point during scanout of a buffer, and that the read of the next buffer then continues from the same offset as before the swap - effectively composing the real output frame from two frames.
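
To make that observation concrete, here is a purely illustrative C sketch with made-up register names (FB_BASE_ADDR, SCANLINE_COMPARE) for a hypothetical display controller that supports a scanline-compare interrupt: if the scanout base address is changed partway through a refresh, the top of the visible frame comes from the old buffer and the bottom from the new one.

```c
#include <stdint.h>

/* Hypothetical memory-mapped display-controller registers (names made up). */
extern volatile uintptr_t FB_BASE_ADDR;     /* address scanout currently reads from */
extern volatile uint32_t  SCANLINE_COMPARE; /* line at which the scanline IRQ fires */

extern uint32_t *buffer_a, *buffer_b;       /* two full-screen frame buffers */

/* Scanline interrupt handler: switch the scanout source at line 600.
 * Lines 0..599 of this refresh were read from buffer_a, lines 600 onward
 * will be read from buffer_b, so the frame on the panel is stitched together
 * from the two buffers without any copying. */
void scanline_irq_handler(void)
{
    FB_BASE_ADDR = (uintptr_t)buffer_b;
}

void setup_scanout(void)
{
    FB_BASE_ADDR     = (uintptr_t)buffer_a;
    SCANLINE_COMPARE = 600;  /* ask for an interrupt when scanout reaches line 600 */
}
```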

I wonder whether that high speed of swapping could be exploited to reduce scrolling's toll on the GPU. The idea is to use several buffers for different parts of the screen, with the split derived from the interface layout. In a browser, for example, we might have an upper non-scrollable area (the status bar), a middle scrollable area (the browser view itself) and a lower non-scrollable area with controls, so there would be three buffers. The middle buffer would be a screen-width-aligned backing store with a much greater height than the actual view area.

During scanout (not sure if the term is right), the scanout module would first be given the location of the first buffer; right as it ends, it would be swapped to an offset in the second buffer corresponding to the start of the actually viewed area; and at the end of that area it would be swapped to the third buffer. That way there is no need to re-cut and recompose the whole frame: just move the offset in the second buffer, and refill it only when we get too close to its end. Of course, there is also horizontal scrolling in a browser, but vertical is much more prevalent, so the second (scrolling) buffer might be updated from the backing store on horizontal scrolling and used as-is on vertical. After scrolling stops, the GPU can compose a normal frame in a normal buffer and point the scanout system at it, so explicit swap control would no longer be needed.
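
For illustration, something in this spirit could be expressed with the Linux DRM/KMS atomic API, assuming the display controller exposes hardware overlay planes whose source rectangle can point into a larger framebuffer. In this sketch the property IDs are assumed to have been looked up earlier with drmModeObjectGetProperties(), and error handling is mostly omitted; treat it as an assumption about what the driver stack allows, not something I have measured.

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Plane property IDs for the middle (scrollable) plane, assumed to have been
 * looked up earlier with drmModeObjectGetProperties(). */
struct plane_props {
    uint32_t fb_id, src_x, src_y, src_w, src_h;
};

/* Scroll the middle plane by pointing its source rectangle at a new vertical
 * offset inside a tall backing framebuffer. No pixels are copied; the display
 * controller simply starts reading the buffer at a different row. */
int scroll_middle_plane(int drm_fd, uint32_t plane_id,
                        const struct plane_props *p, uint32_t tall_fb,
                        uint32_t scroll_y, uint32_t view_w, uint32_t view_h)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    drmModeAtomicAddProperty(req, plane_id, p->fb_id, tall_fb);
    /* SRC_* plane properties are in 16.16 fixed point. */
    drmModeAtomicAddProperty(req, plane_id, p->src_x, 0);
    drmModeAtomicAddProperty(req, plane_id, p->src_y, (uint64_t)scroll_y << 16);
    drmModeAtomicAddProperty(req, plane_id, p->src_w, (uint64_t)view_w << 16);
    drmModeAtomicAddProperty(req, plane_id, p->src_h, (uint64_t)view_h << 16);

    int ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);
    drmModeAtomicFree(req);
    return ret;
}
```

On a three-plane setup the top and bottom planes would just keep their buffers, and only the middle plane's SRC_Y would change each frame during a vertical scroll.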

It seems theoretically possible, but from a practical standpoint there are a couple of problems. First, it would require tight realtime control of the scanout system - probably not impossible, but it would mean low-level programming. Second, how much of the power consumption during scrolling actually comes from the SoC, and how much energy does a pixel state change require? It seems no tests have been done on that (at least, I wasn't able to find any). The question is open.
 