How does my Android app draw to the screen?

A little bit of background

A short time ago (which feels like another lifetime these days) I spent a lot of time porting the Android platform to new systems. Specifically, I worked for the manufacturer of a MIPS-based SoC that was completely unsupported by Android. Over the course of a few years, I ported Cupcake (1.5), Eclair (2.1), Froyo (2.2) and Gingerbread (2.3) and had a good look at the internals of the OS. I have not spent much time with Honeycomb, Ice Cream Sandwich or Jellybean, so I will not be discussing them here. I have heard that some things were improved in Jellybean, but I would guess that the fundamentals remain the same.

I was recently discussing with Paul Hammant the different ways that applications and platforms draw to the screen. In that discussion we touched on a few platforms, including Android. The implementation of the UI toolkit and rendering engine in Android is quite unusual and, having spent some quality time with it, I decided to elaborate.

Why not Swing?

When Google developed Android, they made two big design decisions. First, they chose to create the virtual machine and class libraries in-house. Second, they would license as much of the OS as they could under the Apache license. This precluded them from using any existing Java libraries, including Swing, and led to a new graphics library. The main reason for these decisions was to encourage vendors to adopt the platform without fear of the GPL requiring them to release their source code. I imagine that they were also interested in some level of creative control and wanted to enhance the GUI toolkit in ways that Sun (later Oracle) might not have supported.

Rectangles to Triangles

In addition to a new GUI toolkit, Google’s Android engineers created their own rendering engine for it. Instead of any of the standard X Window implementations or one of the new upstarts, they created something unique. Each displayable application is given a 2D surface. Once the application paints its widgets and graphics to this surface, it is passed to an OpenGL ES rendering engine to be composited onto the screen. Since any surface may be transparent, all viewable surfaces must be maintained in memory and composited on every change. This includes the launcher application and the single running full-screen application.

While this seems like an elegant and flexible implementation, it is, in my opinion, the primary cause of the perceived sluggishness and short battery life of Android devices. The original launcher application used by Cupcake through Froyo had no fewer than twelve layers that had to be composited. The often-derided on-screen keyboard was made up of at least four. Factor in an application and the system may have to render up to twenty layers for a single keypress! This approach works well on a desktop, where power is not a concern, but in the mobile space computation costs battery and the hardware is limited in capability.
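The cost of this model is easy to see in miniature. The toy sketch below is my own illustration of back-to-front "source over" compositing, not Android's actual compositor code; it shows why every extra translucent layer means another full-screen pass of per-pixel integer math.

```java
// Toy model of back-to-front "source over" compositing. Every visible
// surface costs one full pass of per-pixel multiplies; twenty layers
// means twenty passes over the whole screen for a single frame.
public class Compositor {

    // Blend one 0..255 channel of src over dst at the given src alpha.
    static int blendChannel(int dst, int src, int srcAlpha) {
        return (src * srcAlpha + dst * (255 - srcAlpha)) / 255;
    }

    // Blend a packed ARGB source pixel over a destination pixel.
    static int blendOver(int dst, int src) {
        int sa = (src >>> 24) & 0xFF;
        int r = blendChannel((dst >> 16) & 0xFF, (src >> 16) & 0xFF, sa);
        int g = blendChannel((dst >> 8) & 0xFF, (src >> 8) & 0xFF, sa);
        int b = blendChannel(dst & 0xFF, src & 0xFF, sa);
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    // Composite a stack of equal-sized surfaces, bottom layer first.
    static int[] composite(int[][] surfaces, int pixelCount) {
        int[] frame = new int[pixelCount]; // starts as black
        for (int[] surface : surfaces) {
            for (int i = 0; i < pixelCount; i++) {
                frame[i] = blendOver(frame[i], surface[i]);
            }
        }
        return frame;
    }
}
```

At 1024x768 the inner loop runs 786,432 times per layer, so a dozen launcher layers done in software add up quickly.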

A little history

When Android Cupcake was launched, most mobile devices did not have OpenGL hardware available, so Google provided an OpenGL ES software emulation package for those that did not. The package was functional, but my first port was to a CPU that lacked both OpenGL hardware and a floating-point unit. My customer wanted to use Android on a digital picture frame with a 1024x768 screen. Once running, simple operations like opening the application “drawer” were a performance disaster. My Windows CE counterparts had a few good laughs at my fancy new OS’s expense. I ended up having to disable the alpha-blending algorithms by shorting them to full opacity just to get a demo running.
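The "shorting" hack is simple to illustrate. What follows is a hypothetical reconstruction, not the actual patch: treating any visible source pixel as fully opaque turns a multiply-heavy per-channel blend into a test and a copy, which matters enormously on a CPU with no FPU and no GL hardware.

```java
// Hypothetical sketch of the demo hack: short alpha blending to full
// opacity. An honest "source over" blend costs two multiplies and a
// divide per channel; the shortcut reduces it to a branch and a copy.
public class OpacityShort {

    // Honest integer blend of one 0..255 channel.
    static int blend(int dst, int src, int srcAlpha) {
        return (src * srcAlpha + dst * (255 - srcAlpha)) / 255;
    }

    // The hack: any non-transparent source pixel wins outright.
    static int blendShorted(int dst, int src, int srcAlpha) {
        return srcAlpha == 0 ? dst : src;
    }
}
```

The visual price is that every semi-transparent pixel snaps to fully opaque, but it was enough to get the demo running.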

Android Eclair represented a major shift. Android now required OpenGL hardware; the emulation library was removed. To their credit, the Android developers made it very easy to use a pre-built OpenGL ES library. This prevented many devices on the market from upgrading to Eclair, leading to the first forced fragmentation of the Android handset market. I would have preferred an alternate layering implementation that did not require hardware acceleration for legacy or low-end devices and an advanced library for those devices that could support it.

A better future…?

It may be too late for Android to address this problem. Most current hardware platforms have enough rendering power to overcome the shortfalls in the software design, so it is unlikely that effort will be spent optimizing a part of the operating system that is no longer a roadblock. Still, I believe software improvements could increase device runtime and improve the user experience. I would prefer a more holistic approach that determines the relevant surfaces before rendering anything. This is difficult to achieve in the application-driven rendering model: the surfaces have little to no knowledge of their peers or parents, preventing any collaboration or optimization. A wholesale redesign of the rendering engine might be required for any improvement. It appears that Microsoft may be heading toward a more unified approach with Metro by using a DOM, as mentioned in Paul’s post.

Conclusion

Android is an amazingly popular and powerful platform whose success cannot be denied. In their effort to keep a favorable license in place, Google’s engineers re-solved the UI toolkit and rendering problem. Their solution offered the ability to build a beautiful UI, including transparency and layering, but missed the mark on efficiency and scalability for a mobile platform. Hardware has caught up to the software, making change unlikely, but I will always wonder what could have been.

Published: February 02 2013
