
Floating point exact representation of integers

There are two main floating-point formats: single precision (float in Java), which stores a value in 4 bytes, and double precision (double), which uses 8 bytes.

The question is what range of integers can be represented exactly by these floating point formats?
In other words, what is the maximum value for which such a statement holds:

long v;
boolean floatExact  = (long)(float)  v == v;
boolean doubleExact = (long)(double) v == v;

float represents integers exactly up to 2^24 (16,777,216), while double represents them exactly up to 2^53 (9,007,199,254,740,992). These limits are consistent with the mantissa sizes, 23 bits for single precision and 52 bits for double precision, plus one implicit leading bit in each case.
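The boundary is easy to probe directly in Java. This small standalone sketch (names are mine, purely illustrative) checks the last exact integer and the first integer that gets rounded away:

```java
public class FloatExactness {
    public static void main(String[] args) {
        long f = 1L << 24;  // 16,777,216
        long d = 1L << 53;  // 9,007,199,254,740,992

        // Exact up to and including 2^24 (float) and 2^53 (double)...
        System.out.println((long) (float) f == f);            // true
        System.out.println((long) (double) d == d);           // true

        // ...but the very next integer is rounded away.
        System.out.println((long) (float) (f + 1) == f + 1);  // false
        System.out.println((long) (double) (d + 1) == d + 1); // false
    }
}
```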

Apple is bad

I’ll never buy an Apple product again.

I find the patent situation in the US profoundly sick. Microsoft is making money on the back of Android with patent fees.
Apple and Microsoft teamed up to buy the Nortel patents for $4.5 billion, to be used against their competitor, Android. Apple is suing HTC and Motorola over Android. Oracle is suing Google over Java patents used in Android.

Really, the software patents as they are used now in the US are a cancer. They are not good in any way, and most of all they do not promote innovation — quite the opposite, they clearly are used to stifle innovation when it happens.

The real fix is to abolish the software patents in the US. I deeply hope this will happen, but I don’t quite hold my breath for it.

In the meantime, the least I can do is vote with my money, as a humble consumer. I’ll never buy an Apple product again!

PS: Of course, it has been quite some time since I last used any Microsoft product. But with Apple it’s new.

RenderFX: color vs. power

Android render effects are a way to globally change the colors used when rendering, for example by using only the Red channel (disabling Green and Blue). The feature was first presented on Jeff Sharkey’s blog, and it is now implemented and available in CyanogenMod.

Instead of using the three color channels (RGB) you may choose to use only one, let’s say Red. White becomes Red, while the disabled channels (Green, Blue) become black. The colors are very weird, but the screen generally remains sort-of readable. Why would anybody want to do something so horrible to a display?

The first reason is to save power. A phone’s display uses a significant portion of the battery. For OLED displays, making pixels (or subpixels) black saves power. Turning color channels off saves power. For example using only the Red channel instead of all three RGB cuts the display power to less than 50%.

Automatic Brightness

Android phones have a Display Brightness setting, which is usually set to “Automatic Brightness”. This reduces the brightness of the display in dark indoors (when the screen is easy to read), while increasing the brightness in sunny outdoors, when the screen is barely readable.

On OLED displays, which have deep blacks, increased brightness usually looks better. So why does this adaptive brightness exist at all, instead of simply keeping the display at maximum brightness all the time?
Mainly for two reasons:

  • lower brightness saves battery power
  • lower brightness increases the display lifetime (for OLED).

Yes, that’s right: the “brightness” setting is not there to improve the visual quality; it is there to increase the battery life. Try setting your phone to “maximum brightness” for one day: it’ll look great, but it will empty the battery much faster.


Traditional LCD displays need a backlight, a source of white light that is filtered through the liquid crystal layer to produce the colors. The power usage of an LCD display is independent of the colors displayed (e.g. regardless of whether the screen is white, red or blue, the power usage of the display is the same).

OLED displays do not have a backlight. In OLED the pixels themselves emit light, and each individual subpixel uses power in doing so. OLED power usage depends on the colors displayed. A white screen (which has all Red, Green and Blue subpixels lit at maximum intensity) uses the most power, while a black screen uses almost no power at all.

In addition, OLEDs have a general difficulty with Blue. The Blue subpixels use much more power (compared to the Red or Green channels). In addition to the increased power usage (or because of it) the Blue subpixels have a much shorter lifetime. If the OLED display is kept on, bright white, for a long time, the Blue subpixels will wear off visibly and irreversibly.

So one way to increase the battery lifetime of the phone is to disable the Blue channel. Of course, this is a pretty high price to pay for power, as the colors no longer look normal. A more extreme option is to disable two channels, e.g. Blue and Green, resulting in even more battery life and a Red-only display.

But hacking is fun, and Jeff Sharkey first implemented a proof-of-concept of the color-reduction. It is implemented as an OpenGL transform in SurfaceFlinger. This means that the pixel colors undergo a final modification (e.g. zeroing-out the Blue and Green channels) just before they’re sent to the display panel. The problem is that this solution uses CPU or GPU resources (due to the additional OpenGL processing), and this means more power usage. So while some power is gained by reducing OLED colors, some power is lost because of the additional GPU usage.

Now the question presents itself: is it possible to implement this color warping, a.k.a. RenderFX, without the burden of the additional OpenGL transform? Well… yes, there is a cool trick that does it, but it only works for OLED displays. But anyway, we’re only interested in RenderFX on OLEDs (because only there does it save power).

Colored backlight

The trick concerns the backlight color. Traditional LCDs have a real backlight, a source of white light, whose intensity can be changed (more or less bright) but the color stays white.

On the other hand, OLEDs do not have a real backlight. The brightness setting for OLED is not implemented by changing a backlight intensity, but simply by remapping the intensity of the pixels in the display controller. You can still think of an OLED “backlight”, but it is an imaginary concept simulated by the display controller.

It turns out, the OLED panel can control the intensity of the backlight color channels (RGB) independently. This can be represented as if having a backlight which can be set to any color and intensity, not only grayscale. And this is the solution: by setting the backlight to various non-white colors it is possible to implement RenderFX without the OpenGL overhead.

For example, to disable the Blue channel, it is enough to set the backlight to Yellow (Red+Green). To disable both Blue and Green we set the backlight to Red. And so on, many intermediate variations are possible.
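The mapping from "channels to keep" to a backlight color is trivial. Here is an illustrative sketch; the class, constants and method names are mine, not part of any Android or CyanogenMod API:

```java
public class BacklightColor {
    // Illustrative channel flags; not part of any real API.
    static final int RED = 4, GREEN = 2, BLUE = 1;

    // Pack a 0xRRGGBB "backlight" color at the given brightness (0..255),
    // zeroing out the channels that are disabled in the mask.
    static int backlightFor(int mask, int brightness) {
        int r = (mask & RED)   != 0 ? brightness : 0;
        int g = (mask & GREEN) != 0 ? brightness : 0;
        int b = (mask & BLUE)  != 0 ? brightness : 0;
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Disable Blue -> Yellow backlight; disable Blue+Green -> Red backlight.
        System.out.printf("%06x%n", backlightFor(RED | GREEN, 0xff)); // ffff00
        System.out.printf("%06x%n", backlightFor(RED, 0xff));         // ff0000
    }
}
```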

I tested this in practice on a Nexus One with CyanogenMod by modifying the kernel file which implements the “backlight intensity setting” for the OLED panel, to allow for non-white backlight colors, and it works nicely!

For a general and clean solution, a bit more work is needed throughout the framework in order to expose in the API the concept of “LCD backlight color” instead of “LCD backlight intensity” as it is now. This is a generalization, as an RGB color can easily be mapped to a grayscale intensity, but not the other way around (i.e. RGB is a superset of grayscale).

Cyanogen Rules

I use a Nexus One phone. I have been using it for more than one year. I’ve written apps for it.

Yet until yesterday I didn’t know it has an FM receiver. Yes, the Nexus One has a perfectly functional FM receiver.

The hardware (FM radio) is there in the phone. It’s just that the software is not enabling the radio. It’s such a pity: you pay for the “hard” part, the electronics, but the “soft” part is not making any use of it.

But yesterday, when I installed CyanogenMod for the first time, suddenly the FM receiver was there, and I was shocked: FM radio? working? my phone has an FM receiver?!

CyanogenMod is a fork of the Android project. It is so much better than the “stock Android” that comes with your phone.

And CyanogenMod is open. Really OPEN. You can take the source code, compile/hack it and install it on the phone, and you get a working phone. This is unlike the AOSP (android open source project), where you can get the source code, compile it, but it won’t work on any device.

And CyanogenMod is accepting contributions from external developers. And is open to source-code change, and to improvement. CyanogenMod is putting into life the open that Google is only talking about.

So, if you have an Android phone, put CyanogenMod on it and be thrilled! If you are a developer, and you want to propose a change or improvement to the Android source code, send it to CyanogenMod for inclusion.

On the other hand… thinking about sending a change to Google’s AOSP? Think twice. Most likely your change is not good enough, and anyway nobody cares; the Google Android developers are too busy to bother.

Kudos to CyanogenMod for showing us all what an open Android really can be.

How to work around Android’s 24 MB memory limit

The Android framework enforces a per-process 24 MB memory limit. On some older devices, such as the G1, the limit is even lower at 16 MB.

What’s more, the memory used by Bitmaps is included in the limit. For an application manipulating images it is pretty easy to reach this limit and get the process killed with an OutOfMemoryError:

E/dalvikvm-heap(12517): 1048576-byte external allocation too large for this process.
E/GraphicsJNI(12517): VM won't let us allocate 1048576 bytes
D/AndroidRuntime(12517): Shutting down VM
W/dalvikvm(12517): threadid=1: thread exiting with uncaught exception (group=0x4001d7f0)
E/AndroidRuntime(12517): FATAL EXCEPTION: main
E/AndroidRuntime(12517): java.lang.OutOfMemoryError: bitmap size exceeds VM budget

This limit is ridiculously low. For a device like the Nexus One, with 512 MB of physical RAM, setting the per-process memory limit of the foreground activity to only 5% of the RAM is a silly mistake. But anyway, that’s how things are and we have to live with it, i.e. find out how to work around it.

There are two ways to allocate much more memory than the limit:

One way is to allocate memory from native code. Using the NDK (native development kit) and JNI, it’s possible to allocate memory from the C level (e.g. malloc/free or new/delete), and such allocations are not counted towards the 24 MB limit. It’s true, allocating memory from native code is not as convenient as from Java, but it can be used to store some large amounts of data in RAM (even image data).
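The same pattern can be illustrated on a plain JVM with direct ByteBuffers, whose backing memory is allocated natively rather than on the garbage-collected heap. Note this is only an illustration of the idea: on the Dalvik versions discussed here the reliable workaround is an actual JNI-level malloc, and the names below are mine:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: keep large pixel data in natively allocated memory instead of
// on the garbage-collected heap. On Android the post's workaround goes
// through JNI malloc; a direct ByteBuffer shows the same pattern in plain Java.
public class OffHeapPixels {
    public static void main(String[] args) {
        int width = 1024, height = 1024;
        // width * height * 4 bytes of native memory, outside the Java heap
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                                      .order(ByteOrder.nativeOrder());
        pixels.putInt(0, 0xFF00FF00);  // store one ARGB pixel at index 0
        System.out.println(Integer.toHexString(pixels.getInt(0)));  // ff00ff00
    }
}
```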

Another way, which works well for images, is to use OpenGL textures — the texture memory is not counted towards the limit.

To see how much memory your app has really allocated you can use android.os.Debug.getNativeHeapAllocatedSize().

Using either of the two techniques presented above, on a Nexus One, I could easily allocate 300 MB for a single foreground process, more than 10 times the default 24 MB limit.

Nexus One display: 16 or 24 bits per pixel?

There has been some discussion recently about whether the Nexus One display panel hardware has 24 bits per pixel or 16 bits per pixel. 24bpp means 8 bits per color RGB888 and allows a total of about 16 million different colors. 16bpp is RGB565 (Red and Blue have 5 bits and Green has 6 bits), and allows a total of 64 thousand different colors.

I’m happy to bring the good news and to clear up this question once and for all:

Yes, the Nexus One display is 24 bits per pixel!

Now, in software, things look a bit different. In Eclair, by default, a new window is created with a PixelFormat of RGB_565 (16bpp), and thus a naive application will get the surprising result of displaying in 16 bits even when drawing, for example, an RGBA_8888 Bitmap in full color.

Happily this is easy to fix by requesting a different “pixel format” when the window is initialized in the Activity.onCreate():

public void onCreate(Bundle b) {
    super.onCreate(b);
    // request a 32-bit window surface instead of the default RGB_565
    getWindow().setFormat(PixelFormat.RGBA_8888);
    // ...
}
After this simple operation the drawing will happen in full-splendor 24bpp. Of course, take care not to use any intermediate RGB_565 bitmaps, or a 565 config in OpenGL ES.

And by the way, the Droid panel is 24 bits per pixel as well.

Android: OpenGL ES screenshot

Here is how you can get a .png screenshot of an OpenGL image on Android:

// GL10 gl;
// int width, height;

int size = width * height;
ByteBuffer buf = ByteBuffer.allocateDirect(size * 4);
buf.order(ByteOrder.nativeOrder());
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, buf);
int data[] = new int[size];
buf.asIntBuffer().get(data);
buf = null;

// glReadPixels returns the rows bottom-up; the offset (pointing at the
// last row) and the negative stride flip the image vertically.
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bitmap.setPixels(data, size - width, -width, 0, 0, width, height);
data = null;

// GL delivered RGBA bytes, so Red and Blue ended up swapped;
// swap them back in place in the 565 pixels.
short sdata[] = new short[size];
ShortBuffer sbuf = ShortBuffer.wrap(sdata);
bitmap.copyPixelsToBuffer(sbuf);
for (int i = 0; i < size; ++i) {
    // BGR-565 to RGB-565
    short v = sdata[i];
    sdata[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
sbuf.rewind();
bitmap.copyPixelsFromBuffer(sbuf);

try {
    FileOutputStream fos = new FileOutputStream("/sdcard/screenshot.png");
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.close();
} catch (Exception e) {
    // handle the IOException
}

Android application backwards compatibility

Say you are an Android application developer. You have a last-generation Nexus or Droid phone, and you use for development a recent Android SDK version 2.1. Yet more than a half of your potential users have older phones with older versions of Android: 1.5 (Cupcake) or 1.6 (Donut).

You can simply ignore the not-up-to-date user base, and write your application exclusively for Android 2.0 (Eclair) or more. For this you set minSdkVersion=5 in your AndroidManifest, and you’re done. This is the simplest solution from the developer point of view, but also the worst solution from the user point of view (since half the users don’t have access to the application at all).

Or you can think about making your application backwards compatible, so that it can still be used on Cupcake and Donut. How to do this?

First, you need to look carefully at the API version for the SDK classes and methods that you use. This is indicated in the Android SDK documentation, on the right. For example looking at MotionEvent.getPointerCount() you read Since API level 5, which means that this method is available starting with Eclair — it is not available in Cupcake or Donut, although the class MotionEvent itself is available.

If you mark your application minSdkVersion=3 (Cupcake) but still use the method MotionEvent.getPointerCount(), the application will crash at runtime with a Dalvik VerifyError, because the method is not found. Note that you’ll only see this error at runtime and not earlier: not at compile time, and not at Market upload time. So you really have to test your application on Cupcake and Donut in order to catch these crashes. If you don’t test, users with Cupcake will download and try out the application, it will crash on them, and they’ll give you a 1-star rating in exchange. Sometimes a helpful user may even drop you an email informing you that your app is having an FC (force close) on a G1 with Android 1.5.

Let’s say your application wants to use multiple-touch where available, otherwise the application is still usable with single touch. For multi-touch you use, among others, the method MotionEvent.getPointerCount(). Multiple finger support was introduced in 2.0, Eclair.

You test on Cupcake and you realize that MotionEvent.getPointerCount() does not exist in Cupcake, although MotionEvent with most of the other methods is available. What to do to take advantage of getPointerCount() and multi-touch on Eclair, while not crashing on Cupcake? There is a trick related to how Dalvik loads the Java classes.

Dalvik uses a delayed class loading. The class is only verified when it is used for the first time in the application, i.e. when one of its methods or members is called/accessed. Simply referencing the class name somewhere in the code does not count as class use, and does not produce a class verification.
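A standard JVM shows the same deferred behavior for class initialization, which this self-contained sketch demonstrates (Dalvik additionally defers verification, which is the part the wrapper trick relies on; the class names here are mine):

```java
// Plain-JVM illustration of deferred class handling: a class is not
// initialized when it is merely referenced in source code, only on first use.
public class LazyLoading {
    static class Wrapper {
        static { System.out.println("Wrapper initialized"); }
        static int value() { return 42; }
    }

    public static void main(String[] args) {
        System.out.println("before first use");
        // Only this call triggers initialization of Wrapper,
        // printing "Wrapper initialized" before the value.
        System.out.println(Wrapper.value());
    }
}
```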

The second element that we use is android.os.Build.VERSION.SDK_INT. This is an integer giving the OS version installed on the phone (on the same scale as minSdkVersion: 3 for Cupcake, 4 for Donut, 5 for Eclair). We use this information to know which of the recent APIs are available, and to adapt the code behavior to different OS versions depending on the available APIs.

And now you hit the API joke. By testing on Cupcake, you realize that the app crashes when accessing Build.VERSION.SDK_INT! You double-check the documentation, and indeed SDK_INT is not available on Cupcake, it was introduced in Donut. In other words, the mechanism for version-checking itself is not available on older versions, nice eh?

The workaround is to use Build.VERSION.SDK, which is a String containing the same integer as SDK_INT (e.g. “3” on Cupcake), and this is available on Cupcake. You wonder why some developer thought that using a String to store an integer was a good API choice… an API mistake, in other words.

Anyway, let’s go back to the code.

class WrapNew {
    static int getPointerCount(MotionEvent event) {
        return event.getPointerCount();
    }
}

You create a class that wraps the calls to the new Eclair API. Instead of calling MotionEvent.getPointerCount() directly, you call through the wrapper class. You only call the new method after you have verified that Build.VERSION.SDK is high enough that the new API exists.

public boolean onTouchEvent(MotionEvent event) {
    boolean hasMultiTouch = Integer.parseInt(Build.VERSION.SDK) >= 5;
    int nPointer = hasMultiTouch ? WrapNew.getPointerCount(event) : 1;
    // ... handle the nPointer pointers ...
    return true;
}

The Dalvik trick is: as long as we don’t call WrapNew.getPointerCount(), the WrapNew class is not loaded by Dalvik, and referencing the non-existing API MotionEvent.getPointerCount() does not generate a VerifyError.

You test the application again on Cupcake, and it no longer crashes with a VerifyError! It also supports multi-touch on Eclair, and the solution is pretty clean and simple code.

One more note: if you use Proguard to optimize your Android application, be sure to mark the wrapper class as non-inlinable, because otherwise Proguard would optimize it out of existence, defeating its purpose:
-keep,allowobfuscation class package.WrapNew { *; }

You can see all this in practice, real-life working code, in the Arity Calculator application, which is Open Source so you can freely explore the code. Arity Calculator is available on Market on Android phones, for Cupcake, Donut, Eclair and on.

And here is the missing table from the Android documentation:

Codename        Version   API level
Cupcake         1.5       3
Donut           1.6       4
Eclair          2.0       5
Eclair update   2.0.1     6
Eclair MR1      2.1       7

The free AppEngine: how much does it cost?

AppEngine is a Google product that lets you write web applications in Python or Java and host them on the Google infrastructure. AppEngine takes care of the typically difficult tasks of distribution, scalability and fail-over. It offers easy-to-use APIs for web request handling, data storage, memcache, etc. One of the biggest benefits is the integration with Google accounts: a user simply signs in to an AppEngine application with their existing Google account.

Perhaps prototyping a web application, together with hosting/publishing, has never been as easy as with AppEngine. Add to that the Python language, and it’s a lean-mean web application machine.

AppEngine is free up to some quotas. If the application is successful and you need more resources, you have to pay. Here is the table of the free quota, and the incremental cost:

                     Free quota per day   Cost above quota
CPU Time             6.50 CPU hours       $0.10/CPU hour
Bandwidth Out        1.00 GBytes          $0.12/GByte
Bandwidth In         1.00 GBytes          $0.10/GByte
Stored Data          1.00 GBytes          $0.005/GByte-day
Recipients Emailed   2,000.00 Emails      $0.0001/Email

The question is, how much would it cost if there were no free quota, and you’d have to pay for the free quota at the same incremental cost. In other words, what is the dollar value of the quota that Google is offering for free?

Per day: CPU $0.65, Bandwidth Out $0.12, Bandwidth In $0.10, Storage $0.005, Emails $0.20. Summing them comes to $1.075/day.

So the “cost” of the free quota on AppEngine is about $1/day/application. Consider that Google offers up to 10 free applications per account, so it could be argued that the cost is $10/day/account.

Nexus One display and subpixel pattern

According to the Nexus One specs, it has a 800×480 AMOLED display.

According to Wikipedia, this is what makes an AMOLED display different from standard LCD:

  • It does not need a backlight; the pixels emit light themselves. This allows for a thinner display.
  • Only the lit pixels consume power. On average, AMOLED uses less power than LCD. The power usage depends on the colors displayed: a black screen uses much less power than a white screen.
  • The contrast is very good. The black is very deep (as there’s no backlight leakage).
  • The viewing angle is very large; this is often a problem with LCDs, particularly TN (twisted nematic) panels, which have small viewing angles.
  • OLED has a better response time than LCD.
  • The lifetime of OLED is shorter than LCD. Only the time a pixel is lit (turned on) counts towards its lifetime. The blue subpixels have the shortest lifetime.

Glossing over the technical details, the result is that the Nexus One has a *great* display. It is the highest quality display I’ve ever seen (but I can’t say how it compares to the Droid display as I have not seen a Droid phone yet). The Nexus display is large and high-density, is very crisp, the colors are very saturated, and the black is indeed very deep. WVGA, 800×480 is quite a high resolution for a display of this size (a more typical resolution for this size would be HVGA, 480×320 as on the G1).

After getting used to the Nexus display, when you look back at your previous phone’s display you’ll be shocked by how low-quality the old LCD seems now.

The AMOLED display on the Nexus has one more surprise in store: each pixel is composed of only two subpixels, instead of the usual three (Red, Green, Blue) subpixels of an LCD. The following picture should help you understand the subpixel pattern on a Nexus: (source)

OLED subpixel matrix

So every pixel contains a green subpixel, and alternating a red or a blue subpixel. A pixel has either red or blue subpixels but not both.

Why is it done this way? This technique allows a larger physical area for the blue and red subpixels, thus increasing their lifetime. It also makes it possible to implement the high pixel count (800×480) using only two thirds of the subpixels that would normally be needed (2 subpixels/pixel instead of 3 subpixels/pixel).

What is the impact of this unusual subpixel pattern? Natural images (such as photos, movies, etc.) are very well reproduced, and the subpixel impact is likely not perceivable. Synthetic images that contain pixel-aligned, thin, saturated lines (red or blue) make the “alternating” subpixel pattern discernible. But as the display is very high density and the pixels are very small, you would most likely never become aware of the subpixel pattern without looking for it explicitly.

What is the impact of AMOLED for application developers:

  • First of all, you may not care at all about AMOLED vs LCD, and everything will work just fine.
  • You can use darker colors to save power. You may prefer white-text-on-black-background to black-on-white-background, as the dominant black color uses less power on AMOLED. (the dominant color makes no power difference on classical LCD).
  • You may prefer using darker colors to increase the display lifetime. You may prefer avoiding displaying intense blue (as it has the lowest lifetime) for long periods of time.
  • If you want to draw highest-resolution thin lines, they are best rendered in green — because the green subpixels have double the resolution of red and blue subpixels.

It seems the choice of subpixel pattern is also related to human eye physiology: the human eye is more sensitive to green than to blue, and keeping green at full resolution gives rich image information to the eye.