It’s a Sharp, Sharp World…

…or Some Tips on How to Bring Your Big iPad App to the Even Bigger Retina Display


I’ve just spent the past several weeks updating Bobo Explores Light for the iPad’s new retina screen.  It was a tricky problem to solve well, but I learned a few tricks along the way that you might find useful.  If so, read on…

The Problem

For vector-based or parametric iOS apps, ones that rely on 3D models or that perform some clever run-time rendering of 2D assets, the retina conversion is pretty straightforward.  All they need to do is introduce an x2 multiplier somewhere in the rendering path, and the final visuals take advantage of the larger screen automagically.  There might still be a couple of textures and heads-up-display images to update, but the scope of those changes is quite small.

My problem was different.

The Bobo app contains well over 1400 individual illustrations and image components that look pixelated when scaled up.  It features several bitmap fonts that don’t scale well either.  When I introduced the x2 multiplier over the entire scene, the app technically worked as expected, but it appeared fuzzy:

Fuzzy Problem

My first impulse was to replace all of the illustrations and fonts with their sharp, x2 resampled equivalents.  This line of thinking, however, presented two immediate challenges:

1) I needed to manually upscale 1400 images.  That’s a lot!

Even though the illustrator behind the project, Dean MacAdam, kept high-res versions of all the images in the app, the process of creating the individual retina assets was very tedious:

  • Open every low-res (SD) image in Photoshop
  • Scale it to 200%
  • Overlay it with an appropriately scaled, rotated, and positioned high-res version of that same image
  • Discard the layer containing the original SD image
  • Save the new high-res image (HD) with a different filename
  • Repeat 1399 times

If each image took 5 minutes to convert, and that’s pretty fast, this conversion alone would take well over three weeks of full-time work.  Yikes!

2) I needed to keep the size of the binary in check.

Bobo Explores Light already comes with a hefty 330MB footprint.  Not all of that is illustrations, since the app includes a number of videos, tons of sounds and narration, etc.  But a good 200MB is.

Now, the retina display includes 4x as many pixels as a non-retina display.  If I were to embed an HD image for every SD image used in the app, the size of the Bobo binary would exceed 1GB (130MB for non-image content + 200MB for all SD images and 4 x 200MB for all HD images).  That just wasn’t an option.
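The back-of-the-envelope arithmetic above can be sketched as follows (the sizes are the approximate figures quoted in the text):

```python
# Rough binary-size estimate, using the approximate figures from the text.
NON_IMAGE_MB = 130          # videos, sounds, narration, code, etc.
SD_IMAGES_MB = 200          # all existing SD illustrations

# A 2x retina image has 4x the pixels, so roughly 4x the storage.
HD_IMAGES_MB = 4 * SD_IMAGES_MB

naive_total = NON_IMAGE_MB + SD_IMAGES_MB + HD_IMAGES_MB
print(naive_total)          # 1130 -- well over 1GB
```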

The Saving Grace

When I calculated the above numbers, I reached the conclusion that in Bobo’s case, retina conversion was a futile effort.  Nonetheless, I got myself the latest iPad and did some experimenting.  My secret hope was that I could mix SD images with HD images and come up with an acceptable hybrid solution.  My secret fear, however, was that a few HD images would only highlight the pixelation of the SD images still on screen, and that it would be an all-or-nothing scenario.

I uploaded a few mock-up images onto the new device, iterated over several configurations, and was pleasantly surprised.  Not all, but some combinations of SD and HD images actually worked beautifully together.  In certain cases, the blurry SD images even added a sense of depth to the overall scene, resulting in a poor man’s depth-of-field effect.

I was excited because these results helped me address both of the problems I outlined above.  By being selective about which images I converted, the total number of retina assets I needed shrank to 692.  Still a large number, but less than half of the original.  It would also diminish the ballooning of the binary size.  That problem would not be solved, mind you, but it would certainly help.

Sharp Text

Text was the number one item in the app that screamed “I’m pixelated!”.  Native iOS code renders such beautifully sharp text on the new iPad that any pixelated text in the Bobo app stuck out like a sore thumb.  This part was easy to fix, though.  By loading a larger font on retina devices, all of the text that was dynamically laid out suddenly snapped into focus.  Unfortunately for me, not all of the text in the app was dynamically laid out.

Bobo features well over 100 pages of text with images, in the form of side articles and interesting factoids.  To save time when we worked on v1.0 of the app, we baked some of that text and its images together and rendered each entire page as a single image.  This approach really helped us streamline the creation process and ship the app on time.  All in all, these text-images amounted to about 80MB of the final binary, but given the time they saved us, it was the right approach at the time.  Now, however, it presented a problem.

If we were to resample all of these text-images for the retina display, we would gain ~80MB x 4 = ~320MB of additional content from the text alone.  That was way too much.  But we *needed* to render sharp text.  So we bit the bullet, separated the text from its backgrounds, and dynamically laid out all of the text at run-time.

This conversion took well over two weeks, but it was worth the effort.  The text became sharp without requiring any more space.  At the same time, we were able to keep all of the photographs interleaved with the text as SD images.  Because these photographs were visually fairly busy, and because they sat next to sharp text that drew the eye, the apparent blurring from pixelation was minimal.  Additionally, without any baked-in text, the background images compressed into much smaller chunks, giving us about 50MB worth of savings.  That was not only cool, but very necessary.

Home-Brewed Cocos2D Solution

Bobo is built on top of the open-source Cocos2D framework (an awesome framework with a great community of developers; I highly recommend it!).  Out of the box, Cocos2D supports loading retina-specific images via a filename convention.  However, this functionality is somewhat limited.  If all of the images in an app are either HD or SD, it works great.  But I needed to mix and match the two, often without knowing ahead of time which images needed upscaling until I tried it out.  I needed a solution that would let me swap SD and HD images on a whim, without having to touch the code every time I did so.

Way back when, while I was working on The Little Mermaid and Three Little Pigs, I created an interactive book framework that separated the metadata of each page (text positioning, list of images, etc.) from the actual Cocos2D sprites and labels that render it on screen.  This is a fairly common development pattern, but I can never remember what it’s officially called (View-Model separation, maybe?).  Anyway, I used this separation to my advantage in Three Little Pigs to create the x-ray vision feature.  Render the metadata one way and the page appears normal; render that same data another way and you’re looking at the x-ray version of that page.  Super simple and super effective.

With this mechanism in place, I was able to modify a single point in the rendering code to load differently scaled assets based on what assets were available.  In pseudo-code, the process looked something like this:

Sprite giveMeSpriteWithName(name) {
    if (retina && nameHD exists in the application bundle) {
        sprite = sprite with name(nameHD);
        sprite.scale = 1;
        return sprite;
    } else {
        sprite = sprite with name(name);
        sprite.scale = retina ? 2 : 1;
        return sprite;
    }
}
It got a little more complicated because of parenting issues (if the SD and HD images belonged to different texture atlases, they each needed their own parent nodes), but this was the core of it.  It meant that all of the pages, by default, took SD images and scaled them up.  Apart from appearing pixelated, the pages looked and behaved correctly.  Then I could go in and, image by image, decide which assets needed to be converted to HD, testing these incremental changes on a retina device as I went along.
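The fallback logic above can be sketched as runnable code.  This is a minimal Python illustration of the idea, not Cocos2D API: the `-hd` suffix convention, `bundle` set, and helper names here are my stand-ins for the real sprite lookup.

```python
def hd_name(name):
    """Derive the retina filename, e.g. 'bobo.png' -> 'bobo-hd.png'."""
    stem, dot, ext = name.rpartition(".")
    return f"{stem}-hd.{ext}" if dot else f"{name}-hd"

def sprite_for(name, bundle, retina):
    """Return (filename, scale): prefer the HD asset on retina devices,
    otherwise fall back to the SD asset, upscaled 2x on retina screens."""
    hd = hd_name(name)
    if retina and hd in bundle:
        return hd, 1.0                      # native-resolution asset, no scaling
    return name, 2.0 if retina else 1.0     # SD asset, scaled up on retina

# Hypothetical bundle: 'cog' has an HD version, 'bobo' does not yet.
bundle = {"bobo.png", "cog.png", "cog-hd.png"}
print(sprite_for("cog.png", bundle, retina=True))   # ('cog-hd.png', 1.0)
print(sprite_for("bobo.png", bundle, retina=True))  # ('bobo.png', 2.0)
```

The nice property is the same one described in the text: dropping an `-hd` file into the bundle upgrades an image, and deleting it falls back to the scaled SD version, with no code changes either way.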

There was some tediousness involved, for sure.  However, I quickly got a sense of which portions of which pages needed updating, and I came up with the following rough rules that might come in handy for you as well.

Things That Scream “I’m pixelated!”

1) Type

At the very least, convert all of your fonts, whether they are baked into images or laid out dynamically.  Your eye focuses almost instantly on any text on the screen, and the fuzzy curves of the letters become immediately noticeable.  By the same token, convert *all* of your fonts; don’t skimp by converting only the main font that you use in 90% of the cases.  The other fuzzy 10% would essentially nullify the entire effort.

2) Small parts that are the focus of attention

When converted to HD, cogs, wheels, pupils, and tiny components all make a huge difference in giving the app the *appearance* of fine detail even if the larger images, however bright and prominent, are still in SD.  Moreover, because these smaller images are … uhm… small, scaling them up doesn’t take that much extra space, so it’s a win-win setup.

3) High-contrast boundaries

Bobo’s head is a perfect example.  Most of the time, Bobo moves across dark colors with his bright green bulbous head in sharp contrast with the background.  Even though Bobo’s head was relatively large, it begged for a razor-sharp edge on most pages.

Things That You Can Probably Ignore

1) Action sequences

This one can sometimes go either way, but it’s still worth mentioning.  If something pixelated moves across the screen, the movement will mask that pixelation enough so that no one will really care.  However, if you have an action sequence that draws the attention of the eye and the sequence contains at least some amount of stillness, the pixelation will show.

2) Shadows, glows, and fuzzy things

All of these guys *benefit* from pixelation; definitely don’t bother with them.  If anything, downscale them even for SD displays and no one will be the wiser.  Seriously, this is a great trick.  If an image has a nondescript texture without sharp outlines (either because the outlines should be fuzzy or because they are covered by other images), store it as a 50% version and scale it up dynamically in code to 200% on non-retina displays and 400% on retina displays.  The paper image behind all the side articles in Bobo Explores Light is a perfect example.  The texture itself is a little fuzzy, but because it is lined with sharp metal edges and overlaid with sharp text, nobody cares.
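The 50% trick boils down to a simple scale computation.  A minimal sketch of the arithmetic (the helper name is mine, not anything from the app):

```python
# Fuzzy assets are stored at 50% of their SD size; compute the run-time
# scale factor needed to display them at full size on each screen type.
STORED_FRACTION = 0.5   # asset saved at half its SD resolution

def display_scale(retina):
    """Scale needed so the half-size asset fills its SD-size slot:
    2x (200%) on non-retina displays, 4x (400%) on retina displays."""
    base = 1.0 / STORED_FRACTION          # 2x to recover the SD size
    return base * (2 if retina else 1)    # doubled again on retina

print(display_scale(retina=False))  # 2.0
print(display_scale(retina=True))   # 4.0
```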

When All Else Fails…

A few times I found myself in situations where the SD image was too fuzzy on the retina display, but the HD image took way too much space to store efficiently.  In those cases I created a single 150% version of the image and scaled it to 66% for SD displays and 133% for HD displays.  The results were perfectly passable in both cases.
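This compromise works out because the scale factors multiply back to roughly native size on each display.  A quick check, using 2/3 and 4/3 as the exact values behind the rounded 66% and 133%:

```python
# One asset stored at 150% of SD size, displayed at 2/3 scale on SD
# screens and 4/3 scale on retina screens.
STORED = 1.5

sd_effective = STORED * (2 / 3)   # ~1.0: native size on SD displays
hd_effective = STORED * (4 / 3)   # ~2.0: native size on retina displays
print(sd_effective, hd_effective)
```

So SD users see the image at its original resolution, and retina users see it at 75% of true HD detail, for 56% (1.5²/2²) of the storage an HD asset would take.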

Final Tallies

When all was said and done and my eyes were spinning from some of the more repetitive tasks, I was very curious to see how much the binary had expanded.  I kept a running tally as I went through this process, but for various reasons it wasn’t super accurate.  When I compiled the finished version, I discovered that not only had the binary not expanded, it had *shrunk* by a whopping 50MB!  The whole process took one freakishly tedious month to complete, but in the end the retina-enabled version of the app was significantly smaller than its non-retina original.

I don’t know whether that says more about my initial sloppiness or the effectiveness of the retina conversion.  I’ll leave that as a question for the reader.  Nonetheless, the results were exciting and Bobo Explores Light looks, if I dare say, pretty darn sharp on the new iPad.  Check it out!
