Axl & Tuna a Winner at PocketGamer’s BIG Indie Pitch!

This week, I attended PocketGamer’s BIG Indie Pitch event in Seattle and I’m super excited to share that:

Axl and Tuna was the 1st prize winner!

 

Here’s a big Thank You to the judges and organizers of the event, which was a total blast. And huge congrats to all the other indies presenting their awesome games! Pitching your app to a series of judges with a 3-minute timer ticking in the background can be a nerve-wracking experience, and I was very impressed with how polished people’s games and presentations ended up being. It was a truly inspiring and humbling evening, surrounded by so many creative minds of the indie world. Thanks again, stay in touch, and for those of you who couldn’t make it, check out some of the pictures below.

 

[Event photos: The Venue]

Check out more images at the official event website here.



Interview with Steve Paris

I’ve recently had a chance to virtually sit down with Steve Paris and chat about Axl and Tuna. Steve writes for Macworld UK and he was kind enough to write about the game and the story behind it. You can read the interview on his site here.


The Super Super Secret Process

Recently, @abitofcode asked me to do a post about my super secret use of tools, frameworks, and shortcuts while developing apps for iOS. So, here goes.

If you’re not up for reading a long post, here is the short answer: I invest a lot of my effort into physics simulations and procedural animations which, when executed correctly, result in organic-feeling worlds that appear to have a mind of their own.

The long answer is, … well, a little longer.

I find that good-looking and well-polished apps share two characteristics:

  1. The obvious animations and interactions appropriate to the functioning of the app and
  2. The more subtle movements and interactions that are non-essential but make the app appear seamless, comforting, and alive.

I tend to spend a lot of time working on point #2, perhaps too much, but to me that’s where the juice is. I also like to use fuzzy words such as comforting and empowering because I do believe that apps are much more about how they make you feel than about what they actually do.

Slightly off topic - Disney's 12 Principles of Animation

I recently found this super useful article on principles of effective animation. If you aren’t familiar with the basics of animation, it’s a good and quick read. Here is an excerpt:

“In the real world, the basic laws of physics were first described by Sir Isaac Newton and Albert Einstein. In the world of animation, however, we owe the laws of physics to Frank Thomas and Ollie Johnston. Working for the Walt Disney Company in its heyday of the 1930s, these two animators came up with the 12 basic principles of animation, each of which can be used to produce the illusion of characters moving realistically in a cartoon.”

Read the full article here…

Now, there are many ways to approach this second point. My preferred way is to inject physics simulation and procedural animation into elements that move on the screen. This can be an animated character, a button, or a simple label.

In the case of a character, a few springs and pivots can add a whole slew of beautiful behaviors for free. Well, “free” may be too strong a term, especially since rigging up a character can be a time-consuming task, but in certain cases it will save you a ton of time down the road.

Bobo the Robot was a great example. I took a static (non-simulated) body that I moved around the screen parametrically (i.e., moved it from point A to point B over a duration of C seconds, easing in and out of each position). To that body I attached a second body, this one simulated using physics. I added a pivot joint and three springs to hold it in place. On the static body I placed an image of Bobo’s wheels; on the dynamic body I placed an image of Bobo’s bulbous green head and blue torso, and I let the physics engine do its magic. When Bobo moved across the screen, his head ended up swaying so naturally from side to side that he immediately came across as alive. Magic.
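If you want to see what such a rig looks like in code, here’s a minimal sketch using the raw Chipmunk2D C API (Chipmunk 7 function names; the masses, anchors, and spring constants are made-up illustration values, not Bobo’s actual numbers):

#include <chipmunk/chipmunk.h>

int main(void) {
    cpSpace *space = cpSpaceNew();
    cpSpaceSetGravity(space, cpv(0, -300));

    // Kinematic base ("wheels"): driven by code, ignored by the solver.
    cpBody *base = cpSpaceAddBody(space, cpBodyNewKinematic());

    // Dynamic head: the part the engine animates for us.
    cpFloat mass = 1.0, radius = 20.0;
    cpBody *head = cpSpaceAddBody(space,
        cpBodyNew(mass, cpMomentForCircle(mass, 0, radius, cpvzero)));
    cpBodySetPosition(head, cpv(0, 40));

    // A pivot anchors the head to the base...
    cpSpaceAddConstraint(space,
        cpPivotJointNew2(base, head, cpv(0, 40), cpvzero));
    // ...and a damped spring pulls it back upright, so it sways and settles.
    cpSpaceAddConstraint(space,
        cpDampedSpringNew(base, head, cpv(0, 80), cpv(0, 20),
                          20.0,   // rest length
                          60.0,   // stiffness
                          2.0));  // damping

    // Each frame: move the base parametrically from A to B; the head lags
    // behind and sways on its own. (A real rig would set the kinematic
    // body's velocity instead of teleporting its position.)
    for (int frame = 0; frame <= 120; frame++) {
        cpFloat t = frame / 120.0;
        cpBodySetPosition(base, cpv(300.0 * t, 0));
        cpSpaceStep(space, 1.0 / 60.0);
    }

    cpSpaceFree(space);
    return 0;
}

One spring is enough to show the idea; Bobo’s actual rig used three.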

And as I said before, you are not limited to applying this principle to just characters. Buttons, labels, and dialogs can all benefit from a similar approach. Just know that it’s possible to overdo it. When in doubt, ask a passer-by for an opinion.

But I’m getting ahead of myself. First, the frameworks.

For my rendering and physics needs I rely on two open source frameworks – Cocos2D and Chipmunk2D.

Many, many years ago, at least five or so, some very smart and very dedicated people created Cocos2D for iOS. You probably already know this, but if you don’t, here is a very brief overview.

What is this Cocos2D anyway?

When you create a new iOS app, you get a lot of goodies from Apple for free in the form of the iOS SDK. I’m talking about ways to manipulate data, show UI elements (buttons, tables, and such), talk over the network, etc. All of the visual goo (UIKit, part of the iOS SDK) is built on top of OpenGL, a standardized way to talk to graphics chips to display individual pixels on the screen. That’s cool when you have a couple of buttons to deal with or if you’re using the optimized widgets that ship with UIKit directly. However, for more visually intensive experiences, such as those in a game, you will want to talk to OpenGL directly to get better performance.

OpenGL is pretty low level and requires you to deal with frame buffers and matrices and encoding types and whatnot, which is fine, but if you work with OpenGL directly, you end up spending more of your time making the framework do your bidding and less time developing your game.

That’s where Cocos2D comes in.

Cocos2D

With useful and highly-optimized abstractions such as sprites (moving images), animations, and batches of particles, it takes care of a lot of the OpenGL stuff for you. It also gives you entry points all along the way, so if you want to hack into OpenGL you can, but you don’t have to. Awesome!

Now, in iOS 7 Apple introduced something called SpriteKit, which is also part of the iOS SDK and is basically Apple’s version of Cocos2D. That’s cool, so why should you use Cocos2D, you ask? Maybe you shouldn’t, but I can tell you that Cocos2D is a much more mature framework than SpriteKit, at least right now, which means you can do a lot more with it straight out of the box. With some recent efforts, Cocos2D v3 came out, which, I believe, allows you to port your games onto Android fairly easily. I haven’t actually tried this, though, so don’t take my word for it. Finally, Cocos2D is an indie effort, which is inherently cool, but more importantly you can hack into its codebase, meaning it’s easy to tweak the framework to suit your needs. While I’m sure the purists in the crowd are giving me the evil eye right now, I do it …uhm… all the time. Another secret leaked…

Then some other very smart people – or perhaps just one person; I’m not 100% sure about the full origin story, forgive me Scott – got together and created Chipmunk Physics, also open source and also free. Once again, this is likely old news, but in case it isn’t, you will find more below.

Chipmunk Who?

Chipmunk2D

Chipmunk Physics, or Chipmunk2D as it has recently been renamed, is a free-ish, portable 2D physics engine written in C. It is the brainchild of Scott Lembcke and his Howling Moon Software company. It’s lean, it’s fast, it’s well written, it’s predictable, it’s extensible – it’s actually kind of awesome. It handles rigid-body collisions, springs, motors, hinges, and a slew of other stuff. I would also recommend you pay a little bit of money for the Pro version, which comes with a few extra goodies. More importantly, you’d be supporting Scott in his super awesome physics efforts, so you should definitely do that.

So, then, to make something interesting, you need to stitch the two together. There have been a couple of efforts to bridge them in some standardized way, but each fell short in its own way until Cocos2D v3.0 came along, which brought Chipmunk2D and Cocos2D together in one unified framework. All hail the folks involved in that effort! Going forward, you should definitely consider investing time to learn the ways of v3.0 because it will simplify your work on games and other Cocos2D apps significantly. However, since Cocos2D v3.0 is a relatively new effort that wasn’t available in my day, I ended up creating my own home-brewed solution, which taught me a couple of things:

Physics Simulations Take Time to Set Up Correctly

That’s another way of saying that physics simulations take in a lot of variables which, if not well balanced, can lead to unstable outcomes – i.e., your physics rig explodes. There is no easy way around this, but here are some steps that make the process less frustrating:

  1. Read the documentation – Many a time I would be struggling with a particular component of the simulation only to realize that there is an errorBias property somewhere that lets me adjust the very thing that’s unstable. Chipmunk has okay documentation on its website and you should definitely read through it. Also, look through and understand the tutorial code posted there. You will discover simpler ways of doing whatever it is you need to do. If all else fails, dig into the framework code itself and read through the comments.
  2. Create the simplest rig possible – Chances are it will be good enough. It will also simulate quicker, you will understand it better, and you will minimize potential places for things to go wrong. Can you fake the more complicated bits with a pre-rendered animation or some visual overlay? Do it!
  3. Ask questions on forums – Both Cocos2D and Chipmunk2D have great forums (here and here, respectively), and if you post a clear, thoughtful, and complete question, you will most likely receive a clear, thoughtful, and complete answer. The converse is also true. I often encounter questions of the type “my code is broken. why?” without much other information offered, which makes answers very difficult to come up with. You will get more applicable responses if you clearly state your issue, your expected outcome, and your actual outcome, and ask why the two don’t match. Posting a small snippet of code can also be helpful. Just don’t dump your entire project into the post, unless you want people to roll their eyes and move on. Finally, once you get your bearings on how to use a particular feature, go back to the forums and pay your karmic debt by answering questions for other people.
  4. Test your rigs on actual devices – Getting physics to feel “right” means that you need to test it under the same conditions as those of your final product. If you tweak some constants on the simulator running at 30 FPS and then play your game on a device running at 60 FPS, what felt natural might now feel too fast, and you’ll need to go back to the drawing board.
  5. Be patient – Tweaking takes time, and often you will have to try several approaches to find the one that works best. When I was working on Axl and Tuna, for example, I found that Axl glided along the track fairly well at slow speeds, but tended to bounce off the ground and lose contact with it at higher speeds. I tried a few things to fix this behavior: intercepting and modifying velocities during Axl-track collision callbacks, adding an invisible wheel underneath the track connected to Axl’s rig by a loose spring, etc., but none of these looked quite right. In the end, I simulated magnetic behavior by applying a small amount of force to Axl, pushing him toward the track surface along its normal whenever he was within some threshold distance of it (a version of this is sketched below). That approach finally did the trick, but it took several head-scratching sessions to get there.
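To make point 5 concrete, here’s a rough sketch of that magnet trick using Chipmunk2D’s C API (Chipmunk 7 names; nearestTrackPoint() is a hypothetical stand-in for however you query the track surface, the constants are placeholders, and pulling toward the nearest surface point approximates pulling along the surface normal):

#include <chipmunk/chipmunk.h>

// Hypothetical helper: returns the closest point on the track surface.
extern cpVect nearestTrackPoint(cpVect from);

static const cpFloat kMagnetRange = 30.0;  // pull only within this distance
static const cpFloat kMagnetForce = 400.0; // gentle, constant pull

// Call once per physics step, before cpSpaceStep().
void applyTrackMagnet(cpBody *player) {
    cpVect pos = cpBodyGetPosition(player);
    cpVect toSurface = cpvsub(nearestTrackPoint(pos), pos);
    cpFloat dist = cpvlength(toSurface);

    if (dist > 0.0 && dist < kMagnetRange) {
        // Push the player toward the track along the surface direction.
        cpVect dir = cpvmult(toSurface, 1.0 / dist);
        cpBodyApplyForceAtWorldPoint(player, cpvmult(dir, kMagnetForce), pos);
    }
}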

This brings me to my next point, which is…

Editors Are Your Friends

Now, don’t get me wrong. I love tweaking constants in code and recompiling and re-tweaking and recompiling as much as the next guy, but if your game or app is reasonably complex, doing this over and over is a major pain in the butt – especially if there are twenty different things to tweak and you don’t know where to start.

Fortunately for you, there are some editors such as R.U.B.E. and SpriteBuilder out there which, as I understand it, allow you to build CCNode hierarchies and plug them into the underlying physics mechanics. I’ve never actually used either because they are still fairly new tools, but they both look promising, especially because they appear extensible and seem to have a solid-looking visual interface that allows you to tweak values quickly and intuitively.

The extensibility component is very important because, inevitably, you’ll come up with some cool idea that the tools won’t support natively and extending the existing tools, rather than reinventing the wheel and building your own from scratch, will be your only path to salvation.

Unfortunately for me, when I began app development, some of these tools didn’t exist and I had to resort to building my own.

My MO was to bake an editor directly into the app I was building, and that worked fairly well. Here are a couple of examples:

[Screenshots of the in-app editor]

It started as a necessity to lay out text for interactive books that I was working on, but then, with a few extra tweaks, I started editing physics shapes, simulation constants, sprite placement, and the works. It was very helpful to momentarily pause a game (or a page in a book), tweak some values, and then restart it without having to recompile the code. I also found this setup to work well as a debugging tool that allowed me to quickly dive into complex bugs just by swiping my finger across the iPad screen. Sadly, there is a downside…

Editors Are Your Enemies

It turns out that when you invest your time into an editor, you don’t spend that time working on your game. Who knew, right? And if you are like me and, in the process of creating a crude editing environment for your game, you discover cool ways to constantly rewrite your framework to make it “easier to use”, you will get lost in your own rathole with no end in sight. In other words, sometimes it can be difficult to break yourself away from creating the tool and spend time creating the app.

It’s a balance. Editors can save time and frustration, but they also take time to build (and debug), so I find it useful to constantly ask myself – can I achieve what I need to achieve with the tools that I already have? If you are like me and building tools is exciting for you, the previous question is a good one to write in permanent marker above your monitor.

However, always consider the power of the editors that you already have. Will SpriteBuilder work for you? Can you export coordinates from a photo-editing program? Design your physics setup in Inkscape? That’s how the character rig for Axl was designed, for example. Use whatever tools you already have whenever you can.

The other problem is that editors tend to end up being project-specific. They will likely share a common infrastructure from one project to the next, but I find that each game or app has its own needs that require at least some form of a custom editing experience. In the past, I always ended up tweaking and rewriting my editors ever so slightly as I moved on to new apps.

So, Editors?

While working on Axl and Tuna, I asked myself whether I could create a run-time editor that was truly universal without having to spend a year writing the most flexible and extensible framework ever. Was there a compromise that delivered minimal, but necessary editing capabilities for a wide range of scenarios, one that was simple to use and integrate into any project?

I’m happy to say that I found an answer that worked for me. I’m sure I’m not the first one to have thought of this, but the solution I came up with is relatively easy to construct but still powerful enough to do what it needs to do.

What I’ve done is create a very simple editing framework with a corresponding editor that allows me to edit primitive values (ints, floats, vectors, etc.) organized into arbitrary hierarchies right within the app itself. Using a simple macro, any class can expose properties for editing. These can be backed by actual variables or just by named constants. If, during the app’s execution, the editor is invoked, I create a visual layer over the entire screen that scours a given hierarchy of objects, looking for and exposing any and all properties marked as editable. The editor displays the value of each property and, if you select one, you can use your finger to change its value directly on the iPad / iPhone screen. If no property is selected, the touches are passed into the scene underneath for normal app execution.

Once you find the right values for the properties you care about, you can either copy those values back into the code manually or dump the property tree into a plist that can be read in and applied to your app during its next execution.

Very crude, but very effective because it applies to a wide array of scenarios. So, there you have it – another secret exposed!
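To give you a flavor of the idea, here’s a tiny illustrative sketch of the registration half in plain C – the macro, names, and dump format are mine for this post, not the actual framework, which works on object hierarchies and drives a touch UI:

#include <stdio.h>

typedef struct { const char *name; float *value; } EditableProp;

#define MAX_PROPS 256
static EditableProp gProps[MAX_PROPS];
static int gPropCount = 0;

static void registerProp(const char *name, float *value) {
    if (gPropCount < MAX_PROPS)
        gProps[gPropCount++] = (EditableProp){ name, value };
}

// One macro exposes any float to the runtime editor.
#define EDITABLE(var) registerProp(#var, &(var))

// The editor overlay would list these, let a finger drag change *value,
// and dump the whole table for the next run.
static void dumpProps(FILE *out) {
    for (int i = 0; i < gPropCount; i++)
        fprintf(out, "%s = %g\n", gProps[i].name, *gProps[i].value);
}

float gJumpImpulse = 120.0f;     // illustrative tweakables
float gSpringStiffness = 60.0f;

int main(void) {
    EDITABLE(gJumpImpulse);
    EDITABLE(gSpringStiffness);
    dumpProps(stdout);           // stand-in for writing the plist
    return 0;
}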

Procedural Behaviors

Remember that organic feeling for apps I was talking about earlier? A lot of that comes from animation. I talked about physics-based animation already, but there is more you can do.

Sadly for me, I’m not an animator. I also don’t have one working with me. So I have to tackle animations programmatically.

This approach can be a work-intensive way to add movement to your apps. However, it can breathe unexpected life into them that you wouldn’t be able to achieve otherwise. Let me give you an example.

Bobo, my favorite robot character, has two mechanical arms. He uses his right arm to help you, the user, pull down pages from the top of the screen when you tap on their headings. Being curious and all, Bobo sometimes gets interested in whatever gizmo happens to be on a given page and may end up using his right arm to do something else for a moment (pull on a switch, reach up to tickle a monkey, etc.). If at that same moment you, the user, tap on a pull-down menu and Bobo’s right arm is occupied, he will simply switch and use his left arm to pull down the page instead. If Bobo were a traditionally animated character, with predefined keyframes, this type of interaction would either be impossible or it would result in one animation being abruptly cut off while the other played itself out. However, because Bobo monitors his whereabouts and can make simple decisions about how to behave in a given situation, he doesn’t always do the same thing to achieve a given result. Instead, he dynamically changes his behavior based on the circumstances and exhibits a much more varied array of movements, emotions, and animations.

To make development of this type of interactivity easier and avoid the pitfall of a whole bunch of spaghetti code, I invested a little bit of time in creating a behavior system. Basically, a character (or a menu button, for that matter) can perform a certain set of behaviors. Take Bobo as an example again. Bobo knows how to blink, how to look around, how to look at the user, how to sing a song, and how to move to a requested location, along with a bunch of other things. A given behavior can be either ON or OFF, and often several behaviors are ON at the same time. Some behaviors have higher priority (the user directing Bobo to go somewhere) than others (Bobo checking out a location on his own). Some behaviors deactivate others. Bobo singing and Bobo saying “Ouch!” are mutually exclusive, and because “Ouch!” has a higher priority, it will overshadow and automatically deactivate the singing behavior.

Anyway, you throw all these rules together, each one defined and governed by an isolated piece of code, and (if you are lucky) you get an experience that feels spontaneous and real and gives you a huge variety of responses to a set of conditions.
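Here’s a toy sketch of that kind of rule system in C; the priorities, exclusion groups, and names are invented for illustration:

#include <stdbool.h>
#include <stddef.h>

typedef struct Behavior {
    const char *name;
    int priority;        // higher wins a conflict
    int exclusionGroup;  // two active behaviors can't share a nonzero group
    bool active;
    void (*update)(struct Behavior *self, float dt);
} Behavior;

// Try to switch a behavior ON, respecting priorities and exclusions.
void activate(Behavior **all, size_t count, Behavior *wanted) {
    // Pass 1: an active, mutually exclusive, higher-priority behavior blocks us.
    for (size_t i = 0; i < count; i++) {
        Behavior *b = all[i];
        if (b != wanted && b->active && b->exclusionGroup != 0 &&
            b->exclusionGroup == wanted->exclusionGroup &&
            b->priority > wanted->priority)
            return;
    }
    // Pass 2: we win; shut down lower-priority conflicts ("Ouch!" stops singing).
    for (size_t i = 0; i < count; i++) {
        Behavior *b = all[i];
        if (b != wanted && b->active && b->exclusionGroup != 0 &&
            b->exclusionGroup == wanted->exclusionGroup)
            b->active = false;
    }
    wanted->active = true;
}

// Each frame, every active behavior gets a slice of time.
void updateAll(Behavior **all, size_t count, float dt) {
    for (size_t i = 0; i < count; i++)
        if (all[i]->active && all[i]->update)
            all[i]->update(all[i], dt);
}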

Parting Words

Before I go, here are a few final lessons I learned that you might find helpful in your own projects.

  1. Code structure – whatever you do, structure your code well and refactor it as you go along. Building code in isolated components makes testing, bug fixing, refactoring, and maintenance not only easier but possible. If you make your code structure bullet-proof, good code will follow.
  2. Bugs – fix your bugs early, as soon as you encounter them, even if you are in the middle of something else. I constantly interrupt my work because I notice that something is not happening the way it should. Waiting until the end means that you will find yourself facing a mountain of issues a week before you want to go live and that you will end up shipping a product that will fail in your users’ hands.
  3. Profiling – do it throughout your development to understand where your code is spending most of its time and which operations are costly. That practice will help you make design decisions that won’t corner you into an app that runs at 10 frames/sec. However, I would suggest not optimizing your app until the end. That way you won’t waste time perfecting code that you might not end up using in the final product.
  4. User feedback – get it early and get it often. Stand back and watch people get frustrated with your app without offering a single word of advice. That will take nerves of steel, but it will allow you to identify the parts of your app with which people struggle.
  5. Have your app be playable from day 1 - even if most of your app’s behavior is initially faked, seeing the final product in your hands early will help tremendously in guiding your design decisions going forward.

Finally, whatever you do, work on something that you love. While it’s possible to mess up a project that you really believe in, it’s nearly impossible to make a project you don’t believe in successful.

Now go and code your hearts out.



Axl and Tuna hits #1 Spot in Tunisia!

Thank you Tunisia!

[App Store games chart: Tunisia]

And hello China!

[App Store banner: China]


Axl and Tuna Make an Official Debut


Axl and Tuna, our latest iOS title, is now officially out! The app is featured in this week’s “Best New Games” and “Games We’re Playing” lists in countries around the world.


Look for it on the App Store or find out more here!


 


Lasers and Mirrors

Recently, someone asked me for tips on how to bounce a simulated ray of light around reflective surfaces. I thought it might be fun to post my answer here in case other folks find it useful.

In Bobo Explores Light, there are a couple of interactive pages that show rays of light bouncing dynamically around the screen:

[Screenshots: rays of light bouncing across two pages of Bobo Explores Light]

To achieve a similar effect in your game, you need to do the following:

  1. Figure out all the reflection points for your ray of light
  2. String an image along those points

Figuring Out Reflection Points

The first part is pretty straightforward. Simply follow these steps:

  1. Figure out the initial position and direction of your light ray. Let’s call that ray L with a starting position at point P.
  2. Figure out the width, position, and orientation of your mirror. Let’s call that segment M and the normal vector to it N.
  3. Test whether L crosses through M. You can use simple linear algebra to get your answer; the equations are discussed in detail here. If the two indeed intersect, let’s call the intersection point P’.
  4. Your reflected ray L’ begins at P’, in the direction of L reflected along N. Here is the equation to figure out the direction of the reflected vector.
  5. Let P = P’, L = L’ and repeat steps 3–5 as many times as you need.

In the process, you will create a series of points P, P’, P”, P”’, etc. The next thing you have to do is draw a line connecting those points.
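In code, step 4 boils down to a single formula. Here’s a self-contained sketch (the Vec2 type and helpers are illustrative; N is assumed to be unit length):

#include <math.h>
#include <stdio.h>

typedef struct { float x, y; } Vec2;

static float dot(Vec2 a, Vec2 b)    { return a.x * b.x + a.y * b.y; }
static Vec2  sub(Vec2 a, Vec2 b)    { return (Vec2){ a.x - b.x, a.y - b.y }; }
static Vec2  scale(Vec2 a, float s) { return (Vec2){ a.x * s, a.y * s }; }

// Reflect direction L about unit normal N: L' = L - 2(L.N)N
static Vec2 reflect(Vec2 l, Vec2 n) {
    return sub(l, scale(n, 2.0f * dot(l, n)));
}

int main(void) {
    Vec2 l = { 1.0f, -1.0f };  // incoming ray, heading down-right
    Vec2 n = { 0.0f, 1.0f };   // normal of a horizontal mirror
    Vec2 r = reflect(l, n);
    printf("reflected direction: (%g, %g)\n", r.x, r.y); // (1, 1): up-right
    return 0;
}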

Stringing an Image Along

You have several options on how to proceed here.

Option 1: Use OpenGL line drawing

You can draw low-level lines in OpenGL with a single call to glDrawArrays() using GL_LINES. The problem you might run into is line aliasing. Instead of the image on the LEFT, you get the image on the RIGHT:

[Left: smooth anti-aliased line. Right: jagged aliased line.]

Option 2: Use Core Graphics

To be perfectly honest, I haven’t played with the Core Graphics libraries on iOS. However, I hear they are pretty powerful. Ray Wenderlich has some cool Core Graphics tutorials on his site, such as this one by Brian Moakley, which might come in handy if this option is available to you. Some people swear by it.

Option 3: Stretch an image along each line segment

For this option you can use images (using UIKit’s UIImage, for example) or sprites (using the built-in SpriteKit framework or an external library such as Cocos2D) or something else entirely. Basically, you position the bottom of a stretchable image at point A and adjust its length so that it reaches all the way to point B. The cool thing is that this technique allows you to add custom glow effects and such, which can be quite neat. It’s also pretty simple to set up – you’re just stretching and rotating images. You will run into problems at the reflection points, however, especially when you are using wide and / or transparent images:

[Overlap artifacts where stretched images meet at a reflection point]

For thin lines, you might be able to get away with this glitch. But if not, I’d suggest…

Option 4: Draw a custom polygon using OpenGL

This is by far the most labor-intensive option, but it will yield sharp results. You want to draw a triangle strip using OpenGL, tracing points P, P’, P”, … thusly:

[Triangle strip tracing the reflection points]

In the image above, the triangle strip traces points 1, 2, 3, 4, 5, 6, 7, 8 in that order. Note that for this method to work, your light-ray image needs to be symmetrical. Otherwise you might get incorrect-looking image flipping at the reflection points.
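Generating the strip vertices is mostly bookkeeping: extrude each point sideways by half the beam width. Here’s a rough sketch that skips the Method A / Method B corner handling discussed next:

#include <math.h>

typedef struct { float x, y; } Vec2;

// Writes two vertices per path point into out[] (out needs 2 * count slots),
// ready for glDrawArrays(GL_TRIANGLE_STRIP, 0, 2 * count).
void buildStrip(const Vec2 *pts, int count, float halfWidth, Vec2 *out) {
    for (int i = 0; i < count; i++) {
        // Average direction through this point (prev -> next).
        Vec2 a = pts[i > 0 ? i - 1 : 0];
        Vec2 b = pts[i < count - 1 ? i + 1 : count - 1];
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = sqrtf(dx * dx + dy * dy);
        if (len == 0.0f) len = 1.0f;
        // Perpendicular to that direction, scaled to half the beam width.
        float nx = -dy / len * halfWidth;
        float ny =  dx / len * halfWidth;
        out[2 * i]     = (Vec2){ pts[i].x + nx, pts[i].y + ny }; // left edge
        out[2 * i + 1] = (Vec2){ pts[i].x - nx, pts[i].y - ny }; // right edge
    }
}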

This is essentially the method I ended up implementing in Bobo. The trick is in reflecting the polygon along the reflection points smoothly. I utilized two methods to get that part done.

Method A:

This method of folding the polygon onto itself works well when the angle of incidence is < 45º, but it becomes increasingly poor as you approach 90º, at which point the folding edge (points 3–4 in the images below) becomes infinitely long.

[Method A: the polygon folds onto itself at the reflection point; the folding edge runs between points 3 and 4]

Method B:

This method of reflecting the polygon, on the other hand, works well when the angle of incidence is > 45º, but, again, the folding edge approaches infinity as the angle gets closer and closer to 0º.

[Method B: the polygon is reflected at the surface; the ray image penetrates the mirror slightly]

As you can see, in this case the image of the light ray penetrates the reflecting surface, but generally speaking the ray image is much thinner than the reflection surface, so it’s not really a problem. The extra ray width often comes from the glow part of the image and if that part spills over the reflecting surface, it still visually works.

So, since both of the methods have their shortcomings, you combine them: use Method A when the angle of incidence is < 45º and Method B when it is > 45º. Note that Method A actually reverses, or reflects, the order of your vertices. So if your light ray is dynamic (i.e., it changes with time and context) and one reflection point switches from using Method A to using Method B or vice versa, you will need to follow your triangle strip down and reverse the order of vertex points from that point onward (i.e., switch the left and right vertices at each “kink”).

Another thing to note is that when you switch from Method A to Method B for a given point, the reflection fold switches from being perpendicular to being parallel to the surface normal. That change produces a visible jump akin to a glitch. To avoid drawing attention to it, you can overlay each reflection point with a glowing ball, thusly:

[A glowing ball overlaid on each reflection point]

That’s it! Definitely a lot of work, but the technique works very well in practice. I hope it works for you as well!



Bobo Makes a Facebook Debut


Bobo the Robot is now so cool that he has his very own Facebook page.  Check it out and learn all about where he came from, where he’s going, and what he’s up to this very moment!  Have you ever seen Bobo’s yearbook picture?  Log onto Facebook and take a peek right now.



Apple Design Awards and Airport Security Don’t Mix

As I was heading back to Seattle from WWDC, I was traveling with only a small backpack.  I had bundled the Apple Design Award into a t-shirt when I packed that morning, shoved it into my backpack, and forgot all about it when I got to the airport.  The backpack went through the x-ray machine and showed up as containing a perfect, black square.

The guy watching the screen from the x-ray machine called for another guy, and another guy, and pretty soon there was a small crowd scrutinizing the image.  The backpack came out, I sheepishly admitted to being the owner, and I was taken aside.  When the TSA folks pulled the cube from my bag, it glowed.

Whispers passed over the crowd.  After I explained that the cube is from Apple, the security folks reverently placed the futuristic artifact into its own plastic bin and ran it again through the x-ray machine.  This time, the other passengers got a glimpse of what the commotion was about and, once again, it was the glow of the cube readily visible as it entered and exited the x-ray machine that sent a wave of whispers through the sizable gathering.

Eventually, the cube made its way back into my bag, but the curious gazes kept coming.  I suspect I will be reading about “intercepted alien technology of unknown origin or purpose” in the blogosphere soon.  What can I say?  Apple knows how to design their products.



Apple Design Award

A little over a year ago, a picture of a robot was scribbled on a piece of brown craft paper.  He was named “Bobo”.  Last September, that same little robot made his debut, exploring the science behind light and delighting children and adults around the globe.

At WWDC, Apple recognized the mountain of work and polish that went into Bobo’s adventure with an Apple Design Award.

Bobo would never have become a reality without the incredible support of Apple’s platform and without the excitement and endorsement of thousands of kids around the world.

Thank you.

Every day, they adopt Bobo and invite him into their lives. Excitement builds curiosity, curiosity powers learning, and learning drives us all forward.  Keep learning.  Keep thinking.  And most of all, keep exploring!



It’s a Sharp, Sharp World…

…or Some Tips on How to Bring Your Big iPad App to the Even Bigger Retina Display

I’ve just spent the past several weeks updating Bobo Explores Light for the iPad’s new retina screen.  It was a tricky problem to get right, but I learned a couple of tricks along the way that you might find useful.  If so, read on…

The Problem

For vector-based or parametric iOS apps – ones that rely on 3D models or that perform some clever run-time rendering of 2D assets – the retina conversion is pretty straightforward.  All they need to do is introduce an x2 multiplier somewhere in the rendering path, and the final visuals will take advantage of the larger screen automagically.  There might still be a couple of textures and Heads-Up-Display images to update, but the scope of these changes is quite small.

My problem was different.

The Bobo app contains well over 1400 individual illustrations and image components that look pixelated when scaled up.  It also features several bitmap fonts that don’t scale well.  When I introduced the x2 multiplier over the entire scene, the app technically worked as expected, but it appeared fuzzy:

[Screenshot: the x2-scaled app looks fuzzy]

My first impulse was to replace all of the illustrations and fonts with their x2 resampled sharp equivalents.  This line of thinking, however, presented two immediate challenges:

1) I needed to manually upscale 1400 images.  That’s a lot!

Even though the illustrator behind the project, Dean MacAdam, kept high-res versions of all the images in the app, the process of creating the individual retina assets was very tedious:

  • Open every low-res (SD) image in Photoshop
  • Scale it to 200%
  • Overlay it with an appropriately scaled, rotated, and positioned high-res version of that same image
  • Discard the layer containing the original SD image
  • Save the new high-res image (HD) with a different filename
  • Repeat 1399 times

If each image were to take 5 minutes to convert – and that’s pretty fast – this conversion alone would take well over three weeks.  Yikes!

2) I needed to keep the size of the binary in check.

Bobo Explores Light already comes with a hefty 330MB footprint.  Not all of it is because of illustrations, since the app includes a number of videos, tons of sounds and narratives, etc.  But a good 200MB is.

Now, the retina display includes 4x as many pixels as a non-retina display.  If I were to embed an HD image for every SD image used in the app, the size of the Bobo binary would exceed 1GB (130MB for non-image content + 200MB for all SD images and 4 x 200MB for all HD images).  That just wasn’t an option.

The saving grace

When I calculated the above numbers, I reached the conclusion that in the case of Bobo, retina conversion was a futile effort.  Nonetheless, I got myself the latest iPad and did some experimenting.  My secret hope was that I could mix SD images with HD images and come up with an acceptable hybrid solution.  My secret fear, however, was that the few HD images would only highlight the pixelation of the SD images still on the screen and that it would be an all-or-nothing scenario.

I uploaded a few mock-up images onto the new device, iterated over several configurations, and was pleasantly surprised.  Not all, but some combinations of SD and HD images actually worked beautifully together.  In certain cases, the blurry SD images even added a sense of depth to the overall scene, resulting in a poor man’s depth-of-field effect.

I was excited because these results helped me address both of the problems I outlined above.  By being selective about which images I needed to convert, the total number of retina assets shrank to 692.  Still a large number, but less than half of the original.  Also, the ballooning of the binary size would be diminished.  That problem would not be solved, mind you, but it would certainly help.

Text

Text was the number one item in the app that screamed “I’m pixelated!”.  Native iOS code renders such beautifully sharp text on the new iPad that any text pixelation in the Bobo app stuck out like a sore thumb.  This part was easy to fix, though.  By loading a larger font on retina devices, all of the text that was dynamically laid out suddenly snapped into focus.  Unfortunately for me, not all of the text in the app was dynamically laid out.

Bobo features well over 100 pages of text with images in the form of side articles and interesting factoids.  For the sake of saving time when we worked on v1.0 of the app, we baked some of that text and images together and rendered the entire page as a single image.  This approach really helped us streamline the creation process and push the app out in time.  All in all, these text-images amounted to about 80MB of the final binary, but given the time it saved us, it was the right approach at the time.  Now, however, it presented a problem.

If we were to re-sample all these text-images for the retina display, we would gain ~80MB x 4 = ~320MB of additional content from the text alone.  That was way too much.  But we *needed* to render sharp text.  So we bit the bullet, separated the text from its background, and dynamically laid out all the text at run-time.

This conversion took well over two weeks, but it was worth the effort.  The text became sharp without requiring any more space.  At the same time, we were able to keep all the photographs interleaved with the text as SD images.  Because these were photographs that were visually fairly busy and because they were positioned next to sharp text that drew the attention of the eyes, the apparent blurring from the pixelation was minimal.  Additionally, without any baked text the background images compressed into much smaller chunks, giving us about 50MB worth of savings.  That was not only cool, but very necessary.

Home-Brewed Cocos2D Solution

Bobo is built on top of the open-source Cocos2D framework (an awesome framework with a great community of developers – I highly recommend it!).  Out of the box, Cocos2D supports loading retina-specific images using a naming convention.  However, this functionality is somewhat limited.  If all of the images in an app are either HD or SD, it works great.  But my needs were such that I required mixing and matching of the two, often without knowing ahead of time which images needed upscaling until I tried them out.  I needed a solution that would allow me to replace HD images with SD images on a whim, without having to touch the code every time I did so.

Way back when, when I was working on The Little Mermaid and Three Little Pigs, I created an interactive book framework where I separated the metadata of each page (text positioning, list of images, etc.) from the actual Cocos2D sprites and labels that would render them on the screen.  This is a fairly common development pattern, but I can never remember what it’s officially called (View-Model separation maybe?).  Anyway, I used this separation to my advantage in Three Little Pigs to create the x-ray vision feature.  Render the metadata one way and the page appears normal; render that same data another way and you are looking at the x-ray version of that page.  Super simple and super effective.

With this mechanism in place, I was able to modify a single point in the rendering code to load differently scaled assets based on what assets were available.  In pseudo-code, the process looked something like this:

Sprite giveMeSpriteWithName(name) {
    // Prefer the HD asset when we're on a retina device and it exists...
    if (retina && nameHD exists in the application bundle) {
        sprite = sprite with name(nameHD);
        sprite.scale = 1;  // HD art is already at native resolution
        return sprite;
    }
    // ...otherwise fall back to the SD asset and scale it up on retina.
    else {
        sprite = sprite with name(name);
        sprite.scale = retina ? 2 : 1;
        return sprite;
    }
}

It got a little more complicated because of parenting issues (if SD and HD images belonged to different texture atlases, they each needed their own parents), but this was the core of it.  What this meant was that all of the pages, by default, took SD images and scaled them up.  Apart from appearing pixelated, the pages looked and behaved correctly.  Then I could go in and, image by image, decide which assets needed to be converted to HD, testing these incremental changes on a retina device as I went along.

There was some tediousness involved, for sure.  However, I quickly got a sense of which portions of which pages needed updating, and I came up with the following rough rules that will hopefully come in handy for you as well.

Things That Scream “I’m pixelated!”

1) Type

At the very least, convert all your fonts, whether they are baked into images or laid out dynamically.  Your eye focuses almost instantly on any text on the screen, and the fuzzy curves on letters become immediately noticeable.  By that same token, convert *all* of your fonts – don’t skimp by converting only the main font that you use in 90% of the cases.  The other fuzzy 10% would essentially nullify the entire effort.

2) Small parts that are the focus of attention

When converted to HD, cogs, wheels, pupils, and other tiny components all make a huge difference in giving the app the *appearance* of fine detail, even if the larger images, however bright and prominent, are still in SD.  Moreover, because these smaller images are … uhm… small, scaling them up doesn’t take that much extra space, so it’s a win-win setup.

3) High-contrast boundaries

Bobo’s head is a perfect example.  Most of the time, Bobo moves across dark colors with his bright green bulbous head in sharp contrast with the background.  Even though Bobo’s head was relatively large, it begged for a razor-sharp edge on most pages.

Things That You Can Probably Ignore

1) Action sequences

This one can sometimes go either way, but it’s still worth mentioning.  If something pixelated moves across the screen, the movement will mask the pixelation enough that no one will really care.  However, if you have an action sequence that draws the attention of the eye and contains at least some amount of stillness, the pixelation will show.

2) Shadows, glows, and fuzzy things

All of these guys *benefit* from pixelation – definitely don’t bother with them.  If anything, downscale them even for SD displays and no one will be the wiser.  Seriously, this is a great trick.  For anything that has a nondescript texture without sharp outlines (either because the outlines should be fuzzy or because they are covered by other images), store a 50% version and scale it up dynamically in code to 200% on non-retina displays and 400% on retina displays.  The paper image behind all the side articles in Bobo Explores Light is a perfect example.  The texture itself is a little fuzzy, but because it is lined with sharp metal edges and overlaid with sharp text, nobody cares.

When All Else Fails…

A few times I found myself in situations where the SD image was too fuzzy on the retina display, but the HD image took way too much space to store efficiently.  What I ended up doing in those cases was to create a single 150% version of the image and scale it down to 66% for SD displays and 133% for HD displays.  The results were perfectly passable in both cases.
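In code, the scale choice is a one-liner (the function name is mine):

// Store one asset at 150% of SD size, then draw it scaled:
//   SD display: 1.0 / 1.5 = ~66%
//   HD display: 2.0 / 1.5 = ~133%
float scaleFor150PercentAsset(int isRetina) {
    return (isRetina ? 2.0f : 1.0f) / 1.5f;
}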

Final Tallies

When all was said and done and my eyes were spinning from some of the more repetitive tasks, I was very curious to see how much the binary had expanded.  I kept an ongoing tally as I went through this process, but for various reasons it wasn’t super accurate.  When I compiled the finished version, I discovered that not only did the binary not expand, it *shrank* by a whopping 50 MB!  This whole process took one freakishly tedious month to complete, but in the end the retina-enabled version of the app was significantly smaller than its non-retina original.

I don’t know whether that says more about my initial sloppiness or about the effectiveness of the retina conversion.  I’ll leave that as a question for the reader.  Nonetheless, the results were exciting, and Bobo Explores Light looks, if I dare say, pretty darn sharp on the new iPad.  Check it out!

