
The Super Super Secret Process

Recently, @abitofcode asked me to do a post about my super secret use of tools, frameworks, and shortcuts while developing apps for iOS. So, here goes.

If you’re not up for reading a long post, here is the short answer: I invest a lot of effort into physics simulations and procedural animations which, when executed correctly, result in organic-feeling worlds that appear to have a mind of their own.

The long answer is, … well, a little longer.

I find that good-looking and well-polished apps share two characteristics:

  1. The obvious animations and interactions appropriate to the functioning of the app and
  2. The more subtle movements and interactions that are non-essential but make the app appear seamless, comforting, and alive.

I tend to spend a lot of time working on point #2, perhaps too much, but to me that’s where the juice is. I also like to use fuzzy words such as comforting and empowering because I do believe that apps are much more about how they make you feel rather than what they actually do.

Slightly off topic - Disney's 12 Principles of Animation

I recently found this super useful article on principles of effective animation. If you aren’t familiar with the basics of animation, it’s a good and quick read. Here is an excerpt:

“In the real world, the basic laws of physics were first described by Sir Isaac Newton and Albert Einstein. In the world of animation, however, we owe the laws of physics to Frank Thomas and Ollie Johnston. Working for the Walt Disney Company in its heyday of the 1930s, these two animators came up with the 12 basic principles of animation, each of which can be used to produce the illusion of characters moving realistically in a cartoon.”

Read the full article here…

Now, there are many ways to approach this second point. My preferred way is to inject physics simulation and procedural animation into elements that move on the screen. This can be an animated character, a button, or a simple label.

In the case of a character, a few springs and pivots can add a whole slew of beautiful behaviors for free. Well, “free” may be a little strong a term, especially since rigging up a character can be a time-consuming task, but in certain cases it will save you a ton of time down the road.

Bobo the Robot was a great example. I took a static (non-simulated) body that I moved around the screen parametrically (i.e. moved it from point A to point B over a duration of C seconds, easing in and out of each position). To that body I attached a second body, this one simulated using physics, held in place with a pivot joint and three springs. On the static body I placed an image of Bobo’s wheels, on the dynamic body I placed an image of Bobo’s bulbous green head and blue torso, and I let the physics engine do its magic. When Bobo moved across the screen, his head ended up swaying so naturally from side to side that he immediately read as alive. Magic.
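The sway effect is easy to prototype even outside a physics engine. Here is a minimal pure-Python sketch of the same idea (not Chipmunk code; all constants are made up for illustration): a kinematic base eases from A to B while a head attached by a damped spring lags behind and sways.

```python
import math

def ease_in_out(t):
    """Cosine ease: 0 -> 1 with zero velocity at both ends."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def simulate_sway(ax, bx, duration, steps=600, k=40.0, damping=4.0):
    """Move a kinematic base from ax to bx over `duration` seconds;
    a head attached to it by a damped spring lags behind and sways.
    Returns the head's final offset from the destination."""
    dt = duration / steps
    head_x, head_v = ax, 0.0
    for i in range(steps):
        t = (i + 1) / steps
        base_x = ax + (bx - ax) * ease_in_out(t)
        # Spring pulls the head toward the base; damping bleeds off energy.
        accel = k * (base_x - head_x) - damping * head_v
        head_v += accel * dt          # semi-implicit Euler
        head_x += head_v * dt
    return head_x - bx

offset = simulate_sway(0.0, 100.0, duration=2.0)   # small residual sway
```

In a real rig, the pivot joint and springs do this integration for you; the point is that the secondary motion falls out of the simulation for free.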

And as I said before, you are not limited to applying this principle to just characters. Buttons, labels, and dialogs can all benefit from a similar approach. Just know that it’s possible to over-do it. When in doubt, ask a passer-by for an opinion.

But I’m getting ahead of myself. First, the frameworks.

For my rendering and physics needs I rely on two open source frameworks – Cocos2D and Chipmunk2D.

Many, many years ago, at least five or so, some very smart and very dedicated people created Cocos2D for iOS. You probably already know this, but if you don’t, here is a very brief overview.

What is this Cocos2D anyway?

When you create a new iOS app, you get a lot of goodies from Apple for free in the form of the iOS SDK. I’m talking about ways to manipulate data, show UI elements (buttons, tables, and such), talk over the network, etc. All of the visual goo (UIKit, part of the iOS SDK) is built on top of OpenGL, a standardized way to talk to graphics chips to display individual pixels on the screen. That’s cool when you have a couple of buttons to deal with or if you’re using optimized widgets that ship with UIKit directly. However, for any other visually intensive experiences, such as what you might want to expose in a game, you will want to talk to OpenGL directly to get better performance.

OpenGL is pretty low level and requires you to deal with frame buffers and matrices and encoding types and what not, which is fine, but if you work with OpenGL, you end up spending more of your time making the framework do your bidding and less time developing your game.

That’s where Cocos2D comes in.


With useful and highly-optimized abstractions such as sprites (moving images), animations, and batches of particles, it takes care of a lot of the OpenGL stuff for you. It also gives you entry points all along the way, so if you want to hack into OpenGL you can, but you don’t have to. Awesome!

Now, in iOS 7 Apple introduced something called SpriteKit, which is also part of the iOS SDK and is basically Apple’s version of Cocos2D. That’s cool, so why should you use Cocos2D, you ask? Maybe you shouldn’t, but I can tell you that Cocos2D is a much more mature framework than SpriteKit, at least right now, which means you can do a lot more with it straight out of the box. With some recent efforts, Cocos2D v3 came out which, I believe, allows you to port your games onto Android fairly easily. I haven’t actually tried this, though, so don’t take my word for it. Finally, Cocos2D is an indie effort, which is inherently cool, but more importantly you can hack into its codebase, meaning it’s easy to tweak the framework to suit your needs. While I’m sure the purists in the crowd are giving me the evil eye right now, I do it …uhm… all the time. Another secret leaked…

Then, some other very smart people, or perhaps just one person – I’m not 100% sure about the full origin story, forgive me Scott – got together and created Chipmunk Physics, also open source and also free. Once again, this is likely old news, but in case it isn’t, below you will find more.

Chipmunk Who?


Chipmunk Physics, or Chipmunk2D as it has recently been renamed, is a free-ish, portable 2D physics engine written in C. It is the brainchild of Scott Lembcke and his Howling Moon Software company. It’s lean, it’s fast, it’s well written, it’s predictable, it’s extensible, it’s actually kind of awesome. It handles rigid-body collisions, springs, motors, hinges, and a slew of other stuff. I would recommend you pay a little bit of money to get the Pro version as well. It comes with a few extra goodies. More importantly, though, you’d be supporting Scott in his super awesome physics efforts, so you should definitely do that.

So, then, to make something interesting, you need to stitch the two together. There have been a couple of efforts to bridge the two in some standardized way, but they all seemed to fall short in their own ways until Cocos2D v3.0 came along, which brought Chipmunk2D and Cocos2D together in one, unified framework. All hail the folks involved in that effort! Going forward, you should definitely consider investing time to learn the ways of v3.0 because it will simplify your work on games and other Cocos2D apps significantly. However, since Cocos2D v3.0 is still a relatively new effort that wasn’t available in my heyday, I ended up creating my own, home-brewed solution, which taught me a couple of things:

Physics Simulations Take Time to Setup Correctly

That’s another way of saying that physics simulations involve a lot of variables which, if not well balanced, can lead to unstable outcomes – i.e. your physics rig explodes. There is no easy way around this, but here are some steps that make the process less frustrating:

  1. Read the documentation – Many a time I would be struggling with a particular component of the simulation only to realize that there is an errorBias property somewhere that lets me adjust the very thing that’s unstable. Chipmunk has okay documentation on its website and you should definitely read through it. Also, look through and understand the tutorial code posted there. You will discover simpler ways of doing whatever it is that you need to do. If all else fails, dig into the framework code itself and read through the comments.
  2. Create the simplest rig possible – Chances are it will be good enough. It will also simulate quicker, you will understand it better, and you will minimize potential places for things to go wrong. Can you fake the more complicated bits with a pre-rendered animation or some visual overlay? Do it!
  3. Ask questions on forums – Both Cocos2D and Chipmunk2D have great forums (here and here, respectively), and if you post a clear, thoughtful, and complete question, you will most likely receive a clear, thoughtful, and complete answer. The converse is also true. I often encounter questions of the type “my code is broken. why?” with little other information offered, which makes answers very difficult to come up with. You will get more applicable responses if you clearly state your issue, list your expectation and your actual outcome, and ask why the two don’t match. Posting a small snippet of code can also be helpful. Just don’t dump your entire project into the post, unless you want people to roll their eyes and move on. Finally, once you get your bearings on how to use a particular feature, go back to the forums and pay your karmic debt by answering questions for other people.
  4. Test your rigs on actual devices – Getting physics to feel “right” means that you need to test it in the same conditions as those of your final product. If you tweak some constants on the simulator running at 30 FPS and then play your game on a device running at 60 FPS, what felt natural might now feel too fast and you need to go back to the drawing board.
  5. Be patient – Tweaking takes time and often you will have to try several approaches to find the one that works the best. When I was working on Axl and Tuna, for example, I found that Axl was gliding along a track fairly well at slow speeds, but tended to bounce off the ground and not make much contact with it at higher speeds. I tried a few things to fix this behavior: I tried intercepting and modifying velocities during axl-track collision callbacks, I tried adding an invisible wheel underneath the track connected to Axl’s rig by a loose spring, etc. but none of these looked quite right. In the end, I simulated magnetic behavior by applying a small amount of force to Axl, pushing him towards the track surface along its normal, whenever Axl was within some threshold distance from it. That approach finally did the trick, but it took several head-scratching sessions to get there.
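For what it’s worth, the magnet trick from step 5 can be sketched in a few lines. This is illustrative Python, not the actual Axl and Tuna code, and the threshold and strength constants are invented:

```python
def track_attraction_force(char_pos, surface_point, surface_normal,
                           threshold=20.0, strength=500.0):
    """Fake magnetism: return an (fx, fy) force pushing the character
    toward the track along its (unit) surface normal, but only while the
    character hovers within `threshold` units of the surface."""
    nx, ny = surface_normal
    # Signed distance from the surface, measured along the normal.
    dist = ((char_pos[0] - surface_point[0]) * nx +
            (char_pos[1] - surface_point[1]) * ny)
    if dist <= 0.0 or dist > threshold:
        return (0.0, 0.0)        # already touching, or too far to matter
    return (-strength * nx, -strength * ny)   # push back toward the track
```

Applied once per physics step, a force like this keeps the character glued to the track at speed without visibly changing its trajectory elsewhere.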

This brings me to my next point, which is…

Editors Are Your Friends

Now, don’t get me wrong. I love tweaking constants in code and recompiling and re-tweaking and recompiling as much as the next guy, but if your game / app is reasonably complex, doing this process over and over is a major pain in the butt. Especially if there are twenty different things to tweak and you don’t know where to start.

Fortunately for you, there are some editors such as R.U.B.E. and SpriteBuilder out there which, as I understand it, allow you to build CCNode hierarchies and plug them into the underlying physics mechanics. I’ve never actually used either because they are still fairly new tools, but they both look promising, especially because they appear extensible and seem to have a solid-looking visual interface that allows you to tweak values quickly and intuitively.

The extensibility component is very important because, inevitably, you’ll come up with some cool idea that the tools won’t support natively and extending the existing tools, rather than reinventing the wheel and building your own from scratch, will be your only path to salvation.

Unfortunately for me, when I began app development, some of these tools didn’t exist and I had to resort to building my own.

My MO was to bake an editor directly into the app I was building, and that worked fairly well. Here are a couple of examples:

[Screenshots of the baked-in editor]

It started as a necessity to lay out text for interactive books that I was working on, but then, with a few extra tweaks, I started editing physics shapes, simulation constants, sprite placement, and the works. It was very helpful to momentarily pause a game (or a page in a book), tweak some values, and then restart it without having to recompile the code. I also found this setup to be a great debugging tool that allowed me to quickly dive into complex bugs just by swiping my finger across the iPad screen. Sadly, there is a downside…

Editors Are Your Enemies

It turns out that when you invest your time into an editor, you don’t spend that time working on your game. Who knew, right? And if you are like me and, in the process of creating a crude editing environment for your game, you discover cool ways to constantly rewrite your framework to make it “easier to use”, you will get lost in your own rathole with no end in sight. In other words, sometimes it can be difficult to break yourself away from creating the tool and spend time creating the app.

It’s a balance. Editors can save time and frustration, but they also take time to build (and debug), so I find it useful to constantly ask myself – can I achieve what I need to achieve with the tools that I already have? If you are like me and building tools is exciting for you, the previous question is a good one to write in permanent marker above your monitor.

However, always consider the power of the editors you already have. Will SpriteBuilder work for you? Can you export coordinates from a photo-editing program? Design your physics setup in Inkscape? The image on the right, for example, is the design of a character rig for Axl. Use whatever tools you already have whenever you can.

The other problem is that editors tend to end up being project-specific. They will likely end up sharing a common infrastructure from one project to the next, but I find that each game / app has its own needs that require at least some form of a custom editing experience. In the past I always ended up tweaking and rewriting editors ever so slightly as I progressed in my creation of new apps.

So, Editors?

While working on Axl and Tuna, I asked myself whether I could create a run-time editor that was truly universal without having to spend a year writing the most flexible and extensible framework ever. Was there a compromise that delivered minimal, but necessary editing capabilities for a wide range of scenarios, one that was simple to use and integrate into any project?

I’m happy to say that I found an answer that worked for me. I’m sure I’m not the first one to have thought of this, but the solution I came up with is relatively easy to construct but still powerful enough to do what it needs to do.

What I’ve done is create a very simple editing framework with a corresponding editor that allows me to edit primitive values (ints, floats, vectors, etc.) organized into arbitrary hierarchies right within the app itself. Using a simple macro, any class can expose properties for editing. These can be backed by actual variables or just by named constants. If, during the app execution, an editor is invoked, I simply create a top visual layer and place it over the entire screen. That layer scours a given hierarchy of objects, looking for and exposing any and all properties marked as editable. The editor displays the value for each property and, if you select it, you can use your finger to change its value directly on the iPad / iPhone screen. If no property is selected, the touches are passed into the scene underneath for normal app execution.

Once you find the right values for the properties you care about you can either copy those values back into the code manually or you can dump the property tree into a plist that can be read in and applied to your app during its next execution.
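In case it helps to see the shape of such a system, here is a rough Python analogue (the Objective-C macro becomes a method call, the plist becomes a plain dict, and all class and property names are hypothetical):

```python
class Editable:
    """Mixin: objects register primitive properties for in-app editing."""
    def __init__(self):
        self._editable = set()   # names of exposed properties
        self.children = []       # nested hierarchy of Editable objects

    def expose(self, name):
        # Back the editable entry by a plain attribute of the same name.
        self._editable.add(name)

    def property_tree(self):
        """Walk the hierarchy and dump editable values, plist-style."""
        tree = {name: getattr(self, name) for name in self._editable}
        for child in self.children:
            tree[type(child).__name__] = child.property_tree()
        return tree

    def apply_tree(self, tree):
        """Read saved values back in on the next run."""
        for name in self._editable:
            if name in tree:
                setattr(self, name, tree[name])
        for child in self.children:
            child.apply_tree(tree.get(type(child).__name__, {}))

class Player(Editable):
    def __init__(self):
        super().__init__()
        self.jump_force = 120.0
        self.expose("jump_force")

p = Player()
p.jump_force = 150.0           # "tweaked with a finger swipe" in the editor
saved = p.property_tree()      # dump to a plist-like dict
q = Player()
q.apply_tree(saved)            # the next launch picks up the tweak
```

The editor UI itself is just a view over `property_tree()`; the persistence round-trip is the part that saves the recompile.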

Very crude, but very effective because it applies to a wide array of scenarios. So, there you have it – another secret exposed!

Procedural Behaviors

Remember that organic feeling for apps I was talking about earlier? A lot of that comes from animation. I talked about physics-based animation already, but there is more you can do.

Sadly for me, I’m not an animator. I also don’t have one working with me. So I have to tackle animations programmatically.

This approach can be a work-intensive way to add movement into your apps. However, it can breathe unexpected life that you wouldn’t be able to achieve otherwise. Let me give you an example.

Bobo, my favorite robot character, has two mechanical arms. He uses his right arm to help you, the user, pull down pages from the top of the screen when you tap on their heading. Being curious and all, Bobo sometimes gets interested in whatever gizmo happens to be on a given page and may end up using his right arm to do something else for a moment (pull on a switch, reach up to tickle a monkey, etc.). If at that same moment you, the user, tap on a pull down menu and Bobo’s right arm is occupied, he will just switch and use his left arm to pull down the page instead. If Bobo was a traditionally animated character, with predefined keyframes, this type of an interaction would either be impossible or it would result in one animation being abruptly cut off while the other played itself out. However, because Bobo is monitoring his whereabouts and can make simple decisions on how to behave in a given situation, he doesn’t always do the same thing to achieve a given result. Instead, he dynamically changes his behavior based on the circumstances and exhibits a much more varied array of movements, emotions, and animations.

 To make development of this type of interactivity easier and avoid the pitfall of a whole bunch of spaghetti code, I invested a little bit of time to create a behavior system. Basically, a character (or a menu button for that matter) can perform a certain set of behaviors. Take Bobo as an example again. Bobo knows how to blink, how to look around, how to look at the user, how to sing a song, how to move to a requested location, along with a bunch of other things. A given behavior can be either ON or OFF and often several behaviors are ON at the same time. Some behaviors have higher priority (user directing Bobo to go somewhere) than others (Bobo checking out a location on his own). Some behaviors deactivate others. Bobo singing and Bobo saying “Ouch!” are mutually exclusive and because “Ouch!” has a higher priority it will overshadow and automatically deactivate the singing behavior.

Anyway, you throw all these rules together, each one defined and governed by an isolated piece of code, and (if you are lucky) you get an experience that feels spontaneous and real and gives you a huge variety of responses to a set of conditions.
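A toy version of such a behavior system might look like this in Python. The priorities, names, and the single exclusion rule are illustrative, not taken from Bobo’s actual code:

```python
class Behavior:
    def __init__(self, name, priority, excludes=()):
        self.name = name
        self.priority = priority
        self.excludes = set(excludes)   # names this behavior switches off

class BehaviorSystem:
    """Several behaviors may be ON at once; activating one switches off
    any lower-priority behavior it declares as mutually exclusive."""
    def __init__(self):
        self.active = {}                # name -> Behavior

    def activate(self, behavior):
        for name in list(self.active):
            other = self.active[name]
            if name in behavior.excludes and other.priority <= behavior.priority:
                del self.active[name]   # e.g. "Ouch!" cancels singing
        self.active[behavior.name] = behavior

sing  = Behavior("sing",  priority=1)
blink = Behavior("blink", priority=1)
ouch  = Behavior("ouch",  priority=5, excludes={"sing"})

bobo = BehaviorSystem()
bobo.activate(sing)
bobo.activate(blink)
bobo.activate(ouch)    # deactivates "sing", leaves "blink" running
```

Each behavior stays an isolated piece of code; the variety comes from which combinations happen to be active at once.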

Parting Words

Before I go, here are a few final lessons I learned that you might find helpful in your own projects.

  1. Code structure – whatever you do, structure your code well and refactor it as you go along. Building code in isolated components makes testing, bug fixing, refactoring, and maintenance not only easier but possible. If you make your code structure bullet-proof, good code will follow.
  2. Bugs – fix your bugs early and when you encounter them, even if you are in the middle of something else. I constantly interrupt my work because I notice that something is not happening the way it should be. Waiting all the way until the end means that you will find yourself facing a mountain of issues a week before you want to go live and that you will end up shipping a product that will fail in your users’ hands.
  3. Profiling – do it throughout your development to understand where your code is spending most of its time and which operations are costly. That practice will help you come up with design decisions that won’t corner you into an app that runs at 10 frames/sec. However, I would suggest not optimizing your app until the end. That way you won’t waste time perfecting code that you might not end up using in the final product.
  4. User feedback – get it early and get it often. Stand back and watch people get frustrated with your app without offering a single word of advice. That will take nerves of steel, but it will allow you to identify the parts of your app with which people struggle.
  5. Have your app be playable from day 1 – even if most of your app’s behavior is initially faked, seeing the final product in your hands early will help tremendously in guiding your design decisions going forward.

Finally, whatever you do, work on something that you love. While it’s possible to mess up a project that you really believe in, it’s nearly impossible to make a project you don’t believe in successful.

Now go and code your hearts out.

Lasers and Mirrors

Recently, someone asked me for tips on how to bounce a simulated ray of light around reflective surfaces. I thought it might be fun to post my answer here in case other folks find it useful.

In Bobo Explores Light, there are a couple of interactive pages that show rays of light bouncing dynamically around the screen:

[Screenshots: laser pages from Bobo Explores Light]

To achieve a similar effect in your game, you need to do the following:

  1. Figure out all the reflection points for your ray of light
  2. String an image along those points

Figuring Out Reflection Points

The first part is pretty straightforward. Simply follow these steps:

  1. Figure out the initial position and direction of your light ray. Let’s call that ray L with a starting position at point P.
  2. Figure out the width, position, and orientation of your mirror. Let’s call that segment M and the normal vector to it N.
  3. Test whether L crosses through M. You can use simple linear algebra equations to get your answer, and those are discussed in detail here. If the two indeed intersect, let’s call the intersection point P’.
  4. Your reflected ray L’ begins at P’, in the direction of L reflected along N. Here is the equation to figure out the direction of the reflected vector.
  5. Let P = P’, L = L’ and repeat steps 3–5 as many times as you need.

In the process, you will create a series of points P, P’, P”, P”’, etc. The next thing you have to do is draw a line connecting those points.
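Those steps translate almost directly into code. Here is a small Python sketch, assuming unit-length mirror normals and using the standard 2D cross product for the ray–segment intersection test:

```python
def reflect(d, n):
    """Reflect direction d across a surface with unit normal n:
    d' = d - 2(d.n)n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2.0 * dot * n[0], d[1] - 2.0 * dot * n[1])

def ray_segment_hit(p, d, a, b):
    """Intersect the ray p + t*d (t > 0) with the segment a-b.
    Returns the hit point, or None if they miss each other."""
    cross = lambda ux, uy, vx, vy: ux * vy - uy * vx
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = cross(d[0], d[1], ex, ey)
    if abs(denom) < 1e-9:
        return None                          # parallel to the mirror
    apx, apy = a[0] - p[0], a[1] - p[1]
    t = cross(apx, apy, ex, ey) / denom      # distance along the ray
    s = cross(apx, apy, d[0], d[1]) / denom  # position along the mirror
    if t <= 1e-9 or not (0.0 <= s <= 1.0):
        return None                          # behind the ray, or a miss
    return (p[0] + t * d[0], p[1] + t * d[1])

def trace_ray(p, d, mirrors, max_bounces=8):
    """Collect the polyline P, P', P'', ... by bouncing off the nearest
    mirror each time. Each mirror is (a, b, n) with n a unit normal."""
    points = [p]
    for _ in range(max_bounces):
        best = None
        for a, b, n in mirrors:
            q = ray_segment_hit(p, d, a, b)
            if q is not None:
                d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
                if best is None or d2 < best[0]:
                    best = (d2, q, n)
        if best is None:
            break                            # ray escaped the scene
        p, d = best[1], reflect(d, best[2])
        points.append(p)
    return points

mirror = ((-10.0, 0.0), (10.0, 0.0), (0.0, 1.0))   # floor mirror + normal
path = trace_ray((-5.0, 5.0), (1.0, -1.0), [mirror])
```

`trace_ray` returns exactly the series of points that the next part strings an image along.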

Stringing an Image Along

You have several options on how to proceed here.

Option 1: Use OpenGL line drawing

You can draw low-level lines in OpenGL with a GL_LINES draw call (Cocos2D wraps this in its ccDrawLine() helper). The problem you might run into is line aliasing. Instead of the image on the LEFT, you get the image on the RIGHT:

[Anti-aliased line vs. aliased line]

Option 2: Use Core Graphics

To be perfectly honest, I haven’t played with the Core Graphics libraries on iOS. However, I hear they are pretty powerful. Ray Wenderlich has some cool Core Graphics tutorials on his site, such as this one by Brian Moakley, which might come in handy if this option is available to you. Some people swear by it.

Option 3: Stretch an image along each line segment

For this option you can use images (using UIKit’s UIImage, for example) or sprites (using the built-in SpriteKit framework or an external library such as Cocos2D) or something else entirely. Basically, you position the bottom of a stretchable image at point A and adjust its length so that it reaches all the way to point B. The cool thing is that this technique allows you to add custom glow effects and such, which can be quite neat. It’s also pretty simple to set up. You’re just stretching and rotating images. You will run into problems at the reflection points, however, especially when you are using wide and / or transparent images:


For thin lines, you might be able to get away with this glitch. But if not, I’d suggest…

Option 4: Draw a custom polygon using OpenGL

This is by far the most labor intensive option, but it will yield sharp results. You want to draw a triangle strip using OpenGL, tracing points P, P’, P”, … thusly:


In the image above, the triangle strip traces points 1, 2, 3, 4, 5, 6, 7, 8 in that order. Note that in order for this method to work, your light ray image needs to be symmetrical. Otherwise you might get incorrect-looking image flipping at the reflection points.
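As a rough illustration of that vertex ordering, here is a Python sketch that turns the reflection polyline into left/right strip vertex pairs by offsetting each point along a per-point normal. It naively averages directions at each kink, which is fine for gentle angles; the sharp cases are exactly what the two folding methods below deal with:

```python
import math

def strip_vertices(points, width):
    """Emit GL_TRIANGLE_STRIP-style vertices (1, 2, 3, 4, ...) for a
    polyline: one left/right pair per point, offset half the ray width
    along a per-point normal."""
    half = width / 2.0
    def unit(vx, vy):
        n = math.hypot(vx, vy)
        return (vx / n, vy / n)
    verts = []
    for i, (px, py) in enumerate(points):
        # Direction at this point: segment direction at the ends,
        # averaged direction at interior kinks (degenerates at
        # 180-degree folds, hence the special-case methods below).
        if i == 0:
            dx, dy = unit(points[1][0] - px, points[1][1] - py)
        elif i == len(points) - 1:
            dx, dy = unit(px - points[i-1][0], py - points[i-1][1])
        else:
            ax, ay = unit(px - points[i-1][0], py - points[i-1][1])
            bx, by = unit(points[i+1][0] - px, points[i+1][1] - py)
            dx, dy = unit(ax + bx, ay + by)
        nx, ny = -dy, dx                     # left-hand normal
        verts.append((px + nx * half, py + ny * half))
        verts.append((px - nx * half, py - ny * half))
    return verts

# A straight two-segment ray of width 2 -> three left/right pairs.
strip = strip_vertices([(0, 0), (10, 0), (20, 0)], width=2.0)
```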

This is essentially the method I ended up implementing in Bobo. The trick is in reflecting the polygon along the reflection points smoothly. I utilized two methods to get that part done.

Method A:

This method of folding the polygon onto itself works well when the angle of incidence is < 45º, but it becomes increasingly poor as you approach 90º at which point the folding edge (points 3-4 in the images below) becomes infinitely long.



Method B:

This method of reflecting the polygon, on the other hand, works well when the angle of incidence is > 45º, but, again, the folding edge approaches infinity as the angle gets closer and closer to 0º.



As you can see, in this case the image of the light ray penetrates the reflecting surface, but generally speaking the ray image is much thinner than the reflection surface, so it’s not really a problem. The extra ray width often comes from the glow part of the image and if that part spills over the reflecting surface, it still visually works.

So, since both of the methods have their shortcomings, you combine them. You use Method A when the angle of incidence is < 45º and Method B when the angle of incidence is > 45º. Note that Method A actually reverses or reflects the order of your vertices. So if your light ray is dynamic (i.e. it changes with time and context) and one reflection point switches from using Method A to using Method B or vice-versa, you will need to follow your triangle strip down and reverse the order of vertex points from that point onward (i.e. switch the left and right vertices at each “kink”).

Another thing to note is that when you switch from Method A to Method B for a given point, the reflection fold switches from being perpendicular to being parallel with the surface normal. That change produces a visible jump akin to a glitch. To avoid drawing attention to it, you can overlay each reflection point with a glowing ball, thusly:


That’s it! Definitely a lot of work, but the technique works very well in practice. I hope it works for you as well!

It’s a Sharp, Sharp World…

…or Some Tips on How to Bring Your Big iPad App to the Even Bigger Retina Display

I’ve just spent the past several weeks updating Bobo Explores Light for the iPad’s new retina screen.  It was a tricky problem to solve well, but I’ve learned a couple of tricks along the way that you might find useful.  If so, read on…

The Problem

For vector-based or parametric iOS apps, ones that rely on 3D models or that perform some clever run-time rendering of 2D assets, the retina conversion is pretty straightforward.  All they need to do is introduce an x2 multiplier somewhere in the rendering path and the final visuals will take advantage of the larger screen automagically.  There might still be a couple of textures and Heads-Up-Display images to update, but the scope of these changes is quite small.

My problem was different.

The Bobo app contains well over 1400 individual illustrations and image components that look pixelated when scaled up.  It features several bitmap fonts that don’t scale well either.  When I introduced the x2 multiplier over the entire scene, the app technically worked as expected, but it appeared fuzzy:

[Screenshot: the fuzzy scaling problem]

My first impulse was to replace all of the illustrations and fonts by their x2 resampled sharp equivalents.  This line of thinking, however, presented two immediate challenges:

1) I needed to manually upscale 1400 images.  That’s a lot!

Even though the illustrator behind the project, Dean MacAdam, kept high-res versions of all the images in the app, the process of creating the individual retina assets was very tedious:

  • Open every low-res (SD) image in Photoshop
  • Scale it to 200%
  • Overlay it with an appropriately scaled, rotated, and positioned high-res version of that same image
  • Discard the layer containing the original SD image
  • Save the new high-res image (HD) with a different filename
  • Repeat 1399 times

If each image were to take 5min to convert, and that’s pretty fast, this conversion alone would take well over three weeks.  Yikes!

2) I needed to keep the size of the binary in check.

Bobo Explores Light already comes with a hefty 330MB footprint.  Not all of it is because of illustrations, since the app includes a number of videos, tons of sounds and narration, etc.  But a good 200MB is.

Now, the retina display includes 4x as many pixels as a non-retina display.  If I were to embed an HD image for every SD image used in the app, the size of the Bobo binary would exceed 1GB (130MB for non-image content + 200MB for all SD images and 4 x 200MB for all HD images).  That just wasn’t an option.
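Spelled out, the arithmetic behind that over-1GB figure looks like this (numbers from the paragraphs above, in MB):

```python
# Back-of-the-envelope app-size budget for a naive all-HD conversion.
non_image = 130            # videos, sounds, narration, code, ...
sd_images = 200            # all existing 1x illustrations
hd_images = 4 * sd_images  # retina doubles each dimension = 4x the pixels

naive_total = non_image + sd_images + hd_images   # 1130 MB, i.e. past 1 GB
```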

The saving grace

When I calculated the above numbers, I reached the conclusion that, in the case of Bobo, retina conversion was a futile effort.  Nonetheless, I got myself the latest iPad and did some experimenting.  My secret hope was that I could mix SD images with HD images and come up with an acceptable hybrid solution.  My secret fear, however, was that the few HD images would only highlight the pixelation of the SD images still on the screen and that it would be an all-or-nothing type of scenario.

I uploaded a few mock-up images onto the new device, iterated over several configurations, and I was pleasantly surprised.  Not all, but some combinations of SD and HD images actually worked beautifully together.  In certain cases, the blurry SD images even added a sense of depth to the overall scene, resulting in a poor man’s depth-of-field effect.

I was excited because these results helped me address both of the problems I outlined above.  By being selective about which images I needed to convert, the total number of retina assets I needed shrank to 692.  Still a large number, but less than half of the original.  Also, the ballooning of the binary size would be diminished.  That problem would not be solved, mind you, but it would certainly help.


Text was the number one item in the app that screamed “I’m pixelated!”.  The native iOS code renders such beautifully sharp text on the new iPad that any text pixelation introduced in the Bobo app stuck out like a sore thumb.  This part was easy to fix, though.  By loading a larger font on retina devices, all of the text that was dynamically laid out suddenly snapped to focus.  Unfortunately for me, not all of the text in the app was dynamically laid out.

Bobo features well over 100 pages of text with images in the form of side articles and interesting factoids.  For the sake of saving time when we worked on v1.0 of the app, we baked some of that text and images together and rendered the entire page as a single image.  This approach really helped us streamline the creation process and push the app out in time.  All in all, these text-images amounted to about 80MB of the final binary, but given the time it saved us, it was the right approach at the time.  Now, however, it presented a problem.

If we were to re-sample all these text-images for the retina display, we would gain ~80MB x 4 = ~320MB of additional content from the text alone.  That was way too much.  But we *needed* to render sharp text.  So, we bit the bullet, separated the text from its background, and dynamically laid out all the text at run-time.

This conversion took well over two weeks, but it was worth the effort.  The text became sharp without requiring any more space.  At the same time, we were able to keep all the photographs interleaved with the text as SD images.  Because these were photographs that were visually fairly busy and because they were positioned next to sharp text that drew the attention of the eyes, the apparent blurring from the pixelation was minimal.  Additionally, without any baked text the background images compressed into much smaller chunks, giving us about 50MB worth of savings.  That was not only cool, but very necessary.

Home-Brewed Cocos2D Solution

Bobo is built on top of the open-source Cocos2D framework (an awesome framework with a great community of developers – I highly recommend it!).  Out of the box, Cocos2D supports loading of retina-specific images using a naming convention.  However, this functionality is somewhat limited.  If all of the images in an app are either HD or SD, it works great.  But my needs were such that I required mixing and matching of the two, often without knowing ahead of time which images needed upscaling until I tried them out.  I needed a solution that would allow me to replace HD images with SD images on a whim, without having to touch the code every time I did so.

Way back when, when I was working on The Little Mermaid and Three Little Pigs, I created an interactive book framework where I separated the metadata of each page (text positioning, list of images, etc.) from the actual Cocos2D sprites and labels that would render them on the screen.  This is a fairly common development pattern, but I can never remember what it’s officially called (View-Model separation maybe?).  Anyway, I used this separation to my advantage in Three Little Pigs to create the x-ray vision feature.  Render the metadata one way and the page appears normal; render that same data another way and you are looking at the x-ray version of that page.  Super simple and super effective.

With this mechanism in place, I was able to modify a single point in the rendering code to load differently scaled assets based on what assets were available.  In pseudo-code, the process looked something like this:

Sprite giveMeSpriteWithName(name) {
    if (retina && nameHD exists in the application bundle) {
        sprite = sprite with name(nameHD);
        sprite.scale = 1;
        return sprite;
    } else {
        sprite = sprite with name(name);
        sprite.scale = retina ? 2 : 1;
        return sprite;
    }
}

It got a little more complicated because of parenting issues (if SD and HD images belonged to different texture atlases, they each needed their own parents), but this was the core of it.  What this meant for me was that all of the pages, by default, took SD images and scaled them up.  Apart from appearing pixelated, the pages looked and behaved correctly.  Then, I could go in and, image-by-image, decide which assets needed to be converted to HD, testing these incremental changes on a retina device as I went along.
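The same fallback logic, stripped of the Cocos2D specifics, might look like this in Python (the "-hd" suffix convention and the function name are illustrative, not the actual implementation):

```python
def choose_asset(name, is_retina, available):
    """Pick which asset variant to load and what scale to apply.

    Mirrors the pseudocode above: prefer the HD variant on retina
    devices when it exists in the bundle, otherwise fall back to the
    SD image and upscale it in code.
    """
    base, _, ext = name.rpartition(".")
    hd_name = f"{base}-hd.{ext}"
    if is_retina and hd_name in available:
        return hd_name, 1.0                      # native HD asset, no scaling
    return name, 2.0 if is_retina else 1.0       # SD asset, upscaled on retina
```

Swapping an image between SD and HD then only requires adding or removing the suffixed file from the bundle; no code changes are needed.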

There was some tediousness involved for sure.  However, I quickly got a sense of which portions of which pages needed updating and I came up with the following rough rules that will hopefully come in handy for you as well.

Things That Scream “I’m pixelated!”

1) Type

At the very least, convert all your fonts, whether they’re baked into images or laid out dynamically.  Your eye focuses almost instantly on the text on the screen, if some exists, and the fuzzy curves on letters become immediately noticeable.  By that same token, convert *all* of your fonts – don’t skimp by converting only the main font that you use in 90% of the cases.  The other fuzzy 10% would essentially nullify the entire effort.

2) Small parts that are the focus of attention

When converted to HD, cogs, wheels, pupils, and tiny components all make a huge difference in giving the app the *appearance* of fine detail even if the larger images, however bright and prominent, are still in SD.  Moreover, because these smaller images are … uhm… small, scaling them up doesn’t take that much extra space, so it’s a win-win setup.

3) High-contrast boundaries

Bobo’s head is a perfect example.  Most of the time, Bobo moves across dark colors with his bright green bulbous head in sharp contrast with the background.  Even though Bobo’s head was relatively large, it begged for a razor-sharp edge on most pages.

Things That You Can Probably Ignore

1) Action sequences

This one can sometimes go either way, but it’s still worth mentioning.  If something pixelated moves across the screen, the movement will mask that pixelation enough so that no one will really care.  However, if you have an action sequence that draws the attention of the eye and the sequence contains at least some amount of stillness, the pixelation will show.

2) Shadows, glows, and fuzzy things

All of these guys *benefit* from pixelation – definitely don’t bother with them.  If anything, downscale them even for the SD displays and no one will be the wiser.  Seriously, this is a great trick.  Anything that has a nondescript texture without sharp outlines (either because the outlines should be fuzzy or because the outlines are covered with other images), store it as a 50% version, and scale it up dynamically in code to 200% on non-retina displays and 400% on retina displays.  The paper image behind all side articles in Bobo Explores Light is a perfect example.  The texture itself is a little fuzzy, but because it is lined with sharp metal edges and overlaid with sharp text, nobody cares.

When All Else Fails…

A few times I found myself in situations where the SD image was too fuzzy on the retina display, but the HD image took way too much space to store efficiently.  What I ended up doing in those cases was to create a single 150% version of the image and scale it down to 66% for SD displays and 133% for HD displays.  The results were perfectly passable in both cases.
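Both the 50% trick above and this 150% middle ground boil down to one line of arithmetic: the scale to apply at run-time is the display’s target scale divided by the scale the asset was stored at.  A quick sketch (the function name is hypothetical):

```python
def runtime_scale(stored_scale, display_scale):
    """Scale to apply in code so that an asset stored at `stored_scale`
    (relative to native SD size) renders correctly at `display_scale`
    (1.0 on SD screens, 2.0 on retina screens)."""
    return display_scale / stored_scale

# A fuzzy texture stored at 50%: scale x2 on SD, x4 on retina.
# A "middle ground" asset stored at 150%: ~66% on SD, ~133% on retina.
```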

Final Tallies

When all was said and done and my eyes were spinning from some of the more repetitive tasks, I was very curious to see how much the binary expanded.  I kept an on-going tally as I went through this process, but because of various reasons, it wasn’t super accurate.  When I compiled the finished version, I discovered that not only did the binary not expand, it *shrunk* by a whopping 50 MB!  This whole process took one freakishly tedious month to complete, but in the end the retina-enabled version of the app was significantly smaller than its non-retina original.

I don’t know whether that says more about my initial sloppiness or the effectiveness of the retina conversion.  I’ll leave that as a question for the reader.  Nonetheless, the results were exciting and Bobo Explores Light looks, if I dare say, pretty darn sharp on the new iPad.  Check it out!

The Importance of Not Guessing

Bobo Examining

Yesterday, I went to Apple’s iOS Tech Talk held in Seattle of all places (how could I not?) and was excited to meet a slew of other developers with whom I had previously only interacted online or via their apps.  It was quite a trip – I guess I should climb out from underneath my “rock” more often.

Besides all the socializing, I sat through a number of lectures delivered by the Apple folks on the wonders of the iOS technology.  One of the more interesting sessions centered on profiling, whose overarching message was “Don’t guess – measure!”.

If you’ve ever written code with tight performance requirements, “thou shalt measure” is a well known commandment.  What I recently discovered, however, is that measuring is equally important in the development of interactive books and apps in general.  Specifically, knowing how your customers use your product is paramount to figuring out what features are important and which ones are not.

Let me give you an example.

When I was writing Bobo, certain pages seemed more important to me.  For example, the book wouldn’t seem complete if it didn’t mention Edison at some point.  In addition, certain other pages appealed to me personally more than others.  For example, I was in love with the imagery of the Jungle / Photosynthesis page and spent a good four days tweaking countless little details – from the blooming flowers to the swaying vines – paying particular attention to dynamically recreating jungle sounds from a collection of animal calls, avoiding repetitiveness while mimicking the overall impression of vibrant life.  In short, I really geeked out.

Bobo Jungle

Dean was similarly enamored with the Bioluminescence page.  It all started with him sketching a beautiful yet menacing-looking angler fish.  From that point on, he wouldn’t rest until I finally caved in and spent four days on that page as well.  There was plenty to keep me occupied – animated fins on the fishes, several particle systems, fading colors with water depth, Bobo’s swimming movement which was unlike his movement on any other page, bubbles, water sounds, chomping angler fish, … you name it.  It was another point where we geeked out because of our sheer excitement (mostly powered by Dean) about the topic at hand.

Bobo Bioluminescence

Once we released the book into the wild, however, we were surprised that our users responded with page preferences completely different from ours.  The Introduction to Photosynthesis (a.k.a. the Tomato page), for example, is among the more popular pages in the book, even though we slapped it together in a single day to provide a much needed transition between some of the other topics in the book.

If I order all the pages by how much each of the topics appealed to me as a developer/user, I get the list in the left column.  If I order them by how much time I spent creating each, the list looks a little different, but not entirely dissimilar (middle column).  However, if I order them by popularity among users from all around the world, the list looks completely different (right column):

My preference

  • Photosynthesis
  • Bioluminescence
  • Auroras
  • Disco
  • Fireworks
  • Glow in the Dark
  • Reflection
  • Sunset / Night / Sunrise
  • Lightning
  • Edison
  • RGB
  • Eyeball
  • Sun
  • LaserIntro
  • Caveman
  • Refraction
  • Telescopes
  • Photosynthesis Intro (Tomatoes)

Code Complexity

  • Reflection
  • Glow in the Dark
  • Photosynthesis
  • Bioluminescence
  • Disco
  • Sun
  • LaserIntro
  • Eyeball
  • Auroras
  • RGB
  • Fireworks
  • Edison
  • Sunset / Night / Sunrise
  • Lightning
  • Refraction
  • Caveman
  • Telescopes
  • Photosynthesis Intro (Tomatoes)

User Preference

  • Sun
  • Sunset / Night / Sunrise
  • Auroras
  • Lightning
  • Photosynthesis Intro (Tomatoes)
  • LaserIntro
  • RGB
  • Disco
  • Caveman
  • Photosynthesis
  • Fireworks
  • Reflection
  • Glow in the Dark
  • Eyeball
  • Edison
  • Bioluminescence
  • Telescopes
  • Refraction

The Tomato page says it all.

The moral of the story is that the time we spent on the different parts of the book is incongruous with the amount of time people spend using them.  Had we paused and collected some of this data during development, we would probably have adjusted our internal schedules and priorities accordingly.  Live and learn but, most importantly, don’t guess – measure often and repeatedly.

The building of a robot

I scribbled the first sketch of Bobo on a piece of construction paper.  My first concern was to come up with a robot that would be easy to animate.  It looked like this:

Bobo v0.1

Bobo was constructed from simple parts that only needed to be translated and rotated to convey movement and behavior.  However, despite my best initial hopes, Bobo turned out quite a bit more complicated in the end.  When Dean, an incredibly talented professional illustrator from San Diego, got a hold of the concept, he came up with a very different and infinitely cuter version of the robot:

Bobo v0.2

However, as a consequence of the visual boost, the simple 2D character acquired a faux 3D look, complete with a bulbous head, a key rotating in and out of the display plane, and arms that extended and retracted to snake all over the page.  Yikes!  But I couldn’t deny that Bobo looked adorable, and so the idea of using only a couple of sprites from which to build our little hero went right out the window.

Problem 1: The bulbous head

Despite the 2D nature of the robot, Bobo’s face is in perspective.  When looking left, Bobo’s left eye is smaller than the right.  The left eye is slightly rotated counter-clockwise, while his right eye is slightly rotated clockwise (following the contour of his tapered head).  If Bobo needs to turn his face in the other direction, the eyes and the mouth need to sweep an arc to their new positions, which have the reverse scaling and rotation applied.  If he looks up, the eyes and the mouth once again need to follow the contour of the head to give his bulb the appropriate sense of shape.

To model this fairly complicated movement, I came up with the following system.  I created a hierarchy of empty CCNodes that correspond to the various features of the face – one CCNode for each eye, both parented to a CCNode representing the nose, another CCNode for the mouth, and an uber-parent CCNode that represented the face as a whole.  All of these nodes live in a coordinate space of a square from (-1, -1) to (1, 1), with the center being smack at (0, 0).  The idea was that if I wanted Bobo to look left, I’d animate the position of the face CCNode to move to (-0.5, 0).  If I wanted Bobo to look right, I’d animate the position to (0.5, 0).  Since the eyes and the mouth were descendants of the face node, they’d follow.

CCNodes themselves have no visuals associated with them.  So, I still needed to create a set of CCSprites to display the eyes, the eyelids, and the mouth, each of which was associated with its corresponding empty CCNode.  Every frame, I looked up the position of a given empty CCNode in the (-1, -1) – (1, 1) square and applied a 3D transform to convert it to a position mapped along a tapered 3D cylinder.  I offset this cylindrical position by the position and rotation of Bobo’s body and voila!  Bobo looked around his virtual world in 3D coordinates while I retained the ability to animate his face with simple CCAction statements in 2D.
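To make the idea concrete, here is a rough Python sketch of that mapping – a 2D face-space coordinate projected onto a front-facing tapered cylinder.  The radii and sweep angle are illustrative placeholders, not the values Bobo actually uses:

```python
import math

def face_to_screen(u, v, radius_bottom=1.0, radius_top=0.6,
                   max_angle=math.pi / 3):
    """Map a face-space point (u, v), each in [-1, 1], onto a tapered
    cylinder viewed from the front.  Returns (x, y, scale): x and y are
    screen-space offsets, and scale shrinks a feature as it turns away
    toward the edge of the head.  All constants are hypothetical."""
    t = (v + 1) / 2                             # 0 at chin, 1 at top of bulb
    radius = radius_bottom + (radius_top - radius_bottom) * t
    angle = u * max_angle                       # horizontal sweep around the bulb
    x = math.sin(angle) * radius                # feature slides along the contour
    y = v                                       # vertical position unchanged
    scale = math.cos(angle)                     # foreshortening near the edge
    return x, y, scale
```

An eye sprite would then be positioned at (x, y) relative to Bobo’s body and have `scale` (plus a small rotation) applied, while the animation code keeps driving plain 2D positions in the unit square.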

Problem 2: The turning key

One way to fake the turning of Bobo’s key in 3D would be to illustrate a handful of static frames, each representing a slightly different rotation of the key, and then use these frames to produce the desired effect.  The problem here was that since the animation of the rest of the robot was procedural, a frame-based component would immediately jump out as jerky.  Similarly, I wanted to be able to significantly slow the key down or speed it up to convey Bobo’s excitement or boredom and having only a limited number of frames to choose from would limit my ability to do so.  So, once again, I embarked on faking 3D.

Fortunately, this one wasn’t too hard.  The key consists of three components – the top leaf, the bottom leaf, and the key rod.  If you set up the anchor point of the top leaf to be at the bottom of the image and the anchor point of the bottom leaf to be at the top of the image, you can simply scale the image in the y-axis to create an illusion of rotation.  Add subtle scaling in the x-axis as well, depending on whether the leaf is pointing towards you or away from you, and the illusion gets even better.  The code for scaling looks like this:

float keyPos = some value from 0..1;
float angle = keyPos * M_PI * 2.0f;
topLeaf.scaleX = 1 + 0.1f * sinf(angle);
topLeaf.scaleY = cosf(angle);
bottomLeaf.scaleX = 1 - 0.1f * sinf(angle);
bottomLeaf.scaleY = cosf(angle);

To make this fake 3D look even better, I darken the color of the leaves when they are at a 45 degree angle, to simulate a reflection off of some distant light source:

float brightness = cosf(angle) < 0 ?
                   (sinf(angle - M_PI_4) + 1) * 0.5f :
                   (sinf(angle - M_PI_4 + M_PI) + 1) * 0.5f;
const float LOW_BRIGHTNESS = 140;
const float HIGH_BRIGHTNESS = 256;
float brightnessScaled = HIGH_BRIGHTNESS * brightness + LOW_BRIGHTNESS * (1.0f - brightness);
GLubyte b = (GLubyte) MIN(255, MAX(0, brightnessScaled));

topLeaf.color = ccc3(b, b, b);
bottomLeaf.color = ccc3(b, b, b);

So, why go through all this trouble when I could have just implemented the rotation in pure 3D?  A couple of reasons.  I would need to write my own version of CCSprite that supported 3D, or resort to using Cocos3D on a separate layer, which would make z-ordering difficult and consume an additional glDraw() call.  I would also need to switch the OpenGL camera from an orthographic matrix to a perspective matrix.  That would, in turn, make all the text rendered through OpenGL appear fuzzy, which was the opposite of what I needed.  So, all in all, faking the 3D rotation was the more straightforward solution in this case.

Problem 3: Retractable arms

Having a robot with hoses for arms, once brought up by Dean, became a very important feature of the character.  For one, hose arms are very iconic of 50’s sci-fi comic books and they underline the mechanical workings of the robot.  For the other, they are very flexible because they allow the robot to interact with objects all over the screen.

It took some iterating on the development side of things to come up with the right look and feel.  I tried a number of approaches – Verlet rope simulation, dangling Chipmunk bodies distributed along a curve, a path following a free-form curve, … but nothing felt quite right.  The more complicated the simulation I devised, the more unnatural the behavior seemed.  In the end I scrapped all of these approaches and went with a simple cubic Bézier curve passing through three fixed points.

Cubic Béziers are defined thusly:

B(t) = (1 − t)³P₀ + 3(1 − t)²t P₁ + 3(1 − t)t² P₂ + t³P₃,  where 0 ≤ t ≤ 1

Now, if you are like me, instead of a clear equation you will see a jumble of Egyptian hieroglyphs.  Fortunately, Wikipedia, the source of all light and harmony in the universe, has a neat geometric explanation of Bézier curves which not only makes a lot more sense to me, it also gives me an algorithm for approximating such a curve parametrically using simple linear interpolation:

All hail Wikipedia!  But back to Bobo’s arms.  As I mentioned already, they are constructed using these curves with three fixed points:

  1. The attachment point on Bobo’s body
  2. Desired claw position
  3. Center point, smack in the middle between the two

In addition to these 3 pass-through points, in code I create 4 additional control points to model a basic-looking S curve for each arm – two control points between the body attachment and the center point and two control points between the center point and the claw.  Then, using the OpenGL triangle strip and a repeating texture, I draw the actual hose and place a CCSprite at each end of the hose to complete the illusion.

Animating the hose turned out to be equally simple.  At any given point in time, I know where the body attachment is (based on where Bobo’s body is) and where the claw should be.  The rest of the hose curve is calculated for me through the Bézier approximation above.  To move the arm, I linearly interpolate between the claw’s current position and the claw’s desired position and, at a slightly slower rate, between the arm’s current center position and the arm’s desired center position to give the hose a nice movement lag.  The body attachment point is fixed by Bobo’s body.  The rest of the curve automagically animates along.
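For reference, the two pieces – cubic Bézier evaluation via repeated linear interpolation (De Casteljau’s algorithm) and the lagged chase toward the target positions – can be sketched in a few lines of Python.  The chase rates below are made-up numbers, not Bobo’s actual tuning:

```python
def lerp(a, b, t):
    """Linear interpolation between two points (tuples of floats)."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at t in [0, 1] by repeated lerping
    (De Casteljau's algorithm) - the geometric construction from the
    Wikipedia article mentioned above."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def step_arm(claw, claw_target, center, center_target,
             claw_rate=0.3, center_rate=0.15):
    """Per-frame update: the claw chases its target faster than the
    hose's center point does, which is what gives the hose its lag.
    Rates are illustrative."""
    return lerp(claw, claw_target, claw_rate), \
           lerp(center, center_target, center_rate)
```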

Put it all together, add cute sound effects, and you have yourself a robot!

Confidential to @atzoum: I’ll do a post on Chipmunk backing (including how Bobo moves cpBodies around) next.  Stay tuned!

Task Sequencer, or how I was too lazy to learn Lua

About half-way through coding of Bobo Explores Light, it became apparent that the book needed scripted sequences.  For example, on the Glow in the Dark page, Bobo pops up next to a fridge with a bunch of magnets on it.  He reaches out and arranges the magnets in such a way as to create a robot face with which he then proceeds to have a conversation.  The Reflection page has a similar need.  It shows a laser gizmo and four mirrors that the user can position and rotate, such that the laser light reflects and bounces all over.  The first time most people see this page, they don’t know what to do.  So, we script Bobo to demonstrate how the mirrors can be used.

From what I understand, Lua is the perfect language to achieve exactly that.  It is a simple and flexible scripting language that gets interpreted at run-time and can be hooked into objects and methods in the actual Objective-C code.  Most importantly, it’s already written.  Sadly for me, I have little patience for learning yet another syntax for yet another language that I’ll most likely use only once.  So, I looked elsewhere.

Cocos2D comes with a really neat concept of “actions”.  Basically, every node in the scene (sprites, particles, the page itself, etc.) can execute an action.  Most commonly, actions allow you to modify a property of an object over time.  As an example, there is a CCRotateBy action which, when run on an object, rotates that object by a specified number of degrees over a specified amount of time.  The following code would rotate mySprite by 90 degrees over the period of 3 seconds:

[mySprite runAction:[CCRotateBy actionWithDuration:3 angle:90]];

This is a really handy mechanism, because you can run an action on a node and forget about it.  Also, there are a slew of actions already defined, anything from actions that tween a property over time (such as an angle) to actions that execute callbacks or batch a bunch of other actions into a sequence.  The batching of actions into a sequence is especially useful, because you can, for example, move a sprite 100px to the right, rotate it by 180 degrees, move it 100px to the left, rotate it again by 180 degrees, and you have yourself a patrol loop for an enemy soldier that you can then repeat forever.

In the case of Bobo, however, I still needed a little more control.  Specifically, I needed to string together a sequence of actions for Bobo, some of which would be run concurrently, others of which would block until previous actions got finished, and so on.  On the Reflection page, for example, I needed the code to accomplish the following:

  1. Find Bobo’s unused arm and move mirror 1 to predefined position
  2. Find Bobo’s unused arm and move mirror 2 …
  3. Find Bobo’s unused arm and move mirror 3 …
  4. Find Bobo’s unused arm and move mirror 4 …
  5. Find Bobo’s unused arm and move the laser to predefined position
  6. Find Bobo’s unused arm and turn the laser on
  7. Say “Tada!”
  8. Wait 1 sec
  9. Find Bobo’s unused arm and turn off the laser

Since Bobo has two arms, actions 1 and 2 can be executed simultaneously.  The same can be said for actions 3 and 4 and actions 5 and 6.  However, action 7 needs to wait until action 6 (and all actions prior to it) have finished executing.  Finally, actions 8 and 9 need to happen in succession, not concurrently.

One solution would be to create a bunch of actions that execute callbacks that poll whether arms are available or not and that, depending on the result, execute other actions and their respective callbacks.  Hello spaghetti code!

Another solution, and the one that I ultimately ended up using, was writing a sequencer.

Imagine an object, a sequencer, that is a fancy wrapper around a FIFO list of other objects, tasks.  You can add tasks into a sequencer at any point and then periodically call the “execute” method that walks through all the tasks in the list and, given some simple rules, calls “execute” on them in turn.  Once a task signals that it has completed, it is removed from the list until, eventually, the sequencer ends up being empty.

To support multiple tasks being run concurrently, each task exposes a variable that defines its blocking behavior.  There are three possibilities:

  • BlockSelf – When sequencer encounters this task, it should stop executing any following tasks until the current task signals that it has finished.
  • BlockAll – When sequencer encounters this task, it should stop executing any following tasks until the current task and all previous tasks signal that they have finished.
  • PassThrough – When sequencer encounters this task, it should continue executing following tasks regardless of whether the current task is still in progress.
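The three behaviors translate into surprisingly little code.  Here is a minimal Python sketch of the idea (the tick-driven Task and all names are my own invention for illustration, not the actual Sequencer.h / Sequencer.m implementation):

```python
PASS_THROUGH, BLOCK_SELF, BLOCK_ALL = range(3)

class Task:
    """Minimal stand-in task: runs for `duration` ticks, then reports
    done.  Real tasks (move a mirror, play a sound) would override
    execute() and finished()."""
    def __init__(self, name, duration, blocking=PASS_THROUGH):
        self.name = name
        self.remaining = duration
        self.blocking = blocking

    def execute(self):
        self.remaining -= 1

    def finished(self):
        return self.remaining <= 0

class Sequencer:
    """FIFO task list honoring the three blocking behaviors above."""
    def __init__(self):
        self.tasks = []

    def add(self, task):
        self.tasks.append(task)

    def tick(self):
        """Walk the list once, executing tasks until a blocker stops us."""
        any_running = False
        for task in list(self.tasks):
            # BlockAll: don't even start until everything before it is done.
            if task.blocking == BLOCK_ALL and any_running:
                break
            task.execute()
            if task.finished():
                self.tasks.remove(task)
            else:
                any_running = True
                # A still-running BlockSelf (or BlockAll) halts the walk here.
                if task.blocking != PASS_THROUGH:
                    break

    def empty(self):
        return not self.tasks
```

A MoveBodyTask-style task would simply keep reporting "not finished" from its execute() until one of Bobo’s arms frees up, exactly like the no-op behavior described below.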

Using a combination of these behaviors, the Reflection page script translates into the following sequencer calls:

[boboSequencer addTask:[MoveBodyTask taskWithBody:mirror1 position:pos1 rotation:rot1]];
[boboSequencer addTask:[MoveBodyTask taskWithBody:mirror2 position:pos2 rotation:rot2]];
[boboSequencer addTask:[MoveBodyTask taskWithBody:mirror3 position:pos3 rotation:rot3]];
[boboSequencer addTask:[MoveBodyTask taskWithBody:mirror4 position:pos4 rotation:rot4]];
[boboSequencer addTask:[MoveBodyTask taskWithBody:laser position:laserPos rotation:laserRot]];
[boboSequencer addTask:[SwitchLaserTask taskWithLaserOn:YES] block:kBlockSelf];
[boboSequencer addTask:[SpeakTask taskWithSound:SOUND_BOBO_TADA] block:kBlockSelf];
[boboSequencer addTask:[WaitTask taskWithDelay:1] block:kBlockAll];
[boboSequencer addTask:[SwitchLaserTask taskWithLaserOn:NO] block:kBlockSelf];

The execute method on MoveBodyTask first checks to see if Bobo has a free arm available.  If so, it grabs it, marks it as unavailable, and starts using it however needed.  If there is no free arm, the execute method simply turns into a noop and keeps waiting in the sequencer’s task list until one of the arms becomes available.  Hence, the first four tasks move the mirrors in sequence and the laser doesn’t get switched on until all the mirrors as well as the laser get positioned correctly.

This made it very simple for me to see what’s going on in code, as well as to modify the sequence however necessary without too much trouble or fear of introducing obscure bugs.

Later I discovered that this simple sequencer / task combo was effective all over the place.  For example, each time the user navigates to the next (or previous) page, a set of tasks needs to be performed: pre-load assets for the next page, slide the UI over, unload data from the previous page, clean up the UI, etc.  At first, I hard-coded this sequence manually.  Then I switched to using the sequencer, which allowed me to quickly and easily reshuffle these tasks to compare the performance and responsiveness of the app during page turns and come up with the optimal arrangement.  Butter!

Anyway, enough of me talking – take a look (and feel free to use) the code itself:

Sequencer.h Sequencer.m

For the demo project, open up the Console window in Xcode and watch the logs (nothing actually happens on screen).  Let me know if it comes in handy!

So you want to write an interactive book

Ever since I started working on interactive books for kids, I began receiving emails from authors, illustrators, hobbyists, moms, and dads, all excited about creating their own interactive books. They were looking for development partners as well as general pointers, especially thoughts on how to dive into software development on the iOS. So, if you find yourself in the same boat, here are my suggestions on the subject acquired through a lot of trial and error (and I’m sad to admit that it was mostly error).

Point 1: There are tons of interactive books out there

Just like the rest of the App Store, the competition among book apps in sheer numbers is fierce.  It doesn’t really matter if your app is colorful and interactive or if it has actual content to offer.  It doesn’t matter if it fills a niche that appears vacant.  It also doesn’t matter if your creation is really cool or if the rest of the books pale in comparison.  One way or another, you still need to fight through the thousands and thousands of book apps out there.

To give you an example, last year I created a mechanical version of The Three Little Pigs on the iPad.  I released the app two weeks before Christmas in hopes of making the book a great stocking stuffer for the holidays.  Apparently, I wasn’t the only one with that thought.  That week alone, over 100 interactive book apps were released EVERY SINGLE DAY.  Before I noticed, The Three Little Pigs app was buried under an avalanche of apps a thousand deep.  Most of them were simple and not particularly exciting, but without much trouble they pushed TLPs right out of the skinny sliver of spotlight.

There are other things that went wrong with the release of TLPs (bad timing, ineffective marketing, many versions of that same story, etc.) but underestimating the sheer volume of book apps was definitely a biggie.

Point 2: Creating an interactive book is technically challenging

As of now, there are no user-friendly tools that I know of that will allow you to create and publish an interactive book on the iPad.  This means you will need to dive into actual coding.  If you are familiar with C, C++, Java, or even ActionScript, picking up Objective-C and writing native code for the iPhone will be mostly straightforward (after pulling your hair a little and asking the heavens “why, oh, why do we need another dialect of C?!?”).  However, if you are using this project to dive into programming for the first time, you should be ready for a steep learning curve.

Writing a book, even a simple one, is very, very technical.  The iPad has only so much memory, so you will need to deal with correctly loading and unloading pages and images as the user navigates through your book.  You will need to come up with a mechanism to position images, wrap text, and manage sound effects and background music.  In other words, you will have your hands full for a long time before you even begin approaching working on the story itself.  If this mountain of tasks doesn’t scare you, though, there are books, websites, and frameworks to help you along.

  • Unity 3D – Unity is a cross-platform 3D development tool with a friendly user interface and minimal coding requirements.  Even though it’s a 3D tool, many people have used it to create 2D worlds, including interactive books (check out The Jungle Book).  The idea is that you place textured rectangles in front of an orthogonal camera and you have a layered representation of a 2D world.  Pros: quick and easy to jump in, minimal programming requirements, support for multiple platforms (iOS, web, Android, …).  Cons: costs money ($400 – $3,000), requires experiential know-how to squeeze max performance out of the tool.
  • Cocos2D – Cocos2D is a community-driven framework for 2D game development on iOS and, incidentally, my weapon of choice.  It’s well architected, extensible, and it does a great job abstracting all the OpenGL goo out so you don’t have to worry about the pesky little details (textures, buffers, gl draw calls, …).   It also comes with an excellent development forum for people to ask questions and share their creations.  Pros: excellent support and community backing, extensible to your heart’s desire, free, comes bundled with open source physics engines.  Cons: you need to know how to program in Objective-C.
  • Online tutorials – There are countless online tutorials that walk you through setting up your first iOS app.  In fact, when you sign up with Apple for your developer account (in order to be able to publish apps on the App Store), you will gain access to dozens of videos and code samples that will show you the basics of how iOS works.  I also recommend checking out Ray Wenderlich’s iPhone tutorials which are easy to follow and cover a huge variety of topics.
  • Books – If you pop into your local books store and find the computer section, you are bound to find at least two dozen volumes on iPhone programming.  I haven’t actually used these so I don’t have specific ones to recommend, but I know other people swear by them.  (Do you have a favorite one of your own?  Lemme know!)

If jumping into development is not the right option for you, I would encourage you to find a programmer (maybe half-way across the world) and pair up with them to create your app together.  The community of programmers out there is large, and chances are you will be able to find someone to suit your needs.  The other option is to hire a development studio or an individual, but if you decide to go that route keep the following point in mind:

Point 3: Interactive books don’t make (much) money

It’s true – some interactive book apps do manage to break free and walk away with a nice chunk of change. However, the majority of books do not. It’s not necessarily a quality thing, a marketing thing, or a getting-featured thing, although all of those are definitely factors. Sometimes, however, the stars don’t align just right and your title bombs despite you holding all the right cards.

Going back to The Three Little Pigs, in the end the title pulled in only a couple of thousand dollars.  Even though the project didn’t have any expenses, a couple of thousand bucks came nowhere near justifying the amount of time spent developing it.  That said, it was a great project to create, and it served as a solid base for Bobo Explores Light, which we published next.

The moral of the story is this:  Dive into creating interactive stories head first.  It’s a ton of fun, and you’re guaranteed to make kids around the world giggle with glee.  However, expect to make no money and get your satisfaction from the experience alone.  Any money you do generate will come at you as an unexpected bonus.

Point 4: Marketing is a serious time hog

I’m a developer, so this was a new one for me.  Just creating something cool and interesting is, sadly, not enough.  You have to let the world know that it exists.  If you have the budget to hire a professional PR firm, that might be the way to go.  But if you decide to market yourself, be prepared to spend days writing press releases, creating videos, sending out gobs and gobs of emails and, more often than not, being ignored.  After a while, you will figure out what language works in getting your point across.  There is also some great info on the web if you are new to the field, including Stuart Dredge’s awesome post on What annoys technology journalists about PRs.

However you slice it, marketing takes a lot of time, so budget for it from the beginning.

Point 5: Working on interactive books is super rewarding

When I finished my first book, The Little Mermaid, I was exhausted, excited, and just plain spent.  I had poured my heart into the code and the content for 6 weeks straight, during which I barely had time to breathe.  The book went live on a Friday morning and I couldn’t stop pacing.  Eventually, I realized that I needed to take a break, walk away from the computer, and just turn off for a while.  Before I did just that, I checked my email one more time and found the following message from a stranger in cyberspace:

We love your apps. Thank you! Zoe age 5, William age 8

I’ve never met Zoe or William, and I don’t know where in the world they live, but that email alone has been enough to keep me working on interactive stories for kids ever since.

So, wanna create your own stories?  Go for it, and let me know how it goes.  Just make sure you do it for the right reasons and know that, for an indie, it’s a consuming but rewarding process.