Huge congratulations to LinenLady49 for holding the best Axl & Tuna score
IN THE WORLD!
I mean, look at those 22,174 points! The highest I’ve managed to score was somewhere around 2,000. LinenLady49 has bested both me and Dean many times over. For that, and for being such a staunch supporter of our game, we hereby award her the title of “Most Awesome”.
As a way of saying “thank you,” we’re sending her a gift package containing a signed Certificate of Most Awesomeness, an authentic can of TUNA*, and an iTunes gift card to support her mad gaming skills. Please join us in cheering LinenLady49 into even greater heights of Awesomenessdom.
* No Fuzzbots were harmed in the production of said TUNA can.
Here’s a big Thank You to the judges and organizers of the event, which was a total blast. And huge congrats to all the other indies presenting their awesome games! Pitching your app to a series of judges with a three-minute timer ticking in the background can be a nerve-wracking experience, and I was very impressed with how polished people’s games and presentations ended up being. It was a truly inspiring and humbling evening, surrounded by so many creative minds of the indie world. Thanks again, stay in touch, and for those of you who couldn’t make it, check out some of the pictures below.
Check out more images at the official event website here.
I’ve recently had a chance to virtually sit down with Steve Paris and chat about Axl and Tuna. Steve writes for MacWorld UK and he was kind enough to write about the game and the story behind it. You can read the interview on his site here.
Recently, @abitofcode asked me to do a post about my super secret use of tools, frameworks, and shortcuts while developing apps for iOS. So, here goes.
If you’re not up for reading a long post, here is the short answer: I invest a lot of my effort into physics simulations and procedural animations which, when executed correctly, result in organic-feeling worlds that appear to have a mind of their own.
The long answer is, … well, a little longer.
I find that good-looking and well-polished apps share two characteristics:
1. The obvious animations and interactions appropriate to the functioning of the app, and
2. The more subtle movements and interactions that are non-essential but make the app appear seamless, comforting, and alive.
I tend to spend a lot of time working on point #2, perhaps too much, but to me that’s where the juice is. I also like to use fuzzy words such as comforting and empowering because I do believe that apps are much more about how they make you feel rather than what they actually do.
I recently found this super useful article on principles of effective animation. If you aren’t familiar with the basics of animation, it’s a good and quick read. Here is an excerpt:
“In the real world, the basic laws of physics were first described by Sir Isaac Newton and Albert Einstein. In the world of animation, however, we owe the laws of physics to Frank Thomas and Ollie Johnston. Working for the Walt Disney Company in its heyday of the 1930s, these two animators came up with the 12 basic principles of animation, each of which can be used to produce the illusion of characters moving realistically in a cartoon.”
Now, there are many ways to approach this second point. My preferred way is to inject physics simulation and procedural animation into elements that move on the screen. This can be an animated character, a button, or a simple label.
In the case of a character, a few springs and pivots can add a whole slew of beautiful behaviors for free. Well, “free” may actually be too strong a term, especially since rigging up a character can be a time-consuming task, but in certain cases it will save you a ton of time down the road.
Bobo the Robot was a great example. I took a static (non-simulated) body that I moved around the screen parametrically (i.e., moved it from point A to point B over the duration of C seconds, easing in and out of each position). To that body I attached a second body, this one simulated using physics. I added a pivot joint and three springs to hold it in place. On the static body I placed an image of Bobo’s wheels; on the dynamic body I placed an image of Bobo’s bulbous green head and blue torso, and I let the physics engine do its magic. When Bobo moved across the screen, his head ended up swaying so naturally from side to side that he immediately came alive on screen. Magic.
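Since this post has no code in it, here's a rough sketch of that two-body idea in Python rather than the actual Objective-C/Chipmunk2D rig (the easing curve, the spring constants, and the one-dimensional angular spring are all illustrative assumptions): a base moves parametrically from A to B while a spring-damped head lags behind and sways.

```python
def ease_in_out(t):
    # Smoothstep easing: goes 0 -> 1 with zero velocity at both ends.
    return t * t * (3.0 - 2.0 * t)

def simulate_sway(duration=1.0, dt=1.0 / 60.0, stiffness=60.0, damping=4.0):
    """Move a base from x=0 to x=1 with easing; a head attached to it by an
    angular spring-damper lags behind, producing a natural-looking sway."""
    theta = 0.0   # head sway angle (radians)
    omega = 0.0   # angular velocity of the head
    prev_x, prev_v = 0.0, 0.0
    angles = []
    steps = round(duration / dt)
    for i in range(1, steps + 1):
        x = ease_in_out(i / steps)    # parametric base position (A to B)
        v = (x - prev_x) / dt         # base velocity
        a = (v - prev_v) / dt         # base acceleration
        prev_x, prev_v = x, v
        # The head gets flung opposite the base's acceleration, while the
        # spring pulls it back upright and the damper bleeds off the energy.
        alpha = -a - stiffness * theta - damping * omega
        omega += alpha * dt
        theta += omega * dt
        angles.append(theta)
    return angles
```

The point is that you never animate the sway by hand; it falls out of the spring for free, and any motion of the base produces a matching reaction of the head.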
And as I said before, you are not limited to applying this principle to just characters. Buttons, labels, and dialogs can all benefit from a similar approach. Just know that it’s possible to overdo it. When in doubt, ask a passer-by for an opinion.
But I’m getting ahead of myself. First, the frameworks.
For my rendering and physics needs I rely on two open source frameworks – Cocos2D and Chipmunk2D.
Many, many years ago, at least five or so, some very smart and very dedicated people created Cocos2D for iOS. You probably already know this, but if you don’t, here is a very brief overview.
When you create a new iOS app, you get a lot of goodies from Apple for free in the form of the iOS SDK. I’m talking about ways to manipulate data, show UI elements (buttons, tables, and such), talk over the network, etc. All of the visual goo (UIKit, part of the iOS SDK) is built on top of OpenGL, a standardized way to talk to graphics chips to display individual pixels on the screen. That’s cool when you have a couple of buttons to deal with or if you’re using optimized widgets that ship with UIKit directly. However, for any other visually intensive experiences, such as what you might want to expose in a game, you will want to talk to OpenGL directly to get better performance.
OpenGL is pretty low level and requires you to deal with frame buffers and matrices and encoding types and whatnot, which is fine, but if you work with OpenGL directly, you end up spending more of your time making the framework do your bidding and less time developing your game.
That’s where Cocos2D comes in.
With useful and highly-optimized abstractions such as sprites (moving images), animations, and batches of particles, it takes care of a lot of the OpenGL stuff for you. It also gives you entry points all along the way, so if you want to hack into OpenGL you can, but you don’t have to. Awesome!
Now, in iOS 7 Apple introduced something called SpriteKit, which is also part of the iOS SDK and is basically Apple’s version of Cocos2D. That’s cool, so why should you use Cocos2D, you ask? Maybe you shouldn’t, but I can tell you that Cocos2D is a much more mature framework than SpriteKit, at least right now, which means you can do a lot more with it straight out of the box. With some recent efforts, Cocos2D v3 came out, which, I believe, allows you to port your games onto the Android OS fairly easily. I haven’t actually tried this, though, so don’t take my word for it. Finally, Cocos2D is an indie effort, which is inherently cool, but more importantly you can hack into its codebase, meaning it’s easy to tweak the framework to suit your needs. While I’m sure the purists in the crowd are giving me the evil eye right now, I do it …uhm… all the time. Another secret leaked…
Then, some other very smart people, or perhaps just one person – I’m not 100% sure about the full origin story, forgive me Scott – got together and created Chipmunk Physics, also open source and also free. Once again, this is likely old news, but in case it isn’t, below you will find more.
Chipmunk Physics, or Chipmunk2D as it has recently been renamed, is a free-ish, portable 2D physics engine written in C. It is the brain child of Scott Lembcke and his Howling Moon Software company. It’s lean, it’s fast, it’s well written, it’s predictable, it’s extensible, it’s actually kind of awesome. It handles rigid-body collisions, springs, motors, hinges, and a slew of other stuff. I would recommend you pay a little bit of money to get the Pro version as well. It comes with a few extra goodies. However, and more importantly, you’d support Scott in his super awesome physics efforts so you should definitely do that.
So, then, to make something interesting, you need to stitch the two together. There have been a couple of efforts to bridge the two in some standardized way, but they all seemed to fall short in their own ways until Cocos2D v3.0 came along, which brought Chipmunk2D and Cocos2D together in one unified framework. All hail the folks involved in that effort! Going forward, you should definitely consider investing time to learn the ways of v3.0 because it will simplify your work on games and other Cocos2D apps significantly. However, since Cocos2D v3.0 is a relatively new effort that wasn’t available in my heyday, I ended up creating my own home-brewed solution, which taught me a couple of things:
Physics Simulations Take Time to Setup Correctly
That’s another way of saying that physics simulations involve a lot of variables which, if not well balanced, can lead to unstable outcomes – i.e., your physics rig explodes. There is no easy way around this, but here are some steps that make the process less frustrating:
Read the documentation – Many a time I would be struggling with a particular component of the simulation only to realize that there was an errorBias property somewhere that let me adjust the very thing that was unstable. Chipmunk has decent documentation on its website, and you should definitely read through it. Also, look through and understand the tutorial code posted there. You will discover simpler ways of doing whatever it is that you need to do. If all else fails, dig into the framework code itself and read through the comments.
Create the simplest rig possible – Chances are it will be good enough. It will also simulate quicker, you will understand it better, and you will minimize potential places for things to go wrong. Can you fake the more complicated bits with a pre-rendered animation or some visual overlay? Do it!
Ask questions on forums – Both Cocos2D and Chipmunk2D have great forums (here and here, respectively), and if you post a clear, thoughtful, and complete question, you will most likely receive a clear, thoughtful, and complete answer. The converse is also true. I often encounter questions of the type “my code is broken. why?” without much other information offered, which makes answers very difficult to come up with. You will get more applicable responses if you clearly state your issue, your expectation, and your actual outcome, and ask why the last two don’t match. Posting a small snippet of code can also be helpful. Just don’t dump your entire project into the post, lest people roll their eyes and move on. Finally, once you get your bearings on how to use a particular feature, go back to the forums and pay your karmic debt by answering questions for other people.
Test your rigs on actual devices – Getting physics to feel “right” means that you need to test it in the same conditions as those of your final product. If you tweak some constants on the simulator running at 30 FPS and then play your game on a device running at 60 FPS, what felt natural might now feel too fast and you need to go back to the drawing board.
Be patient – Tweaking takes time and often you will have to try several approaches to find the one that works the best. When I was working on Axl and Tuna, for example, I found that Axl was gliding along a track fairly well at slow speeds, but tended to bounce off the ground and not make much contact with it at higher speeds. I tried a few things to fix this behavior: I tried intercepting and modifying velocities during axl-track collision callbacks, I tried adding an invisible wheel underneath the track connected to Axl’s rig by a loose spring, etc. but none of these looked quite right. In the end, I simulated magnetic behavior by applying a small amount of force to Axl, pushing him towards the track surface along its normal, whenever Axl was within some threshold distance from it. That approach finally did the trick, but it took several head-scratching sessions to get there.
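For the curious, that magnetic trick can be sketched in a few lines. This is a hedged Python sketch, not the actual Axl and Tuna code; the constants, the tuple-based vectors, and the assumption of a unit-length surface normal are all mine.

```python
def magnetic_force(char_pos, surface_point, surface_normal,
                   max_dist=50.0, strength=800.0):
    """Return a force vector pulling a character toward a track surface
    along its (unit-length) normal, fading out with distance."""
    dx = char_pos[0] - surface_point[0]
    dy = char_pos[1] - surface_point[1]
    # Signed distance from the surface, measured along the normal.
    dist = dx * surface_normal[0] + dy * surface_normal[1]
    if dist <= 0.0 or dist > max_dist:
        return (0.0, 0.0)              # below the surface or out of range
    falloff = 1.0 - dist / max_dist    # strongest when closest to the track
    return (-surface_normal[0] * strength * falloff,
            -surface_normal[1] * strength * falloff)
```

You would apply the returned force to the character's physics body each simulation step; because it acts along the surface normal, it keeps the wheels planted without fighting the character's forward motion.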
This brings me to my next point, which is…
Editors Are Your Friends
Now, don’t get me wrong. I love tweaking constants in code and recompiling and re-tweaking and recompiling as much as the next guy, but if your game / app is reasonably complex, doing this process over and over is a major pain in the butt. Especially if there are twenty different things to tweak and you don’t know where to start.
Fortunately for you, there are some editors such as R.U.B.E. and SpriteBuilder out there which, as I understand it, allow you to build CCNode hierarchies and plug them into the underlying physics mechanics. I’ve never actually used either because they are still fairly new tools, but they both look promising, especially because they appear extensible and seem to have a solid-looking visual interface that allows you to tweak values quickly and intuitively.
The extensibility component is very important because, inevitably, you’ll come up with some cool idea that the tools won’t support natively and extending the existing tools, rather than reinventing the wheel and building your own from scratch, will be your only path to salvation.
Unfortunately for me, when I began app development, some of these tools didn’t exist and I had to resort to building my own.
My approach was to bake an editor directly into the app I was building, and that worked fairly well. Here are a couple of examples:
It started as a necessity to lay out text for interactive books that I was working on, but then, with a few extra tweaks, I started editing physics shapes, simulation constants, sprite placement, and the works. It was very helpful to momentarily pause a game (or a page in a book), tweak some values, and then restart it without having to recompile the code. This setup also doubled as a great debugging tool that allowed me to quickly dive into complex bugs just by swiping my finger across the iPad screen. Sadly, there is a downside…
Editors Are Your Enemies
It turns out that when you invest your time into an editor, you don’t spend that time working on your game. Who knew, right? And if you are like me and, in the process of creating a crude editing environment for your game, you discover cool ways to constantly rewrite your framework to make it “easier to use”, you will get lost in your own rathole with no end in sight. In other words, sometimes it can be difficult to break yourself away from creating the tool and spend time creating the app.
It’s a balance. Editors can save time and frustration, but they also take time to build (and debug), so I find it useful to constantly ask myself – can I achieve what I need to achieve with the tools that I already have? If you are like me and building tools is exciting for you, the previous question is a good one to write in permanent marker above your monitor.
However, always consider the power of editors that you already have. Will SpriteBuilder work for you? Can you export coordinates from a photo-editing program? Design your physics setup in Inkscape? The image on the right, for example, is the design of a character rig for Axl. Use whatever tools you already have whenever you can.
The other problem is that editors tend to end up being project-specific. They will likely end up sharing a common infrastructure from one project to the next, but I find that each game / app has its own needs that require at least some form of a custom editing experience. In the past I always ended up tweaking and rewriting editors ever so slightly as I progressed in my creation of new apps.
While working on Axl and Tuna, I asked myself whether I could create a run-time editor that was truly universal without having to spend a year writing the most flexible and extensible framework ever. Was there a compromise that delivered minimal, but necessary editing capabilities for a wide range of scenarios, one that was simple to use and integrate into any project?
I’m happy to say that I found an answer that worked for me. I’m sure I’m not the first one to have thought of this, but the solution I came up with is relatively easy to construct but still powerful enough to do what it needs to do.
What I’ve done is create a very simple editing framework with a corresponding editor that allows me to edit primitive values (ints, floats, vectors, etc.) organized into arbitrary hierarchies right within the app itself. Using a simple macro, any class can expose properties for editing. These can be backed by actual variables or just by named constants. If, during app execution, an editor is invoked, I simply place a visual layer over the entire screen; this layer scours a given hierarchy of objects, looking for and exposing any and all properties marked as editable. The editor displays the value for each property and, if you select one, you can use your finger to change its value directly on the iPad / iPhone screen. If no property is selected, the touches are passed into the scene underneath for normal app execution.
Once you find the right values for the properties you care about you can either copy those values back into the code manually or you can dump the property tree into a plist that can be read in and applied to your app during its next execution.
Very crude, but very effective because it applies to a wide array of scenarios. So, there you have it – another secret exposed!
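To make the idea concrete, here's a toy version of that property-exposure scheme in Python (the real thing was an Objective-C macro; the class names, the dotted property paths, and the plist layout here are my own illustrative assumptions):

```python
import plistlib

class Tweakable:
    """Minimal registry of editable properties, keyed by a dotted path."""
    registry = {}

    @classmethod
    def expose(cls, path, getter, setter):
        cls.registry[path] = (getter, setter)

    @classmethod
    def set_value(cls, path, value):
        cls.registry[path][1](value)   # call the setter

    @classmethod
    def dump(cls, filename):
        # Snapshot the whole property tree so it can be re-applied
        # on the next launch (or copied back into code by hand).
        tree = {path: g() for path, (g, _s) in cls.registry.items()}
        with open(filename, "wb") as f:
            plistlib.dump(tree, f)

    @classmethod
    def load(cls, filename):
        with open(filename, "rb") as f:
            for path, value in plistlib.load(f).items():
                if path in cls.registry:
                    cls.registry[path][1](value)

class Player:
    def __init__(self):
        self.jump_force = 120.0
        # Expose the constant to the in-app editor.
        Tweakable.expose("player.jump_force",
                         lambda: self.jump_force,
                         lambda v: setattr(self, "jump_force", v))
```

An on-screen editor then only needs to walk `Tweakable.registry`, draw each path and value, and call `set_value` as you drag your finger.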
Remember that organic feeling for apps I was talking about earlier? A lot of that comes from animation. I talked about physics-based animation already, but there is more you can do.
Sadly for me, I’m not an animator. I also don’t have one working with me. So I have to tackle animations programmatically.
This approach can be a work-intensive way to add movement into your apps. However, it can breathe unexpected life that you wouldn’t be able to achieve otherwise. Let me give you an example.
Bobo, my favorite robot character, has two mechanical arms. He uses his right arm to help you, the user, pull down pages from the top of the screen when you tap on their heading. Being curious and all, Bobo sometimes gets interested in whatever gizmo happens to be on a given page and may end up using his right arm to do something else for a moment (pull on a switch, reach up to tickle a monkey, etc.). If at that same moment you, the user, tap on a pull-down menu and Bobo’s right arm is occupied, he will just switch and use his left arm to pull down the page instead. If Bobo were a traditionally animated character, with predefined keyframes, this type of interaction would either be impossible or it would result in one animation being abruptly cut off while the other played itself out. However, because Bobo is monitoring his whereabouts and can make simple decisions on how to behave in a given situation, he doesn’t always do the same thing to achieve a given result. Instead, he dynamically changes his behavior based on the circumstances and exhibits a much more varied array of movements, emotions, and animations.
To make development of this type of interactivity easier and avoid the pitfall of a whole bunch of spaghetti code, I invested a little bit of time to create a behavior system. Basically, a character (or a menu button for that matter) can perform a certain set of behaviors. Take Bobo as an example again. Bobo knows how to blink, how to look around, how to look at the user, how to sing a song, how to move to a requested location, along with a bunch of other things. A given behavior can be either ON or OFF and often several behaviors are ON at the same time. Some behaviors have higher priority (user directing Bobo to go somewhere) than others (Bobo checking out a location on his own). Some behaviors deactivate others. Bobo singing and Bobo saying “Ouch!” are mutually exclusive and because “Ouch!” has a higher priority it will overshadow and automatically deactivate the singing behavior.
Anyway, you throw all these rules together, each one defined and governed by an isolated piece of code, and (if you are lucky) you get an experience that feels spontaneous and real and gives you a huge variety of responses to a set of conditions.
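A minimal sketch of such a behavior system might look like this in Python (the behavior names, priorities, and the exclusion rule are illustrative assumptions on my part, not Bobo's actual code):

```python
class Behavior:
    def __init__(self, name, priority, excludes=()):
        self.name = name
        self.priority = priority
        self.excludes = set(excludes)  # behaviors this one deactivates
        self.active = False

class BehaviorSystem:
    def __init__(self):
        self.behaviors = {}

    def add(self, behavior):
        self.behaviors[behavior.name] = behavior

    def activate(self, name):
        b = self.behaviors[name]
        # A higher-priority behavior overshadows and automatically
        # deactivates any conflicting lower-priority one.
        for other_name in b.excludes:
            other = self.behaviors.get(other_name)
            if other and other.active and other.priority <= b.priority:
                other.active = False
        b.active = True

    def active_names(self):
        return sorted(n for n, b in self.behaviors.items() if b.active)
```

Each behavior's actual animation logic lives in its own isolated piece of code; the system above only decides which of them get to run at any given moment.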
Before I go, here are a few final lessons I learned that you might find helpful in your own projects.
Code structure – whatever you do, structure your code well and refactor it as you go along. Building code in isolated components makes testing, bug fixing, refactoring, and maintenance not only easier but possible. If you make your code structure bullet-proof, good code will follow.
Bugs – fix your bugs early and when you encounter them, even if you are in the middle of something else. I constantly interrupt my work because I notice that something is not happening the way it should be. Waiting all the way until the end means that you will find yourself facing a mountain of issues a week before you want to go live and that you will end up shipping a product that will fail in your users’ hands.
Profiling – do it throughout your development to understand where your code is spending most of its time and what operations are costly. That practice will help you come up with design decisions that won’t corner you into an app that runs at 10 frames/sec. However, I would suggest not optimizing your app until the end. That way you won’t waste time perfecting code that you might not end up using in the final product.
User feedback – get it early and get it often. Stand back and watch people get frustrated with your app without offering a single word of advice. That will take nerves of steel, but it will allow you to identify the parts of your app with which people struggle.
Have your app be playable from day 1 – even if most of your app’s behavior is initially faked, seeing the final product in your hands early will help tremendously in guiding your design decisions going forward.
Finally, whatever you do, work on something that you love. While it’s possible to mess up a project that you really believe in, it’s nearly impossible to make a project you don’t believe in successful.
Recently, someone asked me for tips on how to bounce a simulated ray of light around reflective surfaces. I thought it might be fun to post my answer here in case other folks find it useful.
In Bobo Explores Light, there are a couple of interactive pages that show rays of light bouncing dynamically around the screen:
To achieve a similar effect in your game, you need to do the following:
Figure out all the reflection points for your ray of light
String an image along those points
Figuring Out Reflection Points
The first part is pretty straightforward. Simply follow these steps.
1. Figure out the initial position and direction of your light ray. Let’s call that ray L, with a starting position at point P.
2. Figure out the width, position, and orientation of your mirror. Let’s call that segment M, and the normal vector to it N.
3. Test whether L crosses through M. You can use simple linear algebra equations to get your answer, and those are discussed in detail here. If the two indeed intersect, let’s call the intersection point P’.
4. Your reflected ray L’ begins at P’, in the direction of L reflected along N. Here is the equation to figure out the direction of the reflected vector.
5. Let P = P’, L = L’, and repeat steps 3 and 4 as many times as you need.
In the process, you will create a series of points P, P’, P”, P”’, etc. The next thing you have to do is draw a line connecting those points.
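The steps above can be sketched in Python like so. This is a hedged sketch, not code from Bobo: representing each mirror as a segment plus a precomputed unit normal, and the reflection formula r = d − 2(d·n)n, are standard choices I'm assuming here.

```python
def reflect(d, n):
    """Reflect direction d across a surface with unit normal n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

def segment_intersection(p, d, a, b):
    """Intersection of the ray p + t*d (t > 0) with segment a-b, or None."""
    rx, ry = d
    sx, sy = b[0] - a[0], b[1] - a[1]
    denom = rx * sy - ry * sx          # 2D cross product of the directions
    if abs(denom) < 1e-9:
        return None                    # ray and segment are parallel
    qpx, qpy = a[0] - p[0], a[1] - p[1]
    t = (qpx * sy - qpy * sx) / denom  # distance along the ray
    u = (qpx * ry - qpy * rx) / denom  # position along the segment (0..1)
    if t > 1e-9 and 0.0 <= u <= 1.0:
        return (p[0] + t * rx, p[1] + t * ry)
    return None

def bounce(p, d, mirrors, max_bounces=8):
    """Follow a ray through a list of (a, b, normal) mirrors, returning
    the chain of points P, P', P'', ..."""
    points = [p]
    for _ in range(max_bounces):
        hit = None
        for a, b, n in mirrors:
            q = segment_intersection(p, d, a, b)
            if q:
                hit = (q, n)
                break
        if hit is None:
            break
        p, d = hit[0], reflect(d, hit[1])
        points.append(p)
    return points
```

The `t > 1e-9` guard is what lets step 5 loop safely: after a bounce, the ray starts exactly on the mirror it just hit, and the guard keeps it from immediately re-intersecting that same surface.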
Stringing an Image Along
You have several options on how to proceed here.
Option 1: Use OpenGL line primitives
You can draw low-level lines in OpenGL with a single draw call (for example, glDrawArrays() with GL_LINES). The problem you might run into is line aliasing. Instead of the image on the LEFT, you get the image on the RIGHT:
Option 2: Use Core Graphics
To be perfectly honest, I haven’t played with the Core Graphics libraries on iOS. However, I hear they are pretty powerful. Ray Wenderlich has some cool Core Graphics tutorials on his site, such as this one by Brian Moakley, which might come in handy if this option is available to you. Some people swear by it.
Option 3: Stretch an image along each line segment
For this option you can use images (using UIKit’s UIImage, for example) or sprites (using the built-in SpriteKit framework or external libraries such as Cocos2D) or something different yet. Basically, you position the bottom of a stretchable image at point A and adjust its length so that it reaches all the way to point B. The cool thing is that this technique allows you to add custom glow effects and such, which can be quite neat. It’s also pretty simple to set up. You’re just stretching and rotating images. You will run into problems at the reflection points, however, especially when you are using wide and/or transparent images:
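The positioning math for this option is just a length and an angle. Here's a sketch in Python, assuming the source image is authored pointing straight up (the function name and conventions are my own, not any framework's API):

```python
import math

def stretch_transform(a, b, image_height):
    """Vertical scale and rotation (radians, counter-clockwise) that
    stretch an up-pointing image so its bottom sits at point a and
    its top reaches point b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    scale_y = length / image_height
    # atan2 of (-dx, dy) is 0 when the segment points straight up,
    # matching the image's authored orientation.
    rotation = math.atan2(-dx, dy)
    return scale_y, rotation
```

You would then set the sprite's anchor to its bottom edge, place it at A, and apply the returned scale and rotation, repeating per segment of the ray.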
For thin lines, you might be able to get away with this glitch. But if not, I’d suggest…
Option 4: Draw a custom polygon using OpenGL
This is by far the most labor intensive option, but it will yield sharp results. You want to draw a triangle strip using OpenGL, tracing points P, P’, P”, … thusly:
In the image above, the triangle strip traces points 1, 2, 3, 4, 5, 6, 7, 8 in that order. Note that in order for this method to work, your light ray image needs to be symmetrical. Otherwise you might get incorrect-looking image flipping at the reflection points.
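Generating the strip's left/right vertex pairs can be sketched like this in Python. Note this is the naive version: it averages the incoming and outgoing directions at each point and ignores the fold handling the post goes on to describe, and all names are illustrative.

```python
import math

def strip_vertices(points, half_width):
    """Left/right vertex pairs for a triangle strip along P, P', P'', ...
    Each pair is offset perpendicular to the local ray direction."""
    verts = []
    for i, p in enumerate(points):
        # Local direction: from the previous point to the next one
        # (clamped at the ends of the chain).
        a = points[max(i - 1, 0)]
        b = points[min(i + 1, len(points) - 1)]
        dx, dy = b[0] - a[0], b[1] - a[1]
        length = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / length, dx / length   # unit perpendicular
        verts.append((p[0] + nx * half_width, p[1] + ny * half_width))  # left
        verts.append((p[0] - nx * half_width, p[1] - ny * half_width))  # right
    return verts
```

Feeding these pairs to the strip in order reproduces the 1, 2, 3, 4, … vertex numbering from the image above.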
This is essentially the method I ended up implementing in Bobo. The trick is in reflecting the polygon along the reflection points smoothly. I utilized two methods to get that part done.
The first method (Method A), folding the polygon onto itself, works well when the angle of incidence is < 45º, but it becomes increasingly poor as you approach 90º, at which point the folding edge (points 3-4 in the images below) becomes infinitely long.
The second method (Method B), reflecting the polygon, on the other hand, works well when the angle of incidence is > 45º, but, again, the folding edge approaches infinity as the angle gets closer and closer to 0º.
As you can see, in this case the image of the light ray penetrates the reflecting surface, but generally speaking the ray image is much thinner than the reflection surface, so it’s not really a problem. The extra ray width often comes from the glow part of the image and if that part spills over the reflecting surface, it still visually works.
So, since both of the methods have their shortcomings, you combine them. You use Method A when the angle of incidence is < 45º and Method B when it is > 45º. Note that Method A actually reverses or reflects the order of your vertices. So if your light ray is dynamic (i.e., it changes with time and context) and one reflection point switches from using Method A to using Method B or vice versa, you will need to follow your triangle strip down and reverse the order of vertex points from that point onward (i.e., switch the left and right vertices at each “kink”).
Another thing to note is that when you switch from Method A to Method B for a given point, the reflection fold switches from being perpendicular to being parallel with the surface normal. That change produces a visible jump akin to a glitch. To avoid drawing attention to it, you can overlay each reflection point with a glowing ball thusly:
That’s it! Definitely a lot of work, but the technique works very well in practice. I hope it works for you as well!
Bobo the Robot is now so cool that he has his very own Facebook page. Check it out and learn all about where he came from, where he’s going, and what he’s up to this very moment! Have you ever seen Bobo’s year book picture? Log onto Facebook and take a peek right now.
As I was heading back to Seattle from WWDC, I was traveling with only a small backpack. I bundled the Apple Design Award into a t-shirt when I packed that morning, shoved it into my backpack, and forgot all about it when I got to the airport. The backpack went through the x-ray machine and showed up as containing a perfect, black square.
The guy watching the screen of the x-ray machine called for another guy, and another guy, and pretty soon there was a small crowd scrutinizing the image. The backpack came out, I sheepishly admitted to being the owner, and I was taken aside. When the TSA folks pulled the cube from my bag, it glowed.
Whispers passed over the crowd. After I explained that the cube was from Apple, the security folks reverently placed the futuristic artifact into its own plastic bin and ran it through the x-ray machine again. This time, the other passengers got a glimpse of what the commotion was about and, once again, the glow of the cube, readily visible as it entered and exited the machine, sent a wave of whispers through the sizable gathering.
Eventually, the cube made its way back into my bag, but the curious gazes kept coming. I suspect I will be reading about “intercepted alien technology of unknown origin or purpose” on the blogosphere soon. What can I say? Apple knows how to design their products.