When the great Cyborg uprising happens in 2051, you'll need to know two things. Firstly, how to monkey patch face-recognition software in order to modify Cyborgs to identify and attack their own kind, and secondly, how to make wine in a toilet. These are mandatory skill sets and... well... face-recognition software is just awesome, readily available, and fun. In this post, I'm going to walk you through the basics of iOS Face Detection with a full working example in RubyMotion.
This tutorial covers the basics! You're likely to learn a bit more about the drawing API than facial recognition, but hey, you've got to start somewhere.
All the source code of this project has been made open source on Github: https://github.com/IconoclastLabs/RubyMotion-SimpleFace
Getting Your Footing:
The meat and bones of this project is pretty straightforward. For simplicity, all the logic lives in a basic root view controller. All the magic you need is built into Core Image on iOS 5 and greater; the object we're working with is CIDetector. In our viewDidLoad we're going to prep and run a CIDetector, which will do all the heavy lifting.
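Here's a sketch of what that viewDidLoad might look like. The image filename and the print_features helper name are placeholders of my own, not necessarily what the repo uses:

```ruby
def viewDidLoad
  super

  # Load a bundled photo (hypothetical filename) and wrap it for Core Image.
  image    = UIImage.imageNamed("face.jpg")
  ci_image = CIImage.alloc.initWithCGImage(image.CGImage)

  # The options dictionary only controls accuracy: high or low.
  options  = { CIDetectorAccuracy => CIDetectorAccuracyHigh }
  detector = CIDetector.detectorOfType(CIDetectorTypeFace,
                                       context: nil,
                                       options: options)

  # Run detection, then hand the results off asynchronously via GCD.
  features = detector.featuresInImage(ci_image)
  Dispatch::Queue.concurrent.async do
    print_features(features)  # placeholder helper, defined below
  end
end
```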
As you can see, we're creating a CIImage for the CIDetector, along with an options dictionary, which is limited to setting the accuracy of the CIDetector. Your two options are CIDetectorAccuracyHigh or CIDetectorAccuracyLow. Setting accuracy to high uses more accurate detection techniques but takes more time (I don't notice much difference for a single picture); setting it to low is the inverse. Depending on your usage you'll choose one or the other. Since we're working with a static image, we'll leave the detector on high and save the speedy version for a project that would necessitate it.
In the last few lines there, we use RubyMotion's version of Grand Central Dispatch (GCD) to fire off our print event asynchronously. So all this code did was stick an image in a CIDetector and send the results off to be handled asynchronously.
The iOS API really makes reading features almost too easy. The above section sends the results of CIDetector off to be printed, and that's what we do here. Peruse the following code:
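A sketch of that printing helper (the name print_features is my own). CIFaceFeature exposes a bounds rect plus optional eye and mouth positions, each guarded by a has* check:

```ruby
def print_features(features)
  features.each do |feature|
    # Each feature is one detected face; bounds is its rectangle.
    p "Face bounds: #{feature.bounds}"
    if feature.hasLeftEyePosition
      p "Left eye:  #{feature.leftEyePosition.x}, #{feature.leftEyePosition.y}"
    end
    if feature.hasRightEyePosition
      p "Right eye: #{feature.rightEyePosition.x}, #{feature.rightEyePosition.y}"
    end
    if feature.hasMouthPosition
      p "Mouth:     #{feature.mouthPosition.x}, #{feature.mouthPosition.y}"
    end
  end
end
```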
The previous code block goes through all the found features (every face it detected) and prints the coordinates to the screen. So each set of coordinates is a detected face! It's really quite simple. Yes, the code could be condensed even further, but in the next section we're going to use those blocks to mark each feature.
Now that we have read the features and understand their location, let's make it known. We're going to draw boxes over the features that we have detected. Namely:
- Feature Bounds (Face box, but could be the bounds of any other feature if it detected more than faces... cyborgs anyone?)
- Feature Left Eye (Leftmost in the picture)
- Feature Right Eye (Rightmost in the picture)
- Feature Mouth
One very important note is that Quartz 2D is planning on confusing us, and defending the Cyborgs. Rather than the traditional origin of (0,0) at the top left, the origin is considered to be the bottom left! Fortunately, to keep things sane, everyone seems to handle this by translating and flipping the drawing context. The following code helps you draw using the exact coordinates we already received from the detected features.
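A sketch of that context flip, assuming we're drawing into an image context sized to match the source image:

```ruby
# Create a drawing context the size of the photo.
UIGraphicsBeginImageContext(image.size)
context = UIGraphicsGetCurrentContext()

# Flip the coordinate system: shift the origin down to the bottom edge,
# then invert the y-axis so feature coordinates line up with the image.
CGContextTranslateCTM(context, 0, image.size.height)
CGContextScaleCTM(context, 1.0, -1.0)
```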
With the CGContext translated and scaled, you can use a simple draw_feature function to place boxes over the detected features. See the code I use below.
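One possible shape for that helper and its use. The marker size and stroke color here are my own choices, not necessarily what the repo uses:

```ruby
# Stroke a small box centered on a feature point (e.g. an eye or mouth).
def draw_feature(context, point, size = 10)
  rect = CGRectMake(point.x - size / 2, point.y - size / 2, size, size)
  CGContextSetStrokeColorWithColor(context, UIColor.redColor.CGColor)
  CGContextSetLineWidth(context, 2.0)
  CGContextStrokeRect(context, rect)
end

features.each do |feature|
  # Outline the whole face, then mark each sub-feature we have.
  CGContextStrokeRect(context, feature.bounds)
  draw_feature(context, feature.leftEyePosition)  if feature.hasLeftEyePosition
  draw_feature(context, feature.rightEyePosition) if feature.hasRightEyePosition
  draw_feature(context, feature.mouthPosition)    if feature.hasMouthPosition
end
```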
TADAAAA!!!! You're able to draw using your coordinates!
That's all folks! You can detect and draw just like that! To see all this code in a cohesive format, check out the Github Repo. In the repo, I draw the image and the graphics on the same context, which is perfect for export, and I also toss a box around feature.bounds to identify the face boundaries. Interested in cleaning it up, or adjusting it to handle scaled-down images, etc.? Please send pull requests!
Definitely subscribe to our RSS feed, as we aspire to post a more advanced and fun Face Detection app in the near future :)
CIDetector Class Reference: http://developer.apple.com/library/ios/#documentation/CoreImage/Reference/CIDetector_Ref/Reference/Reference.html
Github Source: https://github.com/IconoclastLabs/RubyMotion-SimpleFace
Gist of all the above code: https://gist.github.com/3297986
RubyMotion Face recognition mustache app: https://github.com/HipByte/RubyMotionSamples/tree/master/Mustache