
Tutorial: Easy Face Detection With Core Image In iOS 5 (Example Source Code Provided)

With the face detection API included in Core Image in the iOS 5 SDK, face detection is now dead simple on devices running iOS 5, and it works extremely well.

Using this new API you can quickly detect the bounds of a face along with the locations of the eyes and mouth, as illustrated in this image:

No more need to roll your own code or use a framework such as OpenCV for most face detection needs.

With adjustable accuracy levels, the face detection API can be used in situations demanding either high accuracy or high speed (such as when working with live video).

Download The Complete Working Example Project


You can download the complete example including the above image here.

You can follow the steps below and build the project yourself.

You will need an image file with at least one face.

I named the image facedetectionpic.jpg in the example.

You will need a basic understanding of Objective-C and how to set up an iOS project within Xcode.

1) Set Up The Project

a) Create a Single View Application; I named mine FaceDetectionExample.

b) Include the QuartzCore and CoreImage frameworks within the project.

c) Drag the facedetectionpic.jpg file into the project.

2) Import The Frameworks And Draw The Image

a) Import the QuartzCore and Core Image headers in the AppDelegate.m file.

[cc lang="objc" escaped="true"]#import <CoreImage/CoreImage.h>
#import <QuartzCore/QuartzCore.h>[/cc]

b) Add the following method to draw the image onto the screen. I placed the faceDetector and markFaces methods above the application:didFinishLaunchingWithOptions: method.

[cc lang="objc"]-(void)faceDetector {
    // Load the picture for face detection
    UIImageView* image = [[UIImageView alloc] initWithImage:
                          [UIImage imageNamed:@"facedetectionpic.jpg"]];

    // Draw the face detection image
    [self.window addSubview:image];

    // Run the method that marks the detected faces
    [self markFaces:image];
}[/cc]

3) Detect The Faces

a) Create a CIImage (Core Image image) using the image in the UIImageView that we created in Step 2.

[cc lang="objc"]-(void)markFaces:(UIImageView *)facePicture {
    // draw a CI image with the previously loaded face detection picture
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];[/cc]

b) Create the CIDetector.

Since we're working with a still image here, we'll use a high-accuracy detector. You can read about the other CIDetector options available in Apple's CIDetector documentation here.

[cc lang="objc"]    // create a face detector - since speed is not an issue
    // we'll use a high accuracy detector
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
        context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];[/cc]

c) Call the featuresInImage: method of the CIDetector on our CIImage to get an array containing a feature object for every face detected in the image.

[cc lang="objc"]    // create an array containing all the faces found by the detector
    NSArray* features = [detector featuresInImage:image];[/cc]

4) Draw Shapes On The Found Faces

The CIFaceFeature class provides us with the bounds of the face, the location of each eye and the mouth, and BOOLs indicating whether each eye and the mouth were found for each face.

You can read more on CIFaceFeature in Apple’s documentation here.

a) Iterate through the array of face features

[cc lang="objc"]    // we'll iterate through every detected face. CIFaceFeature provides us
    // with the bounds of the entire face, the coordinates of each eye
    // and the mouth if detected, and BOOLs so we can check whether
    // each eye and the mouth were found.
    for(CIFaceFeature* faceFeature in features) {[/cc]

b) Create a red border around each face found in the image using the feature bounds. We'll also store the face width, which we'll use when drawing the other features of the face.

[cc lang="objc"]        // get the width of the face
        CGFloat faceWidth = faceFeature.bounds.size.width;

        // create a UIView using the bounds of the face
        UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];

        // add the new view to create a box around the face
        [self.window addSubview:faceView];[/cc]

Next we'll draw translucent blue circles over the two eyes.

[cc lang="objc"]        if(faceFeature.hasLeftEyePosition) {
            // create a UIView with a size based on the width of the face
            UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the left eye view based on the face
            [leftEyeView setCenter:faceFeature.leftEyePosition];
            // round the corners
            leftEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the view to the window
            [self.window addSubview:leftEyeView];
        }

        if(faceFeature.hasRightEyePosition) {
            // create a UIView with a size based on the width of the face
            UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the right eye view based on the face
            [rightEyeView setCenter:faceFeature.rightEyePosition];
            // round the corners
            rightEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the new view to the window
            [self.window addSubview:rightEyeView];
        }[/cc]

c) Finally, we'll draw a green circle over the mouth.

[cc lang="objc"]        if(faceFeature.hasMouthPosition) {
            // create a UIView with a size based on the width of the face
            UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
            // change the background color for the mouth to green
            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            // set the position of the mouth view based on the face
            [mouth setCenter:faceFeature.mouthPosition];
            // round the corners
            mouth.layer.cornerRadius = faceWidth*0.2;
            // add the new view to the window
            [self.window addSubview:mouth];
        }
    }
}[/cc]

5) Adjust For The Coordinate System

If you were to run the app now, you might notice that the y-locations of the circles drawn over the eyes and mouth are off. This is because Core Image uses a different coordinate system, with the origin at the bottom left (also the default on Mac OS X).

Flip the image, and then flip the entire window containing our newly created circles, to make everything right side up. Doing things this way requires only a couple of lines of code, which we'll add to the faceDetector method.

[cc lang="objc"]    // flip image on y-axis to match coordinate system used by core image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];

    // flip the entire window to make everything right side up
    [self.window setTransform:CGAffineTransformMakeScale(1, -1)];[/cc]


Finally, add the following line to the application:didFinishLaunchingWithOptions: method, before the return statement, to run the face detector.

[cc lang="objc"][self faceDetector];[/cc]

That's all there is to it! Thanks to Tom of b2cloud, whose tutorial on face detection I found after starting this one, and whose code I used to simplify this example. Also thanks to Tobyotter on Flickr for the monster face image.

One more thing…

Face detection can take a while, especially on older devices, so you may want to run your face detection method in the background. You can simply change:

[cc lang="objc"][self markFaces:image];[/cc]


to:

[cc lang="objc"][self performSelectorInBackground:@selector(markFaces:) withObject:image];[/cc]

and the face detection and drawing will run on a separate thread, so the app will start up faster (some advice I picked up in the extensive Core Image section of the iOS 5 by Tutorials book (aff)). Even on a newer device I can see the difference.

That’s all there is to it!  Please post any issues in the comments below.

More iOS 5 SDK Programming Tutorials

For more on iOS 5 programming check out the iOS 5 tutorial page.

19 replies on “Tutorial: Easy Face Detection With Core Image In iOS 5 (Example Source Code Provided)”

Hi, do you have code showing how I would copy the detected area and put it in another UIImageView?


Yes, it can be used with the camera video feed; you will need to reduce the accuracy of the CIDetector.

I'm not sure about the data required by algorithms that take a 2D image and turn it into 3D pose data (I think they require some kind of 3D motion tracking, which would mean definitely not), but from CIFaceFeature you will only get the face boundary along with the eye and mouth locations.

I like this tutorial and learned a lot, but I'm trying to detect the same features on a face image captured with UIImagePickerControllerSourceTypeCamera. I can capture and save the image, but the code above doesn't detect the eyes and mouth in NSArray* features = [detector featuresInImage:image];. How can I do that? Please reply. Thanks.

For the code [self.window setTransform:CGAffineTransformMakeScale(1, -1)];

since I am not placing the code in the app delegate class, I cannot use the self.window object.

Any ideas for converting between the Core Image coordinates and those of the current image?


I did

[self.view setTransform:CGAffineTransformMakeScale(1, -1)]; and that worked. But the problem with that (or with the window) is that if you have a toolbar as well, it flips the toolbar to the top. Any ideas about that?

@idone you can adjust the CIDetector accuracy, and maybe there are some filters you can run that specific image through to get better results. Every image is different, and you're using a face detection algorithm pre-defined by Apple. You may want to look at using OpenCV if the face detection in the iOS 5 SDK is not enough.

Hi Guys,

The tutorial is a good one. One thing I would like to share here is that the rect values you get from the API for faces are in the coordinates of the image as drawn in its context. I mean the rect size and location are relative to the exact size of the image. If you are displaying the image in a smaller or bigger image view, then you need to scale the face rect values so that they can be shown properly on the image view.
