Some time ago I posted a tutorial on the basics of using Core Image face detection on the iOS platform.
A question raised in that post's comment thread was how to apply Core Image face detection to a live video feed – and whether it would be fast enough.
Recently I received a submission from Jeroen Trappers showing how to run Core Image face detection on a live video feed and overlay a moustache on each face detected in the video stream.
Here’s an image of the example in action from Jeroen’s site:
You can find the tutorial here.
The speed is respectable on newer iOS devices with a 640×480 capture resolution.
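For a sense of the general pattern involved, here is a minimal sketch of feeding live camera frames through a CIDetector – this is not code from Jeroen's tutorial, and the class and variable names are purely illustrative:

```swift
import AVFoundation
import CoreImage

// Sketch: receive frames from an AVCaptureVideoDataOutput and run
// Core Image face detection on each one.
class FaceDetectingDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // CIDetectorAccuracyLow trades precision for the speed needed
    // to keep up with a live 640×480 feed.
    lazy var detector: CIDetector? = CIDetector(
        ofType: CIDetectorTypeFace,
        context: nil,
        options: [CIDetectorAccuracy: CIDetectorAccuracyLow]
    )

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let image = CIImage(cvPixelBuffer: pixelBuffer)

        // Each CIFaceFeature reports its bounds plus eye and mouth
        // positions – the mouth position is what would anchor a
        // moustache overlay.
        let faces = detector?.features(in: image) as? [CIFaceFeature] ?? []
        for face in faces where face.hasMouthPosition {
            // Place the moustache view at face.mouthPosition here,
            // after converting from Core Image to view coordinates.
            _ = face.mouthPosition
        }
    }
}
```

Using the low-accuracy detector setting, and detecting on a subset of frames rather than every frame, are the usual levers for keeping this responsive on older hardware.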
Added to the Core Image page.