iOS7 Day-by-Day :: Day 18 :: Detecting Facial Features with CoreImage

This post is part of the iOS7 Day-by-Day series. You can find the full index of the series here: iOS7 Day-by-Day


Introduction

Face detection has been available through AVFoundation and CoreImage since iOS 5. In iOS 7, the CoreImage face detector gains several new detection features, including smiles and eye blinks. The API is very simple to use, so we'll build an app which uses AVFoundation to find faces, and then CoreImage to search the captured photo for smiles and closed eyes.

The sample project for this post is available on GitHub: github.com/ShinobiControls/iOS7-day-by-day

Face detection with AVFoundation

In Day 16 we used AVFoundation's AVCaptureMetadataOutput class to find and parse QR codes. Face detection works in exactly the same way: faces, like QR codes, are metadata objects. We create an AVCaptureMetadataOutput as before, but with a different metadata type:

AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
// Have to add the output before setting metadata types
[_session addOutput:output];
// We're only interested in faces
[output setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
// This VC is the delegate. Please call us on the main queue
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

We implement the delegate method in the same way as before:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataObject *metadataObject in metadataObjects) {
        if ([metadataObject.type isEqualToString:AVMetadataObjectTypeFace]) {
            // Take an image of the face and pass to CoreImage for detection
            AVCaptureConnection *stillConnection = [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
            [_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                if (error) {
                    NSLog(@"There was a problem");
                    return;
                }

                NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

                UIImage *smileyImage = [UIImage imageWithData:jpegData];
                _previewLayer.hidden = YES;
                [_session stopRunning];
                self.imageView.hidden = NO;
                self.imageView.image = smileyImage;
                self.activityView.hidden = NO;
                self.statusLabel.text = @"Processing";
                self.statusLabel.hidden = NO;

                CIImage *image = [CIImage imageWithData:jpegData];
                [self imageContainsSmiles:image callback:^(BOOL happyFace) {
                    if (happyFace) {
                        self.statusLabel.text = @"Happy Face Found!";
                    } else {
                        self.statusLabel.text = @"Not a good photo...";
                    }
                    self.activityView.hidden = YES;
                    self.retakeButton.hidden = NO;
                }];
            }];
        }
    }
}

This is very similar to the QR code example, except that we've now added an extra output (an AVCaptureStillImageOutput) to the session. This allows us to capture a still photo from the input, which is what captureStillImageAsynchronouslyFromConnection:completionHandler: does. Once AVFoundation tells us it has detected a face, we grab an image from the current input and then stop the session.
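For context, the session setup this relies on might look something like the following sketch. The `_session` and `_stillImageOutput` ivar names are assumed from the snippets above; the input configuration follows the standard AVFoundation pattern and isn't shown in the original post.

```objc
// Assumed session setup (not from the original post): a camera input,
// plus the still-image output used in the delegate method above.
_session = [[AVCaptureSession alloc] init];

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
[_session addInput:input];

// The still-image output lets us grab a full-resolution JPEG on demand
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
_stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};
[_session addOutput:_stillImageOutput];

[_session startRunning];
```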

We create a JPEG representation of the captured image:

NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

We put this into a UIImageView, and also create a CIImage version ready for CoreImage to perform the facial feature detection. Next we'll look at the imageContainsSmiles:callback: method.

Feature finding with CoreImage

CoreImage face detection requires a CIContext and a CIDetector:

if (!_ciContext) {
    _ciContext = [CIContext contextWithOptions:nil];
}

if (!_faceDetector) {
    _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:_ciContext options:nil];
}

To get the detector to perform its search, we call the featuresInImage:options: method:

NSArray *features = [_faceDetector featuresInImage:image options:@{CIDetectorEyeBlink: @YES, CIDetectorSmile: @YES, CIDetectorImageOrientation: @5}];

In order for the detector to search for smiles and blinks, we have to enable them with the CIDetectorEyeBlink and CIDetectorSmile options. The CoreImage face detector is orientation-sensitive, so we also set CIDetectorImageOrientation to match the orientation our app runs in.

Now we loop through the features array (which contains CIFaceFeature objects) and determine whether any of the faces are blinking or not smiling:

BOOL happyPicture = NO;
if ([features count] > 0) {
    happyPicture = YES;
}
for (CIFeature *feature in features) {
    if ([feature isKindOfClass:[CIFaceFeature class]]) {
        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;
        if (!faceFeature.hasSmile) {
            happyPicture = NO;
        }
        if (faceFeature.leftEyeClosed || faceFeature.rightEyeClosed) {
            happyPicture = NO;
        }
    }
}

Finally we invoke the callback on the main queue:

dispatch_async(dispatch_get_main_queue(), ^{
    callback(happyPicture);
});

Our callback block updates the status label to describe whether or not the photo was a good one:

[self imageContainsSmiles:image callback:^(BOOL happyFace) {
    if (happyFace) {
        self.statusLabel.text = @"Happy Face Found!";
    } else {
        self.statusLabel.text = @"Not a good photo...";
    }
    self.activityView.hidden = YES;
    self.retakeButton.hidden = NO;
}];

If you run the app, you'll see just how good CoreImage's facial feature detection is:

As well as these properties, you can also find the positions of different facial features, such as the eyes and mouth.
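The original post doesn't show this, but as a sketch, reading those positions from the same CIFaceFeature objects might look like the following. The `hasLeftEyePosition`, `leftEyePosition`, `hasMouthPosition`, `mouthPosition`, and `bounds` properties are part of the CIFaceFeature API:

```objc
for (CIFaceFeature *faceFeature in features) {
    // Each position is a CGPoint in the image's coordinate space
    if (faceFeature.hasLeftEyePosition) {
        NSLog(@"Left eye at %@", NSStringFromCGPoint(faceFeature.leftEyePosition));
    }
    if (faceFeature.hasRightEyePosition) {
        NSLog(@"Right eye at %@", NSStringFromCGPoint(faceFeature.rightEyePosition));
    }
    if (faceFeature.hasMouthPosition) {
        NSLog(@"Mouth at %@", NSStringFromCGPoint(faceFeature.mouthPosition));
    }
    // The overall face rectangle is also available
    NSLog(@"Face bounds: %@", NSStringFromCGRect(faceFeature.bounds));
}
```

Note that CoreImage coordinates have their origin at the bottom-left, so these points need flipping before being used for UIKit drawing.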

Conclusion

Although it's not a ground-breaking API, the CoreImage face detector adds a nice ability to interrogate your photos for faces. It could make a great addition to a photography app, helping users capture exactly the shots they need.

This post is translated from: iOS7 Day-by-Day :: Day 18 :: Detecting Facial Features with CoreImage

Table of Contents
  1. Introduction
  2. Face detection with AVFoundation
  3. Feature finding with CoreImage
  4. Conclusion