I'm trying to build a simple camera app in which the front camera can detect faces.
This should be straightforward:
> Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate so it can process frames from the camera in real time.
class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate
> In a handleCamera function, called when the CameraView is instantiated, set up the AVCapture session and add the camera as an input.
override init(frame: CGRect) {
    super.init(frame: frame)
    handleCamera()
}

func handleCamera() {
    camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                           mediaType: AVMediaTypeVideo, position: .front)
    session = AVCaptureSession()
    // Set recovered camera as an input device for the capture session
    do {
        try input = AVCaptureDeviceInput(device: camera)
    } catch _ as NSError {
        print("ERROR: Front camera can't be used as input")
        input = nil
    }
    // Add the input from the camera to the capture session
    if session?.canAddInput(input) == true {
        session?.addInput(input)
    }
> Create the output. Create a serial output queue on which the data will be handed to the AVCaptureVideoDataOutputSampleBufferDelegate (in this case the class itself) for processing. Add the output to the session.
    output = AVCaptureVideoDataOutput()
    output?.alwaysDiscardsLateVideoFrames = true
    outputQueue = DispatchQueue(label: "outputQueue")
    output?.setSampleBufferDelegate(self, queue: outputQueue)
    // add front camera output to the session for use and modification
    if session?.canAddOutput(output) == true {
        session?.addOutput(output)
    }
    // front camera can't be used as output, not working: handle error
    else {
        print("ERROR: Output not viable")
    }
> Set up the camera preview view and run the session.
    // Setup camera preview with the session input
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
    previewLayer?.frame = self.bounds
    self.layer.addSublayer(previewLayer!)
    // Process the camera and run it onto the preview
    session?.startRunning()
}
> In the captureOutput function run by the delegate, convert the received sample buffer to a CIImage in order to detect faces. Give feedback if a face is found.
func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: cameraImage)

    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")

        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }
        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
My problem: I can get the camera running, but despite trying plenty of different code from around the internet, I've never been able to get captureOutput to detect a face. Either the app never enters the function, or it crashes because of a variable that doesn't work, most frequently with the sampleBuffer variable being nil.
What am I doing wrong?
Solution
You need to change your captureOutput function parameters to the following:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
Your captureOutput function is called when a buffer is dropped, not when one is grabbed from the camera.
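To make the fix concrete, here is a minimal sketch of the corrected delegate method, reusing the detection code from the question and keeping the same Swift 3-era AVFoundation/CoreImage API the rest of the post uses; only the middle parameter label changes, plus a guard instead of force-unwrapping the pixel buffer:

```swift
// didOutputSampleBuffer fires for every frame the camera delivers;
// didDrop fires only for frames that were discarded, which is why
// the original version never saw usable sample buffers.
func captureOutput(_ captureOutput: AVCaptureOutput!,
                   didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                   from connection: AVCaptureConnection!) {
    // Avoid the nil-crash from the question: bail out if no pixel buffer
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)

    for face in faceDetector?.features(in: cameraImage) as? [CIFaceFeature] ?? [] {
        print("Found bounds are \(face.bounds)")
    }
}
```

Note that this callback runs on the serial outputQueue you created, so any UIKit work from the question's loop (such as adding the faceBox subview) would additionally need to be dispatched back to the main queue.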