Switching between Core ML model files

How can I switch between Core ML model files?

I've tried almost everything: @EnvironmentObject, @StateObject, global variables, and so on.

I have an enum that keeps track of the models I have and returns them by name:

enum MLNameModels: String,CaseIterable {
    case audi = "Audi"
    case bmw = "BMW"
    var mlModel: MLModel {
        switch self {
        case .audi:
            return try! Audi(configuration: MLModelConfiguration()).model
        case .bmw:
            return try! BMW(configuration: MLModelConfiguration()).model
        }
    }
}

I'm trying out live detection while looking around, and I have a CameraView that handles all of the setup for the view, and even the Vision framework.

struct CameraView : UIViewControllerRepresentable {
    func makeUIViewController(context: UIViewControllerRepresentableContext<CameraView>) -> UIViewController {
        let controller = CameraViewController()

        return controller
    }
    
    func updateUIViewController(_ uiViewController: CameraView.UIViewControllerType,context: UIViewControllerRepresentableContext<CameraView>) { }
}

In my CameraViewController there is one line that sets up object detection:

 var objectDetector = Object_Detector(modelWithName: "audi")

I know it says audi because I just reverted it back to a working state.

The inside of the object detector class looks like this:

class Object_Detector {
    
    // MARK: Properties
    var requests = [VNRequest]()
    
    var boundingBox = CGRect()
    var objectType: ObservationTypeEnum?
    var firstObservation = VNRecognizedObjectObservation()
    
    init(modelWithName modelName: String) {
        self.setupModel(withFilename: modelName)
    }
    
    // MARK: Methods
    private func setupModel(withFilename modelName: String) {
        // Get model URL
        guard let modelURL = Bundle.main.url(forResource: modelName, withExtension: "mlmodelc") else {
            NSLog("Error: Unable to find model with name \(modelName), in \(Bundle.main.bundlePath)")
            
            return
        }
        
        // Create desired model
        guard let model = try? VNCoreMLModel(for: MLModel(contentsOf: modelURL)) else {
            NSLog("Error: Failed to create model->line:\(#line)")
            
            return
        }
        
        // Perform a request using ML Model
        let objectRecognizerRequests = VNCoreMLRequest(model: model) { (request,err) in
            if let error = err {
                NSLog("Error: \(error.localizedDescription)")
                
                return
            }
            
            // Get observation results
            guard let results = request.results as? [VNRecognizedObjectObservation] else {
                NSLog("Error: Failed to extract request results as [VNRecognizedObjectObservation]")
                
                return
            }
            
            // Get first observation result (one with the greatest confidence)
            guard let firstResult = results.first else { return }
            
            self.firstObservation = firstResult
            self.objectType = ObservationTypeEnum(fromRawValue: firstResult.labels.first!.identifier)
            self.boundingBox = firstResult.boundingBox
        }
        
        // Save requests
        self.requests = [objectRecognizerRequests]
    }
}

I've tested whether the change actually happens, even calling the setupModel() function after an option is selected, and that does show the modelName parameter being updated, but for some reason my app just seems to go straight to the last model, in this case BMW.

I prompt the user to choose a manufacturer first, and then... nothing seems to take effect. It only works when I hard-code the value, as shown above.

To clarify, I don't want to merge the results or anything like that. I just want the app to know which manufacturer the user selected, grab the corresponding model, and carry on recognizing.

Edit - user prompt

When the app loads, a sheet is presented that iterates over all of the model cases, which I hope gives enough to go on:

public var selectedModelName = String()
struct ContentView: View {
    @State private var isPresented: Bool = false
    var model = MLNameModels.allCases
    var body: some View {
        ZStack(alignment: .top) {
            if selectedModelName.isEmpty {
                CameraView().edgesIgnoringSafeArea(.all)
            }
            VStack(alignment: .leading){
                Spacer()
                HStack {
                Button {
                    isPresented = true
                    print("Tapped")
                } label: {
                    Image(systemName: "square.and.arrow.up")
                        .resizable()
                        .frame(width: 24,height: 30)
                        .padding()
                        .background(Color.secondary.clipShape(Circle()))
                        
                }
                    Spacer()
                }
            }.padding()
        }
        .slideOverCard(isPresented: $isPresented) {
            VStack {
                ForEach(model,id: \.self) { modelName in
                    Text(modelName.rawValue)
                        .onTapGesture {
                            selectedModelName = modelName.rawValue
                            isPresented = false
                        }
                }

            }
            .frame(width: UIScreen.main.bounds.width * 0.85)
        }
        .onAppear {
            if selectedModelName.isEmpty {
                isPresented = true
            }
        }
    }
}

So I declare:

var objectDetector = Object_Detector(modelWithName: selectedModelName)

But it seems to only ever load one of the models, and there's no way to switch models at all.

Here is my CameraViewController:

class CameraViewController : UIViewController,AVCaptureVideoDataOutputSampleBufferDelegate {
    var bufferSize: CGSize = .zero
    var rootLayer: CALayer! = nil
    
    private var detectionOverlay: CALayer! = nil
    
    private let session = AVCaptureSession()
    private let videoDataOutput = AVCaptureVideoDataOutput()
    private let videoOutputQueue = DispatchQueue(label: "Video_Output")
    
    private var previewLayer: AVCaptureVideoPreviewLayer! = nil
    
    // Initializing Model
    var objectDetector = Object_Detector(modelWithName: selectedModelName)
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        loadCamera()
        setupLayers()
        updateLayerGeometry()
        
        self.session.startRunning()
    }
    
    func loadCamera() {
        
        guard let videoDevice = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],mediaType: .video,position: .back).devices.first else { return }
        
        guard let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice) else {
            print("NO CAMERA DETECTED")
            return
        }
        
        // Begin session config
        self.session.beginConfiguration()
        self.session.sessionPreset = .hd1920x1080
        
        guard self.session.canAddInput(videoDeviceInput) else {
            NSLog("Could not add video device input to the session")
            self.session.commitConfiguration()
            return
        }
        // Add video input
        self.session.addInput(videoDeviceInput)
        
        if session.canAddOutput(self.videoDataOutput) {
            // Add a video data output
            self.session.addOutput(videoDataOutput)
            videoDataOutput.alwaysDiscardsLateVideoFrames = true
            videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)]
            videoDataOutput.setSampleBufferDelegate(self,queue: self.videoOutputQueue)
        } else {
            NSLog("Could not add video data output to the session")
            session.commitConfiguration()
            return
        }
        
        guard let captureConnection = videoDataOutput.connection(with: .video) else { return }
        
        // Always process the frames
        captureConnection.isEnabled = true
        
        do {
            try videoDevice.lockForConfiguration()
            
            let dimensions = CMVideoFormatDescriptionGetDimensions((videoDevice.activeFormat.formatDescription))
            // Read frame dimensions
            self.bufferSize.width = CGFloat(dimensions.width)
            self.bufferSize.height = CGFloat(dimensions.height)
            
            videoDevice.unlockForConfiguration()
        } catch {
            NSLog(error.localizedDescription)
        }
        
        // Save session config
        session.commitConfiguration()
        
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        rootLayer = view.layer
        previewLayer.frame = rootLayer.bounds
        rootLayer.addSublayer(previewLayer)
    }
    
    func captureOutput(_ output: AVCaptureOutput,didOutput sampleBuffer: CMSampleBuffer,from connection: AVCaptureConnection) {
        // Get buffer with image data
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            NSLog("Error: Failed to get image buffer->\(#line)")
            
            return
        }
        
        // Get device orientation
        let deviceOrientation = self.exifOrientationFromDeviceOrientation()
        
        // Create an image request handler
        let requestHandler = VNImageRequestHandler(cvPixelBuffer: buffer,orientation: deviceOrientation,options: [:])
        
        do {
            try requestHandler.perform(self.objectDetector.requests)
            
            // Adding bounding box
            let boundingBox = self.objectDetector.boundingBox
            let objectType = self.objectDetector.objectType
        
            if !boundingBox.isEmpty && objectType != nil {
                DispatchQueue.main.async {
                    CATransaction.begin()
                    CATransaction.setValue(kCFBooleanTrue,forKey: kCATransactionDisableActions)
                    self.detectionOverlay.sublayers = nil
                    
                    let objectBounds = VNImageRectForNormalizedRect(boundingBox,Int(self.bufferSize.width),Int(self.bufferSize.height))
                    
                    let shapeLayer = self.createBoundingBox(withBounds: objectBounds)
                    
                    let textLayer = self.createTextBox(withBounds: objectBounds)
                    
                    shapeLayer.addSublayer(textLayer)
                    self.detectionOverlay.addSublayer(shapeLayer)
                    
                    self.updateLayerGeometry()
                    CATransaction.commit()
                }
            }
        } catch {
            NSLog("Error: Unable to perform requests")
        }
    }
    
    func createBoundingBox(withBounds bounds: CGRect) -> CALayer {
        let shapeLayer = CALayer()
        let borderColor = self.objectDetector.objectType?.getColor()
        
        shapeLayer.bounds = bounds
        shapeLayer.position = CGPoint(x: bounds.midX,y: bounds.midY)
        shapeLayer.name = "Found Object"
        shapeLayer.borderColor = borderColor
        shapeLayer.borderWidth = 2.5
        shapeLayer.cornerRadius = 5.0
        
        return shapeLayer
    }
    
    func createTextBox(withBounds bounds: CGRect) -> CATextLayer {
        let textLayer = CATextLayer()
        textLayer.name = "Object Label"
        let formattedString = NSMutableAttributedString(string: String(format: "\(self.objectDetector.firstObservation.labels[0].identifier)"))
        let backgroundColor = UIColor(cgColor: self.objectDetector.objectType!.getColor())
        let largeFont = UIFont(name: "AvenirNext-Medium",size: 40.0)!
        
        formattedString.addAttributes([NSAttributedString.Key.font: largeFont,NSAttributedString.Key.foregroundColor: UIColor.white,NSAttributedString.Key.backgroundColor: backgroundColor],range: NSRange(location: 0,length: self.objectDetector.firstObservation.labels[0].identifier.count))
        
        textLayer.string = formattedString
        textLayer.bounds = CGRect(x: 0,y: 0,width: bounds.size.height,height: 50)
        textLayer.position = CGPoint(x: bounds.minX - 25,y: bounds.maxY)
        textLayer.contentsScale = 2.0
        textLayer.cornerRadius = 5.0
        
        textLayer.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(.pi / 2.0)).scaledBy(x: 1.0,y: -1.0))
        
        return textLayer
    }
    
    func setupLayers() {
        detectionOverlay = CALayer()
        detectionOverlay.name = "DetectionOverlay"
        detectionOverlay.bounds = CGRect(x: 0.0,y: 0.0,width: bufferSize.width,height: bufferSize.height)
        detectionOverlay.position = CGPoint(x: rootLayer.bounds.midX,y: rootLayer.bounds.midY)
        rootLayer.addSublayer(detectionOverlay)
    }
    
    func updateLayerGeometry() {
        let bounds = rootLayer.bounds
        var scale: CGFloat
        
        let xScale: CGFloat = bounds.size.width / bufferSize.height
        let yScale: CGFloat = bounds.size.height / bufferSize.width
        
        scale = fmax(xScale,yScale)
        if scale.isInfinite {
            scale = 1.0
        }
        
        CATransaction.begin()
        CATransaction.setValue(kCFBooleanTrue,forKey: kCATransactionDisableActions)
        
        // Rotate the layer into screen orientation and scale and mirror
        detectionOverlay.setAffineTransform(CGAffineTransform(rotationAngle: CGFloat(.pi / 2.0)).scaledBy(x: scale,y: -scale))
        
        // Center the layer
        detectionOverlay.position = CGPoint(x: bounds.midX,y: bounds.midY)
        
        CATransaction.commit()
    }
    
    // Specify device orientation
    private func exifOrientationFromDeviceOrientation() -> CGImagePropertyOrientation {
        let curDeviceOrientation = UIDevice.current.orientation
        let exifOrientation: CGImagePropertyOrientation
        
        switch curDeviceOrientation {
        case UIDeviceOrientation.portraitUpsideDown:  // Device oriented vertically,home button on the top
            exifOrientation = .left
        case UIDeviceOrientation.landscapeLeft:       // Device oriented horizontally,home button on the right
            exifOrientation = .upMirrored
        case UIDeviceOrientation.landscapeRight:      // Device oriented horizontally,home button on the left
            exifOrientation = .down
        case UIDeviceOrientation.portrait:            // Device oriented vertically,home button on the bottom
            exifOrientation = .up
        default:
            exifOrientation = .up
        }
        return exifOrientation
    }
}

Solution

As you mentioned in the question/comments, you currently just take the initial value, set up the model based on it, and never respond to or listen for any changes after that.

I can't recreate all of your code since a lot of types and so on are missing, but the following should give you an idea of how to propagate state changes through your views and objects. See the inline comments.


enum MLNameModels: String,CaseIterable { //simplified just for the example
    case audi = "Audi"
    case bmw = "BMW"
}

struct ContentView : View {
    @State var model : String //@State,so that the view knows to update
    
    var body: some View {
        VStack {
            CameraView(modelName: model) //note that it gets passed to CameraView here
            VStack {
                ForEach(MLNameModels.allCases,id: \.self) { modelName in
                    Text(modelName.rawValue)
                        .onTapGesture {
                            model = modelName.rawValue
                        }
                }
            }
        }
    }
}

struct CameraView : UIViewControllerRepresentable {
    var modelName : String //gets updated when the parent state changes
    
    func makeUIViewController(context: Context) -> CameraViewController {
        return CameraViewController(modelName: modelName) //initial value
    }
    
    func updateUIViewController(_ uiViewController: CameraViewController,context: Context) {
        uiViewController.modelName = modelName //gets called when modelName changes or the parent re-renders
    }
}

class CameraViewController : UIViewController {
    var objectDetector : Object_Detector
    
    var modelName : String = "" {
        didSet {
            objectDetector.modelName = modelName //update modelName on the objectDetector when a new modelName is passed through
        }
    }
    
    init(modelName: String) {
        self.objectDetector = Object_Detector(modelWithName: modelName)
        super.init(nibName: nil,bundle: nil)
    }
    
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

class Object_Detector {
    var modelName : String = "" {
        didSet {
            self.setupModel(withFilename: modelName) //call setupModel when there is a new modelName
        }
    }
    
    init(modelWithName modelName: String) {
        self.modelName = modelName
        self.setupModel(withFilename: modelName) // didSet isn't called during init, so run the setup explicitly here
    }
    
    private func setupModel(withFilename modelName: String) {
        //do setup
    }
}
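
As a follow-up sketch: if you'd rather keep the MLNameModels enum from the question instead of loading the compiled .mlmodelc from the bundle by filename, setupModel(withFilename:) could resolve the enum case from the incoming name and build the Vision request from its mlModel property. This is only a minimal sketch of that idea, assuming the enum and its generated Audi/BMW classes from the question; the bounding box and ObservationTypeEnum bookkeeping from the original class is left out:

import Vision
import CoreML

class Object_Detector {
    var requests = [VNRequest]()

    var modelName: String = "" {
        didSet {
            self.setupModel(withFilename: modelName) // rebuild the Vision request whenever a new name comes in
        }
    }

    init(modelWithName modelName: String) {
        self.modelName = modelName
        self.setupModel(withFilename: modelName) // property observers don't fire during init
    }

    private func setupModel(withFilename modelName: String) {
        // Resolve the enum case ("Audi" / "BMW") instead of looking the file up in the bundle
        guard let namedModel = MLNameModels(rawValue: modelName),
              let visionModel = try? VNCoreMLModel(for: namedModel.mlModel) else {
            NSLog("Error: Failed to create a VNCoreMLModel for \(modelName)")
            return
        }

        let request = VNCoreMLRequest(model: visionModel) { request, error in
            if let error = error {
                NSLog("Error: \(error.localizedDescription)")
                return
            }
            guard let results = request.results as? [VNRecognizedObjectObservation],
                  let first = results.first else { return }
            // Store / draw the best observation here, as in the original class
            NSLog("Detected \(first.labels.first?.identifier ?? "unknown")")
        }

        // Swap in the new request so captureOutput(_:didOutput:from:) uses the newly selected model
        self.requests = [request]
    }
}

Note that MLNameModels(rawValue:) is case-sensitive, so whatever name gets propagated down has to match the raw values exactly ("Audi" / "BMW") rather than the lowercase "audi" used earlier.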
