Expected Ptr<cv::UMat> for argument 'img' when reading through TF and OpenCV

How to fix "Expected Ptr<cv::UMat> for argument 'img'" when reading through TF and OpenCV

I took this code from here and made a few modifications based on here.

from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import cv2


# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()

ap.add_argument("-c","--confidence",type=float,default=0.8,help="minimum probability to filter weak detections")
args = vars(ap.parse_args())


classes_90 = [ "person","bicycle","car","motorcycle","airplane","bus","train","truck","boat","traffic light","fire hydrant","unknown","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow","elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee","skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle","wine glass","cup","fork","knife","spoon","bowl","banana","apple","sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","couch","potted plant","bed","dining table","toilet","tv","laptop","mouse","remote","keyboard","cell phone","microwave","oven","toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush" ] 
# Read the classes available in openImages
CLASSES = classes_90  # New list with 90 classes.
print(CLASSES)

# Assign a box color to each class
COLORS = np.random.uniform(0,255,size=(len(CLASSES),3)) 

# Import the network model
cvNet = cv2.dnn.readNetFromTensorflow('faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb','faster_rcnn_inception_v2_coco_2018_01_28/resnet.pbtxt')

# Open the video
img = cv2.VideoCapture('people.mp4')

# Writer for the resized output frames; 'out' is used below but was never
# defined in the original post, so this initialization is an assumption
out = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'XVID'), 25, (640, 480))


while img.isOpened():
    ret, frame = img.read()

    if not ret:
        break


    #img = cv2.imread(args["image"])

    # Get the frame dimensions
    h = frame.shape[0]  # Height
    w = frame.shape[1]  # Width
    img = np.array(img)
    cvNet.setInput(cv2.dnn.blobFromImage(img, size=(h, w), swapRB=True, crop=False))
    detections = cvNet.forward()


    # loop over the detections
    for i in np.arange(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the prediction
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the `confidence` is
        # greater than the minimum confidence
        if confidence > args["confidence"]:
            # extract the index of the class label from the
            # `detections`, then compute the (x, y)-coordinates of
            # the bounding box for the object
            idx = int(detections[0, 0, i, 1])
            print(idx)
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # draw the prediction on the frame
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            cv2.rectangle(img, (startX, startY), (endX, endY), COLORS[idx], 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            cv2.putText(img, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

            print(label)


    out_img = cv2.resize(img, (640, 480))
    out.write(out_img)
    cv2.imshow('img', img)
    #cv2.waitKey()
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
img.release()
out.release()

and I get this error: Expected Ptr<cv::UMat> for argument 'img'. After going through most of the available solutions to this problem, it seemed that the input was not an array to begin with, so I converted it with np.array, but that did not work. Printing the image shows that it exists: it is a frame from the video, so the image is there.

So I cannot pin down what exactly is causing this problem. This code also works fine if I pass a single image read with cv2.imread().
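
A quick way to narrow an error like this down is to check the type of the object actually being passed to cv2, since "Expected Ptr<cv::UMat>" means the argument is not a numpy pixel array. A minimal check, assuming the same people.mp4 input as above:

import numpy as np
import cv2

img = cv2.VideoCapture('people.mp4')
print(type(img))            # <class 'cv2.VideoCapture'> -- the capture device, not an image
print(np.array(img).shape)  # () -- a 0-d object array wrapping the capture, still no pixels

ret, frame = img.read()
print(type(frame), frame.shape)  # <class 'numpy.ndarray'> (H, W, 3) -- this is the actual image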

Solution

I was not passing the frame into the array correctly; that is, I was building the array from the wrong variable (the capture object returned by cv2.VideoCapture()) instead of the frame obtained from its read() method.
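
The essential change is one line: build the array from the frame that read() returns, not from the capture object itself. In the updated code below the capture is held in cap and the decoded frame in frame:

# before (question code): the capture object was wrapped, producing no pixel data
img = np.array(img)     # 0-d object array -> Expected Ptr<cv::UMat> for argument 'img'

# after (updated code): use the frame returned by read()
ret, frame = cap.read()
img = np.array(frame)   # H x W x 3 uint8 pixel array that cv2 functions accept

Strictly speaking, frame is already a numpy array, so np.array(frame) only copies it; passing frame directly would work just as well.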

Here is the updated code:

from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import cv2


# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()

ap.add_argument("-c","--confidence",type=float,default=0.8,help="minimum probability to filter weak detections")
args = vars(ap.parse_args())


classes_90 = [ "person","bicycle","car","motorcycle","airplane","bus","train","truck","boat","traffic light","fire hydrant","unknown","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow","elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee","skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle","wine glass","cup","fork","knife","spoon","bowl","banana","apple","sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","couch","potted plant","bed","dining table","toilet","tv","laptop","mouse","remote","keyboard","cell phone","microwave","oven","toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush" ] 
# Read the classes available in openImages
CLASSES = classes_90  # New list with 90 classes.
print(CLASSES)

# Assign a box color to each class
COLORS = np.random.uniform(0,255,size=(len(CLASSES),3)) 

# Import the network model
cvNet = cv2.dnn.readNetFromTensorflow('faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb','faster_rcnn_inception_v2_coco_2018_01_28/resnet.pbtxt')

# Open the video; the capture is renamed to 'cap' here so that reassigning
# 'img' inside the loop no longer clobbers it (the original post released
# 'cap' at the end, which suggests this was the intended name)
cap = cv2.VideoCapture('people.mp4')

# Writer for the resized output frames; 'out' is used below but was never
# defined in the original post, so this initialization is an assumption
out = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'XVID'), 25, (640, 480))


while cap.isOpened():
    ret, frame = cap.read()

    if not ret:
        break


    #img = cv2.imread(args["image"])

    # Get the frame dimensions
    h = frame.shape[0]  # Height
    w = frame.shape[1]  # Width
    img = np.array(frame)
    cvNet.setInput(cv2.dnn.blobFromImage(img, size=(h, w), swapRB=True, crop=False))
    detections = cvNet.forward()


    # loop over the detections
    for i in np.arange(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the prediction
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the `confidence` is
        # greater than the minimum confidence
        if confidence > args["confidence"]:
            # extract the index of the class label from the
            # `detections`, then compute the (x, y)-coordinates of
            # the bounding box for the object
            idx = int(detections[0, 0, i, 1])
            print(idx)
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # draw the prediction on the frame
            label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
            cv2.rectangle(img, (startX, startY), (endX, endY), COLORS[idx], 2)
            y = startY - 15 if startY - 15 > 15 else startY + 15
            cv2.putText(img, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)

            print(label)


    out_img = cv2.resize(img, (640, 480))
    out.write(out_img)
    cv2.imshow('img', img)
    #cv2.waitKey()
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
cap.release()
out.release()
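
One note on the indexing used above: detection models loaded through cv2.dnn return a blob of shape (1, 1, N, 7), where each of the N rows is [batchId, classId, confidence, left, top, right, bottom] and the box coordinates are normalized to [0, 1]. That is why every field is read as detections[0, 0, i, ...] and the box is scaled by the frame width and height:

# detections.shape == (1, 1, N, 7); one row per candidate detection
for i in range(detections.shape[2]):
    class_id = int(detections[0, 0, i, 1])                    # index into CLASSES
    score = float(detections[0, 0, i, 2])                     # confidence in [0, 1]
    box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])   # scale to pixel coordinates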

