How to optimize the Manhattan distance calculation over video frames
I am trying to generate the Manhattan distance between a specific cropped region of every frame of a video and the same crop of the video's first frame. These are high-frame-rate videos, each with roughly 5000+ frames. Currently the analysis takes about 120 seconds per video to produce the list of frames and their associated Manhattan distances. I have tried several things to speed it up: using the built-in scipy.spatial.distance.cdist function, using np.linalg.norm, trying Euclidean distance instead, and moving the grayscale conversion of the reference frame outside the loop as a preprocessing step. None of these changes made a significant difference to the computation time. Is there any way to substantially speed up this process (the loop in the function below)?
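For reference, the per-frame metric here is the L1 (cityblock) distance between the two grayscale crops; a minimal sketch with small dummy arrays standing in for the real frames, showing that the loop's np.sum(np.abs(diff)) agrees with an np.linalg.norm formulation:

```python
import numpy as np

# Dummy 4x4 "grayscale frames" standing in for the real cropped frames.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(4, 4)).astype(float)
cur = rng.integers(0, 256, size=(4, 4)).astype(float)

# Manhattan (L1 / cityblock) distance, as computed in the loop
m_norm = np.sum(np.abs(ref - cur))

# Same value via np.linalg.norm on the flattened difference
m_norm_alt = np.linalg.norm((ref - cur).ravel(), ord=1)

assert m_norm == m_norm_alt
```

Since the two formulations are equivalent, swapping one for the other cannot change the asymptotic cost, which is consistent with the timing observations above.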
def compare_images_master():
    numFrames = count_frames_automatic(resized_videocap)
    # convert the reference (first-frame) crop to grayscale once, outside the loop
    GrayOriginalFrame = cv2.cvtColor(DetectionPlate, cv2.COLOR_BGR2GRAY)
    originalFrameMap = GrayOriginalFrame.astype(float)
    m_norm_list = [0] * numFrames
    frame_num_list = list(range(numFrames))
    start_compare_images = time.time()
    print("calculating Manhattan distances...")
    for current_frame in frame_num_list:
        resized_videocap.set(cv2.CAP_PROP_POS_FRAMES, current_frame)  # seek to the frame to be computed
        ret, currentFrameImage = resized_videocap.read()  # read the current frame
        currentFrameImageCropped = currentFrameImage[ref_pts[0][1]:ref_pts[1][1], ref_pts[0][0]:ref_pts[1][0]]  # crop to detection plate
        GrayCurrentFrame = cv2.cvtColor(currentFrameImageCropped, cv2.COLOR_BGR2GRAY)  # convert to grayscale
        currentFrameMap = GrayCurrentFrame.astype(float)  # convert from image to pixel map
        diff = originalFrameMap - currentFrameMap
        m_norm = np.sum(np.abs(diff))  # Manhattan norm
        m_norm_list[current_frame] = m_norm
    end_compare_images = time.time()
    print("Completed calculating Manhattan distances...")
    print('time taken to generate manhattan distances', end_compare_images - start_compare_images)
    # time taken - 120 seconds
    return m_norm_list, frame_num_list
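One likely bottleneck is re-seeking with resized_videocap.set() on every iteration and computing each norm in a Python-level loop. If the video is instead read sequentially (plain cap.read() calls, no per-frame set()) and the cropped grayscale frames are collected into one NumPy stack, the entire distance list reduces to a single vectorized expression. A sketch of that second step, assuming (hypothetically) the crops have already been stacked into an array of shape (num_frames, h, w):

```python
import numpy as np

def manhattan_distances(reference, frames):
    """Vectorized Manhattan distances between one reference crop and a
    stack of cropped grayscale frames of shape (num_frames, h, w)."""
    diffs = frames.astype(float) - reference.astype(float)  # broadcasts over the frame axis
    return np.abs(diffs).sum(axis=(1, 2))                   # one L1 norm per frame

# Dummy data standing in for the real video crops
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(8, 8))
frames = rng.integers(0, 256, size=(100, 8, 8))
m_norm_list = manhattan_distances(ref, frames)  # shape (100,)
```

Reading frames in order with ret, frame = cap.read() is generally much cheaper than seeking with VideoCapture.set() for every frame, since random seeks can force the decoder back to the nearest keyframe; combined with the vectorized norm above, this should remove most of the per-iteration overhead.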