How to improve the performance of a YOLOv2 frozen graph converted to a TF-TRT graph
I convert the YOLOv2 frozen graph to a TF-TRT graph with the following code.
import tensorflow as tf
from tensorflow.python.platform import gfile
import tensorflow.contrib.tensorrt as trt

OUTPUT_NAME = ["models/convolutional23/BiasAdd"]

# read the TensorFlow frozen graph
with gfile.FastGFile('./yolov2_frozen-graph.pb', 'rb') as tf_model:
    tf_graphf = tf.GraphDef()
    tf_graphf.ParseFromString(tf_model.read())

# convert (optimize) the frozen model to a TensorRT model
trt_graph = trt.create_inference_graph(
    input_graph_def=tf_graphf,
    outputs=OUTPUT_NAME,
    max_batch_size=1,
    max_workspace_size_bytes=2 * (10 ** 9),
    precision_mode="FP32")

# write the TensorRT model to disk to be used later for inference
with gfile.FastGFile("Yolo_TensorRT_modelFP16.pb", 'wb') as f:
    f.write(trt_graph.SerializeToString())
print("TensorRT model is successfully stored!")
Then I run inference with the following code.
import time
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session() as sess:
    img = cv2.imread("image3.jpg")
    img = cv2.resize(img, (608, 608))

    # read the TensorRT frozen graph
    with gfile.FastGFile('Yolo_TensorRT_modelFP16.pb', 'rb') as trt_model:
        trt_graph = tf.GraphDef()
        trt_graph.ParseFromString(trt_model.read())

    # obtain the corresponding input/output tensors
    tf.import_graph_def(trt_graph, name='')
    input = sess.graph.get_tensor_by_name('models/net1:0')
    output = sess.graph.get_tensor_by_name('models/convolutional23/BiasAdd:0')

    for i in range(100):
        start = time.time()
        # perform inference
        sess.run(output, feed_dict={input: [np.asarray(img)]})
        end = time.time() - start
        print("inference time: ", end)
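When timing, it also helps to discard the first few iterations, since TF-TRT builds its TensorRT engines lazily on the first execution, which inflates early measurements. A warm-up-aware timing sketch (`benchmark` is a hypothetical helper; the usage line assumes the `sess`, `output`, `input`, and `img` names from the code above):

```python
import time

def benchmark(run_fn, warmup=5, iters=100):
    """Average the runtime of run_fn over iters calls, after first
    running warmup calls whose results are discarded (TF-TRT builds
    its engines lazily on the first execution)."""
    for _ in range(warmup):
        run_fn()
    start = time.time()
    for _ in range(iters):
        run_fn()
    return (time.time() - start) / iters

# Usage with the session above (assumed names):
# avg = benchmark(lambda: sess.run(output, feed_dict={input: [np.asarray(img)]}))
# print("average inference time:", avg)
```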
However, even when I run inference on this "FP16" YOLOv2 TF-TRT graph, it gives exactly the same performance as the plain YOLOv2 frozen graph. (Note that the conversion code above actually uses precision_mode="FP32", even though the output file is named FP16.) Can you tell me what I should do to improve the performance of the TF-TRT graph?