opencv-123-DNN: Setting the Target Device and Compute Backend for Running a Model

Key Points

After a network model has been loaded in OpenCV, you can choose both the compute backend and the target device. The OpenCV DNN module exposes these two settings through the following APIs:

void cv::dnn::Net::setPreferableBackend(
    int backendId
)
backendId selects the compute backend; the common values are:

- DNN_BACKEND_DEFAULT (equivalent to DNN_BACKEND_INFERENCE_ENGINE when available) uses Intel's Inference Engine by default. This requires downloading and installing the Intel® OpenVINO™ toolkit and rebuilding OpenCV from source with the corresponding CMake option enabled; it can accelerate inference.
- DNN_BACKEND_OPENCV uses OpenCV's own DNN implementation as the backend, which is the common case (see the sketch after this list).
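
A minimal Python sketch of switching backends, assuming an OpenCV 4.x Python build and the GoogLeNet Caffe files used later in this post:

import cv2 as cv

net = cv.dnn.readNetFromCaffe("bvlc_googlenet.prototxt", "bvlc_googlenet.caffemodel")

# OpenCV's own backend: available in every build
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)

# Inference Engine backend: only effective when OpenCV was compiled with OpenVINO
# net.setPreferableBackend(cv.dnn.DNN_BACKEND_INFERENCE_ENGINE)
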
void cv::dnn::Net::setPreferableTarget(
    int targetId
)
The common target device ids are:

- DNN_TARGET_CPU runs inference on the CPU; this is the default.
- DNN_TARGET_OPENCL uses OpenCL acceleration; in practice the speedup is often disappointing.
- DNN_TARGET_OPENCL_FP16 runs OpenCL in half precision and is worth trying.
- DNN_TARGET_MYRIAD targets the Intel Movidius Myriad VPU (e.g. a Neural Compute Stick, typically attached to a Raspberry Pi); see the sketch after this list.
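
Which targets are actually usable depends on how OpenCV was built. The sketch below queries the build before picking a target; it assumes a recent OpenCV 4.x Python build in which cv.dnn.getAvailableTargets is exposed, and reuses the GoogLeNet files from above:

import cv2 as cv

net = cv.dnn.readNetFromCaffe("bvlc_googlenet.prototxt", "bvlc_googlenet.caffemodel")
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)

# Ask this particular build which targets the OpenCV backend supports
targets = cv.dnn.getAvailableTargets(cv.dnn.DNN_BACKEND_OPENCV)
print("available targets:", targets)

# Prefer half-precision OpenCL if the build reports it, otherwise stay on the CPU
if cv.dnn.DNN_TARGET_OPENCL_FP16 in targets:
    net.setPreferableTarget(cv.dnn.DNN_TARGET_OPENCL_FP16)
else:
    net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)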

Code (Python)

"""
DNN 为模型运行设置目标设备与计算后台
"""

import cv2 as cv
import numpy as np

bin_model = "bvlc_googlenet.caffemodel"
protxt = "bvlc_googlenet.prototxt"

# load names of classes
classes = None
with open("classification_classes_ILSVRC2012.txt", 'rt') as f:
    classes = f.read().rstrip('\n').split('\n')

# load CNN model
net = cv.dnn.readNetFromCaffe(protxt, bin_model)
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)

# read input data
image = cv.imread("images/airplane.jpg")
blob = cv.dnn.blobFromImage(image, 1.0, (224, 224), (104, 117, 123),
                            False, False)
result = np.copy(image)
cv.imshow("input", image)

# run a model
net.setInput(blob)
out = net.forward()

# get the class with the highest score
out = out.flatten()
classId = np.argmax(out)
confidence = out[classId]

# put efficiency information
t, _ = net.getPerfProfile()
label = 'Inference time: %.2f ms' % (t * 1000.0 / cv.getTickFrequency())
cv.putText(result, label, (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0))

# print predicted class
label = '%s : %.4f' % (classes[classId], confidence)
cv.putText(result, label, (50, 50), cv.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)
cv.imshow("googlenet-demo", result)

cv.waitKey(0)
cv.destroyAllWindows()
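
To check whether a different target actually pays off, the same getPerfProfile call can be used to time a forward pass under each setting. A self-contained, hedged sketch (the time_forward helper below is illustrative and not part of the original code; timings depend entirely on your hardware, and without an OpenCL device the DNN module usually falls back to the CPU path):

import cv2 as cv


def time_forward(net, blob, target):
    # Run one warm-up pass and one timed pass on the given target,
    # then report the inference time of the last forward() in milliseconds.
    net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(target)
    net.setInput(blob)
    net.forward()          # warm-up (includes one-time initialization)
    net.setInput(blob)
    net.forward()          # timed run
    t, _ = net.getPerfProfile()
    return t * 1000.0 / cv.getTickFrequency()


net = cv.dnn.readNetFromCaffe("bvlc_googlenet.prototxt", "bvlc_googlenet.caffemodel")
image = cv.imread("images/airplane.jpg")
blob = cv.dnn.blobFromImage(image, 1.0, (224, 224), (104, 117, 123), False, False)

print("CPU    : %.2f ms" % time_forward(net, blob, cv.dnn.DNN_TARGET_CPU))
print("OpenCL : %.2f ms" % time_forward(net, blob, cv.dnn.DNN_TARGET_OPENCL))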

Result

Source Code

github