You need to complete these steps:
- Import the modules:
import cv2
import numpy as np
- Load the model and set the confidence threshold:
model = cv2.dnn.readNetFromCaffe('../data/face_detector/deploy.prototxt',
                                 '../data/face_detector/res10_300x300_ssd_iter_140000.caffemodel')
CONF_THR = 0.5
- Open the video and read it frame by frame:
video = cv2.VideoCapture('../data/faces.mp4')
while True:
    ret, frame = video.read()
    if not ret:
        break
- Detect the faces in the current frame:
    h, w = frame.shape[0:2]
    blob = cv2.dnn.blobFromImage(frame, 1, (300*w//h, 300), (104, 177, 123), False)
    model.setInput(blob)
    output = model.forward()
- Visualize the results:
    for i in range(output.shape[2]):
        conf = output[0, 0, i, 2]
        if conf > CONF_THR:
            label = output[0, 0, i, 1]
            # Corner coordinates come back normalized to [0, 1]; scale to pixels
            x0, y0, x1, y1 = (output[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    if cv2.waitKey(3) == 27:  # press Escape to stop
        break
cv2.destroyAllWindows()
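To see what `cv2.dnn.blobFromImage` produces in step 4, here is a minimal NumPy-only sketch of its core preprocessing (the resize step is omitted for brevity, and the helper name `blob_from_image_sketch` is our own): subtract the per-channel mean, optionally swap the R and B channels, apply the scale factor, and reorder the image from HxWxC to the 1xCxHxW (NCHW) layout the Caffe model expects.

```python
import numpy as np

def blob_from_image_sketch(img, scale=1.0, mean=(104, 177, 123), swap_rb=False):
    """Approximation of cv2.dnn.blobFromImage without resizing:
    mean subtraction, optional R/B swap, scaling, HWC -> NCHW."""
    img = img.astype(np.float32)
    if swap_rb:
        img = img[..., ::-1]          # BGR -> RGB if requested
    img -= np.array(mean, dtype=np.float32)
    img *= scale
    return img.transpose(2, 0, 1)[np.newaxis, ...]  # (1, C, H, W)

# A dummy 300x300 BGR frame stands in for a real video frame.
frame = np.full((300, 300, 3), 128, dtype=np.uint8)
blob = blob_from_image_sketch(frame)
print(blob.shape)  # (1, 3, 300, 300)
```

The mean values (104, 177, 123) are the same per-channel means passed to `blobFromImage` in the recipe; they were fixed when the Caffe model was trained, so the same values must be used at inference time.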
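The decoding in step 5 relies on the layout of the SSD detector's output: an array of shape (1, 1, N, 7), where each of the N rows holds [image_id, label, confidence, x0, y0, x1, y1] with the corners normalized to [0, 1]. The filtering and scaling logic can be exercised in isolation on a synthetic output array (the detection values below are made up purely for illustration):

```python
import numpy as np

CONF_THR = 0.5
h, w = 480, 640  # hypothetical frame size

# Fake network output in the SSD layout [image_id, label, conf, x0, y0, x1, y1];
# values are synthetic, for illustration only.
output = np.array([[[
    [0, 1, 0.95, 0.10, 0.20, 0.30, 0.40],
    [0, 1, 0.30, 0.50, 0.50, 0.60, 0.60],  # below CONF_THR, dropped
]]], dtype=np.float32)

boxes = []
for i in range(output.shape[2]):
    conf = output[0, 0, i, 2]
    if conf > CONF_THR:
        # Scale normalized corners to pixel coordinates
        box = (output[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        boxes.append(tuple(int(v) for v in box))

print(boxes)  # [(64, 96, 192, 192)]
```

Only the first detection survives the confidence filter; its corners (0.10, 0.20) and (0.30, 0.40) scale to (64, 96) and (192, 192) on a 640x480 frame, which is exactly the rectangle drawn in step 5.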