Deep Video Object Contour Extraction Using Fully Convolutional Network
Abstract— Object contour extraction in complex scenes has become an important topic in computer vision. To address this problem, a deep fully convolutional neural network model is established in this paper. The model tackles the task of semi-supervised video contour extraction. In our model, an interactive segmentation method and the Mask-RCNN algorithm are respectively applied to the first frame of a video to obtain the target semantic information and segmentation mask. The binary object mask is then processed by an edge detection algorithm to generate an object contour mask. Next, the video and the first-frame contour mask are input to the network of the One-Shot Video Object Segmentation (OSVOS) algorithm, which learns the contour features and semantic information of the object. Finally, the contour semantic information is automatically propagated to subsequent frames, and the contours of particular objects in each frame of the video are extracted. Experiments show that this model can detect and locate one or more objects quickly and accurately in video sequences across various complex scenes. Compared with general edge detection operators, our algorithm does not extract redundant background edges and texture details and can therefore be better applied to pose estimation and target recognition.
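The step that turns the first-frame binary segmentation mask into an object contour mask can be illustrated with a minimal sketch. The paper does not specify which edge detection operator is used, so the snippet below assumes a simple morphological approach: the contour is taken as the foreground pixels that lose a 4-neighbour under erosion (mask minus its erosion), using only NumPy.

```python
import numpy as np

def mask_to_contour(mask):
    """Convert a binary object mask to a one-pixel-wide contour mask.

    A foreground pixel belongs to the contour if at least one of its
    4-neighbours is background, i.e. contour = mask AND NOT erode(mask).
    This is an illustrative stand-in for the paper's unspecified edge
    detection step, not the authors' exact operator.
    """
    mask = mask.astype(bool)
    # Pad with background so border pixels are treated as contour pixels.
    p = np.pad(mask, 1, mode="constant", constant_values=False)
    eroded = (p[1:-1, 1:-1]
              & p[:-2, 1:-1] & p[2:, 1:-1]    # up / down neighbours
              & p[1:-1, :-2] & p[1:-1, 2:])   # left / right neighbours
    return (mask & ~eroded).astype(np.uint8)

# Example: a 3x3 filled square inside a 5x5 frame; only its 8 border
# pixels survive, the single interior pixel is eroded away.
m = np.zeros((5, 5), dtype=np.uint8)
m[1:4, 1:4] = 1
contour = mask_to_contour(m)
print(int(contour.sum()))  # → 8
```

In the full pipeline this contour mask, rather than the filled mask, is what annotates the first frame before the OSVOS-style network propagates it through the remaining frames.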
Index Terms— Contour extraction; convolutional neural network; interactive segmentation; edge detection; video object segmentation
Die Li, Murong Jiang, Guocai Du, Chunna Zhao, Yinghua Li
School of Information Science and Engineering, Yunnan University, CHINA
Cite: Die Li, Murong Jiang, Guocai Du, Chunna Zhao, Yinghua Li, "Deep Video Object Contour Extraction Using Fully Convolutional Network," Proceedings of the 2019 9th International Workshop on Computer Science and Engineering, pp. 164-170, Hong Kong, 15-17 June, 2019.