WCSE 2023
ISBN: 978-981-18-7950-0 DOI: 10.18178/wcse.2023.06.012

Self-Learning Target Posture Estimation Method Based on RGBD Data

Fu Hai, Zhao Xincan, Wu Depei

Abstract—A self-learning target pose estimation method based on RGBD images is proposed to improve the robustness and applicability of 6DoF object pose estimation for augmented reality assembly guidance. The method is built on a self-supervised approach: a dataset annotation module creates pseudo-labels for real-world data and fine-tunes the pose estimation model to adapt to shifts in the real data distribution. The annotation module uses the Iterative Closest Point (ICP) algorithm to solve for the target pose in a single frame and SLAM to localize the camera pose, from which it infers the target's global pose in addition to its local pose in each camera frame. The pose estimation model uses object detection to coarsely segment the image and a dense fusion network to extract multi-source features, predicting both the target's pose and a semantic segmentation result. In real scenarios the dataset is labeled at 36 frames per minute, and the AUC of 6DoF pose estimation is 3.72% higher than that of existing algorithms. Experimental results show that the self-learning pose estimation method adapts well to new environments.
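The core of the annotation module, per-frame ICP refinement of the object pose composed with the SLAM-estimated camera pose, can be sketched as follows. This is a minimal illustration assuming Open3D's ICP; the annotate_frame name, the 1 cm correspondence gate, and the point-to-plane variant are assumptions rather than details from the paper.

import numpy as np
import open3d as o3d

def annotate_frame(model_pcd, frame_pcd, T_world_cam, T_init):
    """Pseudo-label one RGBD frame: refine the object's pose in the camera
    frame with ICP, then compose with the SLAM camera pose to obtain the
    global pose used as a training label."""
    frame_pcd.estimate_normals()  # point-to-plane ICP needs target normals
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, frame_pcd,
        max_correspondence_distance=0.01,  # 1 cm inlier gate (assumed)
        init=T_init,                       # e.g. the previous frame's pose
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
    )
    T_cam_obj = result.transformation      # local pose: object in camera frame
    T_world_obj = T_world_cam @ T_cam_obj  # global pose via SLAM camera pose
    return T_cam_obj, T_world_obj

Composing the two transforms is what lets a single ICP-aligned frame propagate labels across the sequence: once the global pose T_world_obj is fixed, any other frame's local label follows from that frame's SLAM camera pose.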

Index Terms—6DoF pose estimation, data annotation, deep learning, feature fusion

Fu Hai, Zhao Xincan, Wu Depei
School of Computer and Artificial Intelligence, Zhengzhou University, CHINA



Cite: Fu Hai, Zhao Xincan, Wu Depei, "Self-Learning Target Posture Estimation Method Based on RGBD Data," Proceedings of the 2023 13th International Workshop on Computer Science and Engineering (WCSE 2023), pp. 74-82, June 16-18, 2023.