Document Type
Article
Publication Date
12-3-2024
Abstract
This paper presents a comprehensive approach to enhancing autonomous docking maneuvers through machine visual perception and sim-to-real transfer learning. By leveraging relative vectoring techniques, we aim to replicate the human ability to execute precise docking operations. Our study focuses on autonomous aerial refueling as a use case, demonstrating significant advancements in relative navigation and object detection. We introduce a novel method for aligning digital twins using fiducial targets and motion capture data, which facilitates accurate pose estimation from real-world imagery. Additionally, we develop cost-efficient annotation automation techniques for generating high-quality You Only Look Once (YOLO) training data. Experimental results indicate that our transfer learning methodologies enable accurate and reliable relative vectoring in real-world conditions, achieving error margins of less than 3 cm at contact (when vehicles are approximately 4 m from the camera) while sustaining throughput above 56 fps. These findings underscore the potential of augmented reality and scene augmentation to improve model generalization and performance, bridging the gap between simulation and real-world applications. This work lays the groundwork for deploying autonomous docking systems in complex and dynamic environments, minimizing human intervention and enhancing operational efficiency.
Source Publication
Neural Computing and Applications (ISSN 0941-0643 | e-ISSN 1433-3058)
Recommended Citation
Worth, D., Choate, J., Raettig, R., Nykl, S., & Taylor, C. (2024). Machine visual perception from sim-to-real transfer learning for autonomous docking maneuvers. Neural Computing and Applications. https://doi.org/10.1007/s00521-024-10543-1
Comments
This article was published online on the Springer website ahead of its inclusion in a future issue of Neural Computing and Applications.
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.