https://doi.org/10.1007/s00138-025-01768-8

Document Type

Article

Publication Date

12-19-2025

Abstract

Estimating the position and orientation of a rigid object from an image is critical for situational awareness in robotics and autonomous systems. This study explores relative pose estimation using an ultra-wide fisheye camera for unmanned aircraft inspection vehicles. Ultra-wide fisheye lenses introduce radial distortion and capture features beyond the rectilinear image plane, rendering rectilinear Perspective-n-Point (PnP) algorithms inadequate. Designing a bespoke ultra-wide fisheye localization algorithm requires consideration of both the feature detection method and the pose estimator itself. This study proposes a novel method that combines (1) a fisheye-to-cubemap reprojection, (2) a You Only Look Once (YOLO) convolutional neural network trained for arbitrary airborne perspectives, and (3) an Angle-Agnostic and Multiple-Frame PnP (AMP) pose estimation algorithm. Our pipeline achieves a 97% success rate for valid pose estimates, with a mean absolute translational error of less than 12 cm on real ultra-wide fisheye imagery, outperforming conventional techniques, including OpenCV.
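The fisheye-to-cubemap reprojection in step (1) can be sketched as follows. This is a minimal illustration only, assuming an equidistant fisheye model (r = f·θ) and a single (front) cube face; it is not the authors' implementation, and the function names and parameters are hypothetical:

```python
import numpy as np

def fisheye_project(d, f, cx, cy):
    """Project a 3D ray d = (x, y, z) onto an equidistant fisheye
    image, where the radial distance from the principal point is
    r = f * theta (theta = angle from the optical axis)."""
    x, y, z = d
    theta = np.arccos(z / np.linalg.norm(d))  # angle from optical axis
    phi = np.arctan2(y, x)                    # azimuth in the image plane
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def front_face_to_fisheye(face_size, f, cx, cy):
    """Build a remap table: for every pixel of the front cubemap face,
    compute the fisheye pixel it should be sampled from (the table can
    then be applied with e.g. cv2.remap to produce the rectified face)."""
    # Normalized face coordinates in [-1, 1]
    a = (np.arange(face_size) + 0.5) / face_size * 2 - 1
    u, v = np.meshgrid(a, a)
    map_u = np.empty((face_size, face_size))
    map_v = np.empty((face_size, face_size))
    for i in range(face_size):
        for j in range(face_size):
            # Ray through this face pixel (front face looks along +z)
            d = (u[i, j], v[i, j], 1.0)
            map_u[i, j], map_v[i, j] = fisheye_project(d, f, cx, cy)
    return map_u, map_v
```

Once each cube face is rendered this way, it is locally rectilinear, so standard detectors (such as YOLO) and conventional PnP solvers can operate on it without the severe radial distortion of the raw fisheye frame.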

Comments

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Supplementary material: A supplementary file (MP4) is available at the publisher's abstract page for this article, reachable via the DOI link above.

The article was published as a digital article of Machine Vision and Applications in December 2025, ahead of its inclusion in volume 37.

Source Publication

Machine Vision and Applications (ISSN 0932-8092 | eISSN 1432-1769)
