Feature Detection and Matching on Atmospheric Nuclear Detonation Video
Automated feature matching of nuclear detonations (NUDETs) enables three-dimensional point cloud reconstruction and the establishment of a volume-based model to reduce uncertainty in estimating the yield of NUDETs solely from video. Establishing a volume-based model requires feature correspondences between wide viewpoints of 58°–110°, which are wider than scale-invariant feature transform-based techniques can reliably match. The presented technique detects relatively bright features in the NUDET known as ‘hotspots,’ and matches them across wide viewpoints using a spherical object model. Results show that hotspots can be detected with a 71.95% hit rate and 86.03% precision. Hotspots are matched to films from different viewpoints with 76.6% correctness and a standard deviation of 16.4%. Hotspot descriptors are also matched in time sequence with 99.6% correctness and a standard deviation of 1.07%. The results demonstrate that a spherical model can serve as a viable descriptor model for matching across wide viewpoints when the object is known to be spherical. They also demonstrate an automated feature detection and matching combination that enables features to be matched from unsynchronised video across wide viewpoints of 58°–110° on spherical objects where state-of-the-art techniques are insufficient.
IET Computer Vision
Schmitt, D. T., & Peterson, G. L. (2016). Feature detection and matching on atmospheric nuclear detonation video. IET Computer Vision, 10(5), 359–365. https://doi.org/10.1049/iet-cvi.2015.0145
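The abstract describes detecting hotspots as relatively bright features in the detonation imagery. As a minimal illustrative sketch only (the paper's actual detector and its spherical descriptor model are more involved), a relative-brightness criterion can be expressed as flagging pixels whose intensity exceeds the frame mean by some multiple of the frame's standard deviation; the function name `detect_hotspots` and the threshold factor `k` are assumptions for illustration:

```python
import numpy as np

def detect_hotspots(frame, k=3.0):
    """Flag pixels whose intensity exceeds mean + k*std of the frame.

    This is a simple relative-brightness criterion, not the paper's
    exact detection method; k is a hypothetical tuning parameter.
    """
    thresh = frame.mean() + k * frame.std()
    ys, xs = np.nonzero(frame > thresh)
    return list(zip(ys.tolist(), xs.tolist()))

# Synthetic 64x64 "frame": dim noisy background plus two bright spots.
rng = np.random.default_rng(0)
frame = rng.normal(10.0, 1.0, size=(64, 64))
frame[20, 30] = 100.0
frame[40, 10] = 95.0

hotspots = detect_hotspots(frame)
print(hotspots)  # the two injected bright pixels
```

In practice a detector would also merge adjacent bright pixels into blob centroids before matching, but the thresholding step above captures the "relatively bright feature" idea the abstract refers to.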