DOI

10.1109/AIPR.2014.7041902

Document Type

Conference Proceeding

Publication Date

10-2014

Abstract

During the 1950s and 1960s, the United States conducted and filmed more than 200 atmospheric nuclear tests, establishing the foundations of atmospheric nuclear detonation behavior. Each explosion was documented with about 20 videos captured from three or four points of view. Synthesizing the videos into a 3D video will improve yield estimates and reduce error factors. The videos were captured at a nominal 2,500 frames per second, but the actual rate ranged from 2,300 to 3,100 frames per second during operation. In order to combine them into one 3D video, individual video frames must be correlated in time with one another. When the videos were captured, a timing system shone a light into each camera every 5 milliseconds, exposing a small circle in the frame. This paper investigates several methods of extracting the timing from images when the timing marks are occluded or washed out, as well as when the films are exposed as expected. Results show an improvement over past techniques: for normal, occluded, and washed-out videos, timing is detected with 99.3%, 77.3%, and 88.6% probability, with false alarm rates of 2.6%, 11.3%, and 5.9%, respectively.
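The timing extraction described above amounts to locating a small, light-exposed circle in each film frame. As a rough illustration only, and not the paper's actual technique, the Python sketch below uses OpenCV's Hough circle transform to find such a mark in a grayscale frame; the function name and every parameter value are assumptions.

import cv2
import numpy as np
from typing import Optional, Tuple

def find_timing_mark(frame_gray: np.ndarray) -> Optional[Tuple[float, float, float]]:
    """Return (x, y, radius) of the most plausible timing mark, or None.

    All thresholds and radii below are illustrative assumptions, not
    values from the paper.
    """
    # Suppress film grain before circle detection.
    blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1,          # accumulator at full image resolution
        minDist=50,    # assume candidate marks are well separated
        param1=100,    # Canny high threshold
        param2=15,     # low accumulator threshold: marks may be faint
        minRadius=3,   # assumed mark size range in pixels
        maxRadius=15,
    )
    if circles is None:
        return None
    # Prefer the candidate with the brightest center, since the mark is a
    # light exposure; washed-out frames can yield spurious circles.
    x, y, r = max(circles[0], key=lambda c: frame_gray[int(c[1]), int(c[0])])
    return float(x), float(y), float(r)

At the nominal 2,500 frames per second, a mark every 5 milliseconds appears roughly every 12.5 frames, so detected marks can anchor each camera's varying frame rate to absolute time.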

Comments

© 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

AFIT Scholar furnishes the accepted version of this conference paper. The published version of record is available from IEEE via subscription at the DOI link in the citation below.

Source Publication

2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)
