Using a Plenoptic Camera for Real-Time Depth Estimation

Ryan J. Anderson

Abstract

The plenoptic camera collects samples of the 4D light field, allowing imagery and depth information to be captured simultaneously. The plenoptic camera differs from stereoscopic systems in that the light field is captured by a single lens and sensor rather than two or more. This translates to lower size, weight, and power (SWAP), which is ideal for space missions where imagery and depth information are needed, such as proximity operations and docking. The main objective of this research is to design, and evaluate the performance of, a method to autonomously output the depth of key elements of a scene in real time. In this research, the depth is estimated using a gradient method, and the key elements of the scene are selected using the Hough transform. A major finding of this research is that, for the method to run in near real time, only a small portion of the light field can be analyzed because of the size of the data set. This creates the potential to miss important information that the light field has to offer. The average error of the Lytro Illum was ~7%, while that of the Lytro First Generation was ~17%, with accuracy decreasing as range increases. Using the Hough transform to reduce the size of the light fields, the average run times for the Illum and the First Generation were approximately five seconds and three seconds, respectively. The Hough transform accounts for the most significant portion of the run time, but it still reduced the overall run time by more than it added. This work lays the groundwork for using a plenoptic camera to autonomously output depth information about a scene in real time by developing a depth estimation method for specific features in light fields and concluding that the Hough transform is a good method for selecting those features, especially when multiple features are desired.
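
As a rough illustration of the pipeline summarized above, the following is a minimal sketch, not the thesis's actual implementation: it assumes the decoded light field is already available as a 4D array of sub-aperture views, uses OpenCV's HoughCircles as a stand-in for the Hough-transform feature-selection step, and estimates disparity inside the selected region from the ratio of angular to spatial light-field gradients. Function names, parameters, and the circular-target assumption are illustrative, and the calibration step that converts disparity to metric range is omitted.

```python
import numpy as np
import cv2


def find_key_feature(center_view, max_radius=100):
    """Locate a circular feature (e.g., a docking target) with the Hough transform.

    Returns (x, y, r) of the strongest circle in the central sub-aperture
    view, or None if no circle is found. Parameter values are placeholders.
    """
    gray = cv2.cvtColor(center_view, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=120, param2=40,
                               minRadius=10, maxRadius=max_radius)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r


def gradient_disparity(lf, roi):
    """Estimate disparity in a region of interest with a gradient (EPI-slope) method.

    lf  : light field as a 4D grayscale array indexed (u, v, y, x)
    roi : (x, y, r) region returned by find_key_feature
    The disparity of a point is approximated by the ratio of the angular
    gradient to the spatial gradient of the light field, d ~ -(dL/du)/(dL/dx),
    so only the pixels inside the ROI need to be processed.
    """
    x, y, r = roi
    v_mid = lf.shape[1] // 2
    # Take a (u, y, x) slab through the central row of sub-aperture views,
    # cropped to the feature, so only a small portion of the light field is analyzed.
    patch = lf[:, v_mid, y - r:y + r, x - r:x + r].astype(np.float64)
    dL_du = np.gradient(patch, axis=0)   # angular gradient
    dL_dx = np.gradient(patch, axis=2)   # spatial gradient
    mask = np.abs(dL_dx) > 1e-3          # ignore textureless pixels
    disparity = -dL_du[mask] / dL_dx[mask]
    return float(np.median(disparity))   # robust aggregate over the ROI
```

In a real pipeline, the median disparity returned here would be mapped to metric range through a per-camera calibration, which is where the Illum and First Generation error figures quoted above would be measured.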