Atmospheric compensation of long-wave infrared (LWIR) hyperspectral imagery is investigated in this article using set representations learned by a neural network. This approach relies on synthetic at-sensor radiance data derived from collected radiosondes and a diverse database of measured emissivity spectra sampled at a range of surface temperatures. The network loss function relies on LWIR radiative transfer equations to update model parameters. Atmospheric predictions are made on a set of diverse pixels extracted from the scene, without knowledge of blackbody pixels or pixel temperatures. The network architecture utilizes permutation-invariant layers to predict a set representation, similar to the work performed in point cloud classification. When applied to collected hyperspectral image data, this method shows comparable performance to Fast Line-of-Sight Atmospheric Analysis of Hypercubes-Infrared (FLAASH-IR), using an automated pixel selection approach. Additionally, inference time is significantly reduced compared to FLAASH-IR, with predictions made on average in 0.24 s on a 128 pixel by 5000 pixel data cube using a mobile graphics card. This computational speed-up on a low-power platform results in an autonomous atmospheric compensation method effective for real-time, onboard use, while only requiring a diversity of materials in the scene.
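The permutation-invariant set encoding mentioned in the abstract can be illustrated with a minimal DeepSets-style sketch: a shared per-element feature map, a symmetric pooling operation over the set axis, and a post-pooling map. This is a toy illustration of the general technique, not the authors' network; all weights, sizes, and function names here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, W):
    """Per-element feature map with weights shared across set elements."""
    return np.maximum(x @ W, 0.0)  # ReLU

def rho(z, V):
    """Post-pooling map producing the final set representation."""
    return z @ V

def set_representation(X, W, V):
    """Encode a set of pixels X (n_pixels, n_bands) into one vector.

    Mean pooling over the set axis makes the output invariant to
    the ordering of the input pixels.
    """
    return rho(phi(X, W).mean(axis=0), V)

# Toy example: 5 "pixels" with 4 spectral bands each (illustrative sizes).
X = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 8))
V = rng.normal(size=(8, 3))

z1 = set_representation(X, W, V)
z2 = set_representation(X[::-1], W, V)  # same pixels, reversed order
assert np.allclose(z1, z2)  # output is unchanged under permutation
```

Because the pooling step (here, a mean) is symmetric in its arguments, any reordering of the scene pixels yields the same set representation, which is what allows predictions from an unordered collection of diverse pixels.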
N. Westing, K. C. Gross, B. J. Borghetti, J. Martin and J. Meola, "Learning Set Representations for LWIR In-Scene Atmospheric Compensation," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 1438-1449, 2020, doi: 10.1109/JSTARS.2020.2980750.