Through-the-wall Object Reconstruction via Reinforcement Learning
Document Type
Article
Publication Date
8-2024
Abstract
This paper addresses the problem of characterizing and localizing objects via through-the-wall radar imaging. We consider two separate problems. First, we assume a single object is located in a room and use a convolutional neural network (CNN) to classify the shape of the object. Second, we assume multiple objects are located in a room and use a U-Net CNN to determine the locations of the objects via pixel-by-pixel classification. For both problems, we use numerical methods to simulate the electromagnetic field assuming known room parameters and object locations. The simulated data is used to train and evaluate both the CNN and the U-Net CNN. In the single-object case, we achieve 90% accuracy in classifying the shape of the object. In the multiple-object case, we show that the U-Net outputs an image segmentation heat map of the domain space, enabling visual analysis to identify the characteristics of multiple unknown objects. Given sufficient data, the U-Net heat map highlights object pixels that provide the location and shape of the unknown objects, with precision and recall exceeding 80%.
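To illustrate the kind of pixel-by-pixel classifier the abstract describes, the following is a minimal sketch (not the authors' implementation) of a small U-Net-style encoder-decoder in PyTorch that maps a simulated field image to per-pixel "object" vs. "background" logits. All channel counts, the input size, and the class names are illustrative assumptions.

# Minimal U-Net-style sketch for per-pixel classification (assumed setup, not the paper's code).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_channels, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 skip channels + 32 upsampled channels
        self.up1 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 skip channels + 16 upsampled channels
        self.head = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # per-pixel class logits

# Example: a batch of four hypothetical 64x64 simulated field images.
model = TinyUNet()
logits = model(torch.randn(4, 1, 64, 64))      # output shape: (4, 2, 64, 64)

Taking an argmax over the class dimension of the logits yields the segmentation heat map described in the abstract, from which object locations and shapes can be read off.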
Source Publication
Results in Applied Mathematics
Recommended Citation
Pomerico, D., Wood, A., & Cho, P. (2024). Through-the-wall object reconstruction via reinforcement learning. Results in Applied Mathematics, 23, 100465. https://doi.org/10.1016/j.rinam.2024.100465
Comments
The "Link to Full Text" on this page opens the article at the publisher website.
This is an Open Access article published by Elsevier and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited and is not altered, transformed, or built upon in any way.
See also: a related 2020 article by Dr. Wood, available at facpub/791.