Date of Award

3-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Department of Operational Sciences

First Advisor

Nathan Gaw, PhD

Abstract

The main contribution of this research is to add to the growing body of literature on the use of deep learning algorithms for the spatiotemporal prediction of dangerous atmospheric and hydrologic phenomena. Specifically, we develop novel attention-based and non-attention-based recurrent neural network frameworks to produce short-range sequential forecasts of lightning and tornado occurrences. Additionally, we introduce methods that account for and incorporate error in the model tuning process to generate more reliable models. Furthermore, we have created a lightweight spatiotemporal tornado prediction dataset that we plan to make publicly available.

The first component of this research develops three novel spatiotemporal recurrent deep-learning models for short-range lightning prediction. The novel neural networks (a self-attention-based Convolutional Long Short-Term Memory (ConvLSTM) network, a hybrid ConvLSTM-Conv U-Net, and a self-attention-based version of the hybrid network) are employed in a toy experiment using a benchmark dataset and in a real-world experiment using multiple types of common, remotely sensed weather data. In both experiments, the models undergo a unique tuning and training procedure to balance model accuracy with model uncertainty. After training, each model's performance is compared against other deep learning architectures from the lightning prediction literature and tested for statistically significant differences. The results indicate that the Attention-ConvLSTM-Conv U-Net (A-CCU) model is statistically superior to models from previous research on the benchmark dataset. Additionally, the lightning prediction capabilities of the A-CCU are comparable to those reported in prior literature.

The second component of this research develops a robust yet lightweight dataset from a high-resolution radar dataset and climatological tornado records and then engineers six attention-based and recurrent neural networks for short-range tornado prediction. After tuning and training the models with a mixture of optimization techniques, the trained models are compared against the National Weather Service's (NWS) average tornado warning metrics. Our experiments show that the models attain comparable, if not better, accuracy metrics than current NWS averages at a spatial scale of less than 5 × 5 miles. Additionally, the models demonstrate the ability to make more accurate forecasts with greater lead time than the current NWS average for tornado warnings.

The third component of this research expands upon the second by developing the novel Block-Recurrent U-Net Vision Transformer (BRUT) framework. This work also introduces two designed experiments into the model tuning process, showing that these experiments can effectively eliminate hyperparameters from the tuning process and enable more efficient use of computational resources. After tuning and training, BRUT's performance is compared against prior work and the NWS's average tornado warning metrics. The comparison reveals that BRUT achieves higher accuracy values than the current NWS tornado warning average and outperforms the models from previous work.

AFIT Designator

AFIT-ENS-DS-25-M-203

Comments

An embargo was observed for posting this work.

Distribution Statement A: Distribution Unlimited. Approved for public release. PA case number: 88ABW-2025-0152
