Date of Award

9-2021

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Department of Operational Sciences

First Advisor

Jeffery D. Weir, PhD

Abstract

This dissertation studies the underlying optimization problem encountered during the early-learning stages of convolutional neural network training and introduces a training algorithm competitive with existing state-of-the-art methods. First, a Design of Experiments method is introduced to systematically measure empirical second-order Lipschitz upper bounds and region-size estimates for the local regions of convolutional neural network loss surfaces encountered during early learning. This method demonstrates that architecture choices can significantly affect the local loss surfaces traversed during training. Next, a Design of Experiments method is used to study the effects that convolutional neural network architecture hyperparameters have on the ability of different optimization routines to train effectively and find solutions that generalize well during early learning, demonstrating a relationship between routine selection and network architecture. Finally, a method is developed to accelerate the early learning of non-adaptive, first-order optimization routines. The method decomposes the neural network training problem into a series of unconstrained optimization problems within localized trailing Euclidean trust-regions, allowing non-adaptive methods to achieve training results competitive with adaptive methods.
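The abstract does not give the algorithm's details, so the following is only a minimal sketch of the general idea of a trailing Euclidean trust-region: each first-order step is projected back into a Euclidean ball whose center trails behind the iterates. Everything here (the function name, the projection rule, the exponentially trailing center, and all parameter values) is a hypothetical illustration, not the dissertation's method.

import numpy as np

def trailing_trust_region_sgd(grad_fn, w0, lr=0.1, radius=0.5,
                              trail=0.9, steps=100):
    """Plain gradient descent whose iterates are projected back into a
    Euclidean ball of the given radius around a trailing center point.

    grad_fn : callable returning the (stochastic) gradient at w
    w0      : initial parameter vector
    trail   : how slowly the ball's center follows the iterates
    """
    w = w0.copy()
    center = w0.copy()                       # trailing trust-region center
    for _ in range(steps):
        w = w - lr * grad_fn(w)              # unconstrained first-order step
        offset = w - center
        dist = np.linalg.norm(offset)
        if dist > radius:                    # project back into the ball
            w = center + radius * offset / dist
        # the center trails the iterates, relocating the local region
        center = trail * center + (1.0 - trail) * w
    return w

# toy usage: minimize a simple quadratic with minimizer at (3, -1)
grad = lambda w: 2.0 * (w - np.array([3.0, -1.0]))
print(trailing_trust_region_sgd(grad, np.zeros(2)))

In this reading, each projected step is the solution of an unconstrained step restricted to a local Euclidean region, and moving the center turns training into the series of localized subproblems the abstract describes.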

AFIT Designator

AFIT-ENS-DS-21-S-049

DTIC Accession Number

AD1151637
